All Possible Views About Humanity's Future Are Wild

This is the first post in the Most Important Century sequence. For more info and a roadmap for the series, see the sequence introduction.

Audio also available by searching Stitcher, Spotify, Google Podcasts, etc. for "Cold Takes Audio"

Summary

  • In a series of posts starting with this one, I'm going to argue that the 21st century could see our civilization develop technologies allowing rapid expansion throughout our currently-empty galaxy. And thus, that this century could determine the entire future of the galaxy for tens of billions of years, or more.
  • This view seems "wild": we should be doing a double take at any view that we live in such a special time. I illustrate this with a timeline of the galaxy. (On a personal level, this "wildness" is probably the single biggest reason I was skeptical for many years of the arguments presented in this series. Such claims about the significance of the times we live in seem "wild" enough to be suspicious.)
  • But I don't think it's really possible to hold a non-"wild" view on this topic. I discuss alternatives to my view: a "conservative" view that thinks the technologies I'm describing are possible, but will take much longer than I think, and a "skeptical" view that thinks galaxy-scale expansion will never happen. Each of these views seems "wild" in its own way.
  • Ultimately, as hinted at by the Fermi paradox, it seems that our species is simply in a wild situation.

Before I continue, I should say that I don't think humanity (or some digital descendant of humanity) expanding throughout the galaxy would necessarily be a good thing - especially if this prevents other life forms from ever emerging. I think it's quite hard to have a confident view on whether this would be good or bad. I'd like to keep the focus on the idea that our situation is "wild." I am not advocating excitement or glee at the prospect of expanding throughout the galaxy. I am advocating seriousness about the enormous potential stakes.


My view

This is the first in a series of pieces about the hypothesis that we live in the most important century for humanity.

In this series, I'm going to argue that there's a good chance of a productivity explosion by 2100, which could quickly lead to what one might call a "technologically mature"1 civilization. That would mean that:

  • We'd be able to start sending spacecraft throughout the galaxy and beyond.
  • These spacecraft could mine materials, build robots and computers, and construct very robust, long-lasting settlements on other planets, harnessing solar power from stars and supporting huge numbers of people (and/or our "digital descendants").
    • See Eternity in Six Hours for a fascinating and short, though technical, discussion of what this might require.
    • I'll also argue in a future piece that there is a chance of "value lock-in" here: whoever is running the process of space expansion might be able to determine what sorts of people are in charge of the settlements and what sorts of societal values they have, in a way that is stable for many billions of years.2

If that ends up happening, you might think of the story of our galaxy3 like this. I've marked major milestones along the way from "no life" to "intelligent life that builds its own computers and travels through space."

[Timeline of our galaxy visualization: major milestones from the galaxy's formation, through the emergence of life and technology, to a stable galaxy-scale civilization]

Thanks to Ludwig Schubert for the visualization. Many dates are highly approximate and/or judgment-prone and/or just pulled from Wikipedia (sources here), but plausible changes wouldn't change the big picture. The ~1.4 billion years to complete space expansion is based on the distance to the outer edge of the Milky Way, divided by the speed of a fast existing human-made spaceship (details in spreadsheet just linked); IMO this is likely to be a massive overestimate of how long it takes to expand throughout the whole galaxy. See footnote for why I didn't use a logarithmic axis.4
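As a rough sanity check on that ~1.4 billion year figure, here's a back-of-the-envelope version of the same calculation. It's a minimal sketch: the ~80,000-light-year distance and the ~17 km/s Voyager-1-class speed are round numbers I'm assuming here, not the exact inputs from the linked spreadsheet.

```python
# Back-of-the-envelope: galaxy-crossing time at the speed of an existing
# fast spacecraft. Both inputs are rough assumptions, not exact figures
# from the spreadsheet linked above.
KM_PER_LIGHT_YEAR = 9.461e12
SECONDS_PER_YEAR = 3.156e7

distance_ly = 80_000   # assumed distance to the outer edge of the Milky Way
speed_km_s = 17        # roughly Voyager 1's speed leaving the solar system

travel_years = distance_ly * KM_PER_LIGHT_YEAR / speed_km_s / SECONDS_PER_YEAR
print(f"{travel_years:.1e} years")  # ~1.4e9: on the order of a billion years
```

Even at probe speeds several times faster, the crossing time stays tiny next to the billions of years on either side of it in the chart.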

That's crazy! According to me, there's a decent chance that we live at the very beginning of the tiny sliver of time during which the galaxy goes from nearly lifeless to largely populated. That out of a staggering number of persons who will ever exist, we're among the first. And that out of hundreds of billions of stars in our galaxy, ours will produce the beings that fill it.

I know what you're thinking: "The odds that we could live in such a significant time seem infinitesimal; the odds that Holden is having delusions of grandeur (on behalf of all of Earth, but still) seem far higher."5

But:

The "conservative" view

Let's say you agree with me about where humanity could eventually be headed - that we will eventually have the technology to create robust, stable settlements throughout our galaxy and beyond. But you think it will take far longer than I'm saying.

A key part of my view (which I'll write about more later) is that within this century, we could develop advanced enough AI to start a productivity explosion. Say you don't believe that.

  • You think I'm underrating the fundamental limits of AI systems to date.
  • You think we will need an enormous number of new scientific breakthroughs to build AIs that truly reason as effectively as humans.
  • And even once we do, expanding throughout the galaxy will be a longer road still.

You don't think any of this is happening this century - you think, instead, that it will take something like 500 years. That's 5-10x the time that has passed since we started building computers. It's more time than has passed since Isaac Newton made the first credible attempt at laws of physics. It's about as much time as has passed since the very start of the Scientific Revolution.

Actually, no, let's go even more conservative. You think our economic and scientific progress will stagnate. Today's civilizations will crumble, and many more civilizations will fall and rise. Sure, we'll eventually get the ability to expand throughout the galaxy. But it will take 100,000 years. That's 10x the amount of time that has passed since human civilization began in the Levant.

Here's your version of the timeline:

[Chart: the "conservative" timeline, visually identical to the first]

The difference between your timeline and mine isn't even a pixel, so it doesn't show up on the chart. In the scheme of things, this "conservative" view and my view are the same.

It's true that the "conservative" view doesn't have the same urgency for our generation in particular. But it still places us among a tiny proportion of people in an incredibly significant time period. And it still raises questions of whether the things we do to make the world better - even if they only have a tiny flow-through to the world 100,000 years from now - could be amplified to a galactic-historical-outlier degree.

The skeptical view

The "skeptical view" would essentially be that humanity (or some descendant of humanity, including a digital one) will never spread throughout the galaxy. There are many reasons it might not:

  • Maybe something about space travel - and/or setting up mining robots, solar panels, etc. on other planets - is effectively impossible such that even another 100,000 years of human civilization won't reach that point.6
  • Or perhaps for some reason, it will be technologically feasible, but it won't happen (because nobody wants to do it, because those who don't want it block those who do, etc.)
  • Maybe it's possible to expand throughout the galaxy, but not possible to maintain a presence on many planets for billions of years, for some reason.
  • Maybe humanity is destined to destroy itself before it reaches this stage.
    • But note that if the way we destroy ourselves is via misaligned AI,7 it would be possible for AI to build its own technology and spread throughout the galaxy, which still seems in line with the spirit of the above sections. In fact, it highlights that how we handle AI this century could have ramifications for many billions of years. So humanity would have to go extinct in some way that leaves no other intelligent life (or intelligent machines) behind.
  • Maybe an extraterrestrial species will spread throughout the galaxy before we do (or around the same time).
    • However, note that this doesn't seem to have happened in ~13.77 billion years so far since the universe began, and according to the above sections, there's only about 1.5 billion years left for it to happen before we spread throughout the galaxy.
  • Maybe some extraterrestrial species already effectively has spread throughout our galaxy, and for some reason we just don't see them. Maybe they are hiding their presence deliberately, for one reason or another, while being ready to stop us from spreading too far.
    • This would imply that they are choosing not to mine energy from any of the stars we can see, at least not in a way that we could see it. That would, in turn, imply that they're abstaining from mining a very large amount of energy that they could use to do whatever it is they want to do,8 including defend themselves against species like ours.
  • Maybe this is all a dream. Or a simulation.
  • Maybe something else I'm not thinking of.

That's a fair number of possibilities, though many seem quite "wild" in their own way. Collectively, I'd say they add up to more than 50% probability ... but I would feel very weird claiming they're collectively overwhelmingly likely.

Ultimately, it's very hard for me to see a case against thinking something like this is at least reasonably likely: "We will eventually create robust, stable settlements throughout our galaxy and beyond." It seems like saying "no way" to that statement would itself require "wild" confidence in something about the limits of technology, and/or long-run choices people will make, and/or the inevitability of human extinction, and/or something about aliens or simulations.

I imagine this claim will be intuitive to many readers, but not all. Defending it in depth is not on my agenda at the moment, but I'll rethink that if I get enough demand.

Why all possible views are wild: the Fermi paradox

I'm claiming that it would be "wild" to think we're basically assured of never spreading throughout the galaxy, but also that it's "wild" to think that we have a decent chance of spreading throughout the galaxy.

In other words, I'm calling every possible belief on this topic "wild." That's because I think we're in a wild situation.

Here are some alternative situations we could have found ourselves in, that I wouldn't consider so wild:

  • We could live in a mostly-populated galaxy, whether by our species or by a number of extraterrestrial species. We would be in some densely populated region of space, surrounded by populated planets. Perhaps we would read up on the history of our civilization. We would know (from history and from a lack of empty stars) that we weren't unusually early life-forms with unusual opportunities ahead.
  • We could live in a world where the kind of technologies I've been discussing didn't seem like they'd ever be possible. We wouldn't have any hope of doing space travel, or successfully studying our own brains or building our own computers. Perhaps we could somehow detect life on other planets, but if we did, we'd see them having an equal lack of that sort of technology.

But space expansion seems feasible, and our galaxy is empty. These two things seem in tension. A similar tension - the question of why we see no signs of extraterrestrials, despite the galaxy having so many possible stars they could emerge from - is often discussed under the heading of the Fermi Paradox.

Wikipedia has a list of possible resolutions of the Fermi paradox. Many correspond to the skeptical view possibilities I list above. Some seem less relevant to this piece. (For example, there are various reasons extraterrestrials might be present but not detected. But I think any world in which extraterrestrials don't prevent our species from galaxy-scale expansion ends up "wild," even if the extraterrestrials are there.)

My current sense is that the best analysis of the Fermi Paradox available today favors the explanation that intelligent life is extremely rare: something about the appearance of life in the first place, or the evolution of brains, is so unlikely that it hasn't happened in many (or any) other parts of the galaxy.9

That would imply that the hardest, most unlikely steps on the road to galaxy-scale expansion are the steps our species has already taken. And that, in turn, implies that we live in a strange time: extremely early in the history of an extremely unusual star.

If we started finding signs of intelligent life elsewhere in the galaxy, I'd consider that a big update away from my current "wild" view. It would imply that whatever has stopped other species from galaxy-wide expansion will also stop us.

This pale blue dot could be an awfully big deal

Describing Earth as a tiny dot in a photo from space, Ann Druyan and Carl Sagan wrote:

The Earth is a very small stage in a vast cosmic arena. Think of the rivers of blood spilled by all those generals and emperors so that, in glory and triumph, they could become the momentary masters of a fraction of a dot ... Our posturings, our imagined self-importance, the delusion that we have some privileged position in the Universe, are challenged by this point of pale light ... It has been said that astronomy is a humbling and character-building experience. There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world.

This is a somewhat common sentiment - that when you pull back and think of our lives in the context of billions of years and billions of stars, you see how insignificant all the things we care about today really are.

But here I'm making the opposite point.

It looks for all the world as though our "tiny dot" has a real shot at being the origin of a galaxy-scale civilization. It seems absurd, even delusional, to believe in this possibility. But given our observations, it seems equally strange to dismiss it.

And if that's right, the choices made in the next 100,000 years - or even this century - could determine whether that galaxy-scale civilization comes to exist, and what values it has, across billions of stars and billions of years to come.

So when I look up at the vast expanse of space, I don't think to myself, "Ah, in the end none of this matters." I think: "Well, some of what we do probably doesn't matter. But some of what we do might matter more than anything ever will again. ...It would be really good if we could keep our eye on the ball. ...[gulp]"


Use this feedback form if you have comments/suggestions you want me to see, or if you're up for giving some quick feedback about this post (which I greatly appreciate!) 

Comments (39)
Ruby

Curated. There's an argument which goes something like this: there's a strong prior against us living in an unprecedentedly important time for the universe, and anyone who thinks we do is self-aggrandizing. This post lives up to its title and makes a clear argument for why thinking this time is important is no crazier than thinking it isn't. This has helped me personally with my out-there beliefs.

I feel like this post is the best reference I know for explaining possibly the highest-order bit of what drives most of my life-plans and daily actions, and I think the same is true for many other people working on existential and catastrophic risk stuff. I have a lot of specific beliefs about the future, but the core point of "look, we are clearly outside of the regime where you can simply apply the absurdity heuristic to stuff, something wild is going to happen, and many of those outcomes seem really important to bring about or to avoid" feels like one of the most central ones to my worldview.

I really appreciate this post for being both very approachable in covering this point, and also very charitable to its critics. It feels like a particularly good post to send to people who haven't been super immersed in all the jargon on LW and in the broader Rationality/EA/X-Risk/Longtermist community, and even if they end up disagreeing with it, I expect it won't cause very strong defensive reactions. That really isn't something I want all posts to optimize for (or even anything close to the majority of posts on LW), but it is useful to have at least one instance of it for every important argument, and this post fills that niche quite well.

[anonymous]

Always beware of the spectre of anthropic reasoning though.

This is an interesting topic and I enjoyed reading your post. I'd like to nitpick one point, though, which is the "stable, galaxy-wide civilization". Specifically: I think a stable, galaxy-wide civilization would be surprising, for reasonable definitions of "stable civilization". (This might not be disagreeing with you, since you just say there's a decent chance of it, not that it's the most likely outcome.)

Before getting into why it would be surprising, a note about why it matters: if in fact the future involves an ecosystem of many unstable civilizations instead, then our present moment is less crucial (though perhaps still wild). That's because such a future would explore more possibilities for different types of civilizations, resulting in more Darwinian-like dynamics that give a result less dependent on starting conditions.

The problem for a galaxy-wide civilization is the speed of light. Assuming travel and communication are in fact limited by the speed of light (as present-day physics predicts), it will take ~100k years to send a message from one side of the Milky Way to the other. Even sending a message from here to Alpha Centauri takes ~4 years.
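(For concreteness, here's a minimal sketch of those delays. A light-speed message takes one year per light-year, so the numbers follow directly from the assumed distances: ~4.25 light-years to Alpha Centauri and ~100,000 light-years across the Milky Way.)

```python
# One-way and round-trip light-speed message delays.
# Distances are rough, commonly cited figures.
destinations_ly = {
    "Alpha Centauri": 4.25,
    "across the Milky Way": 100_000,
}

for name, distance_ly in destinations_ly.items():
    one_way = distance_ly          # years, at light speed
    round_trip = 2 * distance_ly   # years for a question plus its answer
    print(f"{name}: {one_way:,.0f} yr one-way, {round_trip:,.0f} yr round-trip")
```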

History makes it seem hard to maintain a coherent civilization with such long travel & communication delays. We've been living in a time of increasing civilizational coherency for the past few hundred years, but that could be caused by reduced travel & communication delays. In a galactic scenario, where travel & communication are limited by fundamental physics, the trend towards coherency would plausibly reverse.

I'll return to galaxy-wide coherency in a moment, but assuming it doesn't happen, that's a problem for stability of civilizations because of inter-civilizational competition. If it's possible to have a long-term stable civilization, I think it would require totalitarian control. But if your #1 priority is stability via totalitarian control, you may not do so well at competing with freer civilizations. As a contemporary example, North Korea is pretty good at maintaining a stable totalitarian regime, but it's sacrificing GDP, technological development, etc. to do that. So the civilizations in the ecosystem should not be expected to be perfectly stable.

Returning to galaxy-wide coherency, you hint at an argument for why it may be possible:

I'll also argue in a future piece that there is a chance of "value lock-in" here: whoever is running the process of space expansion might be able to determine what sorts of people are in charge of the settlements and what sorts of societal values they have, in a way that is stable for many billions of years.

It does seem at least conceivable that a sort of societal error-correction could maintain coherent values over long time-scales, and thus over long distances too, for similar reasons that we can store data digitally for long periods of time despite human memory being unstable.

But again I suspect this runs into problems with the speed of light. Assume perfect totalitarian control has not yet been achieved when galactic expansion starts. In that case, the initial phase of galactic expansion consists of frontier civilizations trying to expand as close as possible to c (the speed of light). If a particular civilization finds a way to expand at 0.3c when the next best is 0.2c, its descendants are going to dominate the galaxy, and splinter into many different civilizations themselves. This creates a Darwinian situation that selects for rapid expansion.
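(To make that selection pressure concrete, here's a minimal toy model of my own, using the 0.3c and 0.2c speeds from the example above and assuming uniform spherical expansion:)

```python
# Toy model: two civilizations expanding as spheres at different speeds.
# A constant speed edge gives a constant ~3.4x volume advantage and an
# ever-growing lead at the frontier.
fast_c, slow_c = 0.3, 0.2  # expansion speeds as fractions of light speed

volume_ratio = (fast_c / slow_c) ** 3  # sphere volume scales with radius cubed
print(f"claimed-volume ratio at any given time: {volume_ratio:.2f}x")

for years in (10_000, 100_000, 1_000_000):
    lead_ly = (fast_c - slow_c) * years  # frontier lead in light-years
    print(f"after {years:,} years, the faster expander leads by {lead_ly:,.0f} ly")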

Since the most successful frontier civilizations are selected for rapid expansion, that's their top priority. In general, there can only be one top priority, and since it's not perfect totalitarian control, they probably won't be perfectly totalitarian. Thus, this initial expansion phase seeds the galaxy with many differing civilizations descended from the frontier civilizations.

Once you have civilizational diversity throughout the galaxy, there's no obvious way that can be reversed. I can't see how it could continue from there other than as an ecosystem of civilizations. Something fundamental would need to change.

My biggest uncertainty here is about what if perfect totalitarian control is achieved before interstellar expansion. For reasons described earlier, I'd expect such a civilization to be eventually outcompeted by non-totalitarian aliens if there are any out there, but maybe there aren't any of those. Maybe the whole thing depends on how doable perfect totalitarian control is relative to interstellar expansion.

The audio is much appreciated; I ended up listening instead of reading.

I very much appreciated this write-up Holden.

Why do you believe that things will eventually stabilize? Perhaps we will always be on the verge of the next frontier, though it may not always be a spatial one. Yes, there may be periods of lock-in, but even if we are locked in to a certain religion or something for 100,000 years at a time, that still may look pretty dynamic over a long time horizon.

It seems that your claim ought to be that we will soon lock ourselves into something forever. This is a very interesting claim!

Footnotes Container

This comment is a container for our temporary "footnotes-as-comments" implementation that gives us hover-over-footnotes.

2. If we are able to create mind uploads, or detailed computer simulations of people that are as conscious as we are, it could be possible to put them in virtual environments that automatically reset, or otherwise "correct" the environment, whenever the society would otherwise change in certain ways (for example, if a certain religion became dominant or lost dominance). This could give the designers of these "virtual environments" the ability to "lock in" particular religions, rulers, etc. I'll discuss this more in a future piece.

3. I've focused on the "galaxy" somewhat arbitrarily. Spreading throughout all of the accessible universe would take a lot longer than spreading throughout the galaxy, and until we do it's still imaginable that some species from outside our galaxy will disrupt the "stable galaxy-scale civilization," but I think accounting for this correctly would add a fair amount of complexity without changing the big picture. I may address that in some future piece, though.

4. A logarithmic version doesn't look any less weird, because the distances between the "middle" milestones are tiny compared to both the stretches of time before and after these milestones. More fundamentally, I'm talking about how remarkable it is to be in the most important [small number] of years out of [big number] of years - that's best displayed using a linear axis. It's often the case that weird-looking charts look more reasonable with logarithmic axes, but in this case I think the chart looks weird because the situation is weird. Probably the least weird-looking version of this chart would have the x-axis be something like the logged distance from the year 2100, but that would be a heck of a premise for a chart - it would basically bake in my argument that this appears to be a very special time period.

5. This is exactly the kind of thought that kept me skeptical for many years of the arguments I'll be laying out in the rest of this series about the potential impacts, and timing, of advanced technologies. Grappling directly with how "wild" our situation seems to ~undeniably be has been key for me.

6. Spreading throughout the galaxy would certainly be harder if nothing like mind uploading (which I'll discuss in a future piece, and which is part of why I think future space settlements could have "value lock-in" as discussed above) can ever be done. I would find a view that "mind uploading is impossible" to be "wild" in its own way, because it implies that human brains are so special that there is simply no way, ever, to digitally replicate what they're doing. (Thanks to David Roodman for this point.)

7. That is, advanced AI that pursues objectives of its own, which aren't compatible with human existence. I'll be writing more about this idea. Existing discussions of it include the books Superintelligence, Human Compatible, Life 3.0, and The Alignment Problem. The shortest, most accessible presentation I know of is The case for taking AI seriously as a threat to humanity (Vox article by Kelsey Piper). This report on existential risk from power-seeking AI, by Open Philanthropy's Joe Carlsmith, lays out a detailed set of premises that would collectively imply the problem is a serious one.

8. Thanks to Carl Shulman for this point.

I’m impressed with your footnotes container.

I think that humanity could definitively fail to colonize the galaxy this century.

I don't see how the non-renewable resources needed to reach space could last 100,000 years, given business-as-usual climate scenarios that lock in permanent failure (an uninhabitable Earth) 500-1000 years in the future.

We may be in the beginnings of a Kessler syndrome right now (meaning the cascading collisions are in progress and cannot be stopped, but the timescale is long enough that we will not be locked out of good orbits for decades to centuries), and even if we are not, an incident with any of the "small swarm sat" constellations (Starlink/Blue Origin) could lock the door on large-scale space projects for centuries to millennia, which is time we do not have.

Success requires us to master living (with reproduction) in microgravity and in a fully self-contained environment, the human-machine interfaces needed for virtual life, the large-scale engineering required to build factories on celestial bodies or in space itself, and the energy to power it all. We're nowhere near any of that, and might have a few decades or as much as a century to solve those challenges, but probably not much more. Maybe a general intelligence could create a von Neumann probe and working rocket that escapes Earth with a minimum of resources in terms of time, manufacturing capacity, and material; but the minimum will never be zero, and increasing constraints are certain to increase the difficulty - possibly to the point of impossibility.

A few errors or incidents (massive solar flare, nuclear war, large accident in space, climate catastrophe, etc.), or even just failure to reach certain technical milestones in time, could mean that at a certain point (possibly today?), the spare capacity for this sort of massive-scale investment of non-renewables - with limited to no return for the humans living on Earth making the investment decision - simply won't be available.

I don't think we've lost the race against the clock yet, but it looks to me like there absolutely is one in progress, and while people alive today may not colonize the galaxy, they may find themselves in a position to definitively say that the human race and its robots will not.

[anonymous]

Regarding Kessler: I understand that's just science-press sensationalism. One method of dealing with it that the math checks out on is ground-based laser brooms: high-powered lasers would use photon pressure to deorbit each piece of debris, or at least enough debris to make spaceflight feasible. There's a paper study on it if you are interested. Note also that over a 100,000-year period, most Kessler debris will not be in a stable orbit. Small pieces of debris have a high surface-area-to-volume ratio and deorbit quickly. Large pieces are by definition rare, because humans have not launched very much mass into orbit, and we can evade those pieces or deorbit them with the above-mentioned lasers.

Humans as they are now wouldn't solve this problem. They would be heavily edited and modified, maybe to the point that no living cells are used anywhere. This appears feasible - and again, the relevant "now" is the 100,000-year now, not the "will the FDA approve a brain implant in 10 years" now.

As for the "fossil fuel trap," that appears to be more sensationalism; the math hasn't checked out on it since 2018. Now that renewables are outright cheaper than fossil fuels economically, the embodied-energy ROI must be highly positive (or they could not be outright cheaper, unless you believe the equipment manufacturers have a magical source of free energy). I can link sources on this as well. Shortages of lithium, rare earths, and copper turn out to be more sensationalism: alternatives are now available on the market in large quantities (sodium-ion batteries, rare-earth-free motors, aluminum wiring and motor windings). The alternatives are not quite as good, of course, but they are close in performance.


You're missing the crux here - say a substantial part of humanity dies and we lose most knowledge of, and access to, the technologies we currently use to extract fossil fuels. This creates a "missing stair" for the next group of humans populating the Earth.

Our progress:

Burning wood, plants and poo -> burning of fossil fuels -> nuclear and renewables and whatever.

If fossil fuels cannot be extracted by a society powered by wood (lol):

Burning wood, plants and poo -> (how to use wood-burning machines to extract oil from beneath the ocean floor???) -> still burning wood, plants and poo forever.

They would have no way to climb the "energy staircase."

(Edits: clarity)

[-][anonymous]10

That's not even the correct staircase, though. It was heating fires -> wind/water mills -> steam engines -> internal combustion engines. But we still use hydroelectric power to produce 17% of all electricity used on Earth.

In a hypothetical world with zero fossil fuels in concentrated, easily combusted form, the tech tree would have been:

wind/water powering factories near rivers -> electricity -> well-positioned factories powered by remote wind/water. Cities would need to be denser and to use electric buses, trolleys, and elevators for all transport. Most long-distance transport would have to be by train, with all the major links using overhead electric wiring.

The industrial revolution might have taken longer, but the outcome would have been the same; and obviously, once that civilization figured out efficient windmills, effective battery chemistries, solar PV, and the advanced elements of electrical engineering, it would have a growth rate similar to ours.

You’re focused more on technology and less on fuel sources.

Given what goes into constructing a modern windmill, I don't see it being viably done with a wood-burning steam engine. Consider all of the materials that go into making a pencil and what parts of the world they come from, then multiply that by at least 1000.

[-][anonymous]10

17 percent of total electricity is still a lot of energy. You aren't taking the question seriously if you assume someone would make a pencil the same way in a world without fossil fuels (and, implicitly, with the same problems with nuclear that we have now).

Focusing on the technology lets you develop a gears-level model and predict how industrial and supply chains could adapt to scarcer energy, with little of it in portable forms.

I’m not sure what the 17 percent of total electricity figure is related to.

I’m assuming that building a wind turbine would be a lot more difficult than building a pencil.

Imagine it's 1783, but all the coal, oil, natural gas and rare-earth metals on Earth exist only in the places where they're found now, in 2021.

How do you build something like the Deepwater Horizon using 1783 technology?

How do you build the Smoky Hills Wind Farm using 1783 technology?

How do you build a lithium ion battery using 1783 technology?

How do you then build Chicago Pile-1 using 1783 technology?

And, yes, you have to think about the whole supply chain. We use fossil-fuel-burning machines to move parts around, to log, etc. You can log a bit and move the trees down rivers; then those trees are gone, and what do you do?

The problem is there's only so much energy concentrated in wood, and it would be the most energy-dense material available. You'd burn it all and then you'd be done. The population would ultimately be limited by the amount of energy available to us, and there would be nothing we could do about it.

[anonymous]

The percentage of global electricity provided by hydroelectric power.

With 1783 technology you obviously don't need to build the things you mentioned. Your needs are much smaller: textiles and driving machinery. You have a vastly smaller population and smaller cities, so wood is sufficient for heating and metal forging, as it was in the real 1783.

You cannot grow as fast, but by 1783 you have developed and are using the critical technologies that changed everything: the scientific method and the printing press. The printing press means that as people tinker and find ways to make progress despite the obstacles, many copies of what they learn can be printed. And the scientific method lets you filter for knowledge more likely to be useful.

Getting to the Chicago Pile will probably take longer without fossil fuels, but the intermediate steps can mostly be done with hydroelectric power. Wind might be used to pump water backwards, storing it for later use. Populations would probably have to be smaller, trains used everywhere; fields with tractors would probably use long cables or overhead wires - basically an extension cord. It's more difficult to make steel without much or any coal; maybe charcoal and electric furnaces are used. Maybe more use of aluminum.

I think you're making a great case for optimism. Based on your last line, I don't think our positions are too far apart.

Laser brooms on the ground are a heavier infrastructure investment than just the rocket, and they haven't been built yet. Rockets with no brooms are cheaper and easier. So needing the broom raises the threshold; perhaps the raised threshold is still within reach, but at some theoretical point it will not be.

The fossil fuel comment was more in the direction of 'if we insist on burning everything currently in the ground, the runaway greenhouse effect is lethal to the species at 500-1000 year timelines'.

I assert that we could screw ourselves permanently, in this century, by digging ourselves into a hole (through inadequate investment of non-renewable resources like helium, or failure to solve engineering challenges) that we cannot exit before we wreck our habitat (there are plenty of non-CO2 scenarios for this). I'm not sure how much pessimism is warranted - I certainly don't think failure is inevitable - but I absolutely do think it's on the table.

[anonymous]

Well, the fossil fuel scenario has the issue that as the Earth gets hotter, it would be more and more expensive, and more obviously a bad idea, to extract and burn more fossil fuels. Moreover, more and more of the Earth would be uninhabitable, and also difficult to drill or mine for hydrocarbons.

The other scenarios: we are very close, I think far closer than most realize, to self-replicating machinery. All the tasks involved in manufacturing machinery are vulnerable to already-demonstrated machine learning algorithms; it is just a matter of scale and iterative improvement. (By self-replicating I mean robots made of the same parts as robots now, with gears and motors and circuit boards and wiring bundles, where all the tasks involved in manufacturing the parts and assembling them are done by other such robots. This is likely possible with a few hierarchical general agents similar to the recent DeepMind paper: https://deepmind.com/blog/article/generally-capable-agents-emerge-from-open-ended-play)

By hierarchical I mean using an agent made of several subagents, each trained with a state-of-the-art method like this one or better, where each subagent is specialized - like a perception agent and then a planning agent.

Self-replicating machinery would trivially blow past any of these traps and make them moot. The problems we would have then are ones we might not have thought of.

I found the last bit very compelling. David Deutsch's The Beginning of Infinity is the only place where I have seen a similar sort of anthropocentrism regarding the future discussed. Could anyone point me in the right direction to more on the topic (or keywords I should be searching for)?   

Update: Since I think this series of posts should be considered together, I've just written a new review of the entire "The Most Important Century" series. So I'm effectively retracting the review I wrote of the All Possible Views About Humanity's Future Are Wild post below, which is now redundant, and in which I think I also confused some of the content between this post and This Can't Go On (another post in the same series).

I found this post early in my reading on AI risk and EA topics. It persuaded me that the world as we know it is completely unsustainable - even though it feels normal and "business as usual" to us. This post shaped my views, specifically making me more sympathetic to the "hinge of history" hypothesis and helping me see more clearly the tremendous impact that transformative AI could have on the world/future.

The theory that makes the most sense to me is the idea that we are in fact in a galactic preschool.

GalacticPreschoolTheory.com

[anonymous]

Note that this "immortal robots will take over the galaxy and it's inevitable" line of thinking is extremely recent. Arguably it has been less than 10 years since neural networks became more than toys - since it became clear that it's definitely possible to exceed human intelligence in every dimension with machine intelligence, once we get the details correct.

My point is that we have constructed this worldview from a large pile of assumptions - assumptions that each seem solid today but may simply be wrong.

  1. That the laws of physics are as we know them - just a few significant figures and some missing theory integrations away from being complete
  2. That the laws of physics are the same everywhere
  3. That other stars and galaxies are even real
  4. That alien species have to expand per the laws of nature as we understand them, that there is no way to cheat or exploit to get a better outcome than endless Dyson swarms
  5. That there are no hidden rules that would end everything, like exceeding the complexity the universe allows causing a 'chunk reset' in an area of space.

And these are just a few of the many low-probability possibilities. The point is: when the outcome of your estimate seems to violate all prior history, it calls into question the assumptions you don't have direct evidence for.

No. Not to take away Holden's thunder, but von Neumann already postulated the possibility of self-replicating probes colonizing the galaxy. Indeed (I might be wrong here), it is exactly this kind of thinking that drove Fermi to pose his famous question.

Most of this stuff was fairly common in extropian/hard-sci-fi Usenet circles in the '90s, I believe.

[anonymous]

My point was that in von Neumann's time there was plenty of reason to think such probes might never be possible, or might be far in the future. The exponential nature of certain types of improvement wasn't yet known.

We can't build von Neumann probes in the real world - though we can in the digital world.
What kind of significant (!) new information have we obtained about the feasibility of galaxy-wide colonization through von Neumann probes?

[anonymous]

We have made computers with billions of times as much compute and memory as those from the 1960s. Previously intractable problems - like machine perception, and machine planning to resolve arbitrary failures - only really began to be solved with neural networks around 2014 or so.

Previously they were theoretical. Now it's just a matter of money and iterations.

See, previously, a subtask for a von Neumann machine like "mine rocks from the asteroid you landed on and haul them to the smelter" could fail in a near-infinite number of ways. And with previous robotics, each failure had to be explicitly handled by a programmer who anticipated that specific failure, or by a worker on site to resolve it.

With machine planning algorithms, you can have the machine estimate the action with the best probability (better than random chance, ideally close to the true global maximum) of scoring well on a heuristic of success. And you need neural networks to even perceive the asteroid surface in arbitrary scenarios and lighting conditions. And you need realistic computer graphics and simulated physics to even model what the robot will see.
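(A minimal, hypothetical sketch of the loop being described - every name below is invented for illustration, and the random score() stands in for the learned success heuristic this comment imagines:)

```python
import random

# Toy sketch of heuristic-guided action selection: consider candidate
# actions, score each with a stand-in "probability of success" heuristic,
# and act on the best. In the scenario described above, score() would be
# a trained neural network consuming the robot's perception of the scene.
CANDIDATE_ACTIONS = ["approach_rock", "grip_rock", "haul_to_smelter", "retry_grip"]

def score(action: str, observation: dict) -> float:
    """Stand-in for a learned heuristic estimating success probability."""
    return random.random()  # hypothetical placeholder, not a real model

def choose_action(observation: dict) -> str:
    # Pick the action the heuristic rates highest; a real system would
    # also search over multi-step plans, not just single actions.
    return max(CANDIDATE_ACTIONS, key=lambda a: score(a, observation))

print(choose_action({"lighting": "harsh", "surface": "regolith"}))
```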

It's still many generations of technology away, but we can now specify in concrete terms how to do this, and how we could iterate to a working system if we wanted to.
