Rationalism before the Sequences

I'm here to tell you a story about what it was like to be a rationalist decades before the Sequences and the formation of the modern rationalist community.  It is not the only story that could be told, but it is one that runs parallel to Eliezer Yudkowsky's and has important connections to how his ideas developed.

My goal in writing this essay is to give the LW community a sense of the prehistory of their movement.  It is not intended to be "where Eliezer got his ideas"; that would be stupidly reductive.  I aim more to exhibit where the drive and spirit of the Yudkowskian reform came from, and the interesting ways in which Eliezer's formative experiences were not unique.

My standing to write this essay begins with the fact that I am roughly 20 years older than Eliezer and read many of his sources before he was old enough to read.  I was acquainted with him over an email list before he wrote the Sequences, though I somehow managed to forget those interactions afterwards and only rediscovered them while researching for this essay. In 2005 he even sent me a book manuscript to review that covered some of the Sequences topics.

My reaction on reading "The Twelve Virtues of Rationality" a few years later was dual. It was a different kind of writing than the book manuscript - stronger, more individual, taking some serious risks.  On the one hand, I was deeply impressed by its clarity and courage.  On the other hand, much of it seemed very familiar, full of hints and callbacks and allusions to books I knew very well.

Today it is probably more difficult to back-read Eliezer's sources than it was in 2006, because the body of more recent work within his reformation of rationalism tends to get in the way.  I'm going to attempt to draw aside that veil by talking about four specific topics: General Semantics, analytic philosophy, science fiction, and Zen Buddhism.

Before I get to those specifics, I want to try to convey that sense of what it was like.  I was a bright geeky kid in the 1960s and 1970s, immersed in a lot of obscure topics often with an implicit common theme: intelligence can save us!  Learning how to think more clearly can make us better! But at the beginning I was groping as if in a dense fog, unclear about how to turn that belief into actionable advice.

Sometimes I would get a flash of light through the fog, or at least a sense that there were other people on the same lonely quest. A bit of that sense sometimes drifted over USENET, an early precursor of today's Internet fora. More often than not, though, the clue would be fictional; somebody's imagination about what it would be like to increase intelligence, to burn away error and think more clearly.

When I found non-fiction sources on rationality and intelligence increase I devoured them.  Alas, most were useless junk. But in a few places I found gold.  Not by coincidence, the places I found real value were sources Eliezer would later draw on. I'm not guessing about this, I was able to confirm it first from Eliezer's explicit reports of what influenced him and then via an email conversation.

Eliezer and I were not unique.  We know directly of a few others with experiences like ours. There were likely dozens of others we didn't know - possibly hundreds - on parallel paths, all hungrily seeking clarity of thought, all finding largely overlapping subsets of clues and techniques because there simply wasn't that much out there to be mined.

One piece of evidence for this parallelism besides Eliezer's reports is that I bounced a draft of this essay off Nancy Lebovitz, a former LW moderator who I've known personally since the 1970s.  Her instant reaction?  "Full of stuff I knew already."

Around the time Nancy and I first met, some years before Eliezer Yudkowsky was born, my maternal grandfather gave me a book called "People in Quandaries", an introduction to General Semantics. I can't confirm it, because I didn't know enough to ask the question while he was alive, but I strongly suspect that granddad was a member of one of the early GS study groups, probably the same one that included Robert Heinlein (they were near neighbors in Southern California in the early 1940s).

General Semantics is going to be a big part of my story.  Twelve Virtues speaks of "carrying your map through to reflecting the territory"; this is a clear, obviously intentional callback to a central GS maxim that runs "The map is not the territory; the word is not the thing defined."

I'm not going to give a primer on GS here.  I am going to affirm that it rocked my world, and if the clue in Twelve Virtues weren't enough Eliezer has reported in no uncertain terms that it rocked his too.  It was the first time I encountered really actionable advice on the practice of rationality.

Core GS formulations like cultivating consciousness of abstracting, remembering the map/territory distinction, avoiding the verb "to be" and the is-of-identity, that the geometry of the real world is non-Euclidean, that the logic of the real world is non-Aristotelian; these were useful.  They helped.  They reduced the inefficiency of my thinking.

For the pre-Sequences rationalist, those of us stumbling around in that fog, GS was typically the most powerful single non-fictional piece of the available toolkit.  After the millennium I would find many reflections of it in the Sequences.

This is not, however, meant to imply that GS is some kind of supernal lost wisdom that all rationalists should go back and study.  Alfred Korzybski, the founder of General Semantics, was a man of his time, and some of the ideas he formulated in the 1930s have not aged well. Sadly, he was an absolutely terrible writer; reading "Science and Sanity", his magnum opus, is like an endless slog through mud with occasional flashes of world-upending brilliance.

If Eliezer had done nothing else but give GS concepts a better presentation, that would have been a great deal. Indeed, before I read the Sequences I thought giving GS a better finish for the modern reader was something I might have to do myself someday - but Eliezer did most of that, and a good deal more besides, folding in a lot of sound thinking that was unavailable in Korzybski's day.

When I said that Eliezer's sources are probably more difficult to back-read today than they were in 2006, I had GS specifically in mind. Yudkowskian-reform rationalism has since developed a very different language for the large areas where it overlaps GS's concerns.  I sometimes find myself in the position of a native Greek speaker hunting for equivalents in that new-fangled Latin; they are usually present, but it can take some effort to bridge the gap.

Next I'm going to talk about some more nonfiction that might have had that kind of importance if a larger subset of aspiring rationalists had known enough about it.  And that is the analytic tradition in philosophy.

I asked Eliezer about this and learned that he himself never read any of what I would consider core texts: C.S. Peirce's epoch-making 1878 paper "How To Make Our Ideas Clear", for example, or W.V. Quine's "Two Dogmas of Empiricism". Eliezer got their ideas through secondary sources.  How deeply pre-Sequences rationalists drew directly from this well seems to be much more variable than the more consistent theme of early General Semantics exposure.

However: even if filtered through secondary sources, tropes originating in analytic philosophy have ended up being central in every formulated version of rationalism since 1900, including General Semantics and Yudkowskian-reform rationalism. A notable one is the program of reducing philosophical questions to problems in language analysis, seeking some kind of flaw in the map rather than mysterianizing the territory.  Another is the definition of "truth" as predictive power over some range of future observables.

But here I want to focus on a subtler point about origins rather than ends: these ideas were in the air around every aspiring rationalist of the last century, certainly including both myself and the younger Eliezer.  Glimpses of light through the fog...

This is where I must insert a grumble, one that I hope is instructive about what it was like before the Sequences.  I'm using the term "rationalist" retrospectively, but those among us who were seeking a way forward and literate in formal philosophy didn't tend to use that term of ourselves at the time.  In fact, I specifically avoided it, and I don't believe I was alone in this.

Here's why. In the history of philosophy, a "rationalist" is one who asserts the superiority of a-priori deductive reasoning over grubby induction from mere material facts. The opposing term is "empiricist", and in fact Yudkowskian-reform "rationalists" are, in strictly correct terminology, skeptical empiricists.  

Alas, that ship has long since sailed.  We're stuck with "rationalist" as a social label now; the success of the Yudkowskian reform has nailed that down. But it's worth remembering that in this case not only is our map not the territory, it's not even immediately consistent with other equally valid maps.

Now we get to the fun part, where I talk about science fiction.

SF author Greg Bear probably closed the book on attempts to define science fiction as a genre when he described it in 1994 as "the branch of fantastic literature which affirms the rational knowability of the universe". It shouldn't be surprising, then, that ever since the Campbellian Revolution in 1939 invented modern science fiction there has been an important strain in it of fascination with rationalist self-improvement.

I'm not talking about transhumanism here.  The idea that we might, say, upload to machines with vastly greater computational capacity is not one that fed pre-Yudkowskian rationalism, because it wasn't actionable.  No; I'm pointing at more attainable fictions about learning to think better, or discovering a key that unlocks a higher level of intelligence and rationality in ourselves.  "Ultrahumanist" would be a better term for this, and I'll use it in the rest of this essay.

I'm going to describe one such work in some detail, because (a) wearing my SF-historian hat I consider it a central exemplar of the ultrahumanist subgenre, and (b) I know it had a large personal impact on me.

"Gulf", by Robert A. Heinlein, published in the October–November 1949 Astounding Science Fiction.  A spy on a mission to thwart an evil conspiracy stumbles over a benign one - people who call themselves "Homo Novis" and have cultivated techniques of rationality and intelligence increase, including an invented language that promotes speed and precision of thought.  He is recruited by them, and a key part of his training involves learning the language.

At the end of the story he dies while saving the world, but the ostensible plot is not really the point.  It's an excuse for Heinlein to play with some ideas, clearly derived in part from General Semantics, about what a "better" human being might look and act like - including, crucially, the moral and ethical dimension.  One of the tests the protagonist doesn't know he's passing is when he successfully cooperates in gentling a horse.

The most important traits of the new humans are that (a) they prize rationality under all circumstances - to be accepted by them you have to retain clear thinking and problem-solving capability even when you're stressed, hungry, tired, cold, or in combat; and (b) they're not some kind of mutation or artificial superrace. They are human beings who have chosen to pool their efforts to make themselves more reliably intelligent.

There was a lot of this sort of GS-inspired ultrahumanism going around in Golden Age SF between 1940 and 1960.  Other proto-rationalists may have been more energized by other stories in that current.  Eliezer remembers and acknowledges "Gulf" as an influence but reports having been more excited by "The World of Null-A" (1946). Isaac Asimov's "Foundation" novels (1942-1953) were important to him as well even though there was not much actionable in them about rationality at the individual level.

As for me, "Gulf" changed the direction of my life when I read it sometime around 1971.  Perhaps I would have found that direction anyway, but...teenage me wanted to be homo novis. More, I wanted to deserve to be homo novis.  When my grandfather gave me that General Semantics book later in the same decade, I was ready.

That kind of imaginative fuel was tremendously important, because we didn't have a community.  We didn't have a shared system. We didn't have hubs like Less Wrong and Slate Star Codex. Each of us had to bootstrap our own rationality technique out of pieces like General Semantics, philosophical pragmatism, the earliest most primitive research on cognitive biases, microeconomics, and the first stirrings of what became evolutionary psych.

Those things gave us the materials. Science fiction gave us the dream, the desire that it took to support the effort of putting it together and finding rational discipline in ourselves.

Last I'm going to touch on Zen Buddhism. Eliezer likes to play with the devices of Zen rhetoric; this has been a feature of his writing since Twelve Virtues.  I understood why immediately, because that attraction was obviously driven by something I myself had discovered decades before in trying to construct my own rationalist technique.

Buddhism is a huge, complex cluster of religions. One of its core aims is the rejection of illusions about how the universe is. This has led to a rediscovery, at several points in its development, of systematic theories aimed at stripping away attachments and illusions. And not just that; also meditative practices intended to shift the practitioner into a mental stance that supports less wrongness.

If you pursue this sort of thing for more than three thousand years, as Buddhists have been doing, you're likely to find some techniques that actually do help you pay better attention to reality - even if it is difficult to dig them out of the surrounding religious encrustations afterwards.

One of the most recent periods of such rediscovery followed the 18th-century revival of Japanese Buddhism by Hakuin Ekaku. There's a fascinating story to be told about how Euro-American culture imported Zen in the early 20th century and refined it even further in the direction Hakuin had taken it, a direction scholars of Buddhism call "ultimatism".  I'm not going to reprise that story here, just indicate one important result of it that can inform a rationalist practice.

Here's the thing that Eliezer and I and other 20th-century rationalists noticed: Zen rhetoric and meditation program the brain for epistemic skepticism, for a rejection of language-driven attachments, for not just knowing that the map is not the territory but feeling that disjunction.

Somehow, Zen rhetoric's ability to program brains for epistemic skepticism survives not just disconnection from Japanese culture and Buddhist religious claims, but translation out of its original language into English. This is remarkable - and, if you're seeking tools to loosen the grip of preconceptions and biases on your thinking, very useful.

Alfred Korzybski himself noticed this almost as soon as good primary sources on Zen were available in the West, back in the 1930s; early General Semantics speaks of "silence on the objective level" in a very Zen-like way.

No, I'm not saying we all need to become students of Zen any more than I think we all need to go back and immerse ourselves in GS. But co-opting some of Zen's language and techniques is something that Eliezer definitely did, that I did, and that other rationalists before the Yudkowskian reformation tended to find their way to as well.

If you think about all these things in combination - GS, analytic philosophy, Golden Age SF, Zen Buddhism - I think the roots of the Yudkowskian reformation become much easier to understand.  Eliezer's quest and the materials he assembled were not unique.  His special gift was the same ambition as Alfred Korzybski's: to form from what he had learned a teachable system for becoming less wrong. And, of course, the intellectual firepower to carry that through - if not perfectly, at least well enough to make a huge difference.

If nothing else, I hope this essay will leave you feeling grateful that you no longer have to do a decades-long bootstrapping process the way Eliezer and Nancy and I and others like us had to in the before times.  I doubt any of us are sorry we put in the effort, but being able to shortcut a lot of it is a good thing.

Some of you, recognizing my name, will know that I ended up changing the world in my own way a few years before Eliezer began to write the Sequences.  That this ensued after long struggle to develop a rationalist practice is not coincidence; if you improve your thinking hard enough over enough time I suspect it's difficult to avoid eventually getting out in front of people who aren't doing that.

That's what Eliezer did, too. In the long run, I rather hope that his reform movement will turn out to have been more important than mine.

Selected sources follow.  The fiction list could have been a lot longer, but I filtered pretty strongly for works that somehow addressed useful models of individual rationality training. Marked with * are those Eliezer explicitly reports he has read.

Huikai, Wumen: "The Gateless Barrier" (1228)

Peirce, Charles Sanders: "How To Make Our Ideas Clear" (1878)

Korzybski, Alfred: "Science and Sanity" (1933)

Chase, Stuart: "The Tyranny of Words" (1938)

Hayakawa, S. I.: "Language in Thought and Action" (1939) *

Russell, Bertrand: "A History of Western Philosophy" (1945)

Orwell, George: "Politics and the English Language" (1946) *

Johnson, Wendell: "People in Quandaries: The Semantics of Personal Adjustment" (1946)

Van Vogt, A. E.: "The World of Null-A" (1946) *

Heinlein, Robert Anson: "Gulf" (1949) *

Quine, Willard Van Orman: "Two Dogmas of Empiricism" (1951)

Heinlein, Robert Anson: "The Moon Is A Harsh Mistress" (1966) *

Williams, George: "Adaptation and Natural Selection" (1966) *

Pirsig, Robert M.: "Zen and the Art of Motorcycle Maintenance" (1974) *

Benares, Camden: "Zen Without Zen Masters" (1977)

Smullyan, Raymond: "The Tao is Silent" (1977) *

Hill, Gregory & Thornley, Kerry W.: "Principia Discordia (5th ed.)" (1979) *

Hofstadter, Douglas: "Gödel, Escher, Bach: An Eternal Golden Braid" (1979) *

Feynman, Richard: "Surely You're Joking, Mr. Feynman!" (1985) *

Pearl, Judea: "Probabilistic Reasoning in Intelligent Systems" (1988) *

Stiegler, Marc: "David's Sling" (1988) *

Zindell, David: "Neverness" (1988) *

Williams, Walter John: "Aristoi" (1992) *

Tooby, John & Cosmides, Leda: "The Adapted Mind: Evolutionary Psychology and the Generation of Culture" (1992) *

Wright, Robert: "The Moral Animal" (1994) *

Jaynes, E.T.: "Probability Theory: The Logic of Science" (1995) *

The assistance of Nancy Lebovitz, Eliezer Yudkowsky, Jason Azze, and Ben Pace is gratefully acknowledged. Any errors or inadvertent misrepresentations remain entirely the author's responsibility.
 

Comments (82 total; a selection follows, some truncated)

The most important traits of the new humans are that... they prize rationality under all circumstances - to be accepted by them you have to retain clear thinking and problem-solving capability even when you're stressed, hungry, tired, cold, or in combat

Interestingly, as a LessWronger, I don't think of myself in quite this way. I think there's a key skill a rationalist should attain, which is knowing in which environments you will fail to be rational, and avoiding those environments. Knowing your limits, and using that knowledge when making plans.

One that I've dealt with, that I think is pertinent for a lot of people, is being aware of how social media can destroy my attention and leave me feeling quite socially self-conscious. Bringing them into my environment damages my ability to think.

On the one hand, becoming able to think clearly and make good decisions while using social media is valuable and for many necessary. Here are some of the ways I try to do that, in the style of the Homo Novis:

...

It would certainly be a mistake to interpret your martial art's principle of "A warrior should be able to fight well even in unfavourable combat situations" as "A warrior should always immediately charge into combat, even when that would lead to an unfavourable situation", or "There's no point in trying to manoeuvre into a favourable situation"

Great point. A few (related) examples come to mind:

  • Paul Graham's essay The Top Idea in Your Mind. "I realized recently that what one thinks about in the shower in the morning is more important than I'd thought. I knew it was a good time to have ideas. Now I'd go further: now I'd say it's hard to do a really good job on anything you don't think about in the shower."
  • Trying to figure out dinner is the worst when I'm already hungry. I still haven't reached a level of success where I'm satisfied, but I've had some success with 1) planning out meals for the next ~2 weeks, that way instead of deciding what to make for dinner, I just pick something off the list, 2) meal prepping, 3) having Meal Squares as a backup.
  • Grooming meetings vs. (I guess you can call it) asynchronous grooming. In scrum, you have meetings where ~15 people get in a ("room"), look at the tasks that need to be done, go through each of them, and try to plan each task out + address any questions about the task. With so many people + a fast pace, things can get a little chaotic, and I find it difficult to contribute much value. However, we're trying something new where tickets are assigned to people before
...
Kaj_Sotala:
I think this comment would make for a good top-level post almost as it is.

This post of mine feels closely related: https://www.lesswrong.com/posts/xhE4TriBSPywGuhqi/integrity-and-accountability-are-core-parts-of-rationality 

  • I have come to believe that people's ability to come to correct opinions about important questions is in large part a result of whether their social and monetary incentives reward them when they have accurate models in a specific domain. This means a person can have extremely good opinions in one domain of reality, because they are subject to good incentives, while having highly inaccurate models in a large variety of other domains in which their incentives are not well optimized.
  • People's rationality is much more defined by their ability to maneuver themselves into environments in which their external incentives align with their goals, than by their ability to have correct opinions while being subject to incentives they don't endorse. This is a tractable intervention and so the best people will be able to have vastly more accurate beliefs than the average person, but it means that "having accurate beliefs in one domain" doesn't straightforwardly generalize to "will have accurate beliefs in other domains".

    One is strongly predictiv
...

Hah, I was thinking of replying to say I was largely just repeating things you said in that post.

Nonetheless, thanks both Kaj and Eric, I might turn it into a little post. It's not bad to have two posts saying the same thing (slightly differently).

Eric Raymond:
Agreed.
Alex_Altair:
Similarly, for instrumental rationality, I've been trying to lean harder on putting myself in environments that induce me to be more productive, rather than working on strategies to stay productive when my environment is making that difficult.
ryan_b:
I agree with this comment. There is one point that I think we can extend usefully, which may dissolve the distinction with Homo Novis: while I agree, I also fully expect that the list of environments in which we are able to think clearly will expand over time as the art advances.

There are two areas where I think shaping the environment will fail as an alternative strategy: first, we cannot advance the art's power over a new environment without testing ourselves in that environment; second, there are tail risks to consider, which is to say we will inevitably have such environments imposed on us at some point. Consider events like car accidents, weather like tornadoes, malevolent action like a robbery, or medical issues like someone else choking or having a seizure.

I strongly expect that the ability to think clearly in extreme environments would have payoffs in less extreme environments. For example, a lot of the stress in a bad situation comes from the worry that it will turn into a worse situation; if we are confident of our ability to make good decisions in the worse situation, we should be less worried in the merely bad one, which should allow for better decisions in the merely bad one, thus making the worse situation less likely, and so on.

Also, consider the case of tail opportunities rather than tail risks; it seems like a clearly good idea to work on extending rationality to extremely good situations that also compromise clear thought. Things like: winning the lottery; getting hit on by someone you thought was out of your league; landing an interview with a much sought-after investor. In fact I feel like all of the discussion around entrepreneurship falls into this category - the whole pitch is seeking out high-risk/high-reward opportunities. The idea that basic execution becomes harder when the rewards get huge is a common trope, but if we apply the test from the quote it comes back as "avoid environments with huge upside", which clearly doesn't scale.

Tl;dr for last two comments:

  1. Know your limits.
  2. Expand your limits.

As a note on terminology, I don't think that (Yudkowskian) rationalists use the word "rationalism" to describe our worldview/practice. It's a natural modification of "rationalist", and I've seen a few people outside the rationalist community use it to refer to our worldview, but e.g. no one ever comes up to me at a party and says, "Have any thoughts about rationalism lately?" We tend to just say "rationality" or "the art of rationality".

I'd also strongly advocate that we not start using the word "rationalism" for it. Mostly this is because I share your grumble about how the word "rationalist" already has a well-defined meaning to the rest of the world, and I don't want to extend that overloading and inevitable confusion by using the word "rationalism" alongside it.

I'm tempted to try to come up with better names for our worldview, but there are actually some advantages to not having a clear proper-noun-type name. One is that everyone immediately gets the gist of what "rationalists" are about. Stereotypes aside, it's an advantage over being called "the Frobnitzists" or something else inscrutable. Another is that, as described in the virtue of the void, we don't know exactly what the name is for what we want; we're trying to move toward that which cannot be named. If we give our current best-guess a proper noun like the Debiasers or the Bayesian Conspiracy, then we might be stuck with that even after we shift to a better understanding, or worse yet, we might think we've found the ultimate answer and become stuck to it through the name.

Rob Bensinger:
I ~agree with this comment. If we do ever want a noun, I've proposed error-reductionism. Or maybe we want something more Anglophone... lessening-of-mistake-ism, or something...

One minor note is that, among the reasons I haven't looked especially hard into the origins of "verificationism"(?) as a theory of meaning, is that I do in fact - as I understand it - explicitly deny this theory.  The meaning of a statement is not the future experimental predictions that it brings about, nor isomorphic up to those predictions; all meaning about the causal universe derives from causal interactions with us, but you can have meaningful statements with no experimental consequences, for example:  "Galaxies continue to exist after the expanding universe carries them over the horizon of observation from us."  For my actual theory of meaning see the "Physics and Causality" subsequence of Highly Advanced Epistemology 101 For Beginners.

That is: among the reasons why I am not more fascinated with the antecedents of my verificationist theory of meaning is that I explicitly reject a verificationist account of meaning.

 "Galaxies continue to exist after the expanding universe carries them over the horizon of observation from us" trivially unpacks to  "If we had methods to make observations outside our light cone, we would pick up the signatures of galaxies after the expanding universe has carried them over the horizon of observation from us defined by c."

You say "Any meaningful belief has a truth-condition".  This is exactly Peirce's 1878 insight about the meaning of truth claims, expressed in slightly different language - after all, your "truth-condition" unpacks to a bundle of observables, does it not?

The standard term of art you are missing when you say "verificationist" is "predictivist".

I can grasp no way in which you are not a predictivist other than terminological quibbles, Eliezer. You can refute me by uttering a claim that you consider meaningful, e.g. having a "truth-condition", where the truth condition does not implicitly cash out as hypothetical-future observables - or, in your personal terminology, "anticipated experiences"

Amusingly, your "anticipated experiences" terminology is actually closer to the language of Peirce 1878 than the way I would normally express it, which is influenced by later philosophers in the predictivist line, notably Reichenbach.

I reiterate the galaxy example; saying that you could counterfactually make an observation by violating physical law is not the same as saying that something's meaning cashes out to anticipated experiences.  Consider the (exact) analogy between believing that galaxies exist after they go over the horizon, and that other quantum worlds go on existing after we decohere them away from us by observing ourselves being inside only one of them.  Predictivism is exactly the sort of ground on which some people have tried to claim that MWI isn't meaningful, and they're correct in that predictivism renders MWI meaningless just as it renders the claims "galaxies go on existing after we can no longer see them" meaningless.  To reply "If we had methods to make observations outside our quantum world, we could see the other quantum worlds" would be correctly rejected by them as an argument from within predictivism; it is an argument from outside predictivism, and presumes that correspondence theories of truth can be defined meaningfully by imagining an account from outside the universe of how the things that we've observed have their own causal processes generating those observations, such that having thus identified the causal processes through observation, we may speak of unobservable but fully identified variables with no observable-to-us consequences such as the continued existence of distant galaxies and other quantum worlds.

9Eric Raymond3y
It seems to me that you've been taking your model of predictivism from people who need to read some Kripke. In Peirce's predictivism, to assert that a statement is meaningful is precisely to assert that you have a truth condition for it, but that doesn't mean you necessarily have the capability to test the condition.

Consider Russell's teapot.  "A teapot orbits between Earth and Mars" is a truth claim that must unambiguously have a true or false value.  There is a truth condition on it; if you build sufficiently powerful telescopes and perform a whole-sky survey you will find it. It would be entirely silly to claim that the claim is meaningless because the telescopes don't exist.

The claim "Galaxies continue to exist when they exit our light-cone" has exactly the same status. The fact that you happen to believe the right sort of telescope not only does not exist but cannot exist is irrelevant - you could after all be mistaken in believing that sort of observation is impossible.  I think it is quite likely you are mistaken, as nonlocal realism seems the most likely escape from the bind Bell's inequalities put us in.

MWI presents a subtler problem, not like Russell's Teapot, because we haven't the faintest idea what observing another quantum world would be like.  In the case of the overly-distant galaxies, I can sketch a test condition for the claim that involves taking a superluminal jaunt 13 billion light-years thataway and checking all around me to see if the distribution of galaxies has a huge NOT THERE on the side away from Earth.  I think a predictivist would be right to ask that you supply an analogous counterfactual before the claim "other quantum worlds exist" can be said to have a meaning.

Just jaunt superquantumly to another quantum world instead of superluminally to an unobservable galaxy.  What about these two physically impossible counterfactuals is less than perfectly isomorphic?  Except for some mere ease of false-to-fact visualization inside a human imagination that finds it easier to track nonexistent imaginary Newtonian billiard balls than existent quantum clouds of amplitude, with the latter case, in reality, covering both unobservable galaxies distant in space and unobservable galaxies distant in phase space.

One big difference is that there are theoretical cracks in the lightspeed wall that don't have any go-to-another-quantum-world analog.  The Alcubierre solution to the field equations is a thing, after all. More importantly for this discussion, we can construct thought experiments about superluminal travel that have truth conditions because we know what a starfield would look like from N lightyears thataway. Quantumporting doesn't have analogues of either of those things.

But that's kind of a distraction.  The interesting question for this discussion is how, if at all, the two claims "galaxies receding outside our light cone continue to exist" and "Russell's teapot exists" are different.  I think we agree that there is a predictivist account of "teapot". 

You assert that a predictivist definition of meaning and truth value cannot sustain an account of the "galaxies" claim, and that predictivism is therefore insufficient. I, a predictivist, deny your assertion - you have smuggled in an assumption that predictivists somehow aren't allowed to assign meaning to counterfactuals that violate physical law, which I (a predictivist) am quite willing to do as long as hypothetically violating that physical law would not bar us from being able to cash out a truth claim in expected experiences.

I believe I am a predictivist who understands predictivism correctly and consistently.  I believe you are a predictivist in practice who has failed to understand predictivism in theory.

How can we investigate, confirm, or refute these claims?

6ChristianKl3y
In that case the conclusion would be that we don't know whether or not galaxies outside of the light cone exist, and whether or not they exist depends on whether the theoretical cracks actually allow faster-than-light travel. Eliezer's position seems to be that they exist whether or not faster-than-light travel is possible.  Or are you saying that in a world where a person is certain about all physical laws that exist and there's no faster-than-light travel, the other galaxies don't exist for that person, while they do exist for people with less knowledge about physics?
jdp 3y

As a fellow "back reader" of Yudkowsky, I have a handful of books to add to your recommendations:

Engines Of Creation by K. Eric Drexler

Great Mambo Chicken and The Transhuman Condition by Ed Regis

EY has cited both at one time or another as the books that 'made him a transhumanist'. His early concept of future shock levels is probably based in no small part on the structure of these two books. The Sequences themselves borrow a ton from Drexler, and you could argue that the entire 'AI risk' vs. nanotech split from the extropians represented an argument about whether AI causes nanotech or nanotech causes AI.

I'd also like to recommend a few more books that postdate The Sequences but as works of history help fill in a lot of context:

Korzybski: A Biography by Bruce Kodish

A History Of Transhumanism by Elise Bohan

Both of these are thoroughly well researched works of history that help make it clearer where LessWrong 'came from' in terms of precursors. Kodish's biography in particular is interesting because Korzybski gets astonishingly close to stating the X-Risk thesis in Manhood of Humanity:

At present I am chiefly concerned to drive home the fact that it is the great disparity between the

... (read more)
6Eric Raymond3y
Great Mambo Chicken and Engines of Creation were in my reference list for a while, until I decided to cull the list for more direct relevance to systems of training for rationality.  It was threatening to get unmanageably long otherwise.  I didn't know there was a biography of Korzybski.  Thanks!

Thank you for writing this. Having read both your writings and Eliezer's, and many of the books listed, the story is as I expected it to be, but it is good to see the history laid out.

Mod note: I moved this to frontpage despite it being a bit similar to things we've historically left on people's personal blog. Usually there are three checks I run for deciding whether to put something on the frontpage: 

  1. Is it not timeless? 
  2. Is it trying to sell you something, or persuade you, or leverage a bunch of social connections to get you to do something?  (e.g. eliciting donations usually falls in this category)
  3. Is it about community inside-baseball that makes it hard to participate in if you aren't part of the social network?

For this essay, I think the answer is "No" for basically all three (with the last one maybe being a bit true, but not really), so overall I decided to move this to the frontpage.

Heh. Come to think of it from that angle, "a bit true, but not really" would have been exactly my assessment if I were in your shoes. Thanks, I appreciate the nuanced judgment. 

This was not just informationally useful but also just plain well-written and enjoyable. I think you succeeded in communicating some of the feel. Thank you.

Thanks Eric for writing this post, I found it fascinating.

I imagine that there are are lot of lessons from General Semantics or analytic philosophy that might not have made it into rational-sphere, so if you ever find time to share some of that with us, I imagine it would be well-received.

This is great, strong upvoted!

Offtopic but I've really enjoyed your work over the years (CATB & Hacker's Dictionary from before I was a Less Wronger; Dancing With the Gods since). Glad to see you on LW, and thanks for the pointer to Heinlein's Gulf which I hadn't read, but was a solid read (though very clearly from the 1950s in its attitude - feels very outdated now).

As a teenager totally unattached to the larger software community (and open source, until years later), the New Hacker's Dictionary and the appended stories, along with Stoll's Cuckoo's Egg were formative for me. I had absolutely no contact with this culture, but I knew I wanted in. Finding that it overlaps with LessWrong, which I found independently later on, honestly feels bizarre.

Now I'm wondering if it's less that hacker culture as presented in those stories was attractive to me in itself, than that there was a common factor shining through. Interesting people, reasonable people...!

Probably, but there is something else more subtle.

Both the cultures you're pointing at are, essentially, engines to support achieving right mindset. It's not quite the same right mindset, but in either case you have to detach from "normal" thinking and its unquestioned assumptions in order to be efficient at the task around which the culture is focused.

Thus, in both cultures there's a kind of implicit mysticism.  If you recoil from that word because you associate it with anti-rationality I can't really blame you, but I ask you to consider the idea of mysticism as "techniques for consciousness alteration" detached from any particular beliefs about the universe.

This is why both cultures have a use for Zen. It is a very well developed school of mystical technique whose connection to religious belief has become tenuous.  You can take the Buddhism out of it and the rest is still coherent and interesting.

Perhaps this implicit mysticism is part of the draw for you. It is for me.

You have an outside view of my writing, so I'm curious. On a scale of 0 = "But of course" to 5 = "Wow, that was out of left field", how surprising did you find it that I would write this essay?

If you can find anything more specific to say along these lines (why it's surprising/unsurprising) I would find that interesting.

I was slightly surprised, mostly because I had the expectation that if you've known about LW for a while, then I would have thought that you'd end up contributing either early or not at all. Curious what caused it to happen in 2021 in particular.

I don't really have an interesting answer, I'm afraid. Busy life, lots of other things to pay attention to, never got around to it before.

Now that I've got the idea, I may re-post some rationality-adjacent stuff from my personal blog here so the LW crowd can know it exists.

The way I have set this up for writers in the past has been to set up crossposting from an RSS feed under a tag (e.g. crossposting all posts tagged 'lesswrong').

I spent a minute trying to figure out how to make an RSS feed from your blog under a single category, and failed. But if you have such an RSS feed, and you make a category like 'lesswrong', then I'll set up a simple crosspost, and hopefully save you a little time in expectation. This will work if you add the category to old posts as well as new ones.

5Eric Raymond3y
There's a technical problem.  My blog is currently frozen due to a stuck database server; I'm trying to rehost it.  But I agree to your plan in principle and will discuss it with you when the blog is back up.
4Ben Pace3y
Sounds good.
1localdeity3y
I recently learned of a free (donation-funded) service, siftrss.com, wherein you can take an RSS feed and do text-based filtering on any of its fields to produce a new RSS feed.  (I've made a few feeds with it and it seems to work well.)  I suspect you could filter based on the "category" field.
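As an aside for anyone wiring this up by hand rather than through a service like siftrss, category-based filtering of an RSS 2.0 feed takes only a few lines of Python. This is a minimal sketch under stated assumptions: the sample feed and the function name are hypothetical illustrations, not siftrss's actual mechanism.

```python
import xml.etree.ElementTree as ET

def filter_feed_by_category(rss_xml: str, category: str) -> str:
    """Return a copy of an RSS 2.0 feed keeping only items tagged with `category`."""
    root = ET.fromstring(rss_xml)
    channel = root.find("channel")
    # Iterate over a snapshot of the item list so removal is safe mid-loop.
    for item in list(channel.findall("item")):
        categories = [c.text for c in item.findall("category")]
        if category not in categories:
            channel.remove(item)
    return ET.tostring(root, encoding="unicode")

# Hypothetical two-post feed: only one post carries the 'lesswrong' category.
SAMPLE = """<rss version="2.0"><channel><title>Blog</title>
<item><title>On Rationality</title><category>lesswrong</category></item>
<item><title>Gardening</title><category>personal</category></item>
</channel></rss>"""

filtered = filter_feed_by_category(SAMPLE, "lesswrong")
print("Gardening" in filtered)  # → False: the untagged post was dropped
```

A crossposter would then poll the filtered feed rather than the full one; real blog feeds put categories in exactly this per-item `<category>` element, which is the field siftrss-style services filter on.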
4David Hornbein3y
Please do.
6Alexei3y
5 for me. I read Dancing with Gods a long time ago and it’s very memorable. But had no idea about anything else.
5dominicq3y
For me, probably 2. I read "How to become a hacker" several years ago and it shaped many of my career-related choices. The writing/reasoning style is very similar to the ratsphere, so I was not too surprised that I would also find you here.
3madasario3y
0 or 1.  I saw this post and thought "finally!"  I've been a fan since the early 90's.  I'm most surprised that it took you this long, and excited that you finally got around to it.  :)

The ratsphere is ripe for some of the same treatment you gave the fossphere back in the day.  (It's under attack by forces of darkness; its adherents tend to be timid and poorly funded while its attackers are loud, charismatic, and throw a lot of money around; it revolves around a few centers of gravity ("projects") that are fundamental building blocks of the future - the Big Problems; etc.)

I haven't thought this through a ton, but if I squint a bit I can see Jaynes etc. filling the role of, like, Knuth and K&R - hard engineering; and The Sequences/LW/SSC filling the role of, say, GNU and Lions - a way for the masses to participate and contribute and absorb knowledge and gel into a tribe and a movement.  I paint that vague hand-wavy picture for you, hoping you'll understand when I say that this post feels like it should be expanded and become TAOUP but for the ratsphere.
3Zian3y
3. My knowledge before reading the article and comments could be summarized as:

* These are some really great articles by ESR. I wonder why no one had taken them super seriously yet...
* Somewhat of an outsider perspective, as FeepingCreature described
* I wonder why some people have such strong opinions about this person
gjm 3y

I think the main reason some people have strong opinions about ESR is that he has some strong opinions, some of which are highly controversial, and he states some of those controversial opinions openly. In particular, much in US politics is super-divisive, and in five minutes on Eric's blog you can readily find five things that some (otherwise?) reasonable people will get very angry about.

... I thought I should actually test that, so I went over to have a look. His blog has been a bit less political lately than at some other times. But in exactly five minutes I found the following assertions (all the following are my paraphrases; I have no intent to distort but error is always possible, especially when reading quickly, so if you are minded to be angry at Eric you should first go and check what he actually wrote): the US has a problem with Communist oppression, Kyle Rittenhouse is a hero, white people at BLM protests should be assumed to be communists and shot at will [EDITED to add: as habryka points out in a reply, this paraphrase is potentially misleading; more below], an armed storming of the Michigan State House was an appropriate response to stay-at-home orders. (That's April ... (read more)

habryka 3y (Moderator Comment)

Woah, at least one of those summaries seems really quite inaccurate. Bad enough that like, I feel like I should step in as a moderator and be like "wait, this doesn't seem OK". 

I am not very familiar with ESR's opinions, but your summary of "white people at BLM protests should be assumed to be communists and shot at will" is really misrepresenting the thing he actually said. What he actually said was "White rioters, on the other hand, will be presumed to be Antifa Communists attempting to manipulate this tragedy for Communist political ends;", with the key difference being "white rioters" instead of "white people". While there is still plenty to criticize in that sentence, this seems like a really crucial distinction that makes that sentence drastically less bad.

Topics like this tend to get really politicized and emotional, which I think means it's reasonable to apply some extra scrutiny and care to not misrepresent what other people said, and generally err on the side of quoting verbatim (ideally while giving substantial additional context).

7gjm3y
Yeah, "rioters" would have been more accurate than "people", though I don't know exactly what Eric considers the boundary between protesting and rioting. My apologies. As I said, mistakes get made when doing things quickly, and doing it quickly was much of the point. [EDITED to add:] I have edited my original comment to point out the mistake; I also found a comment from Eric on the original blogpost that clarifies where he draws the line between "rioters" and mere protestors, and have quoted that there too.
7gjm3y
Looking at voting patterns in this subthread, I have the impression that readers generally have the impression that I'm attempting to mount some sort of attack on Eric. Obviously I can't prove anything about my intentions here, but I promise that that was not in any way my purpose; I was answering Zian's puzzlement about how ESR could possibly be controversial by pointing out some controversial things. I don't think Eric would disagree with my identification of those things as things some people might get angry about. If my purpose had been an unscrupulous political attack, I wouldn't have provided links to let everyone check whether my brief summaries were accurate, and I wouldn't have gone out of my way to point out that I might have made errors and explain why they were particularly likely in this instance. (I don't object to being downvoted; if you think something I write is of low quality then you should downvote it. But it looks to me as if some wrong assumptions may be being made about my motives here.) [EDITED to add:] Things look more "normal" now; dunno whether that means that the earlier state was some sort of statistical anomaly, or that some people read the above and agreed, or what. I mention this just in case anyone's reading this and wonders why in this comment I'm expressing concern about something that's not there :-).
1ChristianKl3y
I would expect the bar to be pretty clear and as habryka said "intent to commit crimes against persons or property". I would expect Eric to have the bar somewhere where he thinks that the law that allows private citizens to use force to prevent crimes from happening would protect him.
gjm 3y

As you'll see from the edit to my original comment, I found something Eric said in the discussion on his blog that drew a fairly explicit boundary between rioters and mere protestors. My impression is that if Eric actually acts strictly according to the principles stated there, the law will not protect him and he will end up in jail (thinking that someone has intent to commit crimes is not generally sufficient justification in law for shooting them); several commenters on his blog expressed the same concern.

I worry that we may be getting into arguing about Eric's opinions themselves, rather than merely answering the question "why do some people have such strong opinions about him", and I think that's not a useful topic for discussion here. Of course that's mostly my fault for not getting my summaries perfectly accurate, for which once again I apologize.

2philh3y
For me, like 1 maybe 2? (That you would write it; it's a little more surprising that you did.) I knew you'd read at least some of the sequences because I think I first found them through you, and I think you've called yourself a "fellow traveler". Oh, and I remember you liked HPMOR. But I didn't know if you were particularly aware of the community here.
2lincolnquirk3y
Hmm, maybe a 2. I didn’t know you had read the Sequences, but it seems like the sort of thing that would appeal to you based on the writing in Dancing, etc.
1mruwnik2y
For me the main surprise was to think "Eric Raymond. Huh. Just like the CatB author. Wait - really?! Here?" after which came an "of course! Now it all makes sense!"

I'd previously noticed the similarities between the hacker ethos and rationality, to a large extent because they were what attracted me in the first place. The GS part was new info for me, but both the SF and Zen influences are obvious (though it's nice to see them so explicitly explained).

It feels like, in a certain sense, the hacker ethos is a special case of rationality. Hackers seemed from the outside to be these mystical creatures that used logic and intuition to get closer to a better understanding of computer systems in order to get them to do interesting things. With a focus on clarity, elegance, practicality etc. My understanding of a beisutsukai is someone who does just that, but in all matters, not just computery things. So rationality is a natural extension of being a hacker. Ditto with the mystical aspects which you mentioned earlier. I get the impression that both your writing and the Sequences have the same feel to them, for lack of a better expression.

p.s. - I'd like to thank you for the hacker howto. The "formative" in the earlier comment is spot on. Apart from the general hacker stuff, I also started to learn LISP. For which I'm eternally grateful.
6Kaj_Sotala3y
I also quite liked both the Jargon File (which I found before or around the same time as LW) and Dancing With the Gods (which I found through LW).

I've curated this essay[1].

Getting a sense of one's own history can be really great for having perspective. The primary reason I've curated this is because the post really helped give me perspective on the history of this intellectual community, and I imagine also for many other LWers.

I wouldn't have been able to split it into "General Semantics, analytic philosophy, science fiction, and Zen Buddhism" as directly as you did, nor would I know which details to pick out. (I would've been able to talk about sci-fi, but I wouldn't quite know how to relate the r... (read more)

Eliezer was more influenced by probability theory, I by analytic philosophy, yes.  These variations are to be expected.  I'm reading Jaynes now and finding him quite wonderful.  I was a mathematician at one time, so that book is almost comfort food for me - part of the fun is running across old friends expressed in his slightly eccentric language.

I already had a pretty firm grasp on Feynman's "first-principles approach to reasoning" by the time I read his autobiographical stuff.  So I enjoyed the books a lot, but more along the lines of "Great physicist and I think alike! Cool!" than being influenced by him.  If I'd been able to read them 15 years earlier I probably would have been influenced.

One of the reasons I chose a personal, heavily narratized mode to write the essay in was exactly so I could use that to organize what would otherwise have been a dry and forbidding mass of detail. Glad to know that worked - and, from what you don't say, that I appear to have avoided the common "it's all about my feelings" failure mode of such writing.

If nothing else, I hope this essay will leave you feeling grateful that you no longer have to do a decades-long bootstrapping process the way Eliezer and Nancy and I and others like us had to in the before times.  I doubt any of us are sorry we put in the effort, but being able to shortcut a lot of it is a good thing.

Thank you for introducing us to those who built this basilica. Just in looking up General Semantics, I've learned more about the culture wars that preceded the ones we now fight, and I learned who a few of the generals were on both sides.

If you pursue this sort of thing for more than three thousand years, as Buddhists have been doing, you're likely to find some techniques that actually do help you pay better attention to reality - even if it is difficult to dig them out of the surrounding religious encrustations afterwards.

Interestingly, this is how I often feel about western philosophy; my early experience of philosophy classes and books was very much about 'who said what', and a sort of intellectual territorialism that seemed disconnected from any ultrahumanist project to think better. [... (read more)

Ironically, I disagree a bit with lukeprog here - one of the few flaws I think I detect in the Sequences is due to Eliezer not having read enough philosophy.  He does arrive at a predictivist theory of confirmation eventually, but it takes more effort and gear-grinding than it would have if he had understood Peirce's 1878  demonstration and expressed it in clearer language.

Ah well.  It's a minor flaw.

4romeostevensit3y
I really wish there was a techniques-focused history of European philosophy. I suspect anyone capable of a decent shot at such is busy doing more important things.

Wow, this was quite a surprise seeing your post here, and finding out that you've been reading Less Wrong for all of these years!

(On the other hand, probably not: an English speaker with similar intellectual tendencies and Silicon Valley tropism would probably have quickly found out about it, my case not being very typical?)

I hope that you are well?

4Eric Raymond3y
To be fair, I haven't followed Less Wrong all that closely over the years. It's more that I've known some of the key people for a while, notably Eliezer himself and Scott Alexander.

(Here are some of my thoughts, reading through.)

Sometimes I would get a flash of light through the fog, or at least a sense that there were other people on the same lonely quest. A bit of that sense sometimes drifted over USENET, an early precursor of today's Internet fora.

It's strange, I don't feel the fog much in my life. I wonder if this is a problem. It doesn't seem like I should feel like "I and everyone around me basically know what's going on".

I can imagine certain people for whom talking to them would feel like a flash of light in the fog. I probably ... (read more)

I'm only 23 - probably younger than most people here - but I imagine my father must have read many of the same books, as he raised me to think in a way which I now understand to be very much like Yudkowsky's version of rationality. As with what you quoted from Nancy, it all seemed really obvious to me when I read the Sequences, except for the mathematical components (Bayesianism still confuses me, but I'll get there eventually).

The main way I differ here though is that I have had lots of "mystical experiences" due to probably schizotypal or dissociative te... (read more)

8gjm3y
In this you differ from the average rationalist but maybe not so much from Eric; see e.g. his essay "Dancing with the Gods".
5MSRayne3y
Yes, yes, yes! This is it, this is exactly it!

> Rituals are programs written in the symbolic language of the unconscious mind. Religions are program libraries that share critical subroutines. And the Gods represent subsystems in the wetware being programmed. All humans have potential access to pretty much the same major gods because our wetware design is 99% shared.

I've come to the same conclusion in the past. Meme theory, plus multiagent models of mind, plus the shared structure of the human unconscious (though another layer of what is shared, which is often overlooked, is mountains of cultural context), equals spirits as AIs on a distributed operating system run with human brains as the substrate.

Failing to recognize their existence is a mistake. Being enslaved to the fragmented, defiled forms of them which arise when direct theophanic contact is lost (such as faith-based religions are ruled by) is another mistake. The middle way is the best. I'm glad to know I'm not the only person here who strives both for rationalism and for gnosis.
3peak.singularity3y
Heh, this reminds me of last week's jab from John Michael Greer: https://www.ecosophia.net/a-sense-of-deja-vu/

As for me, I was really into transhumanism in the noughties: mostly I'd say that the interest came from Anglophone science fiction (Foundation, Accelerando, Diamond Age...), but then also from Soviet science fiction - it's interesting to look at the parallels between that "Homo Novis", the official "New Soviet Man", its representation in the early works of the Strugatsky brothers, and then later their slow slide from progressive utopia to progressive dystopia starting with the novels about their "Institute of experimental history" - which I now realize parallels my own intellectual path. Circa 2010 I switched from transhumanism to "peak oilism" - hence this nickname: Energy Bulletin (now Resilience.org), Peak Oil Barrel, Archdruid Report (now Ecosophia), Tom Murphy's Do the Math, Cassandra's Legacy...

So I completely missed Less Wrong at its peak - I only discovered it (and SSC) in the mid-2010s - though since I was animated by a similar quest, in parallel I took some (current, skeptical) Zetetic classes.

Also, despite liking the mandatory philosophy classes in high school, I was so put off by having to study Condillac's Le Traité des animaux in higher education that my interest in philosophy pretty much disappeared... and only started growing back again through the epistemology of physics. And, having finally decided that my grasp of the English language was good enough (and having been dismissed enough times for my amateurish knowledge of philosophy), I've recently been reading Russell's History of Western Philosophy - though I kind of hit a hard wall with Spinoza's & Leibniz's metaphysics...

In parallel, through Greer I've stopped completely dismissing occultism (though astrology is still a hard pass), but I haven't really followed through once he started getting into the very specific details of the USA's history of occultism - it's just t

Thanks for making that connection to Zen Buddhism. I never thought of it as a central theme of The Sequences before this.

I'm still not sure if I'm convinced that it actually is a central theme. In the preface to Rationality: From AI to Zombies, Eliezer writes:

It ties in to the first-largest mistake in my writing, which was that I didn’t realize that the big problem in learning this valuable way of thinking was figuring out how to practice it, not knowing the theory. I didn’t realize that part was the priority; and regarding this I can only say “Oops” and

... (read more)
5Eric Raymond3y
I actually wouldn't call Zen a "central theme".  More "a recurring rhetorical device".  It's not Zen Buddhist content that the Sequences use, it's the emulation of Zen rhetoric as a device to subtly shift the reader's mental stance. 
5gilch3y
Not being an expert in Zen, I'm not sure what "Zen rhetoric" means. Could you provide examples quoted from the Sequences of what you are talking about and what makes it "Zen"?

I think a collection of examples and analysis would be a post in itself.

But I can give you one suggestive example from Twelve Virtues itself: "If you speak overmuch of the Way you will not attain it."

It is a Zen idea that the essence of enlightenment cannot be discovered by talking about enlightenment; rather one must put one's mind in the state where enlightenment is.  Moreover, talk and chatter - even about Zen itself - drives that state away.

Eliezer is trying to say here that the center of rationalist practice is not in what you know about rationality or how much cleverness you can demonstrate to others, but in achieving a mental stance that processes evidence correctly and efficiently.

He is borrowing the rhetoric of Zen to say that because, as with Zen, the center of our Way is found in silence and non-attachment.  The Way of Zen wants you to lose your attachment to desires; the Way of rationality wants you to lose your attachment to beliefs.

2Adam Zerner3y
I see. Thanks for clarifying.

This post was personally meaningful to me, and I'll try to cover that in my review while still analyzing it in the context of lesswrong articles.

I don't have much to add about the 'history of rationality' or the description of interactions of specific people.

Most of my value from this post wasn't directly from the content, but how the content connected to things outside of rationality and lesswrong.  So, basically, I loved the citations.

Lesswrong is very dense in self-links and self-citations, and to a lesser degree does still have a good number of li... (read more)

I like this post for reinforcing a point that I consider important about intellectual progress, and for pushing against a failure mode of the Sequences-style rationalists.

As far as I can tell, intellectual progress is made bit by bit, with later work building on earlier work. Francis Bacon gets credit for a landmark evolution of the scientific method, but it didn't spring from nowhere; he was building on ideas that had built on ideas, etc.

This says the same is true for our flavor of rationality. It's built on many things, and not just probability theory.

The f... (read more)

Fascinating and enjoyable read. I put a few of the recommended books onto my to-read list. Thank you.

In your journey, I wonder if you've come across Buckminster Fuller and, if yes, what's your opinion on his ideas?

I ask this because I found Fuller's works at the same time I found Korzybski's. And while vastly different in theme and scope, they seemed to be underpinned by the same spirit--positive, human-centered, problem-solving--one I would label as "humanism."

3Eric Raymond3y
I have run across Bucky Fuller, of course.  Often brilliant, occasionally cranky. Geodesic domes turned out to suck because you can't seal all those joints well enough.  We could use more like him.

I also was a rationalist before Eliezer, but of Eric's four sources of information the only one I shared is science fiction. I had the advantage of growing up in a family where the relevance of reason to the world was taken for granted.

At one point, long after I had become an adult, my parents asked me whether it would have been better if they had brought me up in their parents' (Jewish) religion. I replied that I preferred having been brought up in the one they believed in — 18th century rationalism, the ideology of Adam Smith and David Hume.

Wonderful article. 

I especially liked the part about Zen rhetoric surviving in the English language, and the part about feeling an epistemic disjunction.

The real question is, is there a historical precursor to /r/SneerClub? Perhaps an SF zine run by someone who didn't like Korzybski and Van Vogt... 

3degsy3y
A lot of the New Wave stuff feels like a SneerClub sensibility w.r.t. Golden Age SF.
1peak.singularity3y
Well, Pulp & Golden Age sci-fi was "discredited" by us actually landing a probe on Venus and realizing that it was not a likely place to find a lush jungle: https://www.ecosophia.net/the-worlds-that-never-were/

Meanwhile SneerClub is a bit too current to LessWrong for that parallel to work?

The above author has followed through on his project of resurrecting classic science fiction; "Vintage Worlds" is already on its 3rd volume: https://www.solarsystemheritage.com/anthology-project-2017.html