Summary for regular readers: The epistemic content of The Sequences — i.e., the advice on finding true beliefs — has a different epistemic status than the instrumental content — i.e., the advice on how to behave.  Specifically, the epistemic content is based upon techniques from logic, probability, statistics, and causal inference that have already been heavily explored and vetted, both formally and empirically, in a wide range of disciplines and application areas.  This is not true of the how-to-behave content of the sequences, such as the stances presented on individual heroism and what does or does not constitute humble behavior.  Indeed, the analogous underpinning fields — namely, decision theory, game theory, ethics, meta-ethics, and political theory — are not nearly as well explored-and-vetted as the epistemic underpinnings.  As such, the epistemic content of the sequences should be made available in a separate compendium, curated for its special epistemic status.  This post is one attempt at that, which I'd be happy to see replaced or superseded by a more official attempt from, say, the LessWrong team or Eliezer Yudkowsky.

Followed by: What's next for instrumental rationality?

Introduction for newcomers

The good

The "Epistemic Sequences" curated below are taken from the writings of Eliezer Yudkowsky.  Yudkowsky's writings cover some of the best introductions I've seen for helping individual people to start forming true beliefs about the world, or at least, beliefs that become progressively less wrong over time.  The Epistemic Sequences describe processes to follow to form true beliefs, rather than merely conclusions to reach, and the prescribed processes are well backed by logic, statistics, and causal inference.  They draw inspiration from well-written, tried-and-true textbooks like these:

(* J. Pearl: Causal Inference in Statistics: A Primer is a more recent and friendlier intro.)

The questionable

Epistemic status: broadly verifiable claims about fields of theory and practice and their relation to the LessWrong Sequences.

For better or worse, the broader LessWrong Sequences and Sequence Highlights also contain a lot of strong stances on how to behave, i.e., what decisions and actions to take given one's beliefs, including considerations of ethics, meta-ethics, and human values.  There are strong themes of unilateral heroism, how to be humble, and how best to protect and serve humanity.  Compared to the epistemic aspects of the sequences, these how-to-behave aspects are not founded in well-established technical disciplines like logic and statistics, even though they do tend to motivate people to form true beliefs and attempt to help the world.  The analogous underpinning fields — including decision theory, game theory, ethics, meta-ethics, and political theory — are nowhere near as well explored-and-vetted as logic and statistics.

My take

Epistemic status: my opinion as a researcher professionally dedicated to helping the world.

Feel free to skip this part if you're not particularly interested in my opinion.

That said: I think the how-to-behave themes of the LessWrong Sequences are at best "often wrong but sometimes motivationally helpful because of how they inspire people to think as individuals and try to help the world", and at worst "inspiring of toxic relationships and civilizational disintegration."  I'm not going to try to argue this position here, though, because I think it would distract from the goodness of the epistemic content, which I'd like to see promoted in its purest possible form.  

Also, I'd like to add that Eliezer Yudkowsky can't be "blamed" for the absence of ideal technical underpinnings for the how-to-behave aspects of the sequences.  In fact, he and his colleagues at MIRI have made world-class attempts to improve these foundations through "Agent Foundations" research.  E.g., Scott Garrabrant's discovery of Logical Inductors (on which I had the privilege of serving as his co-author) grew out of a research direction inspired by Yudkowsky and funded by MIRI.

Importantly, the strength of my dislike for what I consider the 'toxic' aspects of the sequences is also not based on tried-and-true technical underpinnings, any more than Eliezer's original writings are.  Like Eliezer, I've tried to advance research on foundations of individual and collective decision-making — e.g., negotiable reinforcement learning, fair division algorithms, Löbian cooperation, expert aggregation criteria, and equilibria of symmetric games — but even the new technical insights I've made along this journey do not have the tried-everywhere-and-definitely-helpful properties that logic, probability, statistics, and causal inference have.

Nonetheless, for the purpose of building a community of people who collectively pursue the truth and sometimes attempt to take collective actions, I think it's important to call out the questionability of the LessWrong Sequences' how-to-behave content, and to promote the epistemic content as valuable despite this critique.

The Epistemic Sequences, list v.0.1

The content below is taken from the Sequence Highlights created by the LessWrong team, with only strikethroughs and italicized interjections from me marked by carets (^).

Thinking Better on Purpose

Part 1 of 6 from the Sequence Highlights.

We humans can not only think, but also think about our own thinking. This makes it possible for us to recognize the shortcomings of our default reasoning and work to improve it – the project of human rationality.

33 min read

Pitfalls of Human Cognition

Part 2 of 6 from the Sequence Highlights.

A major theme of the Sequences is the ways in which human reasoning goes astray. This sample of essays describes a number of failure modes and exhorts us to do better.

34 min read

The Laws Governing Belief

Part 3 of 6 from the Sequence Highlights.

While beliefs are subjective, that doesn't mean one gets to choose one's beliefs willy-nilly. There are laws that theoretically determine the correct belief given the evidence, and it is toward such beliefs that we should aspire.

82 min read
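To make the flavor of such a "law of belief" concrete: the canonical example running through the Sequences is Bayes' theorem. Below is a minimal sketch in Python; the function name (bayes_update) and all the numbers are my own hypothetical illustration, not anything taken from the Sequences themselves.

```python
# A minimal sketch of Bayes' rule, the paradigmatic "law governing belief."
# The function name and all numbers here are hypothetical, for illustration only.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E) from P(H), P(E | H), and P(E | not-H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)  # total probability of E
    return p_e_given_h * prior / p_e

# Example: evidence with a 90% true-positive rate and a 10% false-positive
# rate, applied to a hypothesis you initially assign 1% credence:
posterior = bayes_update(prior=0.01, p_e_given_h=0.9, p_e_given_not_h=0.1)
print(f"{posterior:.3f}")  # ~0.083: strong evidence, but the hypothesis stays unlikely
```

The point of the exercise: once the prior and the likelihoods are fixed, the posterior is fully determined. There is no remaining freedom with which to choose one's beliefs willy-nilly.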

Science Isn't Enough

Part 4 of 6 from the Sequence Highlights.

While far better than what came before, "science" and the "scientific method" are still crude, inefficient, and inadequate to prevent you from wasting years of effort on doomed research directions.

18 min read

  • When Science Can't Help
    • ^ Flag: almost removed for a strong overtone of encouraging cryonics (a behavior), but kept because the explicit content is about whether it works or not (a belief).
  • Faster Than Science
  • Science Doesn't Trust Your Rationality
    • ^ Removed for not drawing actionable belief-forming advice from logic / statistics / causality, while arguably being a pro-libertarian how-to-behave piece.
  • No Safe Defense, Not Even Science
    • ^ Removed for not drawing actionable belief-forming advice from logic / statistics / causality, while encouraging emotional distrust in the sanity of other people.
    • ^ Worth reading as a contextualization of Yudkowsky's other writings.

Connecting Words to Reality

Part 5 of 6 from the Sequence Highlights.

To understand reality, especially on confusing topics, it's important to understand the mental processes involved in forming concepts and using words to speak about them.

33 min read

Why We Fight

^ Flag: This (Part 6) was almost removed entirely due to not carrying much in the way of belief-formation advice.  Perhaps it should be removed from a more curated "Epistemic Sequences" compendium.

Part 6 of 6 from the Sequence Highlights.

The pursuit of rationality, and of doing better on purpose, can in fact be rather hard. You have to get the motivation for that from somewhere.

31 min read

  • Something to Protect
    • ^ Flag: almost removed for encouraging unilateral heroism (a behavior), and for not carrying much belief-formation advice, but kept because many people find it motivating.  Perhaps this should still be removed from a more curated "Epistemic Sequences" compendium.
  • The Gift We Give To Tomorrow
    • ^ Flag: almost removed for not carrying much belief-formation advice, but kept because many people find it motivating.  Perhaps this should still be removed from a more curated "Epistemic Sequences" compendium.
  • On Caring
    • ^ Removed for not carrying much belief-formation advice, and for explicitly advising on how to feel about and relate to other people (emotions are not quite behavior, but intermediate between belief and behavior).
  • Tsuyoku Naritai! (I Want To Become Stronger)
    • ^ Removed for not carrying much belief-formation advice, and for explicitly advising on how to relate to others (behavior).
  • A Sense That More Is Possible
    • ^ Flag: almost removed for not carrying much belief-formation advice, but kept because many people find it motivating.

What's next?

I'm not sure!  Perhaps an official "Epistemic Sequences" compendium could someday be produced that focuses entirely on epistemics, with the potential upsides of

  • more strongly emphasizing the most-well-founded aspects of the Sequences;
  • avoiding alienating readers who find the how-to-behave aspects of the Sequences off-putting (either because the Sequences are wrong and those readers can sense it, or just because the arguments aren't strong enough, or both); and
  • yielding a larger and broader community of people who can agree on beliefs and good belief formation practices, even if they don't (yet) agree on how to treat each other or the rest of the world.

For now, I'll content myself with linking interested readers to this post if they ask me which parts of the LessWrong Sequences are most worth reading and why.

Followed by: What's next for instrumental rationality?

Comments (12)
gjm:

I like this, but would also like to register that I would be very interested to read more about your opinion that the how-to-behave bits of the Sequences are

at best "often wrong but sometimes motivationally helpful because of how they inspire people to think as individuals and try to help the world", and at worst "inspiring of toxic relationships and civilizational disintegration."

Thanks for doing this! I do think it'd be good to have an Epistemic Rationality textbook that focuses on the well-vetted stuff. 

I think it'd probably make sense to start with all of R:A-Z rather than the Sequence Highlights for generating it. The Sequence Highlights are deliberately meant to be an overview of epistemic, instrumental, and motivational content, and for the sake of fitting it into 50 posts we probably skipped over stuff that might have made sense to incorporate into an Epistemics 101 textbook.

(That said I think someone interested in epistemics and just getting started would do well to start with the stuff here)

I'd be more interested in a project to review the Sequences, have savvy people weigh in on which parts they think are more probable vs. less probable (indexed to a certain time, since views can change), and display their assessments in a centralized, easy-to-navigate way.

I want to say that trying to purify the Sequences feels... boring? Compared to engaging with the content, debating it, vetting it, updating it, improving it, etc.

And I worry that attempts to cut out the less-certain parts will also require cutting out the less-legible parts, even where these contain important content to interact with. The Sequences IMO are a pretty cohesive thing; many insights can be extracted without swallowing the thing wholesale, but a large portion of it will be harder to understand if you don't read the full thing. (Or don't read enough of the core 'kinda-illegible' pieces.)

Maybe what I want is more like a new 'Critiques of the Sequences' sequence, or a 'Sequences + (critiques and responses)' alt-sequence. Since responding to stuff and pointing at problems seems obviously productive to me, whereas I'm more wary of purification.

To be clear, my current epistemic state is not at all that a curated "Epistemic Sequences" should replace the existing things. The thing I see as potentially valuable here is to carve them into different subsections focusing on different things that can serve people with different backgrounds and learning goals. (I hadn't at all been thinking of this as "purifying" anything, or at least that's not my interest in it.)

Ruby and I listened to some of Critch's feedback earlier, and still decided to have the Sequence Highlights cover a swath of motivational/instrumental posts, because they seemed like an important part of the Sequences experience.

I think my own take (not necessarily matching Ruby or Habryka's) is that I disagree with Critch's overall assessment that the "how to behave" parts of the sequences are "toxic." But I do think there is something about the motivation-orientation of the sequences that is... high variance, at least. I see it getting at something that feels important to engage with. (My guess is, if I reflected a bunch and double-cruxed with Critch about it, the disagreement here would be less about concrete claims the sequences make and more about a vibe, sorta like how I think disagreements with Post Rationalists are not actually about claims and are more about vibe.)

I haven't gotten into that in this comment section since Critch went out of his way to avoid making that the topic here. (Also somewhat because it feels like a big discussion and I'm kinda busy atm)

I think the how-to-behave themes of the LessWrong Sequences are at best "often wrong but sometimes motivationally helpful because of how they inspire people to think as individuals and try to help the world", and at worst "inspiring of toxic relationships and civilizational disintegration."

I broadly agree with this. I stopped referring people to the Sequences because of it.

One other possible lens for filtering a better Sequences: does the piece rely on Yudkowsky citing the psychology literature current at the time? He was way too credulous, when the correct amount to update on most social science research of that era was: lol.

Concretely to your project above though: I think you should remove all of the Why We Fight series: Something to Protect is Yudkowsky typical-minding about where your motivation comes from (and is wrong; lots of people are selfishly motivated, as if Tomorrow is The Gift I Give Myself), and I've seen A Sense That More is Possible invoked as Deep Wisdom to justify anything that isn't the current status quo. Likewise, I think Politics is the Mind-Killer should also be removed for similar reasons. Whatever its actual content, the phrase has taken on a life of its own, and that interpretation is not helpful.

I think it also makes sense to reorder the entire Sequences (all 333 of them) from most valuable to least valuable, and perhaps make multiple different lists according to different values. That way, when someone feels like the last 20 or 40 have not been very helpful, that means the time is right to move on to other things, then and only then.

I think (and I've considered trying to do this partly in order to teach myself and get all the insights to sink in) that it would also be desirable to rewrite some or all of the sequences, or write entirely new material inspired by them, in simpler language, with a more neutral tone (Eliezer has a certain style that not everyone would appreciate) and with the mathematical parts made as visual and easy to follow as possible, for normal audiences who aren't... well, nerds like the rest of us. I think improving the rationality of ordinary people would be worth it.

Seems like it could be a core LW book, just like how The Precipice was the big book for EA. I definitely think that, one way or another, the CFAR handbook should be taken into account (since it's explicitly more optimized to train ~~executives and other clients~~ a wider variety of people from various backgrounds).

the CFAR handbook should be taken into account (since it's explicitly optimized to train executives and other clients).

What leads you to think it’s optimized in this way?

I made a ton of assumptions based on this, though, and I never checked to see whether the CFAR handbook was stated to help those particular people. So I retracted the parts of my comment that were based on assumptions I should have checked before stating that it was clearly for executives.

Ah wow, yeah as Zach notes that's a totally different CFAR.

An important thing about the CFAR handbook is that it was mostly optimized as a companion to workshops. For a long time, the first chapter of the CFAR handbook warned you: "this was not actually designed to give you any particular experience; we have no idea what reading this book will do if not accompanied by a workshop."

The current CFAR Handbook publishing that Duncan is doing has some additional thought put into it as a standalone series of essays, but I don't think it's optimized the way you're imagining.