The Best of LessWrong

Here you can find the best posts of LessWrong. Once posts are more than a year old, the LessWrong community reviews and votes on how well they have stood the test of time. These are the posts that have ranked highest across all years since 2018 (when our annual tradition of choosing the least wrong of LessWrong began).

For the years 2018, 2019 and 2020 we also published physical books with the results of our annual vote, which you can buy and learn more about here.

Rationality

Eliezer Yudkowsky
Local Validity as a Key to Sanity and Civilization
Buck
"Other people are wrong" vs "I am right"
Mark Xu
Strong Evidence is Common
johnswentworth
You Are Not Measuring What You Think You Are Measuring
johnswentworth
Gears-Level Models are Capital Investments
Hazard
How to Ignore Your Emotions (while also thinking you're awesome at emotions)
Scott Garrabrant
Yes Requires the Possibility of No
Scott Alexander
Trapped Priors As A Basic Problem Of Rationality
Duncan Sabien (Deactivated)
Split and Commit
Ben Pace
A Sketch of Good Communication
Eliezer Yudkowsky
Meta-Honesty: Firming Up Honesty Around Its Edge-Cases
Duncan Sabien (Deactivated)
Lies, Damn Lies, and Fabricated Options
Duncan Sabien (Deactivated)
CFAR Participant Handbook now available to all
johnswentworth
What Are You Tracking In Your Head?
Mark Xu
The First Sample Gives the Most Information
Duncan Sabien (Deactivated)
Shoulder Advisors 101
Zack_M_Davis
Feature Selection
abramdemski
Mistakes with Conservation of Expected Evidence
Scott Alexander
Varieties Of Argumentative Experience
Eliezer Yudkowsky
Toolbox-thinking and Law-thinking
alkjash
Babble
Kaj_Sotala
The Felt Sense: What, Why and How
Duncan Sabien (Deactivated)
Cup-Stacking Skills (or, Reflexive Involuntary Mental Motions)
Ben Pace
The Costly Coordination Mechanism of Common Knowledge
Jacob Falkovich
Seeing the Smoke
Elizabeth
Epistemic Legibility
Daniel Kokotajlo
Taboo "Outside View"
alkjash
Prune
johnswentworth
Gears vs Behavior
Raemon
Noticing Frame Differences
Duncan Sabien (Deactivated)
Sazen
AnnaSalamon
Reality-Revealing and Reality-Masking Puzzles
Eliezer Yudkowsky
ProjectLawful.com: Eliezer's latest story, past 1M words
Eliezer Yudkowsky
Self-Integrity and the Drowning Child
Jacob Falkovich
The Treacherous Path to Rationality
Scott Garrabrant
Tyranny of the Epistemic Majority
alkjash
More Babble
abramdemski
Most Prisoner's Dilemmas are Stag Hunts; Most Stag Hunts are Schelling Problems
Raemon
Being a Robust Agent
Zack_M_Davis
Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists
Benquo
Reason isn't magic
habryka
Integrity and accountability are core parts of rationality
Raemon
The Schelling Choice is "Rabbit", not "Stag"
Diffractor
Threat-Resistant Bargaining Megapost: Introducing the ROSE Value
Raemon
Propagating Facts into Aesthetics
johnswentworth
Simulacrum 3 As Stag-Hunt Strategy
LoganStrohl
Catching the Spark
Jacob Falkovich
Is Rationalist Self-Improvement Real?
Benquo
Excerpts from a larger discussion about simulacra
Zvi
Simulacra Levels and their Interactions
abramdemski
Radical Probabilism
sarahconstantin
Naming the Nameless
AnnaSalamon
Comment reply: my low-quality thoughts on why CFAR didn't get farther with a "real/efficacious art of rationality"
Eric Raymond
Rationalism before the Sequences
Owain_Evans
The Rationalists of the 1950s (and before) also called themselves “Rationalists”

Optimization

sarahconstantin
The Pavlov Strategy
johnswentworth
Coordination as a Scarce Resource
AnnaSalamon
What should you change in response to an "emergency"? And AI risk
Zvi
Prediction Markets: When Do They Work?
johnswentworth
Being the (Pareto) Best in the World
alkjash
Is Success the Enemy of Freedom? (Full)
jasoncrawford
How factories were made safe
HoldenKarnofsky
All Possible Views About Humanity's Future Are Wild
jasoncrawford
Why has nuclear power been a flop?
Zvi
Simple Rules of Law
Elizabeth
Power Buys You Distance From The Crime
Eliezer Yudkowsky
Is Clickbait Destroying Our General Intelligence?
Scott Alexander
The Tails Coming Apart As Metaphor For Life
Zvi
Asymmetric Justice
Jeffrey Ladish
Nuclear war is unlikely to cause human extinction
Spiracular
Bioinfohazards
Zvi
Moloch Hasn’t Won
Zvi
Motive Ambiguity
Benquo
Can crimes be discussed literally?
Said Achmiz
The Real Rules Have No Exceptions
Lars Doucet
Lars Doucet's Georgism series on Astral Codex Ten
johnswentworth
When Money Is Abundant, Knowledge Is The Real Wealth
HoldenKarnofsky
This Can't Go On
Scott Alexander
Studies On Slack
johnswentworth
Working With Monsters
jasoncrawford
Why haven't we celebrated any major achievements lately?
abramdemski
The Credit Assignment Problem
Martin Sustrik
Inadequate Equilibria vs. Governance of the Commons
Raemon
The Amish, and Strategic Norms around Technology
Zvi
Blackmail
KatjaGrace
Discontinuous progress in history: an update
Scott Alexander
Rule Thinkers In, Not Out
Jameson Quinn
A voting theory primer for rationalists
HoldenKarnofsky
Nonprofit Boards are Weird
Wei Dai
Beyond Astronomical Waste
johnswentworth
Making Vaccine
jefftk
Make more land

World

Ben
The Redaction Machine
Samo Burja
On the Loss and Preservation of Knowledge
Alex_Altair
Introduction to abstract entropy
Martin Sustrik
Swiss Political System: More than You ever Wanted to Know (I.)
johnswentworth
Interfaces as a Scarce Resource
johnswentworth
Transportation as a Constraint
eukaryote
There’s no such thing as a tree (phylogenetically)
Scott Alexander
Is Science Slowing Down?
Martin Sustrik
Anti-social Punishment
Martin Sustrik
Research: Rescuers during the Holocaust
GeneSmith
Toni Kurz and the Insanity of Climbing Mountains
johnswentworth
Book Review: Design Principles of Biological Circuits
Elizabeth
Literature Review: Distributed Teams
Valentine
The Intelligent Social Web
jacobjacob
Unconscious Economics
eukaryote
Spaghetti Towers
Eli Tyre
Historical mathematicians exhibit a birth order effect too
johnswentworth
What Money Cannot Buy
Scott Alexander
Book Review: The Secret Of Our Success
johnswentworth
Specializing in Problems We Don't Understand
KatjaGrace
Why did everything take so long?
Ruby
[Answer] Why wasn't science invented in China?
Scott Alexander
Mental Mountains
Kaj_Sotala
My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms
johnswentworth
Evolution of Modularity
johnswentworth
Science in a High-Dimensional World
zhukeepa
How uniform is the neocortex?
Kaj_Sotala
Building up to an Internal Family Systems model
Steven Byrnes
My computational framework for the brain
Natália
Counter-theses on Sleep
abramdemski
What makes people intellectually active?
Bucky
Birth order effect found in Nobel Laureates in Physics
KatjaGrace
Elephant seal 2
JackH
Anti-Aging: State of the Art
Vaniver
Steelmanning Divination
Kaj_Sotala
Book summary: Unlocking the Emotional Brain

AI Strategy

Ajeya Cotra
Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover
Daniel Kokotajlo
Cortés, Pizarro, and Afonso as Precedents for Takeover
Daniel Kokotajlo
The date of AI Takeover is not the day the AI takes over
paulfchristiano
What failure looks like
Daniel Kokotajlo
What 2026 looks like
gwern
It Looks Like You're Trying To Take Over The World
Andrew_Critch
What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)
paulfchristiano
Another (outer) alignment failure story
Ajeya Cotra
Draft report on AI timelines
Eliezer Yudkowsky
Biology-Inspired AGI Timelines: The Trick That Never Works
HoldenKarnofsky
Reply to Eliezer on Biological Anchors
Richard_Ngo
AGI safety from first principles: Introduction
Daniel Kokotajlo
Fun with +12 OOMs of Compute
Wei Dai
AI Safety "Success Stories"
KatjaGrace
Counterarguments to the basic AI x-risk case
johnswentworth
The Plan
Rohin Shah
Reframing Superintelligence: Comprehensive AI Services as General Intelligence
lc
What an actually pessimistic containment strategy looks like
Eliezer Yudkowsky
MIRI announces new "Death With Dignity" strategy
evhub
Chris Olah’s views on AGI safety
So8res
Comments on Carlsmith's “Is power-seeking AI an existential risk?”
Adam Scholl
Safetywashing
abramdemski
The Parable of Predict-O-Matic
KatjaGrace
Let’s think about slowing down AI
nostalgebraist
human psycholinguists: a critical appraisal
nostalgebraist
larger language models may disappoint you [or, an eternally unfinished draft]
Daniel Kokotajlo
Against GDP as a metric for timelines and takeoff speeds
paulfchristiano
Arguments about fast takeoff
Eliezer Yudkowsky
Six Dimensions of Operational Adequacy in AGI Projects

Technical AI Safety

Andrew_Critch
Some AI research areas and their relevance to existential safety
1a3orn
EfficientZero: How It Works
elspood
Security Mindset: Lessons from 20+ years of Software Security Failures Relevant to AGI Alignment
So8res
Decision theory does not imply that we get to have nice things
TurnTrout
Reward is not the optimization target
johnswentworth
Worlds Where Iterative Design Fails
Vika
Specification gaming examples in AI
Rafael Harth
Inner Alignment: Explain like I'm 12 Edition
evhub
An overview of 11 proposals for building safe advanced AI
johnswentworth
Alignment By Default
johnswentworth
How To Go From Interpretability To Alignment: Just Retarget The Search
Alex Flint
Search versus design
abramdemski
Selection vs Control
Mark Xu
The Solomonoff Prior is Malign
paulfchristiano
My research methodology
Eliezer Yudkowsky
The Rocket Alignment Problem
Eliezer Yudkowsky
AGI Ruin: A List of Lethalities
So8res
A central AI alignment problem: capabilities generalization, and the sharp left turn
TurnTrout
Reframing Impact
Scott Garrabrant
Robustness to Scale
paulfchristiano
Inaccessible information
TurnTrout
Seeking Power is Often Convergently Instrumental in MDPs
So8res
On how various plans miss the hard bits of the alignment challenge
abramdemski
Alignment Research Field Guide
paulfchristiano
The strategy-stealing assumption
Veedrac
Optimality is the tiger, and agents are its teeth
Sam Ringer
Models Don't "Get Reward"
johnswentworth
The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables
Buck
Language models seem to be much better than humans at next-token prediction
abramdemski
An Untrollable Mathematician Illustrated
abramdemski
An Orthodox Case Against Utility Functions
johnswentworth
Selection Theorems: A Program For Understanding Agents
Rohin Shah
Coherence arguments do not entail goal-directed behavior
Alex Flint
The ground of optimization
paulfchristiano
Where I agree and disagree with Eliezer
Eliezer Yudkowsky
Ngo and Yudkowsky on alignment difficulty
abramdemski
Embedded Agents
evhub
Risks from Learned Optimization: Introduction
nostalgebraist
chinchilla's wild implications
johnswentworth
Why Agent Foundations? An Overly Abstract Explanation
zhukeepa
Paul's research agenda FAQ
Eliezer Yudkowsky
Coherent decisions imply consistent utilities
paulfchristiano
Open question: are minimal circuits daemon-free?
evhub
Gradient hacking
janus
Simulators
LawrenceC
Causal Scrubbing: a method for rigorously testing interpretability hypotheses [Redwood Research]
TurnTrout
Humans provide an untapped wealth of evidence about alignment
Neel Nanda
A Mechanistic Interpretability Analysis of Grokking
Collin
How "Discovering Latent Knowledge in Language Models Without Supervision" Fits Into a Broader Alignment Scheme
evhub
Understanding “Deep Double Descent”
Quintin Pope
The shard theory of human values
TurnTrout
Inner and outer alignment decompose one hard problem into two extremely hard problems
Eliezer Yudkowsky
Challenges to Christiano’s capability amplification proposal
Scott Garrabrant
Finite Factored Sets
paulfchristiano
ARC's first technical report: Eliciting Latent Knowledge
Diffractor
Introduction To The Infra-Bayesianism Sequence
#7: Seeing the Smoke

In early 2020, COVID-19 was spreading rapidly, but many people seemed hesitant to take precautions or prepare. Jacob Falkovich explores why people often wait for social permission before reacting to potential threats, even when the evidence is clear. He argues we should be willing to act on our own judgment rather than waiting for others.

DirectedEvolution (score: 0)
The central point of this article was that conformism was causing society to treat COVID-19 with insufficient alarm. Its goal was to give its readership social sanction and motivation to change that pattern. One of its sub-arguments was that the media was succumbing to conformity. This claim came with an implication that this post was ahead of the curve, and that it was indicative of a pattern of success among rationalists in achieving real benefits, both altruistically (in motivating positive social change) and selfishly (in finding alpha).

I thought it would be useful to review 2020 COVID-19 media coverage through the month of February, up through Feb. 27th, which is when this post was published on Putanumonit. I also want to take a look at the stock market crash relative to the publication of this article.

Let's start with the stock market. The S&P 500 fell about 13% from its peak on Feb. 9th to the week of Feb. 23rd-Mar. 1st, which is when this article was published. Jacob sold 10% of his stocks on Feb. 17th, which was still very early in the crash. The S&P 500 went on to fall a total of 32% from that Feb. 9th peak until it bottomed out on Mar. 15th. At least some gains would be made if stocks had been repurchased in the 5 months between Feb. 17th and early August 2020. I don't know how much profit Jacob realized, presuming he eventually reinvested. But this looks to me like a convincing story of Jacob finding alpha in an inefficient market, rather than stumbling into profits by accident. He didn't do it via insider knowledge or obsessive interest in some weird corner of the financial system. He did it by thinking about the basic facts of a situation that had the attention of the entire world, and being right where almost everybody else was making the wrong bet.

Let's focus on the media. The top US newspapers by circulation and with a national primary service area are USA Today, the Wall Street Journal, and the New York Times. I'm going to focus on coverage in
#10: Simulacra Levels and their Interactions

Zvi explores the four "simulacra levels" of communication and action, using the COVID-19 pandemic as an example: 1) literal truth, 2) trying to influence behavior, 3) signaling group membership, and 4) pure power games. He examines how these levels interact and the different strategies people use across them.

#22: Most Prisoner's Dilemmas are Stag Hunts; Most Stag Hunts are Schelling Problems

Most Prisoner's Dilemmas are actually Stag Hunts in the iterated game, and most Stag Hunts are actually "Schelling games." You have to coordinate on a good equilibrium, but there are many good equilibria to choose from, which benefit different people to different degrees. This complicates the problem of cooperating.
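The claim that a Stag Hunt has multiple good equilibria to coordinate on can be checked mechanically. Below is a minimal sketch that enumerates the pure-strategy Nash equilibria of a standard Stag Hunt; the specific payoff numbers are illustrative assumptions of mine, not values from the post:

```python
from itertools import product

# Payoffs (row player, column player) for a standard Stag Hunt.
# These particular numbers are illustrative, not from the post.
PAYOFFS = {
    ("stag", "stag"): (4, 4),
    ("stag", "rabbit"): (0, 3),
    ("rabbit", "stag"): (3, 0),
    ("rabbit", "rabbit"): (3, 3),
}
ACTIONS = ["stag", "rabbit"]

def is_nash(a, b):
    """A profile is a pure Nash equilibrium if neither player can
    gain by unilaterally switching actions."""
    pa, pb = PAYOFFS[(a, b)]
    row_ok = all(PAYOFFS[(a2, b)][0] <= pa for a2 in ACTIONS)
    col_ok = all(PAYOFFS[(a, b2)][1] <= pb for b2 in ACTIONS)
    return row_ok and col_ok

equilibria = [(a, b) for a, b in product(ACTIONS, ACTIONS) if is_nash(a, b)]
print(equilibria)  # [('stag', 'stag'), ('rabbit', 'rabbit')]
```

Both mutual stag-hunting and mutual rabbit-hunting come out as equilibria, which is exactly the coordination problem the summary describes: the game alone does not tell you which equilibrium you will land on.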

Bucky (score: 6)
A short note to start the review: the author isn't happy with how the post is communicated. I agree it could be clearer, and this is the reason I'm scoring this 4 instead of 9. The actual content seems very useful to me. AllAmericanBreakfast has already reviewed this from a theoretical point of view, but I wanted to look at it from a practical standpoint.

***

To test whether the conclusions of this post were true in practice, I decided to take 5 examples from the Wikipedia page on the Prisoner's Dilemma and see if they were better modeled by Stag Hunt or Schelling Pub:

* Climate negotiations
* Relationships
* Marketing
* Doping in sport
* Cold war nuclear arms race

Detailed analysis of each is at the bottom of the review. Of these 5, 3 (Climate, Relationships, Arms race) seem to me to be very well modeled by Schelling Pub.

Due to the constraints on communication allowed between rival companies, it is difficult to see marketing (where more advertising = defect) as a Schelling Pub game. There probably is an underlying structure which looks a bit like Schelling Pub, but it is very hard to move between Nash equilibria. As a result I would say that Prisoner's Dilemma is a more natural model for marketing.

The choice of whether to dope in sport is probably best modeled as a Prisoner's Dilemma with an enforcing authority which punishes defection. As a result, I don't think any of the 3 games are a particularly good model for any individual's choice. However, negotiations on setting up the enforcing authority and the rules under which it operates are more like Schelling Pub. Originally I thought this should maybe count as half a point for the post, but thinking about it further I would say this is actually a very strong example of what the post is talking about: if your individual choice looks like a Prisoner's Dilemma, then look for ways to make it into a Schelling Pub. If this involves setting up a central enforcement agency, then negotiate to make that happen. So I
DirectedEvolution (score: 8)
The goal of this post is to help us understand the similarities and differences between several different games, and to improve our intuitions about which game is the right default assumption when modeling real-world outcomes. My main objective with this review is to check the game theoretic claims, identify the points at which this post makes empirical assertions, and see if there are any worrisome oversights or gaps. Most of my fact-checking will just be resorting to Wikipedia.

Let's start with definitions of two key concepts.

Pareto-optimal: One dimension cannot improve without a second worsening.

Nash equilibrium: No player can do better by unilaterally changing their strategy.

Here's the payoff matrix from the one-shot Prisoner's Dilemma and how it relates to these key concepts:

                     B stays silent     B betrays
    A stays silent   Pareto-optimal
    A betrays                           Nash equilibrium

This article outlines three possible relationships between Pareto-optimality and Nash equilibrium:

1. There are no Pareto-optimal Nash equilibria.
2. There is a single Pareto-optimal Nash equilibrium, and another equilibrium that is not Pareto-optimal.
3. There are multiple Pareto-optimal Nash equilibria, which benefit different players to different extents.

The author attempts to argue which of these arrangements best describes the world we live in, and makes the best default assumption when interpreting real-world situations as games. The claim is that real-world situations most often resemble iterated PDs, which have multiple Pareto-optimal Nash equilibria benefitting different players to different extents. I will attempt to show that the author's conclusion only applies when modeling superrational entities, or entities with an unbounded lifespan, and give some examples where this might be relevant.

Iterated Prisoner's Dilemma is a little more complex than the author states. If the players know how many turns the game will be played for, or if the game has a known upper limit of t
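The definitions the review works with can be verified directly in code. This sketch, with illustrative payoff numbers of my own choosing (not taken from the review), confirms that in the one-shot Prisoner's Dilemma mutual betrayal is the unique pure-strategy Nash equilibrium, and that mutual silence Pareto-dominates it:

```python
from itertools import product

# Illustrative payoff matrix for the one-shot Prisoner's Dilemma
# (higher is better); the specific numbers are assumptions, not from the review.
PAYOFFS = {
    ("silent", "silent"): (3, 3),
    ("silent", "betray"): (0, 5),
    ("betray", "silent"): (5, 0),
    ("betray", "betray"): (1, 1),
}
ACTIONS = ["silent", "betray"]
PROFILES = list(product(ACTIONS, ACTIONS))

def is_nash(profile):
    """No player can do better by unilaterally changing their action."""
    a, b = profile
    pa, pb = PAYOFFS[(a, b)]
    return (all(PAYOFFS[(a2, b)][0] <= pa for a2 in ACTIONS)
            and all(PAYOFFS[(a, b2)][1] <= pb for b2 in ACTIONS))

nash = [p for p in PROFILES if is_nash(p)]
print(nash)  # [('betray', 'betray')] : the unique equilibrium

# Mutual silence Pareto-dominates the equilibrium: both players do better,
# so the unique Nash equilibrium is not Pareto-optimal.
assert all(PAYOFFS[("silent", "silent")][i] > PAYOFFS[("betray", "betray")][i]
           for i in range(2))
```

This is the first of the three arrangements the review lists: a game with no Pareto-optimal Nash equilibrium.
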
#26: Radical Probabilism

Dogmatic probabilism is the theory that all rational belief updates should be Bayesian updates. Radical probabilism is a more flexible theory which allows agents to radically change their beliefs, while still obeying some constraints. Abram examines how radical probabilism differs from dogmatic probabilism, and what implications the theory has for rational agents.
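For contrast with the radical view, the belief change that dogmatic probabilism mandates is a single Bayesian conditioning step. A minimal numeric sketch, with a prior and likelihoods that are purely illustrative assumptions:

```python
# Minimal Bayes update: the "dogmatic" (classical Bayesian) belief change.
# Prior and likelihoods are illustrative assumptions, not from the post.
prior_h = 0.2              # P(H)
p_e_given_h = 0.9          # P(E | H)
p_e_given_not_h = 0.3      # P(E | not-H)

# Law of total probability, then Bayes' rule.
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e   # P(H | E)

print(round(posterior_h, 3))  # 0.429
```

Radical probabilism allows updates that do not take this form (e.g. shifts in confidence not driven by any proposition learned with certainty), subject to coherence constraints.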

#27: Reality-Revealing and Reality-Masking Puzzles
There are two kinds of puzzles: "reality-revealing puzzles" that help us understand the world better, and "reality-masking puzzles" that can inadvertently disable parts of our ability to see clearly. CFAR's work has involved both types as it has tried to help people reason about existential risk from AI while staying grounded. We need to be careful about disabling too many of our epistemic safeguards.
Zvi (score: 0)
This is a long and good post with a title and early framing advertising a shorter and better post that does not fully exist, but would be great if it did. The actual post here is something more like "CFAR and the Quest to Change Core Beliefs While Staying Sane."

The basic problem is that people by default have belief systems that allow them to operate normally in everyday life, and that protect them against weird beliefs and absurd actions, especially ones that would extract a lot of resources in ways that don't clearly pay off. And they similarly protect those belief systems in order to protect that ability to operate in everyday life, and to protect their social relationships, and their ability to be happy and get out of bed and care about their friends and so on. A bunch of these defenses are anti-epistemic, or can function that way in many contexts, and stand in the way of big changes in life (change jobs, relationships, religions, friend groups, goals, etc etc).

The hard problem CFAR is largely trying to solve in this telling, and that the sequences try to solve in this telling, is to disable such systems enough to allow good things, without also allowing bad things, or to find ways to cope with the subsequent bad things slash disruptions. When you free people to be shaken out of their default systems, they tend to go to various extremes that are unhealthy for them, like optimizing narrowly for one goal instead of many goals, or having trouble spending resources (including time) on themselves at all, or being in the moment and living life, And That's Terrible because it doesn't actually lead to better larger outcomes in addition to making those people worse off themselves.

These are good things that need to be discussed more, but the title and introduction promise something I find even more interesting. In that taxonomy, the key difference is that there are games one can play, things one can be optimizing for or responding to, incentives one can creat
#33: The Treacherous Path to Rationality

The path to explicit reason is fraught with challenges. People often don't want to use explicit reason, and when they try to use it, they fail. Even if they succeed, they're punished socially. The post explores various obstacles on this path, including social pressure, strange memeplexes, and the "valley of bad rationality".

Yoav Ravid (score: 0)
I remember this post very fondly. I often thought back to it, and it inspired some thoughts of my own about rationality (which I had trouble writing down and which are waiting in a draft to be written fully some day). I haven't used any of the phrases introduced here (Underperformance Swamp, Sinkholes of Sneer, Valley of Disintegration...), and I'm not sure whether that was the intention.

The post starts with the claim that rationalists "basically got everything about COVID-19 right and did so months ahead of the majority of government officials, journalists, and supposed experts". Since it's not the point of the post I won't review this claim in depth, but it seems basically true to me. Elizabeth's review here gives a few examples.

This post is about the difficulty and even danger in becoming a rationalist, or more generally, in using explicit reasoning (Intuition and Social Cognition being the alternatives).

The first difficulty is that explicit reasoning alone often fails to outperform intuition and social cognition where those perform well. I think this is true, and as the rationality community evolved it came to appreciate intuition and social cognition more, without devaluing explicit reason.

The second is persevering through the sneer and social pressure that comes from trying to use explicit reason to do things, often coming to very different approaches from other people, and often also failing.

The third is navigating the strange status hierarchy in the community, which mostly doesn't depend on regular things like attractiveness and more often on our ability to apply explicit reason effectively, as well as being scared by strange memes like AI risk and cryonics. I don't know to what extent the first part is true in the physical communities, but it definitely is in the virtual community.

The fourth is where the danger comes in. When you're in the Valley of Bad Rationality your life can get worse, and if you don't get out of it some way it might stay worse. So
#37: The Felt Sense: What, Why and How

The felt sense is a concept coined by psychologist Eugene Gendlin to describe a kind of pre-linguistic, physical sensation that represents some mental content. Kaj gives examples of felt senses, explains why they're useful to pay attention to, and gives tips on how to notice and work with them.

Raymond Arnold (score: 4)
This post feels like an important part of what I've referred to as The CFAR Development Branch Git Merge. Between 2013ish and 2017ish, a lot of rationality development happened in person, building off the sequences. I think some of that work turned out to be dead ends, or a bit confused, or not as important as we thought at the time. But a lot of it has been quite essential to rationality as a practice. I'm glad it has gotten written up.

The felt sense, and focusing, have been two surprisingly important tools for me. One use case not quite mentioned here (and I think perhaps the most important one for rationality) is getting a handle on what I actually think. Kaj discusses using it for figuring out how to communicate better: getting a sense of what your interlocutor is trying to understand and how it contrasts with what you're trying to say. But I think this is also useful in single-player mode. I.e., I say "I think X", and then I notice "no, there's a subtle wrongness to my description of what X is". This is helpful both for clarifying my beliefs about subtle topics and for following fruitful trails of brainstorming.
#38: The First Sample Gives the Most Information

If you know nothing about a thing, the first example or sample gives you a disproportionate amount of information, often more than any subsequent sample. It lets you locate the idea in conceptspace, get a sense of what domain/scale/magnitude you're dealing with, and provides an anchor for further thinking.