Concepts Portal

This page displays the concepts that LessWrong discussion focuses on.

The page has three sections:

  • Tag Portal - manually curated, structured tags
  • Tag Details - tags with their descriptions and top posts
  • Tags List - an alphabetical list of all existing tags

 

The Library | Tag Activity Page | Tagging FAQ | Discussion Thread | LW1.0 Imported Wiki

Wiki-Tag Dashboard (New! Help us improve the wiki-tags!)


RATIONALITY

Theory / Concepts

Anticipated Experiences
Aumann's Agreement Theorem
Bayes' Theorem
Bounded Rationality
Conservation of Expected Evidence
Contrarianism
Decision Theory
Epistemology
Game Theory
Gears-Level
Hansonian Pre-Rationality
Infra-Bayesianism
Law-Thinking
Map and Territory
Newcomb's Problem
Occam's Razor
Robust Agents
Solomonoff Induction
Truth, Semantics, & Meaning
Utility Functions
 

Applied Topics

Alief
Betting
Cached Thoughts
Calibration
Dark Arts
Empiricism
Epistemic Modesty
Forecasting & Prediction
Group Rationality
Identity
Inside/Outside View
Introspection
Intuition
Practice & Philosophy of Science
Scholarship & Learning
Taking Ideas Seriously
Value of Information
 

Failure Modes

Affect Heuristic
Aversion/Ugh Fields
Bucket Errors
Compartmentalization
Confirmation Bias
Fallacies
Goodhart’s Law
Groupthink
Heuristics and Biases
Mind Projection Fallacy
Motivated Reasoning
Pica
Pitfalls of Rationality
Rationalization 
Self-Deception
Sunk-Cost Fallacy

Communication

Common Knowledge
Conversation
Decoupling vs Contextualizing
Disagreement
Distillation & Pedagogy
Double-Crux
Good Explanations (Advice)
Ideological Turing Tests
Inferential Distance
Information Cascades
Memetic Immune System
Philosophy of Language
Steelmanning

Techniques

Double-Crux
Fermi Estimation
Focusing
Goal Factoring
Hamming Questions
Internal Double Crux
Murphyjitsu
Noticing
Techniques
Trigger Action Planning/Patterns

Models of the Mind

Consciousness
Dual Process Theory (System 1 & 2)
General Intelligence
Perceptual Control Theory
Predictive Processing
Subagents
Zombies
 

Other

Center for Applied Rationality
Curiosity
Rationality A-Z (discussion and meta)
Rationality Quotes
Updated Beliefs (examples of)

ARTIFICIAL INTELLIGENCE

Basic Alignment Theory

AIXI
Coherent Extrapolated Volition
Complexity of Value
Corrigibility
Decision Theory
Embedded Agency
Fixed Point Theorems
Goodhart's Law
Infra-Bayesianism
Inner Alignment
Instrumental Convergence
Logical Induction
Logical Uncertainty
Mesa-Optimization
Myopia
Newcomb's Problem
Optimization
Orthogonality Thesis
Outer Alignment
Paperclip Maximizer
Solomonoff Induction
Utility Functions

Engineering Alignment

AI Boxing (Containment)
Debate
Factored Cognition
Humans Consulting HCH
Impact Measures
Inverse Reinforcement Learning
Iterated Amplification
Mild Optimization
Tool AI
Transparency / Interpretability
Value Learning
 

Organizations

CHAI (UC Berkeley)
FHI (Oxford)
MIRI
OpenAI
Ought

Strategy

AI Governance
AI Risk
AI Services (CAIS)
AI Takeoff
AI Timelines

Other

Alpha-
GPT
Research Agendas 

 

WORLD MODELING

Mathematical Sciences

Abstraction
Anthropics
Category Theory
Causality
Computer Science
Decision Theory
Game Theory
Information Theory
Logic & Mathematics
Probability & Statistics

Specifics
Prisoner's Dilemma
Sleeping Beauty Paradox
 

General Science & Engineering

Machine Learning
Nanotechnology
Physics
Programming
Space Exploration & Colonization

Specifics
Simulation Hypothesis
The Great Filter

Meta / Misc

Academic Papers
Book Reviews
Counterfactuals
Distillation & Pedagogy
Fact Posts
Research Agendas
Scholarship & Learning

Social & Economic

Economics
Financial Investing
History
Politics
Progress Studies
Social and Cultural Dynamics

Specifics
Conflict vs Mistake Theory
Cost Disease
Efficient Market Hypothesis
Industrial Revolution
Moral Mazes
Signaling
Social Reality
Social Status

Biological & Psychological

Aging
Biology
Consciousness
Evolution
Evolutionary Psychology
Medicine
Neuroscience
Qualia

Specifics
Coronavirus
General Intelligence
IQ / g-factor
Neocortex

The Practice of Modeling

Epistemic Review
Expertise
Falsifiability
Gears-Level Models
Fermi Estimation
Forecasting & Prediction
Forecasts (Lists of)
Inside/Outside View
Intellectual Progress (Individual-Level)
Intellectual Progress (Society-Level)
Jargon (meta)
Practice and Philosophy of Science
Prediction Markets
Reductionism
Replicability
 

WORLD OPTIMIZATION

Moral Theory

Altruism
Consequentialism
Deontology
Ethics & Morality
Metaethics
Moral Uncertainty
Trolley Problem
 

Causes / Interventions

Aging
Animal Welfare
Climate Change
Existential Risk
Futurism
Intellectual Progress
Life Extension
Mind Uploading
S-risks
Transhumanism
Voting Theory

Working with Humans

Coalitional Instincts
Common Knowledge
Coordination / Cooperation
Game Theory
Group Rationality
Institution Design
Moloch
Organizational Design and Culture
Signaling
Simulacrum Levels
Social Status

Applied Topics

Acausal Trade
Blackmail
Censorship
Chesterton's Fence
Death
Deception
Honesty
Hypocrisy
Information Hazards
Meta-Honesty
Pascal's Mugging
Privacy
War

Value & Virtue

Ambition
Aesthetics
Art
Complexity of Value
Courage
Fun Theory
Principles
Suffering
Superstimuli
Wireheading

Meta

80,000 Hours
Cause Prioritization
Center for Long-term Risk
Effective Altruism
GiveWell
Heroic Responsibility
 

PRACTICAL

Domains of Well-being

Careers
Emotions
Exercise (Physical)
Financial Investing
Gratitude
Happiness
Human Bodies
Nutrition
Parenting
Slack
Sleep
Well-being

Skills & Techniques

Cryonics
Emotions
Goal Factoring
Habits
Hamming Questions
Intellectual Progress (Individual-Level)
Life Improvements
Meditation
More Dakka
Note-Taking
Planning & Decision-Making
Sabbath
Self Experimentation
Skill Building
Software Tools
Spaced Repetition
Virtues (Instrumental)

Productivity

Akrasia
Attention
Motivations
Prioritization
Procrastination
Productivity
Willpower

Interpersonal

Circling
Communication Cultures
Conversation (topic)
Relationships

 

COMMUNITY

All

Bounties (active)
Grants & Fundraising
Growth Stories
Online Socialization
Petrov Day
Public Discourse
Reading Group
Research Agendas
Ritual
Solstice Celebration
 

LessWrong

Events (Community)
GreaterWrong Meta
Intellectual Progress via LessWrong
LessWrong Events
LW Moderation
Meetups (topic)
Moderation (topic)
Site Meta
Tagging
The SF Bay Area

 

OTHER

Content-Type

Art
Checklists
Dialogue (form)
Eldritch Analogies
Exercises / Problem-Sets
Fiction
Humor
Open Problems
Paradoxes
Poetry
Postmortems & Retrospectives
Summaries

Format

Book Reviews
Interviews
List of Links
Newsletters
Open Thread
Q&A (format)
Surveys
Transcripts

Cross-Category

Cooking
Education
Narratives (stories)
Religion
Writing

 

Miscellaneous

Fiction (topic)
Gaming (videogames/tabletop)
HPMOR (discussion & meta)

Tag Details

Rationality is the art of thinking in ways that result in accurate beliefs and good decisions. It is the primary topic of LessWrong.

Rationality is not only about avoiding the vices of self-deception and obfuscation; it is also about the virtue of curiosity, about seeing the world more clearly than before, and about achieving things that were previously out of your reach. The study of rationality on LessWrong includes both a theoretical understanding of ideal cognitive algorithms and the building of a practice that uses these idealized algorithms to inform the heuristics, habits, and techniques needed to reason and make decisions successfully in the real world.
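
One idealized algorithm in that toolkit is Bayes' theorem (listed above under Theory / Concepts), which prescribes how a reasoner should update a belief H on new evidence E:

    P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}

As a quick worked illustration (the numbers are invented for the example): with a prior P(H) = 0.01, a true-positive rate P(E|H) = 0.9, and a false-positive rate P(E|¬H) = 0.09, the posterior is 0.009 / (0.009 + 0.0891) ≈ 0.092. Even fairly strong evidence leaves the hypothesis unlikely when the prior is low.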

Artificial Intelligence is the study of creating intelligence in algorithms. On LessWrong, the primary focus of AI discussion is to ensure that as humanity builds increasingly powerful AI systems, the outcome will be good. The central concern is that a powerful enough AI, if not designed and implemented with sufficient understanding, would optimize something unintended by its creators and pose an existential threat to the future of humanity. This is known as the AI alignment problem.

World Modeling is getting curious about how the world works. It’s diving into Wikipedia, it’s running a survey to get data from your friends, it’s dropping balls from different heights and measuring how long they take to fall. Empiricism, scholarship, googling, introspection, data-gathering, science. Applying your epistemology and curiosity, finding out how the damn thing works, and writing it down for the rest of us.

World Optimization is the full use of our agency. It is extending the reach of human civilization. It is building cities and democracies and economic systems and computers and flight and science and space rockets and the internet. World optimization is about adding to that list. 

But it’s not just about growth; it’s also about preservation. We are still in the dawn of civilization, with most of its story lying in the billions of years ahead. We mustn’t let this light go out.

Practical posts give direct, actionable advice on how to achieve goals and generally succeed. The art of rationality would be useless if it did not connect to the real world; we must take our ideas and abstractions and collide them with reality. Many places on the internet will give you advice; here, we value survey data, literature reviews, self-blinded trials, quantitative estimates, and theoretical models that aim to explain the phenomena.

The LessWrong Community is the people who write on LessWrong and contribute to its mission of refining the art of human rationality. This tag includes community events, analysis of the community’s health, norms, and direction, and space to understand communities in general.

LessWrong also has many sibling communities, such as the Berkeley Rationality Community, SlateStarCodex, Rational Fiction, Effective Altruism, AI Alignment, and more, whose members participate here. To see upcoming LessWrong events, go to the community section.

Site Meta is the category for discussion about the LessWrong website itself. It covers team announcements (new features, events, moderation activity and policy, downtime, requests for feedback), site documentation, and the team’s writings about site philosophy and strategic thinking.

Open Threads are informal discussion areas where users are welcome to post comments that don't feel big enough to warrant a top-level post and don't quite fit in any other post.

Fiction isn't literal truth, but when done well it captures truths and intuitions that are difficult to explain directly. (It’s also damn fun to read.)

Decision theory is the study of principles and algorithms for making correct decisions—that is, decisions that allow an agent to achieve better outcomes with respect to its goals. Every action at least implicitly represents a decision under uncertainty: in a state of partial knowledge, something has to be done, even if that something turns out to be nothing (call it "the null action"). Even if you don't know how you make decisions, decisions do get made, and so there has to be some underlying mechanism. What is it? And how can it be done better? Decision theory has the answers.
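
As a minimal sketch of that underlying mechanism, here is the simplest candidate rule, expected-utility maximization, in Python (the scenario, action names, and payoff numbers below are invented for illustration, not drawn from any particular post):

    # Expected-utility rule: pick the action with the highest
    # probability-weighted average utility over its possible outcomes.
    def expected_utility(outcomes):
        """outcomes: list of (probability, utility) pairs for one action."""
        return sum(p * u for p, u in outcomes)

    # Hypothetical decision under uncertainty: a 30% chance of rain.
    actions = {
        "take umbrella":  [(0.3, 5.0), (0.7, -1.0)],   # stay dry / carry it for nothing
        "leave umbrella": [(0.3, -10.0), (0.7, 0.0)],  # get soaked / travel light
    }

    best = max(actions, key=lambda a: expected_utility(actions[a]))
    print(best)  # "take umbrella": EU 0.8 beats EU -3.0

Much of the disagreement between decision theories is over what "the probability of an outcome, given an action" should even mean in edge cases such as Newcomb's Problem (listed above); the simple rule here takes that quantity as given.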

Book Reviews on LessWrong are different from normal book reviews; they summarize and respond to a book's core ideas first, and judge whether you should read it second. A good book review sometimes distills the book's ideas so well that you no longer need to read the book.

Logic and Mathematics are deductive systems, where the conclusion of a successful argument follows necessarily from its premises, given the axioms of the system you’re using: number theory, geometry, predicate logic, etc.
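
As a minimal illustration of deduction (a generic textbook example, not tied to any particular tag):

    Premise 1:   P → Q    (if P, then Q)
    Premise 2:   P
    Conclusion:  Q        (by modus ponens)

Whatever P and Q stand for, anyone who accepts both premises must accept the conclusion; that necessity is what makes a system deductive rather than empirical.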

AI Risk is the analysis of the risks associated with building powerful AI systems.

Heuristics and Biases are the ways human reasoning differs from that of a theoretically ideal agent, due to reasoning shortcuts that don't always work (heuristics) and the systematic errors those shortcuts produce (biases). For example, the availability heuristic judges how common something is by how easily examples come to mind, which leads to overestimating vivid but rare risks.

All Tags (573)

AI (1114)
AIXI (25)
Aging (31)
Alief (16)
Art (41)
Death (40)
GAN (1)
GPT (69)
Humor (81)
Slack (27)
Sleep (19)
War (37)