
World Optimization

Edited by Ruby, jimrandomh, et al. Last updated 6 Oct 2022

World Optimization is the full use of our agency. It is extending the reach of human civilization. It is building cities and democracies and economic systems and computers and flight and science and space rockets and the internet. World optimization is about adding to that list. 

But it’s not just about growth; it’s also about preservation. We are still in the dawn of civilization, with most of its history lying in the billions of years ahead. We mustn’t let this light go out.


World Optimization Sub-Topics

Moral Theory

Altruism
Consequentialism
Deontology
Ethics & Morality
Metaethics
Moral Uncertainty


Causes / Interventions

Aging
Animal Welfare
Existential Risk
Futurism
Mind Uploading
Life Extension
S-risks
Transhumanism
Voting Theory

Working with Humans

Coalitional Instincts
Common Knowledge
Coordination / Cooperation
Game Theory
Group Rationality
Institution Design
Moloch
Signaling
Simulacrum Levels
Social Status

Applied Topics

Blackmail
Censorship
Chesterton's Fence
Death
Deception
Honesty
Hypocrisy
Information Hazards
Meta-Honesty
Pascal's Mugging
War

Value & Virtue

Ambition
Art
Aesthetics
Complexity of Value
Courage
Fun Theory
Principles
Suffering
Superstimuli
Wireheading

Meta

Cause Prioritization
Center for Long-term Risk
Effective Altruism
Heroic Responsibility


Content that describes how the world is, insofar as it directly bears on the choices one makes to optimize the world, falls under this tag. Examples include discussions of the moral patienthood of different animals, the potential of human civilization, and the most effective interventions against a global health threat.

Some material is both immediately relevant to world optimization decisions and useful for informing broader world models. Such material may be tagged with both the World Modeling tag and this tag.

Posts tagged World Optimization
Karma · Title · Author · Age · Comments

106 · What failure looks like · paulfchristiano · 6y · 28
9 · 5. Moral Value for Sentient Animals? Alas, Not Yet · RogerDearnaley · 2y · 0
7 · 3. Uploading · RogerDearnaley · 2y · 0
6 · 6. The Mutable Values Problem in Value Learning and CEV · RogerDearnaley · 2y · 0
5 · 4. A Moral Case for Evolved-Sapience-Chauvinism · RogerDearnaley · 2y · 0
65 · Some AI research areas and their relevance to existential safety · Andrew_Critch · 5y · 34
40 · Possible takeaways from the coronavirus pandemic for slow AI takeoff · Vika · 5y · 27
25 · Don't Share Information Exfohazardous on Others' AI-Risk Models · Thane Ruthenis · 2y · 1
18 · Life and expanding steerable consequences · Alex Flint · 4y · 1
10 · Reading the ethicists 2: Hunting for AI alignment papers · Charlie Steiner · 3y · 1
4 · Do yourself a FAVAR: security mindset · lemonhope · 3y · 0
42 · Nearcast-based "deployment problem" analysis · HoldenKarnofsky · 3y · 0
64 · Discussion with Eliezer Yudkowsky on AGI interventions · Rob Bensinger, Eliezer Yudkowsky · 4y · 106
71 · Fun with +12 OOMs of Compute · Daniel Kokotajlo · 5y · 45
67 · Thoughts on sharing information about language model capabilities · paulfchristiano · 2y · 20
(Showing 15 of 81 tagged posts)