AI ALIGNMENT FORUM

Anthropics

Edited by habryka, Vladimir_Nesov, Yoav Ravid last updated 12th Feb 2021

Anthropics is the study of how the very fact that we succeed in making observations of a given kind gives us evidence about the world we live in, independently of the content of those observations. For example, living beings can make observations at all only in a universe whose physical laws support life.

Related Pages: Sleeping Beauty Paradox, Filtered Evidence, Great Filter

Posts tagged Anthropics
18 · 0th Person and 1st Person Logic — Adele Lopez, 1y, 7 comments
16 · UDT can learn anthropic probabilities — cousin_it, 7y, 0 comments
21 · The Simulation Hypothesis Undercuts the SIA/Great Filter Doomsday Argument — Mark Xu, CarlShulman, 4y, 6 comments
16 · [Q] Are You More Real If You're Really Forgetful? — Thane Ruthenis, Charlie Steiner, 9mo, 4 comments
14 · Anthropics is pretty normal — Stuart_Armstrong, 7y, 0 comments
4 · SUDT: A toy decision theory for updateless anthropics — Benya, 12y, 0 comments
3 · Anthropic decision theory I: Sleeping beauty and selflessness — Stuart_Armstrong, 14y, 0 comments
9 · Anthropic paradoxes transposed into Anthropic Decision Theory — Stuart_Armstrong, 7y, 0 comments
7 · [Q] Is there any rigorous work on using anthropic uncertainty to prevent situational awareness / deception? — David Scott Krueger (formerly: capybaralet), 1y, 5 comments
11 · Reflective AIXI and Anthropics — Diffractor, 7y, 14 comments
5 · Anthropic probabilities and cost functions — Stuart_Armstrong, 7y, 0 comments
9 · Anthropics and Fermi: grabby, visible, zoo-keeping, and early aliens — Stuart_Armstrong, 4y, 0 comments
8 · Practical anthropics summary — Stuart_Armstrong, 4y, 0 comments
7 · Anthropic Effects in Estimating Evolution Difficulty — Mark Xu, 4y, 1 comment
4 · Anthropic uncertainty in the Evidential Blackmail problem — Johannes Treutlein, 8y, 0 comments