AI ALIGNMENT FORUM
Solomonoff induction

Edited by Eliezer Yudkowsky et al., last updated 17th Dec 2015

Solomonoff induction is an ideal answer to questions like "What probably comes next in the sequence 1, 1, 2, 3, 5, 8?" or "Given the last three years of visual data from this webcam, what will this robot probably see next?" or "Will the sun rise tomorrow?" Solomonoff induction requires infinite computing power, and is defined by taking every computable algorithm that gives a probability distribution over future data given past data, weighting each algorithm by its algorithmic simplicity, and updating those weights by comparison to the actual data.
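The mixture-and-update idea above can be sketched concretely. In this heavily simplified illustration, a tiny hand-picked set of predictors stands in for "every computable algorithm", and an invented integer "complexity" stands in for program length; the hypothesis names and probabilities here are made up for the example, not part of the formal definition.

```python
# A minimal sketch of a simplicity-weighted Bayesian mixture.
# Each hypothesis maps a bit history to P(next bit = 1); its prior
# weight is 2^-k, where k is a stand-in for its program length.
HYPOTHESES = [
    # (name, complexity k, predictor)
    ("mostly-ones",  1, lambda h: 0.9),
    ("mostly-zeros", 1, lambda h: 0.1),
    ("fair-coin",    2, lambda h: 0.5),
    ("alternator",   3, lambda h: 0.9 if (not h or h[-1] == 0) else 0.1),
]

def run_mixture(bits):
    """Return posterior weights after Bayes-updating the mixture on `bits`."""
    weights = {name: 2.0 ** -k for name, k, _ in HYPOTHESES}
    predict = {name: f for name, _, f in HYPOTHESES}
    history = []
    for b in bits:
        # Bayes update: multiply each hypothesis's weight by the
        # probability it assigned to the bit that actually occurred.
        for name in weights:
            p_one = predict[name](history)
            weights[name] *= p_one if b == 1 else 1.0 - p_one
        history.append(b)
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

posterior = run_mixture([1, 0, 1, 0, 1, 0, 1, 0])
# The alternating hypothesis, despite its lower prior, comes to
# dominate the mixture once the data fit it well.
```

The "alternator" starts with the smallest prior weight, but after eight bits of alternating data its likelihood advantage overwhelms the simplicity penalty, mirroring how the full Solomonoff mixture concentrates on whichever computable predictors fit the data.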

E.g., somewhere in the ideal Solomonoff distribution is an exact copy of you, right now, staring at a string of 1s and 0s and trying to predict what comes next - though this copy of you starts out with a very low weight in the mixture owing to its complexity. Since a copy of you is present in this mixture of computable predictors, we can prove a theorem about how well Solomonoff induction does compared to an exact copy of you; namely, Solomonoff induction commits only a bounded amount of error relative to you, or any other computable way of making predictions. Solomonoff induction is thus a kind of perfect or rational ideal for probabilistically predicting sequences, although it cannot be implemented in reality due to requiring infinite computing power. Still, considering Solomonoff induction can give us important insights into how non-ideal reasoning should operate in the real world.
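The bounded-error guarantee can be checked numerically in a toy version of the mixture (again with a small, invented predictor class rather than all computable ones): since the mixture always assigns each predictor at least its prior weight 2^-k, the mixture's cumulative log-loss can exceed that predictor's cumulative log-loss by at most k bits, no matter what data arrives.

```python
import math
import random

PREDICTORS = [
    # (name, complexity k, P(next bit = 1 | history)); prior weight 2^-k.
    ("fair",   1, lambda h: 0.5),
    ("biased", 3, lambda h: 0.8),
]

random.seed(0)
bits = [1 if random.random() < 0.8 else 0 for _ in range(200)]

weights = {name: 2.0 ** -k for name, k, _ in PREDICTORS}
predict = {name: f for name, _, f in PREDICTORS}
mix_loss = 0.0                                 # mixture's cumulative log-loss
solo_loss = {name: 0.0 for name in weights}    # each predictor's own log-loss
history = []
for b in bits:
    # The mixture's prediction is the weight-averaged prediction.
    total = sum(weights.values())
    p_mix = sum(weights[n] * predict[n](history) for n in weights) / total
    mix_loss += -math.log2(p_mix if b == 1 else 1.0 - p_mix)
    for n in weights:
        p = predict[n](history)
        solo_loss[n] += -math.log2(p if b == 1 else 1.0 - p)
        weights[n] *= p if b == 1 else 1.0 - p
    history.append(b)

# Regret bound: against a predictor with prior weight 2^-k, the mixture
# loses at most k extra bits in total, for any data sequence.
for name, k, _ in PREDICTORS:
    assert mix_loss <= solo_loss[name] + k
```

The bound follows from dominance: the mixture's probability for any sequence is at least 2^-k times what the k-complexity predictor assigns, so the log-loss gap is at most k, a constant that does not grow with the length of the data.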

Additional reading:

  • https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference
  • http://lesswrong.com/lw/dhg/an_intuitive_explanation_of_solomonoff_induction/
  • http://wiki.lesswrong.com/wiki/Solomonoff_induction
Parents:
  • Inductive prior
  • Methodology of unbounded analysis
Children:
  • Solomonoff induction: Intro Dialogue (Math 2)
Posts tagged Solomonoff induction:
  • The Solomonoff Prior is Malign (Mark Xu)
  • A Semitechnical Introductory Dialogue on Solomonoff Induction (Eliezer Yudkowsky)
  • When does rationality-as-search have nontrivial implications? (nostalgebraist)
  • From SLT to AIT: NN generalisation out-of-distribution (Lucius Bushnaq)
  • Computational Model: Causal Diagrams with Symmetry (johnswentworth)
  • Proof idea: SLT to AIT (Lucius Bushnaq)
  • Clarifying Consequentialists in the Solomonoff Prior (Vlad Mikulik)
  • Reflective AIXI and Anthropics (Diffractor)
  • Sleeping Experts in the (reflective) Solomonoff Prior (Daniel C, Cole Wyeth)
  • Asymptotic Logical Uncertainty: Concrete Failure of the Solomonoff Approach (Scott Garrabrant)
  • Response to "What does the universal prior actually look like?" (michaelcohen)
  • Excerpt from Arbital Solomonoff induction dialogue (Richard_Ngo)
  • Prediction can be Outer Aligned at Optimum (Lukas Finnveden)
  • Summary of the Acausal Attack Issue for AIXI (Diffractor)