Asymptotic Logical Uncertainty: Self Reference

by Scott Garrabrant, 1 min read, 24th Aug 2015


Personal Blog

Part of the Asymptotic Logical Uncertainty series. Here, I will start discussing the connection between asymptotic logical uncertainty and self reference.

For this post, I will switch back to my old model of asymptotic logical uncertainty. Let $S$ be a probabilistic Turing machine which outputs an infinite string of bits. We want to construct a Turing machine $M$ which, when given the code of $S$ as input, must run quickly (e.g. must output its $n$th bit by time $n^2$) and tries to match the output of $S$.
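As a toy sketch of this setup (all names here are hypothetical stand-ins: the sequence machine is played by a fair coin, and the predictor by a naive frequency matcher):

```python
import random

def S_bit(rng):
    # Toy stand-in for the probabilistic machine S: a fair coin.
    # The real S may be any probabilistic Turing machine.
    return 1 if rng.random() < 0.5 else 0

def M_bit(history, rng):
    # Toy stand-in for the predictor M: guess the next bit with
    # probability equal to the empirical frequency of 1s seen so far.
    # Its work here is O(n), comfortably within the n^2 time bound.
    p = sum(history) / len(history) if history else 0.5
    return 1 if rng.random() < p else 0
```

A real predictor receives the code of the sequence machine and must meet the time bound for every such machine; the frequency matcher above is only meant to make the interface concrete.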

We want $M$ to satisfy properties like the Benford test: if $S$ outputs a simple subsequence that looks like it is (or actually is) coming from a $p$-coin, we would like the probability that $M$ outputs a 1 for bits in that subsequence to converge to $p$.
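A quick simulation of what passing this test looks like, under the assumption that the predictor simply matches the empirical frequency of 1s (a hypothetical predictor for illustration, not the Benford learner itself):

```python
import random

def benford_check(p=1/3, n_bits=20000, seed=0):
    # Simulate a frequency-matching predictor against an i.i.d. p-coin
    # sequence, and return the frequency with which the predictor
    # outputs a 1. The Benford-test property asks that this approach p.
    rng = random.Random(seed)
    ones_seen = 0
    predictor_ones = 0
    for n in range(1, n_bits + 1):
        est = ones_seen / (n - 1) if n > 1 else 0.5  # empirical estimate of p
        predictor_ones += 1 if rng.random() < est else 0  # predictor's guess
        ones_seen += 1 if rng.random() < p else 0  # next bit of the sequence
    return predictor_ones / n_bits
```

For a long enough run, the returned frequency lands close to $p$, which is the convergence the test demands.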

We have seen one failed approach to this inspired by Solomonoff induction, and one partially successful approach, the Benford learner. (The Benford learner is in a different context, but can be easily ported over.) I will present some other programs in this context soon, including some possible fixes to the Solomonoff approach. However, right now I want to present another desired property for $M$.

Let $T$ be the Turing machine which quines to run $M$ on its own code $T$, and negates the output. Then the probability that the $n$th bit output by $M$ on input $T$ is a 1 should converge to $1/2$.

The reason this is a desired property is that the probability of getting each bit correct is maximized when that probability is $1/2$.
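To spell out the optimization (a sketch, assuming the run of $M$ that $T$ negates is independent of the run we are scoring): if $M$ outputs a 1 with probability $p$ on a given bit, then it matches the negated bit exactly when the two runs disagree, so

$$\Pr[\text{correct}] = p(1-p) + (1-p)p = 2p(1-p),$$

which is maximized at $p = 1/2$, where $M$ is correct with probability $1/2$. No predictor can do better against the negation of its own output.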

The Benford learner very likely gets this problem wrong. That is because it is not trying to maximize its accuracy, but instead has a pattern recognition algorithm hard coded in. An algorithm that has this desired property probably looks a lot more like Solomonoff induction. The failed Solomonoff induction inspired approach might actually get this specific problem correct, in spite of failing the Benford test. I have not checked.

There is another, more deterministic version of the problem. Let $T'$ be the Turing machine which quines to compute the probability that $M$ on input $T'$ outputs a 1, and outputs a 1 if and only if that probability is less than $1/2$. Then the probability that the $n$th bit output by $M$ on input $T'$ is a 1 should converge to $1/2$.
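A shortcut sketch of why no report other than $1/2$ is stable here (the helper names are hypothetical; the real $T'$ obtains the predictor's probability about itself via quining, which we shortcut by passing it in directly):

```python
def t_prime_bit(q):
    # The deterministic adversary's bit: 1 iff the predictor's stated
    # probability q of a 1 is below 1/2. (The real T' computes q about
    # its own code by quining.)
    return 1 if q < 0.5 else 0

def calibrated_response(q):
    # The probability a calibrated predictor "should" report, given
    # that the adversary reacts to the report q: the adversary's bit
    # is deterministic, so this is always 0.0 or 1.0.
    return float(t_prime_bit(q))
```

Iterating `calibrated_response` from any starting report oscillates between 1.0 and 0.0, so no deterministic report is self-consistent; converging to $1/2$ is the natural limiting behavior.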

The purpose of the deterministic version is to make the property applicable when $S$ represents a list of logical sentences.

There are many ways we can generalize both of these desired properties. For example, we can add noise, change the probabilities, or put the reflexive sentences in subsequences where the rest of the sequence has other properties.
