The purpose of this post is to provide a short, fully mathematically specified conjecture which can be worked on with very little background, but which has an important consequence in logical uncertainty. Not many man-hours have been put into this question yet, so it is plausible that a MIRIx team could solve it.

Let $E$ be an environment, which is a function from $\mathbb{N}$ to $\{0,1\}$.

Let $M$ be an algorithm which on input $n$ is given oracle access to $E(m)$ for all $m < \log_2(n)$, and which outputs a probability $M(n)$.
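To make the setup concrete, here is a minimal sketch in Python. The logarithmic delay bound and all names below (`visible_prefix_length`, `constant_predictor`, `run`, the example environment) are my own illustrative choices, not part of the formal statement.

```python
import math

def visible_prefix_length(n):
    """How many bits of E the algorithm may inspect on input n
    (assuming a logarithmic delay in the observations)."""
    return max(0, math.floor(math.log2(n))) if n >= 1 else 0

def constant_predictor(p):
    """An 'advisor' that ignores the oracle and always outputs p."""
    def predict(n, oracle):
        return p
    return predict

def E(n):
    """Example environment: bits with density 2/3 of ones."""
    return 0 if n % 3 == 0 else 1

def run(predictor, horizon):
    """Collect the predictor's outputs M(1), ..., M(horizon),
    giving it only the delayed prefix of E at each step."""
    preds = []
    for n in range(1, horizon + 1):
        k = visible_prefix_length(n)
        oracle = [E(m) for m in range(k)]  # delayed observations
        preds.append(predictor(n, oracle))
    return preds
```

The point of the interface is that by the time the predictor must output $M(n)$, it has seen only logarithmically many bits of the environment.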

**Definition:** If
$$\lim_{n\to\infty}\ \sum_{k=1}^{n}\left[\big(M(k)-E(k)\big)^2-\left(\tfrac{2}{3}-E(k)\right)^2\right]=\infty,$$
then $M$ is bad. Similarly, if
$$\lim_{n\to\infty}\ \sum_{k=1}^{n}\left[\big(M(k)-E(k)\big)^2-\left(\tfrac{1}{3}-E(k)\right)^2\right]=\infty,$$
then $M$ is bad. Otherwise, $M$ is good. (Note that if the limit does not exist, this does not make $M$ bad.)
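The badness condition can be checked numerically. The sketch below assumes squared-error scoring as the way to compare $M$ against a constant predictor; the helper name `regret_vs_constant` and the example environment are mine.

```python
def regret_vs_constant(bits, preds, p):
    """Running total of squared error of preds minus that of the
    constant predictor p; M is bad (against p) roughly when this
    total tends to infinity."""
    total = 0.0
    history = []
    for b, q in zip(bits, preds):
        total += (q - b) ** 2 - (p - b) ** 2
        history.append(total)
    return history

# On a density-2/3 environment, constantly predicting 1/3 loses
# steadily to the constant 2/3 advisor: the gap grows by 1/3
# per block of three steps.
bits = [0 if n % 3 == 0 else 1 for n in range(1, 301)]
history = regret_vs_constant(bits, [1/3] * len(bits), 2/3)
```

Here `history` drifts upward without bound, matching the intuition that 1/3 is a bad constant prediction for a density-2/3 environment.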

**Conjecture: For every algorithm $M$, there exists an environment $E$ such that $M$ is bad.**

Intuitively, $M$ is slowly seeing bits from $E$, and much more quickly making predictions about $E$. If $M$ is bad in the first sense, that means it has made much worse predictions than if it had just output 2/3. If $M$ is bad in the second sense, that means it has made much worse predictions than if it had just output 1/3. It seems easy enough to avoid the first failure mode: we just switch to outputting 2/3 once we cross some threshold past which 2/3 has been doing better. Adding the second failure mode makes this strategy stop working, because an evil environment could cause us to lock in to 2/3, and then switch to bits for which 1/3 is the better prediction forever.
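The lock-in failure can be illustrated with a small computation. Under the same squared-error scoring assumption as above (my reading, not stated formally here), once the environment switches to bits of density 1/3, a predictor locked in to 2/3 loses to the constant 1/3 advisor by a fixed amount per block of three steps, so the score gap grows without bound:

```python
def gap_after_switch(num_steps):
    """Cumulative squared error of the constant 2/3 predictor minus
    that of the constant 1/3 predictor, on a density-1/3 bit stream
    (the regime an evil environment enters after we lock in)."""
    gap = 0.0
    for n in range(num_steps):
        b = 1 if n % 3 == 0 else 0  # one bit in three is a 1
        gap += (2/3 - b) ** 2 - (1/3 - b) ** 2
    return gap
```

Each block of three steps adds $1/3$ to the gap, so the locked-in predictor is bad in the second sense on this stream.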

We would like an $M$ which can take a countable set of advisors and do at least as well as all of them, even when there is a delay in the observations. However, I believe that no $M$ can do this even for two constant advisors: the above conjecture implies that $M$ cannot balance the predictions of the constant advisors 2/3 and 1/3, and therefore cannot balance countably many advisors either. Note that a disproof would also be very interesting.