All of Dmitry Vaintrob's Comments + Replies

Sorry, I misread this. I read your question as asking O to output some function T that is most likely to answer some set of questions you want to know the answers to (which would be self-referential, since those questions depend on the output of T). I think I understand your question now.

What kind of ability do you have to know the "true value" of your sequence B?

If the paperclip maximizer P is able to control the value of your Turing machine, and if you are a one-boxing AI (and this is known to P), then of course you can make deals/communicate with P. In ... (read more)

Donald Hobson (4y)
I don't think these Arthur-Merlin proofs are relevant. Here A has a lot more compute than P. A is simulating P and can see and modify P however A sees fit.

Looks like you're making a logical error. Creating a machine that solves the halting problem is prohibited by logic. For many applications, assuming a sufficiently powerful and logically consistent oracle is good enough, but precisely the kinds of games you are playing, where you ask a machine to predict its own output/the output of a system involving itself, are where you run into logical inconsistency. Indeed, imagine asking the oracle to simulate an equivalent version of itself and to output the opposite answer to what its simulation outputs. This may s... (read more)
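To make the diagonalization concrete, here is a minimal sketch in Python (purely illustrative; `halts` is a hypothetical oracle that, by this very argument, cannot actually exist) of the construction described above: a program that asks the oracle about itself and then does the opposite.

```python
def halts(program_source: str, program_input: str) -> bool:
    """Hypothetical halting oracle: claims to return True iff
    program_source halts on program_input. No consistent
    implementation is possible."""
    raise NotImplementedError("no total, correct halting oracle can exist")

def diagonal(program_source: str) -> None:
    # Ask the oracle what this program will do when run on its own source,
    # then do the opposite of whatever the oracle predicts.
    if halts(program_source, program_source):
        while True:   # oracle predicts "halts" -> loop forever
            pass
    else:
        return        # oracle predicts "loops" -> halt immediately

# Running diagonal on its own source forces a contradiction: whichever
# answer the oracle gives about diagonal(diagonal_source), the program
# does the opposite, so the oracle cannot be both total and correct.
```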

Donald Hobson (4y)
There is precisely one oracle, O. A and T and P are computable. And crucially, the oracle's answer does not depend on itself in any way. This question is not self-referential. P might try to predict A and O, but there is no guarantee it will be correct. P has a restricted amount of compute compared to A, but still enough to be able to reason about A in the abstract. We are asking how we should design A. If you have unlimited compute and want to predict something, you can use Solomonoff induction. But some of the hypotheses you might find are AIs that think they are a hypothesis in your induction, and are trying to escape.