Looks like you're making a logical error. A machine that solves the halting problem is prohibited by logic. For many applications, assuming a sufficiently powerful and logically consistent oracle is good enough, but precisely these kinds of games you are playing, where you ask a machine to predict its own output (or the output of a system involving itself), are where you get logical inconsistency. Indeed, imagine asking the oracle to simulate an equivalent version of itself and to output the opposite of whatever its simulation outputs. This may s...
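The diagonalization above can be sketched concretely. This is a minimal Python illustration (the names `paradox`, `halts`, and `d` are mine, not from the discussion): given any purported halting decider, we build a program that does the opposite of what the decider predicts about it, so the decider must be wrong somewhere.

```python
def paradox(halts):
    """Given a purported halting decider halts(prog, arg) -> bool,
    construct a program d that does the opposite of whatever
    halts predicts about d run on its own input."""
    def d(prog):
        if halts(d, prog):
            # Predicted to halt, so loop forever instead.
            while True:
                pass
        else:
            # Predicted to loop forever, so halt immediately.
            return None
    return d

# Any concrete decider is refuted on d(d). For example, a decider
# that always predicts "loops forever" is wrong, because d then halts:
d = paradox(lambda prog, arg: False)
print(d(d) is None)  # d(d) halts, contradicting the prediction
```

No implementation of `halts` escapes this: whichever answer it gives about `d(d)`, the construction makes that answer false, which is the sense in which such an oracle is "prohibited by logic."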

There is precisely one oracle, O. A, T, and P are computable. Crucially, the oracle's answer does not depend on itself in any way; this question is not self-referential. P might try to predict A and O, but there is no guarantee it will be correct.
P has a restricted amount of compute compared to A, but still enough to reason about A in the abstract.
We are asking how we should design A.
If you have unlimited compute and want to predict something, you can use Solomonoff induction. But some of the hypotheses you find may be AIs that think they are a hypothesis in your induction and are trying to escape.

Sorry, I misread this. I read your question as O outputting some function T that is most likely to answer some set of questions you want to know the answer to (which would be self-referential, since those questions depend on the output of T). I think I understand your question now.

What kind of ability do you have to know the "true value" of your sequence B?

If the paperclip maximizer P is able to control the value of your Turing machine, and if you are a one-boxing AI (and this is known to P), then of course you can make deals/communicate with P. In ...