I think the issue presented in the post is that the Solomonoff hypothesis cannot be sampled from, even though its probability density function is computable. If we try to estimate the expected reward of our action by sampling, we run into the curse of dimensionality: a single point contributes most of the expected reward, so any feasible number of samples will almost surely miss it. A Solomonoff inductor would still correctly find the probability density function under which h(s_2)=s_1 with high probability.
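A toy numerical sketch of this failure mode (my own illustration, not from the post): take a large discrete state space where the reward is zero everywhere except at a single "needle" state. A Monte Carlo estimate of the expected reward almost surely misses the needle, while a method that models the reward distribution directly recovers the true value.

```python
import random

N = 10**6          # size of the state space (assumed for illustration)
needle = 123456    # the single state carrying almost all expected reward
BIG = 10**9

def reward(s):
    # Reward is concentrated entirely at one point.
    return float(BIG) if s == needle else 0.0

# True expected reward under a uniform distribution over states:
true_ev = BIG / N  # = 1000.0

# Monte Carlo estimate from a modest number of uniform samples.
# Each sample that hits the needle contributes BIG/1000 = 1e6 to the
# estimate, so the result is either 0 (needle missed) or off by a
# factor of >= 1000 -- never close to the true value of 1000.
random.seed(0)
samples = [random.randrange(N) for _ in range(1000)]
mc_estimate = sum(reward(s) for s in samples) / len(samples)

# Modeling the reward distribution directly (probability 1/N of BIG,
# probability 1 - 1/N of 0) recovers the needle's contribution exactly:
direct_estimate = (1 - 1 / N) * 0.0 + (1 / N) * BIG

print(true_ev, mc_estimate, direct_estimate)
```

This is only an analogy: the Solomonoff setting involves a model over programs rather than a uniform distribution, but the sampling pathology is the same.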

However, I think that if we ask the Solomonoff predictor to predict the reward directly, it will correctly arrive at a model of the reward. So the agent presented in the post can be fixed this way.