Scott Alexander asked things related to this, but it still seems worth being more explicit about what this perfect 1.69-loss model would be like in practice if we got there.
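For readers without the context: if I have the source of the figure right, 1.69 is the fitted irreducible-loss term in the Chinchilla scaling-law paper (Hoffmann et al. 2022), i.e. the loss a language model would approach as parameter count N and training tokens D both go to infinity. A sketch of the fit, with the exponents hedged as approximate:

```latex
% Chinchilla parametric fit (Hoffmann et al. 2022). E is the
% irreducible-loss term; a "perfect" model approaches L = E
% as N (parameters) and D (data) grow without bound.
\[
  L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
  \qquad E \approx 1.69
\]
```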
Things I instinctively observed, slash that my model believes I got while reading, that seem relevant; I am not attempting to justify them at this time:
I am deeply confused as to how someone who takes decision theory seriously can accept Guaranteed Payoffs as correct. I'm even more confused as to how it can seem so obvious that anyone violating it has a fatal problem.
Under certainty, this assumes CDT is correct, when CDT seems to have many problems that have nothing to do with uncertainty. We can use Vaniver's examples above, use a reliable insurance agent to remove any uncertainty, or take any number of classic problems that involve no uncertainty (or strip the uncertainty out), and see that such an agent loses - e.g. Parfit's Hitchhiker in the case where the driver's prediction is 100% accurate (a stylized payoff sketch follows below).
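Here is a minimal sketch of why the CDT agent loses in that version of Parfit's Hitchhiker. The payoff numbers and function names are illustrative assumptions of mine, not from the thread; the only structural assumption is the standard one that the driver predicts perfectly and only rescues predicted payers.

```python
# Parfit's Hitchhiker with a perfect predictor: stylized payoffs.
# Dying in the desert is assumed to be -1_000_000; the fare is -1_000.

DEATH = -1_000_000
FARE = -1_000

def cdt_agent(in_town: bool) -> bool:
    """Once safely in town, paying has no causal benefit, so CDT refuses."""
    return False

def fdt_agent(in_town: bool) -> bool:
    """An agent whose policy is 'pay if rescued' pays once in town."""
    return True

def run(agent) -> int:
    """The driver perfectly predicts the in-town choice and only rescues payers."""
    would_pay = agent(in_town=True)  # perfect prediction, no uncertainty anywhere
    if not would_pay:
        return DEATH                 # left in the desert
    return FARE                      # rescued, then pays as predicted

print("CDT agent payoff:", run(cdt_agent))  # -1000000: loses, with certainty
print("FDT agent payoff:", run(fdt_agent))  # -1000: survives and pays
```

The point of the sketch: there is no uncertainty in this setup at all, and the agent that reasons purely causally at the moment of choice still ends up with the worse outcome.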
This is great. I notice I very much want a version that is aimed at someone with essentially no technical knowledge of AI and no prior experience with LW - and this seems much better at that than par, but still not where I'd want it to be. Whether or not I manage to take a shot, I'm wondering if anyone else is willing to take a crack at that?