David Simmons

-"For example, I could imagine laws requiring anyone scraping the internet to ensure that they are not collecting data from people who have denied consent to have their data scraped."

In practice this is already the case: anyone who doesn't want their data scraped can put up a robots.txt file saying so, and I imagine big companies like OpenAI respect robots.txt. I guess there could be advantages to making it a legal rule, but I don't think it matters too much.
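As a sketch of how this opt-out mechanism works (the domain and paths here are made up; GPTBot is the user-agent string OpenAI publishes for its crawler), Python's standard-library robots.txt parser can check such a policy:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content; a real site would serve this at
# https://example.com/robots.txt
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The opted-out crawler is denied; everyone else is allowed.
print(parser.can_fetch("GPTBot", "https://example.com/page"))       # False
print(parser.can_fetch("ResearchBot", "https://example.com/page"))  # True
```

A scraper that consults `can_fetch` before each request is already implementing the consent check the quoted proposal describes, just without legal force.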

Sure, in that case there is a 0% counterfactual chance of heads; your words aren't going to flip the coin.

The question "how would the coin have landed if I had guessed tails?" seems to me like a reasonably well-defined physical question about how accurately you can flip a coin without having the result be affected by random noise such as someone saying "heads" or "tails" (as well as quantum fluctuations). It's not clear to me what the answer to this question is, though I would guess that the coin's counterfactual probability of landing heads is somewhere strictly between 0% and 50%.

I'm confused: isn't the "objective probability" of heads 1/2, because that is the probability of heads in the definition of the setup? The halfer versus thirder debate is about subjective probability, not objective probability, as far as I can tell. I'm not sure why you are mentioning objective probability at all; it does not appear to be relevant. (Though it is also possible that I do not know what you mean by "objective probability".)

This argument seems to depend on the fact that Sleeping Beauty is not actually copied, but merely dissociated from her past self, so that from her perspective it seems as though she has been copied. If you deal with actual copies, then it is not clear what the sensible way is for them to record their experiences: all pass around a single science journal, all keep their own journals, all keep their own journals and then recombine them somehow, or something else. Though if this thought experiment gives you SIA intuitions about the Sleeping Beauty problem, then maybe those intuitions will still carry over to other scenarios.

I don't know what you mean by "should be allowed to put whatever prior I want". I mean, I guess nobody will stop you. But if your beliefs are well approximated by a particular prior, then pretending that they are approximated by a different prior is going to cause a mismatch between your beliefs and your beliefs about your beliefs.

[Nitpick: The Kelly criterion assumes not only that you will be confronted with a large number of similar bets, but also that you have some base level of risk aversion (a concave utility function) that repeated bets can smooth out into a logarithmic utility function. If you start with a linear utility function, then repeating the bets still gives you linear utility, and the optimal strategy is to make every bet all-or-nothing whenever you have an advantage. At least, this is true before taking into account the resource constraints of the system you are betting against.]
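A quick numerical sketch of that distinction, with illustrative numbers (a repeated even-odds bet won with probability p = 0.6): maximizing expected wealth (linear utility) pushes the stake toward all-in, while maximizing expected log wealth recovers the Kelly fraction 2p - 1:

```python
import math

p = 0.6  # chance of winning an even-odds bet (illustrative)

def expected_wealth(f):
    # Expected one-step wealth multiplier when staking fraction f:
    # win with prob p -> 1 + f, lose with prob 1 - p -> 1 - f.
    return p * (1 + f) + (1 - p) * (1 - f)

def expected_log_wealth(f):
    # Expected one-step log-wealth change, the Kelly objective.
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

fractions = [i / 100 for i in range(100)]  # stakes 0.00 .. 0.99
best_linear = max(fractions, key=expected_wealth)
best_log = max(fractions, key=expected_log_wealth)
print(best_linear)  # 0.99: linear utility wants the largest stake on the grid
print(best_log)     # 0.2: the Kelly fraction 2p - 1
```

Since `expected_wealth` is strictly increasing in f whenever p > 1/2, a linear-utility bettor goes as close to all-in as allowed; only the concave (logarithmic) objective has an interior optimum.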

OK, that makes sense.

MUH doesn't imply the existence of halting oracles. Indeed, the Computable Universe Hypothesis (CUH) is supposed to be a strengthening of the Mathematical Universe Hypothesis, and the CUH says outright that halting oracles do not exist.

I will have to think more about the issue of continuity vs. uniform continuity. I suppose my last remaining argument would be that Bishop and Bridges' classic book on constructive analysis uses uniform continuity on bounded sets rather than continuity, which suggests that it is probably the better notion for constructive analysis, at least. But maybe they did not analyze the issue carefully enough, or maybe the relevant issues here are for some reason different.

To fix the argument that every locally compact Polish space admits a proper metric, let be as before and let if and if . Next, let , where is a countable dense sequence. Then is continuous and everywhere finite. Moreover, if , then and thus is compact. It follows that the metric is proper.

Anyway I have fixed the typo in my previous post.

The infinite autoresponse example seems like it would be solved in practice by rational ignorance: beyond some sufficiently small number of autoresponses (say 5), people would not want to explicitly reason about the policy implications of the exact number of autoresponses they saw, so "5+ autoresponses" would be a single category for decision-making purposes. In that case the induction argument fails, and "both people go to the place specified in the message as long as they observe 5+ autoresponses" is a Nash equilibrium.

Of course, this assumes people haven't already accepted and internalized the logic of the induction argument, since then no further explicit reasoning would be necessary based on the observed number of autoresponses. But the induction argument presupposes that rational ignorance does not exist, so it is not valid when we add rational ignorance to our model.