Chris_Leong

Chris_Leong's Comments

The Counterfactual Prisoner's Dilemma

We can assume that the coin is flipped out of your sight.

Vanessa Kosoy's Shortform

Yeah, I agree that the objective descriptions can leave out vital information, such as how the information you know was acquired, which seems important for determining the counterfactuals.

Vanessa Kosoy's Shortform

"The key point is, "applying the counterfactual belief that the predictor is always right" is not really well-defined" - What do you mean here?

I'm curious whether you're referring to the same issue as, or something similar to, the one I was referencing in Counterfactuals for Perfect Predictors. The TL;DR is that I was worried it would be inconsistent for an agent that never pays in Parfit's Hitchhiker to end up in town if the predictor is perfect, so that it wouldn't actually be well-defined what the predictor was predicting. The way I ended up resolving this was by imagining the agent as something that takes an input and asking what it would output if given that inconsistent input. But I'm not sure whether you were referencing this kind of concern or something else.
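To make that resolution concrete, here's a minimal sketch (my own illustration; the agent and predictor names are made up, not from the post): if the agent is just a function from observations to actions, the predictor's question stays well-defined even for inputs the agent can never actually receive.

```python
# A minimal sketch, assuming the agent can be modelled as a pure
# function from observations to actions (names are illustrative).

def never_pay_agent(observation: str) -> str:
    """An agent that refuses to pay no matter what it observes."""
    return "refuse"

def perfect_predictor(agent) -> bool:
    """Predicts by evaluating the agent on the hypothetical input
    "in town". A perfect predictor ensures this agent never actually
    receives that input, but the function's output on it is still
    perfectly well-defined."""
    return agent("in town") == "pay"

# The driver only gives a lift if the prediction is "pay", so the
# never-pay agent is left in the desert; yet the counterfactual
# question the predictor asked was coherent all along.
print(perfect_predictor(never_pay_agent))  # False
```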

Open-Box Newcomb's Problem and the limitations of the Erasure framing

Some people want to act as though a simulation of you is automatically you, and my argument is that it is bad practice to assume this. I'm much more open to the idea that some simulations might be you in some sense than to the claim that all of them are. This seems compatible with a fuzzy cut-off.

Open-Box Newcomb's Problem and the limitations of the Erasure framing

"I actually don't think that there is a general procedure to tell what is you, and what is a simulation of you" - Let's suppose I promise to sell you an autographed Michael Jackson CD. But then it turns out that the CD wasn't signed by Michael, but by me. Now I'm really good at forgeries, so good in fact that my signature matches his atom to atom. Haven't I still lied?

Open-Box Newcomb's Problem and the limitations of the Erasure framing

Not at all. Your comments helped me realise that I needed to make some edits to my post.

Open-Box Newcomb's Problem and the limitations of the Erasure framing

In other words, the claim isn't that your program is incorrect; it's that it requires more justification than you might think in order to persuasively show that it correctly represents Newcomb's problem. Maybe you think understanding this isn't particularly important, but I think knowing exactly what is going on is key to understanding how to construct logical counterfactuals in general.

Open-Box Newcomb's Problem and the limitations of the Erasure framing

I actually don't know Haskell, but I'll take a stab at decoding it tonight or tomorrow. Open-Box Newcomb's problem is normally stated as "you see a full box", not "you or a simulation of you sees a full box". I agree with this reinterpretation, but I disagree with glossing over it.

My point was that if we take the problem description super-literally, with the real you (not a simulation of you) seeing the box, then you must one-box. Of course, since this yields a trivial decision problem, we'll want to reinterpret it in some way, and that's what I'm providing a justification for.
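As a rough sketch of why the super-literal reading trivialises the problem (this enumeration framing is my own, not anything formal from the discussion): if the predictor is perfect and the real you sees a full box, only one world remains consistent.

```python
# A toy consistency check, assuming a perfect predictor and that the
# box the real you sees is full (the framing here is illustrative).

actions = ["one-box", "two-box"]
predictions = ["one-box", "two-box"]

consistent_worlds = [
    (prediction, action)
    for prediction in predictions
    for action in actions
    if prediction == action        # the predictor is perfect
    and prediction == "one-box"    # the box is full, so one-boxing was predicted
]

print(consistent_worlds)  # [('one-box', 'one-box')]: you must one-box
```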

Open-Box Newcomb's Problem and the limitations of the Erasure framing

Okay, I have to admit that's kind of cool; on the other hand, it also completely misses the point.

I think we need to backtrack. A maths proof can be valid but its conclusion false if at least one premise is false, right? ("All fish can fly; a salmon is a fish; therefore salmon can fly" is valid but unsound.) So unless a problem has already been formally defined, it's not enough to just throw down a maths proof; you also have to justify that you've formalised it correctly.

Open-Box Newcomb's Problem and the limitations of the Erasure framing

I've already addressed this in the article above, but my understanding is as follows: this is one of those circumstances where it is important to differentiate between you being in a situation and a simulation of you being in a situation. I really should write a post about this, but in order for a simulation to be accurate, it simply has to make the same decisions in decision-theory problems. Nothing else about it has to be the same; in fact, it could be an anti-rational agent with the opposite utility function.
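Here's a toy sketch of that last point (the payoffs and agent names are illustrative assumptions, not anything from the post): two agents with opposite utility functions and opposite decision rules still make identical choices, so from the outside one counts as a behaviourally accurate simulation of the other.

```python
# A toy sketch: opposite utility function plus opposite decision rule
# yields identical behaviour (the payoff numbers are made up).

ACTIONS = ["one-box", "two-box"]

def utility(action: str) -> int:
    """Some agent's utility over the available actions."""
    return {"one-box": 1_000_000, "two-box": 1_000}[action]

def rational_agent(actions):
    """Maximises utility."""
    return max(actions, key=utility)

def anti_rational_agent(actions):
    """Minimises the opposite (negated) utility function; the two
    inversions cancel, so its decisions match the rational agent's."""
    return min(actions, key=lambda a: -utility(a))

# Behaviourally indistinguishable despite opposite internals.
assert rational_agent(ACTIONS) == anti_rational_agent(ACTIONS) == "one-box"
```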

Note that I'm not claiming that an agent can ever tell whether it is in the real world or in a simulation; that's not the point. I'm adopting the viewpoint of an external observer who can tell the difference.

I think the key here is to think about what is happening in terms of both philosophy and mathematics, but you only seem interested in the former?
