Chris_Leong

Chris_Leong's Comments

[Meta] Do you want AIS Webinars?

I would be keen to run a webinar on Logical Counterfactuals.

Reference Post: Trivial Decision Problem

"I think the next place to go is to put this in the context of methods of choosing decision theories - the big ones being reflective modification and evolutionary/population level change. Pretty generally it seems like the trivial perspective is unstable is under these, but there are some circumstances where it's not." - sorry, I'm not following what you're saying here

The Counterfactual Prisoner's Dilemma

We can assume that the coin is flipped out of your sight.

Vanessa Kosoy's Shortform

Yeah, I agree that the objective descriptions can leave out vital information, such as how the information you know was acquired, which seems important for determining the counterfactuals.

Vanessa Kosoy's Shortform

"The key point is, "applying the counterfactual belief that the predictor is always right" is not really well-defined" - What do you mean here?

I'm curious whether you're referring to the same issue as, or one similar to, the one I was referencing in Counterfactuals for Perfect Predictors. The TLDR is that I was worried it would be inconsistent for an agent that never pays in Parfit's Hitchhiker to end up in town if the predictor is perfect, so that it wouldn't actually be well-defined what the predictor was predicting. The way I ended up resolving this was by imagining the agent as a program that takes an input and asking what it would output if given that inconsistent input. But I'm not sure if you were referencing this kind of concern or something else.

Transparent Newcomb's Problem and the limitations of the Erasure framing

Some people want to act as though a simulation of you is automatically you, and my argument is that it is bad practice to assume this. I'm much more open to the idea that some simulations might be you in some sense than to the claim that all of them are. This seems compatible with a fuzzy cut-off.

Transparent Newcomb's Problem and the limitations of the Erasure framing

"I actually don't think that there is a general procedure to tell what is you, and what is a simulation of you" - Let's suppose I promise to sell you an autographed Michael Jackson CD. But then it turns out that the CD wasn't signed by Michael, but by me. Now I'm really good at forgeries, so good in fact that my signature matches his atom to atom. Haven't I still lied?

Transparent Newcomb's Problem and the limitations of the Erasure framing

Not at all. Your comments helped me realise that I needed to make some edits to my post.

Transparent Newcomb's Problem and the limitations of the Erasure framing

In other words, the claim isn't that your program is incorrect; it's that it requires more justification than you might think in order to persuasively show that it correctly represents Newcomb's problem. Maybe you think understanding this isn't particularly important, but I think knowing exactly what is going on is key to understanding how to construct logical counterfactuals in general.

Transparent Newcomb's Problem and the limitations of the Erasure framing

I actually don't know Haskell, but I'll take a stab at decoding it tonight or tomorrow. Open-box Newcomb's is normally stated as "you see a full box", not "you or a simulation of you sees a full box". I agree with this reinterpretation, but I disagree with glossing over it.

My point was that if we take the problem description super-literally as you seeing the box and not a simulation of you, then you must one-box. Of course, since this provides a trivial decision problem, we'll want to reinterpret it in some way and that's what I'm providing a justification for.
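This super-literal reading can be made concrete with a small enumeration. Below is a hypothetical sketch (the function name and the standard $1,000/$1,000,000 payoffs are my own illustrative assumptions, not from the thread): treating "you see a full box" and "the predictor is perfect" as hard constraints, one-boxing is the only consistent outcome, which is what makes the problem trivial under this reading.

```python
# Hypothetical sketch of the super-literal reading of Transparent Newcomb's.
# Assumed setup (mine, for illustration): the opaque box holds $1,000,000
# iff the predictor predicted one-boxing; the transparent box holds $1,000;
# a perfect predictor means the prediction always matches the actual action.

def consistent_outcomes():
    """Enumerate (action, payoff) pairs consistent with the stipulations."""
    outcomes = []
    for predicted_one_box in (True, False):
        box_full = predicted_one_box  # box is filled iff one-boxing was predicted
        if not box_full:
            continue  # stipulation: you (literally) see a full box
        for one_box in (True, False):
            if one_box != predicted_one_box:
                continue  # perfect predictor: mismatched cases never occur
            payoff = 1_000_000 if one_box else 1_000_000 + 1_000
            outcomes.append(("one-box" if one_box else "two-box", payoff))
    return outcomes

print(consistent_outcomes())  # only one-boxing survives both constraints
```

Filtering on the stipulations leaves a single consistent outcome, so the "decision" is forced; that is why any interesting version of the problem has to reinterpret what "you see a full box" means.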
