philh

> In order for the standard rationality assumptions used in game theory to apply, the payouts of a game must be utilities, not resources such as money, power, or personal property. Zero-sum transfer of resources is often far from zero-sum in utility.

Hm, I feel like when I talk about game theory I don't usually use those assumptions? Admittedly I've never studied game theory in depth. But in particular, the concept of a Nash equilibrium only seems to rely on "each player has a preference order for payouts".
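To illustrate what I mean: a brute-force pure-strategy Nash check only ever consults each player's ranking of outcomes, never cardinal utilities. (The 2x2 game here is made up for illustration, not taken from anywhere.)

```python
from itertools import product

# Pure-strategy Nash check using only ordinal preferences: each player
# supplies a ranking of outcomes (higher = more preferred), never a
# cardinal utility. Illustrative 2x2 stag-hunt-like game.

# payoff_rank[player][(row, col)] = that player's rank of the outcome
payoff_rank = [
    {(0, 0): 3, (0, 1): 0, (1, 0): 1, (1, 1): 2},  # row player
    {(0, 0): 3, (0, 1): 1, (1, 0): 0, (1, 1): 2},  # column player
]

def is_pure_nash(row, col):
    # An outcome is a pure Nash equilibrium iff no player has a
    # unilateral deviation they rank strictly higher.
    for r in range(2):  # row player's deviations
        if payoff_rank[0][(r, col)] > payoff_rank[0][(row, col)]:
            return False
    for c in range(2):  # column player's deviations
        if payoff_rank[1][(row, c)] > payoff_rank[1][(row, col)]:
            return False
    return True

equilibria = [cell for cell in product(range(2), range(2)) if is_pure_nash(*cell)]
print(equilibria)
```

Nothing in this computation would change under any order-preserving relabelling of the ranks, which is the sense in which pure-strategy equilibria don't need the expected-utility axioms.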

Actually, I'm not really sure what assumptions you mean. I assume "the players are indifferent between a certain payout of x and a 50% chance of 2x" is one, but I don't know what the others are. More questions about these assumptions:

IIUC, if utility is logarithmic in a resource, then it's roughly linear in small changes of that resource. If I have £100 then I value a 50% chance of an extra £100 noticeably differently from a certain chance of an extra £50, but if I have £10000 it's about the same. Is it mostly reasonable to act as though the axioms work for resources, provided the amounts at stake are "small" for all players? (And when people talk about game theory over resources, does that tend to be the case, implicitly or explicitly?)
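As a quick numerical check of that claim, under the assumption that utility is exactly logarithmic (the wealth and stake figures are the ones from my example above):

```python
import math

# Log utility: u(w) = log(w). Compare a certain gain of g/2 against a
# 50% chance of gaining g, at two different wealth levels.
def certainty_gap(wealth, g):
    u_certain = math.log(wealth + g / 2)
    u_gamble = 0.5 * math.log(wealth + g) + 0.5 * math.log(wealth)
    return u_certain - u_gamble  # positive means the sure thing is preferred

print(certainty_gap(100, 100))    # noticeable preference for the sure £50
print(certainty_gap(10000, 100))  # nearly indifferent
```

The gap shrinks by several orders of magnitude as the stake becomes small relative to wealth, which is the "roughly linear in small changes" behaviour.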

What do you lose if the assumptions are violated? Broadly speaking I assume many theorems about mixed and iterated games no longer apply.
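One concrete loss I can point at: mixed-strategy equilibria depend on cardinal payoffs. An order-preserving (but nonlinear) transformation of one player's payoffs leaves every ordinal comparison intact yet moves the equilibrium. (Toy numbers of my own, not from the post.)

```python
# In a 2x2 game, the column player mixes so as to make the row player
# indifferent between Up and Down, and that mixing probability depends
# on the cardinal sizes of the row player's payoffs.

def indifference_prob(a, b, c, d):
    # Row player's payoffs: a = (U,L), b = (U,R), c = (D,L), d = (D,R),
    # with a > c and d > b so an interior mix exists.
    # Column plays L with probability q such that
    # q*a + (1-q)*b == q*c + (1-q)*d.
    return (d - b) / ((a - c) + (d - b))

original = indifference_prob(3, 0, 0, 1)  # q = 1/4
squared = indifference_prob(9, 0, 0, 1)   # payoff 3 squared to 9: same ordering, q = 1/10
print(original, squared)
```

Squaring the payoff of 3 changes no ordinal preference, but the equilibrium mix moves from 1/4 to 1/10; a preference order alone can't pin down the answer.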

It might be worth being explicit that ⊢ has lower precedence than the other operators (which in some sense are part of a different language). I, like maybe Gurkenglas, spent some time wondering why (⊢□A)→(□□A) wasn't just a special case of necessitation.

I'm confused by your use of the deduction theorem. It's only used in the forward implication argument, and seems unnecessary to me. (The linked wiki article doesn't mention it.) More precisely, it only seems necessary in order to move things left-to-right across a turnstile, because you've previously moved them right-to-left in a way that isn't obviously-to-me valid.

(I mean, it's obvious to me that if we can prove that A implies B, then assuming A lets us prove B. But it would also be obvious to me that if assuming A lets us prove B, we can prove that A implies B, yet you dedicated a theorem to allowing that. And I don't trust things that are obvious to me to be actually "true", let alone valid moves here.)

We can just make the whole argument without doing that and it seems fine to me? Looking at it in more depth:

(This does require that we can do logic with the things we've proved. E.g. if we have ⊢x and ⊢x→y, we can conclude ⊢y. I think that's okay, but seems worth being explicit about.)
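In Lean-ish terms, this is just modus ponens at the meta level (a sketch, not tied to the post's formalism):

```lean
-- From a proof of x and a proof of x → y, we obtain a proof of y.
example (x y : Prop) (hx : x) (hxy : x → y) : y := hxy hx
```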

This is just doing the thing I question.

The assumption is ⊢Ψ↔(□Ψ→p), i.e. ⊢A↔B. Weaken it and use necessitation to get ⊢□(A→B). Box distributivity is ⊢□(A→B)→(□A→□B). From those two, we can get ⊢□A→□B. Expanding again, this is ⊢□Ψ→□(□Ψ→p).

Box distributivity lets us go from ⊢□(□Ψ→p) to ⊢□□Ψ→□p. So using 2, we have ⊢□Ψ→(□□Ψ→□p). And internal necessitation gives us ⊢□Ψ→□□Ψ, so combining those, ⊢□Ψ→□p.
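Putting the two paragraphs above into one chain (my numbering, just for this summary):

```latex
\begin{align*}
1.\;& \vdash \Psi \leftrightarrow (\Box\Psi \to p) && \text{assumption} \\
2.\;& \vdash \Box(\Psi \to (\Box\Psi \to p)) && \text{weakening, necessitation} \\
3.\;& \vdash \Box\Psi \to \Box(\Box\Psi \to p) && \text{box distributivity, 2} \\
4.\;& \vdash \Box(\Box\Psi \to p) \to (\Box\Box\Psi \to \Box p) && \text{box distributivity} \\
5.\;& \vdash \Box\Psi \to (\Box\Box\Psi \to \Box p) && \text{3, 4} \\
6.\;& \vdash \Box\Psi \to \Box\Box\Psi && \text{internal necessitation} \\
7.\;& \vdash \Box\Psi \to \Box p && \text{5, 6}
\end{align*}
```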

No need for this; we already finished.