Previously: Book report: Theory of Games and Economic Behavior (von Neumann & Morgenstern)

Summary: Von Neumann and Morgenstern defined the notion of stable set in order to characterize some sort of equilibrium in cooperative games. Lucas showed that stable sets don't always exist, so they're inadequate for telling us what happens in games. I propose a generalization of stable sets that always exists, in which players can have different beliefs.

Epistemic status: This is me stretching VNM's framework and groping towards cooperative open-source game theory. The novel definitions aren't that exciting, but I think they point in the right direction. I believe the proof is solid. I've only dipped my toe into the literature, so it's likely someone has done this already.

VNM stable sets

We recall some definitions from (VNM 1944): An imputation is a vector of payoffs $x = (x_i)_{i \in N}$, one for each player $i$ in the set $N$ of players. A coalition is a subset $S \subseteq N$ of players. A game is characterized by the maximin value $v(S)$ for each coalition $S$. The function $v$ is called the characteristic function and it satisfies $v(\emptyset) = 0$ and $v(S \cup T) \ge v(S) + v(T)$ if $S \cap T = \emptyset$. Imputation $x$ dominates imputation $y$, or $x \succ y$, if there's a nonempty coalition $S$ such that $x_i > y_i$ for all $i \in S$ and $\sum_{i \in S} x_i \le v(S)$. A VNM stable set is a set $K$ of imputations such that every imputation not in $K$ is dominated by an imputation in $K$, and no imputation in $K$ is dominated by an imputation in $K$.
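To make the definitions concrete, here's a minimal sketch in Python. The game and the candidate imputations are made up for illustration, and a real stable set lives in a continuum, so this only checks the two stability conditions relative to a finite candidate set.

```python
from itertools import combinations

players = (1, 2, 3)

# A made-up characteristic function: v[S] is the maximin value of coalition S.
v = {frozenset(s): w for s, w in [
    ((), 0), ((1,), 0), ((2,), 0), ((3,), 0),
    ((1, 2), 2), ((1, 3), 2), ((2, 3), 2), ((1, 2, 3), 3)]}

def coalitions():
    """All nonempty coalitions of the player set."""
    return (frozenset(c) for r in range(1, len(players) + 1)
            for c in combinations(players, r))

def dominates(x, y):
    """x dominates y: some nonempty coalition S has x[i] > y[i] for all i in S,
    and S can actually enforce its share, i.e. sum of x[i] over S is <= v[S]."""
    return any(all(x[i] > y[i] for i in S) and sum(x[i] for i in S) <= v[S]
               for S in coalitions())

def is_stable(K, candidates):
    """VNM stability of K, relative to a finite candidate set of imputations."""
    internal = not any(dominates(x, y) for x in K for y in K)
    external = all(any(dominates(x, y) for x in K)
                   for y in candidates if y not in K)
    return internal and external
```

Here an imputation is just a dict from player to payoff; nothing in the sketch enforces efficiency or individual rationality.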

VNM called stable sets "solutions", and they were supposed to represent equilibria in some unspecified dynamics of negotiation. They didn't really spell out why stable sets correspond to equilibria, but I think I figured it out.

For the dynamics, imagine that there's an "imputation under consideration" that every player can see. A coalition of unsatisfied players can form, and replace the imputation under consideration with a dominant imputation. When this process terminates, the imputation under consideration becomes the actual payoffs.

Or if you like, the players observe the actions of all players, and if a coalition of players is unsatisfied with the resulting payoffs, they take actions corresponding to a dominant imputation, thus causing a contradiction.
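As a toy loop, the dynamics might look like the sketch below. This is my own illustration; `proposals` is a hypothetical stand-in for whatever rule picks the counterproposal an unsatisfied coalition would actually make.

```python
def negotiate(initial, proposals, max_rounds=10_000):
    """Toy negotiation dynamics: an imputation is "under consideration";
    while some coalition can and wants to replace it with a dominating
    imputation, it does; if nobody moves, it becomes the actual payoffs."""
    current = initial
    for _ in range(max_rounds):
        counter = proposals(current)   # a dominating imputation some coalition enforces, or None
        if counter is None:
            return current             # equilibrium: no unsatisfied coalition
        current = counter
    return None                        # never settled, e.g. the proposals cycle forever
```

The Lucas game below is a case where the dominance relation lets this loop run forever.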

Beliefs

Now suppose the players are uncertain about what imputation they'll end up with. Let's represent this uncertainty with a nonempty set of "plausible" imputations. (No probabilities.) You might think it's weird to represent uncertainty without using probabilities, but (VNM 1944) did just that in chapters 8–10 for extensive-form games.

Let's suppose the players share the same set $P$, and that $P$ satisfies these conditions:

(a) Belief that all players are instrumentally rational. If $y \succ x$ and $y$ is plausible, then $x$ is not plausible.

(b) Cromwell's rule. If $x$ is not plausible, then there exists a $y \succ x$ that is plausible.

(a) says that a coalition will be motivated to move from $x$ to a plausible dominant imputation if there is one, and then $x$ can't be an equilibrium. The idea of (b) is that a coalition won't bother to move from $x$ to an implausible imputation, and so we shouldn't rule out $x$.

It's easy to see that the plausible sets that satisfy (a) and (b) are precisely the VNM stable sets.
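Spelled out, with $P$ the set of plausible imputations, (a) and (b) are just the two stability conditions from the definition above:

$$\begin{aligned}
\text{(a)}\quad & \bigl(y \succ x \ \text{and}\ y \in P\bigr) \Rightarrow x \notin P
  &&\iff\quad \text{no imputation in } P \text{ is dominated by an imputation in } P,\\
\text{(b)}\quad & x \notin P \Rightarrow \exists\, y \in P \ \text{with}\ y \succ x
  &&\iff\quad \text{every imputation outside } P \text{ is dominated by an imputation in } P.
\end{aligned}$$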

(As a corollary, we don't have to assume that the plausible set is nonempty, since stable sets are necessarily nonempty.)

The Lucas game

(Lucas 1967) proved that a stable set need not exist by exhibiting a 10-player game with no stable set. The paper is short and sweet, so if you want to know exactly how the game is defined, go ahead and read it. Just be careful about the characteristic function that Lucas defines: it's zero on every coalition he doesn't list, so it isn't superadditive as written, and you should really take its superadditive cover as the characteristic function. (Lucas 1969) has more details, but it's not really easier to read.

Lucas might have started out looking for a game with three imputations that dominate each other in a cycle. You can't partition such a set into plausible and implausible subsets. But in practice the other dominance relations in the vector space interfere with this strategy.

Instead Lucas managed to construct a game where the space of imputations contains three disjoint 3-dimensional subsets $B_1$, $B_3$, $B_5$ that dominate each other cyclically and aren't dominated by anything else. Each region is carried to the next by a cyclic permutation of the odd players 1, 3, 5. (I'm ignoring the even players 2, 4, 6, 8, 10.) The relevant part of the characteristic function is the worth of the coalitions that witness these dominances; see the paper for the exact values.

(There are also even players in these coalitions who have to get a positive share of the coalition's payoff, but I'm omitting them. See the paper for the full definition.) So for every $x \in B_1$ and $y \in B_3$ such that $y_7 > x_7$ and $y_9 > x_9$, the coalition built around the pair (1, 3), together with players 7 and 9, is a witness to the dominance relation $y \succ x$. And the same thing is true with (1, 3) replaced with (3, 5) or (5, 1). So the dominance relation has infinite ascending chains that cycle around $B_1$, $B_3$, $B_5$ while $x_7$ and $x_9$ increase towards 1.

It's easy to see that there's no stable set. If $x \in B_1$ is plausible, then all $y \in B_3$ with $y_7 > x_7$ and $y_9 > x_9$ must be implausible. So there must be a plausible $z \in B_5$ with $z_7 > x_7$ and $z_9 > x_9$. So all $w \in B_1$ with $w_7 > z_7$ and $w_9 > z_9$ must be implausible. So there must be a plausible $y' \in B_3$ with $y'_7 > x_7$ and $y'_9 > x_9$, which contradicts the second sentence of this paragraph. By symmetry nothing in $B_1 \cup B_3 \cup B_5$ can be plausible, and then external stability fails, because these imputations aren't dominated by anything outside the three regions.
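The argument is easier to check against a toy abstraction of this dominance structure (my own simplification, not Lucas's actual game): take "imputations" $(j, t)$ with $j \in \{1, 3, 5\}$ labelling the region and $t \in [0, 1)$ playing the role of the increasing payoff to players 7 and 9, with dominance defined cyclically:

$$(3, t') \succ (1, t), \qquad (5, t') \succ (3, t), \qquad (1, t') \succ (5, t)
\qquad \text{whenever } t' > t, \quad \text{and no other dominances.}$$

Running the paragraph above on this structure gives the same contradiction, with $t$ standing in for the pair $(x_7, x_9)$.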

So what happens?

From the perspective of players 1, 3, 5, the region $B_1 \cup B_3 \cup B_5$ is a game of hot potato: one of them is stuck holding the low payoff, and it's in their power to swap payoffs with someone else. As the hot potato is passed around, players 7 and 9 sit back and enjoy monotonically increasing combined payoff.

What epistemic state do these players have? I can think of a few answers:

  1. The dynamics allow many changes to the imputation-under-consideration, up to an unknown time limit. The players are uncertain which of $B_1$, $B_3$, $B_5$ they'll end up in, so the imputations in these regions have subjective values that are less than their "face values". This changes the structure of the dominance relation, and there's a stable set based on these subjective values that excludes $B_1 \cup B_3 \cup B_5$.

  2. Each player is certain they won't end up holding the hot potato, which means they have a disagreement about what the time limit is.

  3. The negotiation has an equilibrium at some $x \in B_1$ because player 1 doesn't bother to counterpropose a $y \succ x$, because 1 believes ($y$ is implausible because 3 believes (some $z \succ y$ is plausible because (5 believes some $w \succ z$ is implausible because ...)))

I'm not currently enthusiastic about 1 because it involves a kind of double counterfactual ("if imputation $x$ obtained and (if $y$ obtained then we'd have counterproposal $z$) then we'd have counterproposal $y$"), and that seems unnatural to me. There's something called a "far-sighted stable set"; maybe that's related?

2 seems more natural to me, so I'm going to allow players to have different beliefs. I'm not ready to give the players probabilistic beliefs because that will be complicated, so I can't model 2. But as soon as players have different beliefs, we can model 3. That's what the next section is about.

Modeling disagreement

If players have different beliefs about which imputations are plausible, we have to talk about metabeliefs and counterfactuals. Let's say an individual epistemic state is a choice of "counterfactual" global epistemic state for each imputation, together with a set of "plausible" imputations. And a global epistemic state is a choice of individual epistemic state for each player. In other words, if $X$ is the space of imputations and $N$ is the set of players, recursively define $\mathrm{Ind} = \mathrm{Glob}^X \times \mathcal{P}(X)$ and $\mathrm{Glob} = \mathrm{Ind}^N$. That is, a player has object-level beliefs of the form "$x$ is a plausible imputation" as well as meta-level beliefs of the form "if imputation $x$ obtained, then counterfactually player $i$ would have individual epistemic state $s$".
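As a data structure, the recursive definition might look like the following sketch. This is my own rendering; formally the recursion has no base case, so think of these objects as built lazily or cut off at a finite belief depth, over a finite candidate set of imputations.

```python
from dataclasses import dataclass
from typing import Dict, Set, Tuple

Player = int
Imputation = Tuple[float, ...]   # a payoff vector, one entry per player

@dataclass
class IndividualState:
    counterfactual: Dict[Imputation, "GlobalState"]  # "if x obtained, the global state would be ..."
    plausible: Set[Imputation]                       # object-level beliefs: which imputations are plausible

@dataclass
class GlobalState:
    states: Dict[Player, IndividualState]            # one individual epistemic state per player
```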

Consider these axioms for global epistemic states $g = (g_i)_{i \in N}$:

(1) Belief that all players are instrumentally rational. If the dominance $y \succ x$ is witnessed by a coalition $S$, and $g_i$ holds that, conditional on $x$, every member of $S$ believes that $y$ is plausible, then $g_i$ holds that $x$ is implausible.

(2) Object-level Cromwell's rule. (Converse of 1.) If $g_i$ holds that, conditional on $x$, for every $y \succ x$ witnessed by a coalition $S$ there exists a member of $S$ that believes $y$ is implausible, then $g_i$ holds that $x$ is plausible.

(3) Reflection. If $g_i$ holds that, conditional on $x$, the global state is $g'$, then $g'$ itself holds that, conditional on $x$, the global state is $g'$.

(4) Belief that all players are epistemically rational. $g_i$ holds that, conditional on any $x$, the global epistemic state satisfies (1)–(4).
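On a finite candidate set, axiom (1) can be checked mechanically. The sketch below reuses the `GlobalState`/`IndividualState` classes above; `witnesses(y, x)`, returning the coalitions that witness $y \succ x$, is a hypothetical helper that could be built from the `dominates` check in the first sketch, and every candidate is assumed to appear as a key in the counterfactual maps.

```python
def satisfies_axiom_1(g, witnesses, candidates):
    """Axiom (1) on a finite candidate set: if y > x is witnessed by S and,
    conditional on x, player i thinks every member of S finds y plausible,
    then i must not find x plausible."""
    for i, state in g.states.items():
        for x in candidates:
            beliefs_at_x = state.counterfactual[x].states   # i's counterfactual beliefs at x
            for y in candidates:
                for S in witnesses(y, x):
                    if (all(y in beliefs_at_x[j].plausible for j in S)
                            and x in state.plausible):
                        return False   # i should have concluded x is implausible
    return True
```

Axiom (2) is the same loop with the implication reversed.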

Sketch proof of existence

It's straightforward to prove that a zero-sum game always has a global epistemic state satisfying (1)–(4).

First, consider the simplex whose vertices are the imputations where every player but one gets their maximin value. Every imputation outside this simplex is dominated via a singleton coalition, and it must be common knowledge that the complement of the simplex is implausible.

Next, if there are undominated imputations, it must be common knowledge that they're plausible. And it must be common knowledge that the imputations they dominate are implausible. And of the remaining imputations, the ones which are dominated only by implausible imputations must be plausible (and this is common knowledge). Continuing in this way transfinitely many times, we get a partition of the simplex into: $P$, the commonly-known-to-be-plausible imputations; $I$, the commonly-known-to-be-implausible imputations; and $R$, the remainder. Crucially, every imputation in $R$ is dominated by another imputation in $R$.
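For a finite candidate set (ignoring the transfinite part), the iteration computing $P$, $I$ and the remainder $R$ looks like the sketch below, with `dominates` as in the first sketch and imputations represented as tuples so they can go in sets.

```python
def classify(candidates, dominates):
    """Split a finite candidate set into P (forced plausible), I (forced
    implausible) and R (everything the iteration never settles)."""
    P, I = set(), set()
    changed = True
    while changed:
        changed = False
        for x in candidates:
            if x in P or x in I:
                continue
            doms = [y for y in candidates if dominates(y, x)]
            if any(y in P for y in doms):        # dominated by a forced-plausible imputation
                I.add(x); changed = True
            elif all(y in I for y in doms):      # dominated only by forced-implausible ones
                P.add(x); changed = True         # (covers undominated x as well)
    R = set(candidates) - P - I
    return P, I, R
```

On this finite version you can see the crucial property directly: anything left in $R$ has no dominator in $P$ and at least one dominator outside $I$, so it's dominated by another member of $R$.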

If $R$ is empty, we've already described a global epistemic state and we're done. Otherwise, it's possible to complete the global epistemic state by recursively choosing the plausible sets to be $P \cup \{x\}$ for a varying $x \in R$, starting at the object level and going up the meta hierarchy, guided by axioms (1)–(4).

Uncertain meta-beliefs

For aesthetic reasons I like players to have uncertainty about the beliefs of others, which looks like $\mathrm{Ind} = \mathcal{P}(\mathrm{Glob})^X \times \mathcal{P}(X)$. And there are corresponding versions of axioms 1–4 as well as a meta-level Cromwell's rule. Existence follows from the existence result proved above.

Conclusion

The flaw of VNM stable sets is that they don't always exist. The flaw of these global epistemic states is that there are so many of them! In some games, any singleton set can be the plausible set of an individual epistemic state, and no doubt larger sets can be, too.

One way to summarize what we've proven is that it's possible to lock yourself into all sorts of beliefs if you believe the other players can have grossly inaccurate beliefs. What's more interesting is what sort of equilibria can exist if the players have approximately-correct beliefs. For that we need probabilities.

Another interpretation is that it's unsurprising that we get so many solutions, because we've made no assumptions about the players themselves apart from some basic rationality conditions. If we knew the source code of the players, the equilibria would be a lot more constrained.

References

William F. Lucas 1967. A game with no solution.

William F. Lucas 1969. The proof that a game may not have a solution.

John Von Neumann and Oskar Morgenstern 1944. Theory of Games and Economic Behavior.

Edit 2021.01.25: Fixed proof of nonexistence in the Lucas game.
