In this post, we generalize the notions in Cooperative Oracles: Nonexploited Bargaining to deal with the possibility of introducing extra agents that have no control but have preferences. We further generalize this to infinitely many agents. (Part of the series started here.)

Consider example 1 from the previous post: P_1 and P_2 are in a prisoner's dilemma, outputting probabilities p and q of cooperating. Their utility functions are U_1 and U_2. Their fair sets are F_1 and F_2. We will introduce one extra player P_3 which has no options.

However, he will have preferences: U_3 = -U_1 - U_2, making the game zero-sum. Since he has no actions, his fair set must be everything.

Now, since the game is zero-sum, every outcome is Pareto optimal, so using the method described in the previous post, any equilibrium in which p = q will be valid. We will modify the framework to make this no longer the case, and make p = q = 1 the only valid solution. For this, we will define a "stratified Pareto optimum."

We will still have a finite number of players P_1, ..., P_n. Each player will specify a subset of players they will depend on. Let D_i be the set of players that P_i depends on. U_i must only be a function of the actions of players in D_i, and F_i must be closed under changing the actions of any player not in D_i.

We define a preorder on the agents. We say that P_j ≤ P_i if P_j ∈ D_i, and we take the transitive closure, so P_k ≤ P_i whenever P_k ≤ P_j and P_j ≤ P_i. We also say that P_i ≤ P_i for all i.
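As a concrete sketch, the preorder can be computed as a reflexive-transitive closure of the dependency sets. This is illustrative only: the function name and the convention of identifying players with indices 0..n-1 are mine.

```python
def dependency_preorder(deps):
    """Reflexive-transitive closure of the dependency relation.

    deps[i] is the set D_i of (indices of) players that player i
    depends on.  Returns the set of pairs (j, i) meaning P_j <= P_i.
    """
    n = len(deps)
    # Reflexivity plus the direct dependencies.
    below = [set(d) | {i} for i, d in enumerate(deps)]
    # Iterate to a fixed point for transitivity:
    # P_k <= P_j and P_j <= P_i imply P_k <= P_i.
    changed = True
    while changed:
        changed = False
        for i in range(n):
            extra = set().union(*(below[j] for j in below[i]))
            if not extra <= below[i]:
                below[i] |= extra
                changed = True
    return {(j, i) for i in range(n) for j in below[i]}

# The spectator example: the first two players (indices 0 and 1) depend
# only on each other, while the spectator (index 2) depends on both.
order = dependency_preorder([{0, 1}, {0, 1}, {0, 1, 2}])
```

Note that the spectator ends up above the first two players in the preorder, but neither of them is above the spectator.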

We say that y is a stratified Pareto improvement over x for player P_i if:

- U_i(y) > U_i(x)
- U_j(y) ≥ U_j(x) for all P_j ≤ P_i

We say that x is stratified Pareto optimal in F for player P_i if there is no alternate outcome y ∈ F such that y is a stratified Pareto improvement over x for player P_i. If x is stratified Pareto optimal for all players, we say it is stratified Pareto optimal.
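A hypothetical checker for this definition, as a minimal sketch: it assumes a strict gain for the player in question and weak gains for every player below it in the preorder; the names (`utils`, `below`) and the toy utilities are mine.

```python
def is_stratified_improvement(utils, below, i, x, y):
    """Is y a stratified Pareto improvement over x for player i?

    utils[j] computes U_j on a full outcome; below[i] is the set
    {j : P_j <= P_i} from the dependency preorder.
    """
    if utils[i](y) <= utils[i](x):
        return False  # P_i itself must strictly improve
    # Everyone P_i transitively depends on must weakly improve.
    return all(utils[j](y) >= utils[j](x) for j in below[i])

# Toy spectator game: players 0 and 1 share a utility, player 2 (the
# spectator) wants the opposite; outcomes are pairs (p, q).
utils = [lambda o: o[0] + o[1],
         lambda o: o[0] + o[1],
         lambda o: -(o[0] + o[1])]
below = {0: {0, 1}, 1: {0, 1}, 2: {0, 1, 2}}
```

Here moving from (0, 0) to (1, 1) is a stratified Pareto improvement for player 0, because the spectator is not below player 0, but it is not an improvement for the spectator itself.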

**Claim:** The set F of outcomes fair for all players has a stratified Pareto optimum.

**Proof:** Consider some nonempty subset S of players with the property that P_i ≤ P_j for all P_i, P_j ∈ S, and there is no P_k ∉ S and P_i ∈ S with P_k ≤ P_i. We know that such a subset exists, since the preorder over players induces a poset over equivalence classes of players (where every player in an equivalence class transitively depends on every other player), and this poset must have a minimal element.

We then take an element of F which is Pareto optimal for all the players in S. Note that it is important here that the players in S have utility functions and fair sets that only depend on the players in S. We can then lock in all the players in S as behaving according to this local Pareto optimum. We will treat them as constants, effectively remove S from the game, and repeat the process with the remaining players.
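The iterative selection of minimal classes can be sketched as follows. This is only a sketch of the bookkeeping in the proof: the choice of a locally Pareto optimal point within each class is elided, and the function names are mine. It relies on the fact that a player with the fewest players below it lies in a minimal equivalence class, and that class is exactly the set of players below it.

```python
def minimal_closed_class(below, players):
    """A minimal equivalence class of the dependency preorder among
    `players`; below[i] = {j : P_j <= P_i}."""
    # A player with the fewest players below it lies in a minimal
    # class, and that class is exactly the set of players below it.
    i = min(players, key=lambda p: len(below[p] & players))
    return below[i] & players

def lock_in_schedule(below, n):
    """Order in which classes of players get locked in, as in the
    proof: repeatedly lock in a minimal class and remove it."""
    remaining, schedule = set(range(n)), []
    while remaining:
        cls = minimal_closed_class(below, remaining)
        schedule.append(cls)
        remaining -= cls
    return schedule

# Spectator example: the first two players (indices 0, 1) form a
# minimal class and get locked in first; the spectator (index 2)
# is handled afterwards.
schedule = lock_in_schedule({0: {0, 1}, 1: {0, 1}, 2: {0, 1, 2}}, 3)
```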

Take the x found by the above process. Assume by way of contradiction that some y is a stratified Pareto improvement over x for P_i. Consider the set of players P_j with P_j ≤ P_i. We can restrict the game to only those players, which makes sense since these players' utility functions and fair sets refer only to each other. Thus, we may assume without loss of generality that every player satisfies P_j ≤ P_i. Thus P_i is in the set of players locked in on the last step. Notice that in this case being a stratified Pareto improvement is equivalent to being a Pareto improvement for the players locked in on the last step, while fixing all the actions that were already locked in on previous steps, which contradicts the Pareto optimality of the distributions chosen in the last step.

Now, we can give a new bargaining procedure, which is the same as from the previous post, except instead of choosing a Pareto optimal point in F, we choose a stratified Pareto optimal point in F. Observe that in the above 3-player game, we end up with the first two players cooperating, since they do not depend on the third player.

Unfortunately, the above construction depends heavily on the fact that there are only finitely many players. Indeed, if you extend the above definitions to infinitely many players in the obvious way, stratified Pareto optima need not exist:

Consider a game with a player P_i for each integer i. Each player outputs a single probability r_i of outputting 1 (as opposed to 0). The utility function U_i is just equal to r_i, the probability that that player outputs 1. The fair set F_i is the set of points such that either r_i = r_{i-1}/2 or r_{i-1} = 0. This is a valid fair set, since we can use the strategy f_i(r) = r_{i-1}/2 if r_{i-1} ≠ 0 and f_i(r) = r_i otherwise, which is in fact Kakutani. Observe that r ∈ F_i iff f_i(r) = r_i.
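Concretely, the single-valued response map as literally stated (this is exactly the version whose continuity is questioned in the comments below, where a set-valued repair is discussed) and the corresponding fair-set membership test:

```python
def f_i(r_prev, r_i):
    """Player i's response: half the previous player's probability,
    unless that probability is 0, in which case the player keeps its
    own current value."""
    return r_prev / 2 if r_prev != 0 else r_i

def in_fair_set(r_prev, r_i):
    """r is in F_i iff r_i = r_{i-1}/2 or r_{i-1} = 0; equivalently,
    iff f_i leaves r_i fixed."""
    return f_i(r_prev, r_i) == r_i
```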

The set F, which is the intersection of all the fair sets of all the players, is still closed and nonempty. (And this is still true for arbitrary games with infinitely many players.) We have that r ∈ F if either r_i = 0 for all i, or there is some first nonzero r_i and all r_j for j > i are equal to r_i/2^{j-i}. Either way, there is some i such that r_j = 0 for all j ≤ i. Consider the stratified Pareto improvement for P_i given by s_i = 1, s_j = 0 for j < i, and s_j = 2^{i-j} for j > i.
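On a finite window of indices (a truncation of the integer-indexed game, purely for illustration), an improvement of this shape over the all-zeros point, with s_i = 1 and each later player halving its predecessor, can be checked for fairness directly:

```python
def improvement(i, indices):
    """The stratified Pareto improvement for P_i over a point that is
    0 at every index <= i: s_j = 0 for j < i, s_i = 1, and
    s_j = 2^(i-j) for j > i."""
    return {j: (0.0 if j < i else 2.0 ** (i - j)) for j in indices}

# Check fairness on a finite window of the integer-indexed game.
window = range(-3, 4)
s = improvement(0, window)
# For each j whose predecessor is in the window, either
# s_j = s_{j-1}/2 or s_{j-1} = 0.
fair = all(s[j] == s[j - 1] / 2 or s[j - 1] == 0
           for j in window if j - 1 in s)
```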

We therefore need a weaker notion than stratified Pareto optimality for infinitely many players. Note that we can still get stratified Pareto optima if there are no infinite descending chains of strict dependency (e.g. if the above example grounded out by only having players P_i for positive integers i).

The way that we weaken stratified Pareto optimality is to take the (pointwise) closure of the notion of a stratified Pareto optimum for player P_i. We say that x is an almost stratified Pareto optimum (ASPO) in F for player P_i if it is in the pointwise closure of the set of all stratified Pareto optima in F for player P_i. We say that x is ASPO in F if it is ASPO for all players P_i.

**Claim:** In a game with possibly infinitely many players, the set F of points fair for all players is nonempty, and there exists an ASPO in F.

**Proof:** First, observe that for any finite subset of players, the set of points fair for all those players is compact and nonempty by the Kakutani fixed point theorem.
By compactness (the finite intersection property), the set F of points fair for all players is also compact and nonempty.

Now, observe that for any finite subset of players, there exists an x ∈ F which is stratified Pareto optimal in F for all players in that subset, using the argument from the finite case. Thus, the subset of F which is ASPO for all players in that finite subset is nonempty and compact. Again by the finite intersection property, this means that the subset of F which is ASPO for all players is also nonempty and compact. Thus, there exists an x which is ASPO in F for all players.

At first this weakening may seem unnatural. Notice that the all 0's vector in the above example is ASPO, even though there are many Pareto improvements and stratified Pareto improvements. (Locally, it is arbitrarily close to points with all small positive probabilities that cannot be improved.) However, we think that this condition is actually pretty natural, and the most you could hope for. The point of the players having Kakutani responses and continuous utility functions was to set them up to only have continuous(ish) access to the game state. We similarly need our notion of Pareto optimal to be such that you only violate the notion if you can point out a Pareto improvement in a continuous way, which roughly corresponds to taking the interior of the notion of Pareto improvement, and thus the closure of the notion of Pareto optimal.

In the next post, I plan to connect this up with the existing notion of reflective oracles, to define cooperative oracles.

This notion of dependency seems too binary to me. Concretely, let's modify your example from the beginning so that P3 must grant an extra 10^{-10} utility to either P1 or P2, and gets to decide which. Now, everyone's utility depends on everyone's actions, and the game is still zero-sum, so again any strategy profile with p = q will be a stratified Pareto optimum. But it seems like P1 and P2 should still ignore P3.

I agree with this. I think that the most interesting direction of future work is to figure out how to have better notions of dependency. I plan on writing some on this in the future, but basically we have not successfully figured out how to deal with this.

In the infinite game example, I think that something doesn't add up in the definition of f. A single-valued Kakutani map into a compact space is just a continuous map, but f is not continuous.

It looks legitimate, actually.

Remember, f is set-valued, so if r_{i-1} = 0, f(r) = [0,1]. In all other cases, f(r) = r_{i-1}/2. f is a nonempty convex set-valued function, so all that's left is to show the closed graph property. If the limiting value of r_{i-1} is something other than 0, the closed graph property holds, and if the limiting value of r_{i-1} is 0, the closed graph property holds because 0 ∈ [0,1].

Hi Alex!

I agree that the multimap you described is Kakutani and gives the correct fair set, but in the OP it says that if r_{i-1} = 0 then f(r) = r_i, not f(r) = [0,1]. Maybe I am missing something about the notation?

How about using some conception of "coalition-stability", in which an outcome has that property if there is no sub-coalition of players that can unilaterally increase their utility, whatever all the other players choose to do?

This is basically the core, though it's usually defined for cooperative games, where (a) utility is transferable and (b) adding a new player never makes an existing player worse off. It's easy to generalize though.

Can you use the maximum "swing" from the other player's decisions to get a non-binary measure of dependency?

It's similar to my idea here.

Then the dependency of player P_i on P_j would be max_{a_j} U_i - min_{a_j} U_i, where a_j are the actions of P_j.
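This swing measure can be sketched for finite action sets. Quantifying over the other players' actions by taking the worst case is my reading of the proposal, not necessarily the commenter's; all names here are illustrative.

```python
from itertools import product

def swing(U_i, action_sets, j):
    """Dependency of a player with utility U_i on player j: the
    largest, over the other players' joint actions, of
    max_{a_j} U_i - min_{a_j} U_i."""
    n = len(action_sets)
    others = [k for k in range(n) if k != j]
    best = 0.0
    for rest in product(*(action_sets[k] for k in others)):
        fixed = dict(zip(others, rest))
        vals = []
        for a_j in action_sets[j]:
            fixed[j] = a_j
            vals.append(U_i(tuple(fixed[k] for k in range(n))))
        best = max(best, max(vals) - min(vals))
    return best

# Toy utility U_1 = p + 2q in a two-player game with binary actions:
# player 1's swing from player 2's choice is twice the swing from
# its own choice.
U_1 = lambda a: a[0] + 2 * a[1]
```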