Equilibria in adversarial supervised learning

Comment by Vanessa Kosoy:


It would be nice to prove theorems analogous to standard PAC learning theory in this setting. For example, suppose there is a set of distributions the adversary can choose from. Our algorithm is given a sample from each of them and should choose a hypothesis. We can then try to prove that, given finite VC dimension, playing a minimax strategy of the game that results from treating our samples as the entire distributions is approximately minimax for the real game.

Also, computing Nash equilibria in zero-sum two-player games is easy using linear programming, as a direct consequence of the minimax theorem.

Note: This describes an idea of Jessica Taylor's.

## Introduction

An important aim in safe AI development is to make systems whose submodules analyze how they might go wrong. One model for this in academic machine learning research is training on adversarial examples.

In order to better understand the adversarial training process, we outline a specific adversarial learning model and consider what kind of equilibrium state it might possess.

## Setup

Consider a general adversarial learning scenario described by the tuple (X, Y, W, g, f, ℓ) where:

- X is a compact set of unlabeled examples,
- Y is a set of labels,
- W is a nonempty, compact, convex set of hypotheses (weight vectors),
- g : X → ΔY gives the distribution of labels for each example,
- f : W → ΔX gives the adversary's distribution over examples in response to the learner's hypothesis, assumed continuous,
- ℓ : X × Y × W → ℝ is a loss function, continuous in its arguments and convex in w.

The learner's aim is to choose a hypothesis w∈W that achieves low expected loss ℓ(x,y,w), given that the labeled example is sampled from an adversarial distribution, x∼f(w),y∼g(x).

## An equilibrium in expert selection

One plausible approach for the learner is to consider hypotheses that perform well against some adversarial distribution f(w). The hypotheses that perform best against a fixed distribution f(w) are given by the set-valued function:

h(w) = argmin_{w′ ∈ W} E_{x∼f(w), y∼g(x)}[ℓ(x, y, w′)]

The difficulty with choosing a hypothesis is that the adversarial distribution is not fixed, but rather it varies according to which hypothesis is chosen. One response is to choose some hypothesis w∗ that is located at a fixed point, such that w∗∈h(w∗). There is always at least one such fixed point, w∗, and this can be shown by Kakutani's fixed-point theorem.
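As a concrete illustration, here is a brute-force search for fixed points of h in a tiny finite instance. All of the sets, distributions, and the loss function below are made up for illustration. Note that with a finite W the Kakutani argument does not apply (W is not convex), so a fixed point is not guaranteed in general; this particular instance happens to have two.

```python
# Toy finite instance (all values hypothetical). W, X, Y are small finite
# sets, f maps each hypothesis to a distribution over X, g maps each
# example to a distribution over Y, and loss is an arbitrary table.
W = [0, 1, 2]
X = [0, 1]
Y = [0, 1]

f = {0: [0.9, 0.1], 1: [0.2, 0.8], 2: [0.5, 0.5]}  # f[w][x] = P(x | w)
g = {0: [1.0, 0.0], 1: [0.0, 1.0]}                 # g[x][y] = P(y | x)
loss = {(x, y, w): abs(w - (x + y)) for x in X for y in Y for w in W}

def expected_loss(w, w_prime):
    """E over x ~ f(w), y ~ g(x) of loss(x, y, w_prime)."""
    return sum(f[w][x] * g[x][y] * loss[(x, y, w_prime)]
               for x in X for y in Y)

def h(w):
    """Set of best responses to the adversarial distribution f(w)."""
    losses = {wp: expected_loss(w, wp) for wp in W}
    best = min(losses.values())
    return {wp for wp, v in losses.items() if v <= best + 1e-12}

# A fixed point is any hypothesis that is a best response to the
# distribution it itself induces.
fixed_points = [w for w in W if w in h(w)]
print(fixed_points)
```

Here hypothesis 0 is a strict best response to its own induced distribution, while hypothesis 2 is a fixed point only because every hypothesis ties against f(2).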

To apply Kakutani's fixed-point theorem, we need to show that: i) the domain of h is nonempty, compact, and convex, ii) h is nonempty- and convex-valued, and iii) h is compact-valued and upper hemicontinuous.

First, by assumption, W, the domain of h, is nonempty, compact, and convex. Second, h is convex-valued because it minimizes a convex function: ℓ is convex in w′, so the expectation of ℓ, a non-negative weighted combination of functions convex in w′, is itself convex in w′, and the set of minimizers of a convex function over a convex set is convex. Finally, h is nonempty-valued, compact-valued, and upper hemicontinuous by Berge's maximum theorem, since h selects the minimizers of a continuous function over a compact domain. So by Kakutani's fixed-point theorem, h has at least one fixed point, w∗.

## Discussion

The motivation for choosing a hypothesis that lies at a fixed point is that this hypothesis need not be changed in response to the examples selected by an adversary. To be clear, this does not by itself give us an algorithm: locating such fixed points is computationally expensive and data-hungry in the general case. One open question is under what circumstances we can find such an equilibrium efficiently.

Note that the equilibrium weight vectors do not necessarily minimize loss over all possible weight vectors; it might be possible to get a lower loss by selecting a non-equilibrium weight vector (on which the adversary f will produce easier examples). In some applications, it will be desirable to meet the equilibrium condition, whereas in others, it will be sufficient to have no regret compared to alternative weight vectors.
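To make this point concrete, here is a minimal hypothetical two-hypothesis instance (all numbers invented for illustration) in which the unique equilibrium hypothesis incurs a higher loss than a non-equilibrium one, because the adversary serves the non-equilibrium hypothesis easier examples.

```python
# Hypothetical instance: two hypotheses, the adversary's map f sends each
# hypothesis to a single (deterministic) example, and the loss ignores y.
W = [0, 1]
f = {0: 0, 1: 1}                    # f[w] = the example the adversary shows w
loss = {(0, 0): 0.3, (0, 1): 0.1,   # loss[(x, w)]
        (1, 0): 1.0, (1, 1): 0.5}

def best_responses(w):
    """Hypotheses minimizing loss against the distribution f(w)."""
    x = f[w]
    best = min(loss[(x, wp)] for wp in W)
    return {wp for wp in W if loss[(x, wp)] == best}

fixed_points = [w for w in W if w in best_responses(w)]
self_loss = {w: loss[(f[w], w)] for w in W}
print(fixed_points)   # only w = 1 is an equilibrium
print(self_loss)      # but w = 0 incurs lower loss than the equilibrium
```

Hypothesis 0 is not an equilibrium (hypothesis 1 does better against f(0)), yet its loss on its own induced examples (0.3) beats the equilibrium hypothesis's loss (0.5).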

It is worth noting that we can arrive at the same result by viewing adversarial supervised learning as a continuous two-player game between the learner and the adversary. The learner chooses a strategy from W in order to minimize the expectation of ℓ over x and y, and the adversary chooses a distribution over a compact set of examples, i.e. an element of ΔX, in order to maximize its own objective function. The Nash equilibria of this game correspond to the fixed points in our supervised learning setup, so (at least) one fixed point always exists. Of course, like the fixed points in adversarial supervised learning, Nash equilibria can be difficult to compute and will not always give any particular player an optimal payoff.
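In the finite zero-sum special case, an approximate minimax strategy is cheap to compute, for instance by linear programming or by a simple no-regret dynamic. Below is a sketch using multiplicative weights on a 2×2 loss matrix; the matrix (matching pennies), learning rate, and iteration count are illustrative choices, not from the original post.

```python
# Sketch: approximate the learner's minimax strategy in a finite zero-sum
# game via multiplicative weights against a best-responding adversary.
import math

L = [[0.0, 1.0],     # L[i][j]: loss to the learner when the learner plays
     [1.0, 0.0]]     # row i and the adversary plays column j
m, n = len(L), len(L[0])

eta = 0.05           # learning rate (assumed)
T = 5000             # number of rounds (assumed)
w = [1.0] * m        # unnormalized weights over the learner's rows
avg = [0.0] * m      # running average of the learner's mixed strategy

for t in range(T):
    total = sum(w)
    p = [wi / total for wi in w]
    # Adversary best-responds: the column maximizing the learner's loss.
    col_losses = [sum(p[i] * L[i][j] for i in range(m)) for j in range(n)]
    j = max(range(n), key=lambda c: col_losses[c])
    # Learner downweights rows that did badly against that column.
    for i in range(m):
        w[i] *= math.exp(-eta * L[i][j])
        avg[i] += p[i] / T

# Worst-case loss of the averaged strategy; approaches the game value 0.5,
# with avg approaching the uniform strategy [0.5, 0.5].
value = max(sum(avg[i] * L[i][j] for i in range(m)) for j in range(n))
print(avg, value)
```

The averaged strategy is approximately minimax, with exploitability bounded by the standard no-regret guarantee; an exact solution could instead be obtained from the minimax theorem via linear programming.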

In future analysis, we may find other properties that the adversarial learning setup inherits from two-player games.