Note: This describes an idea of Jessica Taylor's, and is the first of several posts about aspects of online learning.

Introduction

Thanks to the recent paper on logical inductors, we are currently interested in what can be done with weighted ensembles of predictors. In particular, it would be neat if the idea of finding a fixed point of a set of predictors could be applied to online learning.

It seems like this approach fits neatly into the framework of prediction with expert advice in online convex optimization. Usually, in online convex optimization, you can achieve low cumulative regret with respect to experts that make some fixed predictions. However, using the approach of finding the experts' fixed point, we can obtain low cumulative regret with respect to experts that modify our predictions.

Setup

In each round $t = 1, \dots, T$, the learner predicts a vector $x_t \in \mathcal{K}$, where $\mathcal{K}$ is some compact convex subset of Euclidean space. It does this with the advice of a set of $N$ expert predictors $f_1, \dots, f_N$. Each of these experts is a continuous function $f_i : \mathcal{K} \to \mathcal{K}$ that maps a prediction to a prediction. These experts are interpreted as hypotheses about how predictions can be improved, and they are known to the learner. After lodging its prediction, the learner is informed of the value of the loss $\ell_t(f_i(x_t))$ it would have suffered if each expert's advice had been followed. Each $\ell_t$ is assumed to be convex and $L$-Lipschitz with respect to $x$. The learner's goal is to minimize its regret at time $T$, relative to the advice of its experts, $R_T$, where:

$$R_T = \sum_{t=1}^T \ell_t(x_t) - \min_{1 \le i \le N} \sum_{t=1}^T \ell_t(f_i(x_t)).$$
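
For concreteness, here is a minimal Python sketch of this setup, assuming (purely for illustration) that $\mathcal{K}$ is the box $[0,1]^d$, that there are two toy experts, and that the loss is squared error; none of these specific choices are forced by the setting.

```python
import numpy as np

d = 3  # dimension of the prediction vector

# K = [0, 1]^d: a compact convex subset of Euclidean space.
def project_to_box(x):
    return np.clip(x, 0.0, 1.0)

# Two toy experts, each a continuous map f_i : K -> K, read as
# "how I would modify your prediction".
def expert_shrink_to_center(x):
    return 0.5 * x + 0.25           # pulls predictions toward the center

def expert_nudge_up(x):
    return project_to_box(x + 0.1)  # nudges every coordinate upward

experts = [expert_shrink_to_center, expert_nudge_up]

# A convex loss for round t (Lipschitz on the bounded set K):
# squared distance to a target y_t.
def make_loss(y_t):
    return lambda x: float(np.sum((x - y_t) ** 2))
```
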
Proposed learning approach

The basic idea is that the learner maintains a weighting $w_t$ that indicates its relative level of trust in each expert. This lies on the simplex $\Delta_N$, with element $w_t^i$ giving the credence in the $i$th expert. At each time step, this weighting is used to make a prediction $x_t$. Specifically, the prediction is chosen so that the weighted ensemble of experts would not alter it (i.e. it is a prediction that lies at their fixed point).

Formally, define the weighted combination of the experts at time step $t$ as the map $g_t : \mathcal{K} \to \mathcal{K}$ (well-defined because $\mathcal{K}$ is convex):

$$g_t(x) = \sum_{i=1}^N w_t^i \, f_i(x).$$

For the first time step, each $w_1^i$ may simply be initialized to $1/N$, giving equal weight to each of the expert predictors. Then, at each time step $t$, the learner predicts the vector $x_t$ that lies at a fixed point of $g_t$, so that:

$$g_t(x_t) = x_t.$$

By Brouwer's fixed point theorem, there will be at least one such fixed point, since $g_t$ is a continuous map from the compact convex set $\mathcal{K}$ to itself.
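
For illustration, here is a rough numerical sketch of this step, again assuming the box domain $[0,1]^d$. It treats fixed-point finding as minimizing the residual $\lVert g_t(x) - x \rVert$ with an off-the-shelf optimizer, which is only a heuristic and not a guaranteed (or efficient) procedure.

```python
import numpy as np
from scipy.optimize import minimize

def weighted_combination(experts, w):
    """g_t(x) = sum_i w_t^i f_i(x); stays in K because K is convex."""
    return lambda x: sum(w_i * f(x) for w_i, f in zip(w, experts))

def approximate_fixed_point(g, d, x0=None):
    """Heuristically search for x in [0,1]^d with g(x) ~= x by minimizing
    the squared residual ||g(x) - x||^2. Brouwer guarantees a fixed point
    exists, but finding one can be hard in general."""
    if x0 is None:
        x0 = np.full(d, 0.5)
    residual = lambda x: float(np.sum((g(np.asarray(x)) - x) ** 2))
    res = minimize(residual, x0, bounds=[(0.0, 1.0)] * d, method="L-BFGS-B")
    return res.x

# Hypothetical usage with two toy experts on [0, 1]^3 and uniform weights:
f1 = lambda x: 0.5 * x + 0.25
f2 = lambda x: np.clip(x + 0.1, 0.0, 1.0)
x_t = approximate_fixed_point(weighted_combination([f1, f2], [0.5, 0.5]), d=3)
```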

After submitting this prediction, the learner discovers the loss $\ell_t(f_i(x_t))$ that each expert would have incurred. It uses these to compute a new weighting $w_{t+1}$ for the next time step. This is done by performing an update step with respect to $h_t$, which will be convex and Lipschitz with respect to $w$:

$$h_t(w) = \sum_{i=1}^N w^i \, \ell_t(f_i(x_t)).$$

This can be done using the exponentiated gradients algorithm or some other online convex optimization algorithm. If the exponentiated gradients algorithm is used, we obtain a regret bound of [1]:

$$\sum_{t=1}^T h_t(w_t) - \sum_{t=1}^T h_t(u) \le B \sqrt{2 T \log N},$$

where $u \in \Delta_N$ is the optimal weighting over experts and $B$ depends on the Lipschitz constants of the loss functions. Recall also that $N$ is the number of experts.

Equivalently:

$$\sum_{t=1}^T \sum_{i=1}^N w_t^i \, \ell_t(f_i(x_t)) - \min_{u \in \Delta_N} \sum_{t=1}^T \sum_{i=1}^N u^i \, \ell_t(f_i(x_t)) \le B \sqrt{2 T \log N}.$$

This bounds the regret with respect to the advice of any particular expert, because these experts correspond to cases in which a single expert $i$ is given the full weighting $u = e_i$. Since $x_t = g_t(x_t)$ and $\ell_t$ is convex, we also have $\ell_t(x_t) = \ell_t\left(\sum_i w_t^i f_i(x_t)\right) \le \sum_i w_t^i \, \ell_t(f_i(x_t)) = h_t(w_t)$, so:

$$R_T = \sum_{t=1}^T \ell_t(x_t) - \min_{1 \le i \le N} \sum_{t=1}^T \ell_t(f_i(x_t)) \le \sum_{t=1}^T h_t(w_t) - \min_{1 \le i \le N} \sum_{t=1}^T h_t(e_i) \le B \sqrt{2 T \log N}.$$
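
For concreteness, here is a minimal sketch of the exponentiated gradient step on the weights. Because $h_t$ is linear in $w$, its gradient is just the vector of expert losses, so the update multiplies each weight by $\exp(-\eta \, \ell_t(f_i(x_t)))$ and renormalizes. The learning rate $\eta$ below is an arbitrary illustrative value; in practice it would be tuned (e.g. on the order of $\sqrt{\log(N)/T}$, following the standard analysis).

```python
import numpy as np

def exponentiated_gradient_update(w, expert_losses, eta):
    """One exponentiated gradient step on h_t(w) = <w, expert_losses>,
    keeping w on the simplex."""
    w = np.asarray(w, dtype=float)
    expert_losses = np.asarray(expert_losses, dtype=float)
    w_new = w * np.exp(-eta * expert_losses)
    return w_new / w_new.sum()

# Hypothetical usage with two experts and an illustrative learning rate:
w = np.array([0.5, 0.5])
w = exponentiated_gradient_update(w, expert_losses=[0.3, 0.9], eta=0.5)
```
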
Discussion

This is more general than the usual use of prediction with expert advice because the experts are more varied. Instead of each expert representing a constant prediction, it represents a continuous map from the space of prediction vectors to itself. One limitation, however, is that in order for the fixed point argument to work, the prediction vectors must be located in a compact convex set. So long as these vectors are predicting some quantity with known bounds, though, this is not a severe limitation. Another limitation is that locating the fixed point of the experts may be very computationally difficult.
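
Putting the pieces together, here is a rough end-to-end sketch of the whole loop under the same illustrative assumptions as before (box domain, toy experts, squared-error losses, numerical fixed-point search, exponentiated gradient update). It is meant only to show the shape of the algorithm; as noted above, the fixed-point step in particular may be expensive.

```python
import numpy as np
from scipy.optimize import minimize

def run_learner(experts, losses, d, eta):
    """Each round: predict an (approximate) fixed point of the weighted
    ensemble, observe every expert's counterfactual loss, then take an
    exponentiated gradient step on the weights."""
    N = len(experts)
    w = np.full(N, 1.0 / N)        # uniform initial weighting
    x = np.full(d, 0.5)            # warm start for the fixed-point search
    total_loss = 0.0
    for loss_t in losses:          # losses[t] maps a prediction to a real number
        g = lambda z: sum(wi * f(z) for wi, f in zip(w, experts))
        res = minimize(lambda z: float(np.sum((g(np.asarray(z)) - z) ** 2)),
                       x, bounds=[(0.0, 1.0)] * d, method="L-BFGS-B")
        x = res.x                  # approximate fixed point: g(x) ~= x
        total_loss += loss_t(x)
        expert_losses = np.array([loss_t(f(x)) for f in experts])
        w = w * np.exp(-eta * expert_losses)
        w = w / w.sum()
    return total_loss, w

# Hypothetical usage: two toy experts, squared-error losses with random targets.
experts = [lambda x: 0.5 * x + 0.25, lambda x: np.clip(x + 0.1, 0.0, 1.0)]
targets = np.random.rand(50, 3)
losses = [lambda x, y=y: float(np.sum((x - y) ** 2)) for y in targets]
total_loss, final_w = run_learner(experts, losses, d=3, eta=0.5)
```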

Footnotes

  1. Given in Corollary 2.14 on page 140 of the review Shalev-Shwartz, Shai. "Online learning and online convex optimization." Foundations and Trends in Machine Learning 4.2 (2011): 107-194.

Comments

This is cool!

It would be nice to know what you can get in polynomial time. Can you find $\epsilon$-approximate fixed points in this setting in time polynomial in $1/\epsilon$? That would give a polynomial time algorithm with the same regret asymptotically.

(It's normally prediction with expert advice rather than assistance. And you should probably use bigger parentheses around sums.)

In general, finding $\epsilon$-approximate fixed points will be PPAD-hard. On the other hand, it seems like we could use an online learning algorithm to get a distribution over weight vectors such that the combined expert's expected modification to a weight vector sampled from this distribution is small, in polynomial time. I haven't worked out the details, but some version of this might imply that the combined expert can't expect to do better than you, if you randomly choose a weight vector from this distribution and the loss function is decided before you make this random choice.

(by the way, the same technique should work to construct a quasipolynomial-time randomized "logical inductor" satisfying a weaker property)

A very similar idea is actually introduced by Blum and Mansour in 2007 on page 1312, though they deal with linear rather than continuous transformations and so can find the fixed point easily.

Also see the notion of "internal" regret, e.g. as discussed by Blum and Mansour.

And I just wanted to write a post about an algorithm that leads to correlated equilibria without information or monitoring by ensuring a bound on swap regret, thinking this result was novel. Oh well...

We could also generalize this to a general update function of two arguments, where the first argument may range over the simplex over experts or some other set. Then we just require that this function be convex in each argument and Lipschitz in the first. And the complexity is certainly polynomial in the number of calls to the fixed point oracle. This seems to be what I needed to improve the bound in my COLT paper. (I explicitly used the fixed point in the analysis, but didn't think of this algorithm.)

Given that this is my first post, critical feedback is especially welcome.