Personal Blog

Summary: If we train a classifier on a training set that comes from one distribution, and test it on a dataset coming from a different distribution, uniform convergence guarantees generally no longer hold. This post presents a strategy for creating classifiers that will reject test points when they are sufficiently different from training data. It works by rejecting points that are much more probable under the predicted test distribution than under the training distribution.

Introduction

In machine learning, we often train a system (e.g. a classifier or regression system) on a training set, and then test it on a test set. If the test set comes from the same distribution as the training set, uniform convergence guarantees allow us to bound the system's expected error on the test set based on its performance on the training set. As an example, if we are creating an automated system for making moral judgments, we could get training data by asking humans for their moral judgments. Then we could use the system to make additional moral judgments.

If the test dataset comes from the same distribution as the training dataset, then uniform convergence guarantees can give us nice bounds on the performance on the test set. In reality, the test set will often come from a different distribution. For a moral judgment system, this could be disastrous: perhaps we only train the classifier on ordinary moral problems, but then the classifier is asked to decide whether it is a good idea to tile the universe with tiny smiley faces. At this point, we have no guarantees about whether the classifier will correctly judge this question.

Therefore, I aim to create a system that, when presented with a question, will choose to either answer the question or abort. It should abort when the question is sufficiently different from the training data that the system cannot make reliable judgments.

Mathematical setup

We have some input set $\mathcal{X}$. Let $p_{train}$ be the (known) distribution of training data. We train a classifier (or some other system) on the training data, and (using uniform convergence guarantees) estimate that it has good expected performance on random points from $p_{train}$. Afterwards, we will receive test points in $\mathcal{X}$. The system can either use the classifier on the test point and make decisions on the basis of this classification, or abort (e.g. by asking a human for their judgment instead of the classifier).

Specifically, suppose there is some unknown cost function $c : \mathcal{X} \to [0, \infty)$. $c(x)$ represents the cost of using the classifier's judgment on point $x$. Due to uniform convergence guarantees, we have $\mathbb{E}_{p_{train}}[c(X)] \leq 1$ (i.e. it is not very costly to use the classifier on random training points). Also suppose that aborting always has a cost of $k$.

Let $p_{test}$ represent the system's prediction for the next test point. After predicting the next test point, the system chooses a policy $\pi : \mathcal{X} \to [0, 1]$, which maps each possible input point to the probability that the system will use the classifier for that point.

The total expected cost will then be

$$\mathbb{E}_{p_{test}}[\pi(X)\,c(X) + (1 - \pi(X))\,k] = k - k\,\mathbb{E}_{p_{test}}[\pi(X)] + \mathbb{E}_{p_{test}}[\pi(X)\,c(X)].$$
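
To make this concrete, here is a minimal sketch (not from the original post) of the expected-cost computation, assuming a finite input set and representing $p_{test}$, $\pi$, and $c$ as Python dictionaries; all names and numbers below are illustrative.

```python
# Minimal sketch of the cost model, assuming a finite input set.
# For each point x: with probability policy[x] we use the classifier (cost cost[x]),
# otherwise we abort (cost k).

def expected_cost(p_test, policy, cost, k):
    return sum(
        p * (policy[x] * cost[x] + (1.0 - policy[x]) * k)
        for x, p in p_test.items()
    )

# Illustrative example: the classifier is cheap on 'a' but costly on 'b'.
p_test = {"a": 0.5, "b": 0.5}
policy = {"a": 1.0, "b": 0.2}   # probability of using the classifier on each point
cost   = {"a": 0.1, "b": 5.0}   # unknown in practice; made up here for illustration
k = 1.0                         # cost of aborting
print(expected_cost(p_test, policy, cost, k))  # equals k - k*E[pi(X)] + E[pi(X)c(X)]
```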

The minimax policy

Suppose the system chooses a minimax policy. Then

$$\pi \in \arg\min_{\pi} \max_{c : \mathbb{E}_{p_{train}}[c(X)] \leq 1} \left( k - k\,\mathbb{E}_{p_{test}}[\pi(X)] + \mathbb{E}_{p_{test}}[\pi(X)\,c(X)] \right).$$

To solve this optimization problem, first consider how to maximize $\mathbb{E}_{p_{test}}[\pi(X)]$ (i.e. the probability of using the classifier for the next point) subject to the hard constraint on $\pi$ that $\max_{c : \mathbb{E}_{p_{train}}[c(X)] \leq 1} \mathbb{E}_{p_{test}}[\pi(X)\,c(X)] \leq d$, for some constant $d > 0$.

The constraint can be rewritten:

$$\max_{c : \mathbb{E}_{p_{train}}[c(X)] \leq 1} \sum_{x \in \mathcal{X}} p_{test}(x)\,\pi(x)\,c(x) \leq d.$$

As explained previously in the post about quantilizers, this constraint is equivalent to

$$\max_{x \in \mathcal{X}} \frac{p_{test}(x)\,\pi(x)}{p_{train}(x)} \leq d$$

or equivalently

$$\forall x \in \mathcal{X}: \quad \pi(x) \leq d\,\frac{p_{train}(x)}{p_{test}(x)}.$$

(The worst case is attained by a cost function that spends its entire budget on a single point, setting $c(x) = 1/p_{train}(x)$ at whichever $x$ maximizes $p_{test}(x)\,\pi(x)/p_{train}(x)$ and $c = 0$ elsewhere.)

This constraint states that we may only rarely use the classifier for points that are much more probable under the predicted test distribution than the training distribution.
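
As a quick sanity check (not part of the original derivation), we can verify this equivalence numerically: the worst case over cost functions is a linear program, and its value should match the maximum likelihood ratio. The distributions and policy below are made up for illustration.

```python
# Numerical check that  max_{c >= 0 : E_{p_train}[c(X)] <= 1} E_{p_test}[pi(X) c(X)]
# equals  max_x p_test(x) * pi(x) / p_train(x),  via a linear program.
import numpy as np
from scipy.optimize import linprog

p_train = np.array([0.6, 0.3, 0.1])
p_test  = np.array([0.2, 0.3, 0.5])
pi      = np.array([1.0, 0.8, 0.1])

# linprog minimizes, so negate the objective; constraint: sum_x p_train(x) c(x) <= 1, c >= 0.
res = linprog(c=-(p_test * pi), A_ub=[p_train], b_ub=[1.0], bounds=(0, None))
print(-res.fun)                        # LP value
print(np.max(p_test * pi / p_train))   # closed form; should agree up to tolerance
```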

How can the system maximize $\mathbb{E}_{p_{test}}[\pi(X)]$ subject to this constraint on $\pi$? Since the constraint is a conjunction of independent constraints for each $x \in \mathcal{X}$, and the quantity to be maximized is monotonic in each $\pi(x)$, it is sufficient to maximize each $\pi(x)$ subject to the constraint that $\pi(x) \leq d\,\frac{p_{train}(x)}{p_{test}(x)}$. But this is achieved by setting $\pi(x) = \min\left\{1,\ d\,\frac{p_{train}(x)}{p_{test}(x)}\right\}$. Therefore, this policy maximizes $\mathbb{E}_{p_{test}}[\pi(X)]$ subject to the constraint.

Now let's step back and use this fact to solve the original optimization problem. This policy achieves maximum expected cost

$$k - k \sum_{x \in \mathcal{X}} p_{test}(x) \min\left\{1,\ d\,\frac{p_{train}(x)}{p_{test}(x)}\right\} + d \;=\; k - k \sum_{x \in \mathcal{X}} \min\{p_{test}(x),\ d\,p_{train}(x)\} + d.$$

To minimize worst-case expected cost, choose $d$ to minimize this quantity, and then set $\pi(x) = \min\left\{1,\ d\,\frac{p_{train}(x)}{p_{test}(x)}\right\}$.
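
Below is a sketch of this single-point procedure for a finite input set, with illustrative numbers; it is my own implementation of the rule above, not code from the post. The worst-case cost $k - k\sum_x \min\{p_{test}(x), d\,p_{train}(x)\} + d$ is piecewise linear and convex in $d$, so scanning its breakpoints $d = p_{test}(x)/p_{train}(x)$ (plus $d = 0$) finds the minimizer exactly.

```python
# Sketch of the single-point minimax policy: choose d to minimize the worst-case
# cost, then use the classifier on x with probability min(1, d * p_train(x) / p_test(x)).

def worst_case_cost(d, p_train, p_test, k):
    return k - k * sum(min(p_test[x], d * p_train[x]) for x in p_test) + d

def minimax_policy(p_train, p_test, k):
    # The worst-case cost is piecewise linear and convex in d, with breakpoints
    # at d = p_test(x) / p_train(x), so it suffices to scan those (and d = 0).
    candidates = [0.0] + [p_test[x] / p_train[x] for x in p_test if p_train[x] > 0]
    d = min(candidates, key=lambda d: worst_case_cost(d, p_train, p_test, k))
    policy = {x: min(1.0, d * p_train[x] / p_test[x]) if p_test[x] > 0 else 1.0
              for x in p_test}
    return d, policy

# Illustrative example: 'c' is common in the predicted test data but rare in training.
p_train = {"a": 0.45, "b": 0.45, "c": 0.10}
p_test  = {"a": 0.30, "b": 0.30, "c": 0.40}
d, policy = minimax_policy(p_train, p_test, k=5.0)
print(d, policy)  # 'a' and 'b' are always classified; 'c' only rarely
```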

Classifying multiple points

The previous section analyzed the case when we classify a single point $X$ drawn from $p_{test}$. We can apply this strategy repeatedly to classify multiple data points, but doing so is not necessarily optimal. Instead, the optimal policy rejects points whose expected number of appearances in the test data is high compared to their probability under the training distribution. The derivation of this rule follows.

Suppose we will classify multiple data points $X_1, \ldots, X_n$, with joint distribution $p_{test}$. We will choose the policy $(\pi_1, \ldots, \pi_n)$, where $\pi_i(x_1, \ldots, x_i)$ is the probability of using the classification of point $x_i$ given the previous points $x_1, \ldots, x_{i-1}$. All $\pi_i$ will be chosen to jointly minimize the maximum possible total expected cost

$$\max_{c : \mathbb{E}_{p_{train}}[c(X)] \leq 1} \sum_{i=1}^n \mathbb{E}_{p_{test}}\left[k - k\,\pi_i(X_1, \ldots, X_i) + \pi_i(X_1, \ldots, X_i)\,c(X_i)\right].$$

Due to linearity of expectation, this can be decomposed as

$$
\begin{aligned}
& \max_{c : \mathbb{E}_{p_{train}}[c(X)] \leq 1} \sum_{i=1}^n \left(k - k\,\mathbb{E}_{p_{test}}[\pi_i(X_1, \ldots, X_i)] + \mathbb{E}_{p_{test}}[\pi_i(X_1, \ldots, X_i)\,c(X_i)]\right) \\
=\; & nk - k \sum_{i=1}^n \mathbb{E}_{p_{test}}[\pi_i(X_1, \ldots, X_i)] + \max_{c : \mathbb{E}_{p_{train}}[c(X)] \leq 1} \sum_{i=1}^n \mathbb{E}_{p_{test}}[\pi_i(X_1, \ldots, X_i)\,c(X_i)] \\
=\; & nk - k \sum_{i=1}^n \sum_{x \in \mathcal{X}} p_{test}(X_i = x)\,\mathbb{E}_{p_{test}}[\pi_i(X_1, \ldots, X_{i-1}, x)] + \max_{c : \mathbb{E}_{p_{train}}[c(X)] \leq 1} \sum_{i=1}^n \sum_{x \in \mathcal{X}} p_{test}(X_i = x)\,\mathbb{E}_{p_{test}}[\pi_i(X_1, \ldots, X_{i-1}, x)]\,c(x) \\
=\; & nk - k \sum_{x \in \mathcal{X}} \sum_{i=1}^n p_{test}(X_i = x)\,\mathbb{E}_{p_{test}}[\pi_i(X_1, \ldots, X_{i-1}, x)] + \max_{c : \mathbb{E}_{p_{train}}[c(X)] \leq 1} \sum_{x \in \mathcal{X}} c(x) \sum_{i=1}^n p_{test}(X_i = x)\,\mathbb{E}_{p_{test}}[\pi_i(X_1, \ldots, X_{i-1}, x)].
\end{aligned}
$$

In this expression, each $\pi_i$ appears only inside sums of the form $\sum_{i=1}^n p_{test}(X_i = x)\,\mathbb{E}_{p_{test}}[\pi_i(X_1, \ldots, X_{i-1}, x)]$. This means that the only features of the policy relevant to expected cost are these sums, one for each $x \in \mathcal{X}$. Define $\alpha(x) := \sum_{i=1}^n p_{test}(X_i = x)$, the expected number of times $x$ appears among the test points. The sum $\sum_{i=1}^n p_{test}(X_i = x)\,\mathbb{E}_{p_{test}}[\pi_i(X_1, \ldots, X_{i-1}, x)]$ must be between 0 and $\alpha(x)$, and any value in this interval can be achieved using a policy which ignores both $i$ and its first $i-1$ inputs (i.e. $\pi_i(x_1, \ldots, x_i) = \pi(x_i)$ for some $\pi : \mathcal{X} \to [0, 1]$; here $\pi(x)$ is the probability of using the classification of a point $x$, regardless of which iteration it is encountered on). Therefore, an optimal policy of this form exists, and we can rewrite the expected total cost as

$$nk - k \sum_{x \in \mathcal{X}} \alpha(x)\,\pi(x) + \max_{c : \mathbb{E}_{p_{train}}[c(X)] \leq 1} \sum_{x \in \mathcal{X}} c(x)\,\alpha(x)\,\pi(x).$$

As before, suppose we have a hard constraint that $\max_{c : \mathbb{E}_{p_{train}}[c(X)] \leq 1} \sum_{x \in \mathcal{X}} c(x)\,\alpha(x)\,\pi(x) \leq d$ and we want to maximize $\sum_{x \in \mathcal{X}} \alpha(x)\,\pi(x)$. Just as the original constraint could be rewritten as $\forall x \in \mathcal{X}: \pi(x) \leq d\,\frac{p_{train}(x)}{p_{test}(x)}$, this constraint can be rewritten as $\forall x \in \mathcal{X}: \pi(x) \leq d\,\frac{p_{train}(x)}{\alpha(x)}$. To maximize $\sum_{x \in \mathcal{X}} \alpha(x)\,\pi(x)$ subject to this constraint, just maximize each term subject to $\pi(x) \leq d\,\frac{p_{train}(x)}{\alpha(x)}$ for each $x$. This is achieved by setting $\pi(x) = \min\left\{1,\ d\,\frac{p_{train}(x)}{\alpha(x)}\right\}$.

Now this policy achieves maximum expected cost

$$
\begin{aligned}
& nk - k \sum_{x \in \mathcal{X}} \alpha(x) \min\left\{1,\ d\,\frac{p_{train}(x)}{\alpha(x)}\right\} + d \\
=\; & nk - k \sum_{x \in \mathcal{X}} \min\{\alpha(x),\ d\,p_{train}(x)\} + d.
\end{aligned}
$$

So to minimize the original expected cost, it is sufficient to choose $d$ to minimize this quantity, and then choose $\pi(x) = \min\left\{1,\ d\,\frac{p_{train}(x)}{\alpha(x)}\right\}$.
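
Here is a sketch of the multiple-point version, assuming we can sample whole test sequences $(X_1, \ldots, X_n)$ from the predictive model so that $\alpha(x)$ can be estimated by Monte Carlo. The function and variable names are made up for illustration; the policy and the choice of $d$ follow the formulas above.

```python
# Sketch of the multiple-point policy: estimate alpha(x), the expected number of
# times x appears among the test points, then use the classifier on x with
# probability min(1, d * p_train(x) / alpha(x)), with d minimizing the worst-case cost.
import random
from collections import Counter

def estimate_alpha(sample_test_sequence, num_samples=10000):
    counts = Counter()
    for _ in range(num_samples):
        counts.update(sample_test_sequence())   # one draw of (X_1, ..., X_n)
    return {x: c / num_samples for x, c in counts.items()}

def multi_point_policy(p_train, alpha, k, n):
    def worst_case_cost(d):
        return n * k - k * sum(min(alpha[x], d * p_train.get(x, 0.0)) for x in alpha) + d
    # As in the single-point case, the worst-case cost is piecewise linear and
    # convex in d, so scanning the breakpoints (plus d = 0) is sufficient.
    candidates = [0.0] + [alpha[x] / p_train[x] for x in alpha if p_train.get(x, 0.0) > 0]
    d = min(candidates, key=worst_case_cost)
    return {x: min(1.0, d * p_train.get(x, 0.0) / alpha[x]) for x in alpha}

# Illustrative usage with an i.i.d. predictive model over three points:
n = 10
p_test = {"a": 0.3, "b": 0.3, "c": 0.4}
sampler = lambda: random.choices(list(p_test), weights=list(p_test.values()), k=n)
alpha = estimate_alpha(sampler)
policy = multi_point_policy({"a": 0.45, "b": 0.45, "c": 0.10}, alpha, k=5.0, n=n)
print(policy)  # points rare in training but expected often in testing are mostly rejected
```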

Extensions and questions for future research

  • These bounds rely on the cost decomposing as a sum of costs for each iteration. How problematic is this assumption, and how might we drop it? This problem is also discussed in the post about quantilizers.
  • Here, the uniform convergence guarantee is only about a single distribution, $p_{train}$. In practice, the same classifier may have good performance on many different distributions. It should be possible to use these additional bounds to use the classifier on a wider range of test points.
  • The analysis does not handle the case when the policy $\pi_i$ affects the distribution of $X_j$ for any $j > i$.
  • It is somewhat worrying that the system uses the classification of every point with some probability. This is justified under certain adversarial assumptions, but it indicates some flaw in this approach, and fixing it in a principled manner might yield a better system.
  • This system requires an accurate Bayesian prediction module. It should be possible to adapt this system to work under pessimistic assumptions about predictions (e.g. choosing a minimax policy if we have many predictors and only one is guaranteed to be accurate).
  • If the data points contain timestamps, then the test data will always be identifiably different from the training data (since they have higher timestamps). This is a problem, because then the system will almost always reject new points. There might be some additional constraints we could place on the classifier to prevent this from being a problem.
  • If we predict that we will see some class of points in the test data, and we do not have enough training data to know how to classify these points, perhaps it would be good to ask the user about some points in this class (i.e. active learning). It might be interesting to look at active learning in this framework.