Summary: If we train a classifier on a training set that comes from one distribution, and test it on a dataset coming from a different distribution, uniform convergence guarantees generally no longer hold. This post presents a strategy for creating classifiers that will reject test points when they are sufficiently different from training data. It works by rejecting points that are much more probable under the predicted test distribution than under the training distribution.
Introduction
In machine learning, we often train a system (e.g. a classifier or regression system) on a training set, and then test it on a test set. If the test set comes from the same distribution as the training set, uniform convergence guarantees allow us to bound the system's expected error on the test set based on its performance on the training set. As an example, if we are creating an automated system for making moral judgments, we could get training data by asking humans for their moral judgments. Then we could use the system to make additional moral judgments.
In reality, however, the test set will often come from a different distribution, and these guarantees no longer apply. For a moral judgment system, this could be disastrous: perhaps we only train the classifier on ordinary moral problems, but then the classifier decides whether it is a good idea to tile the universe with tiny smiley faces. At this point, we have no guarantees about whether the classifier will correctly judge this question.
Therefore, I aim to create a system that, when presented with a question, will choose to either answer the question or abort. It should abort when the question is sufficiently different from the training data that the system cannot make reliable judgments.
Mathematical setup
We have some input set $\mathcal{X}$. Let $p_{train} : \Delta \mathcal{X}$ be the (known) distribution of training data. We train a classifier (or some other system) on the training data, and (using uniform convergence guarantees) estimate that it has good expected performance on random points from $p_{train}$. Afterwards, we will receive test points in $\mathcal{X}$. The system can either use the classifier on the test point and make decisions on the basis of this classification, or abort (e.g. by asking a human for their judgment instead of the classifier).
Specifically, suppose there is some unknown cost function $c : \mathcal{X} \to \mathbb{R}^+$; $c(x)$ represents the cost of using the classifier's judgment on point $x$. Due to uniform convergence guarantees, we have $\mathbb{E}_{p_{train}}[c(X)] \leq 1$ (i.e. it is not very costly to use the classifier on random training points). Also suppose that aborting always has a cost of $k$.
Let $p_{test} : \Delta \mathcal{X}$ represent the system's prediction for the next test point. After predicting the next test point, the system chooses a policy $\pi : \mathcal{X} \to [0, 1]$, which maps each possible input point to the probability that the system will use the classifier for that point.
The total expected cost will then be
$$\mathbb{E}_{p_{test}}[\pi(X)\,c(X) + (1 - \pi(X))\,k] = k - k\,\mathbb{E}_{p_{test}}[\pi(X)] + \mathbb{E}_{p_{test}}[\pi(X)\,c(X)]$$
The minimax policy
Suppose the system chooses a minimax policy. Then
$$\pi = \arg\min_\pi \max_{c :\ \mathbb{E}_{p_{train}}[c(X)] \leq 1} \left(k - k\,\mathbb{E}_{p_{test}}[\pi(X)] + \mathbb{E}_{p_{test}}[\pi(X)\,c(X)]\right) = \arg\min_\pi \left(-k\,\mathbb{E}_{p_{test}}[\pi(X)] + \max_{c :\ \mathbb{E}_{p_{train}}[c(X)] \leq 1} \mathbb{E}_{p_{test}}[\pi(X)\,c(X)]\right)$$
To solve this optimization problem, first consider how to maximize $\mathbb{E}_{p_{test}}[\pi(X)]$ (i.e. the probability of using the classifier for the next point) subject to the hard constraint on $\pi$ that $\forall c.\ \mathbb{E}_{p_{train}}[c(X)] \leq 1 \to \mathbb{E}_{p_{test}}[\pi(X)\,c(X)] \leq d$.
The constraint can be rewritten:
$$\forall c.\ \frac{\mathbb{E}_{p_{test}}[\pi(X)\,c(X)]}{\mathbb{E}_{p_{train}}[c(X)]} \leq d$$
As explained previously in the post about quantilizers, this constraint is equivalent to
$$\forall x \in \mathcal{X}.\ \frac{p_{test}(x)\,\pi(x)}{p_{train}(x)} \leq d$$
or equivalently
$$\forall x \in \mathcal{X}.\ \pi(x) \leq d\,\frac{p_{train}(x)}{p_{test}(x)}$$
This constraint states that we may only rarely use the classifier for points that are much more probable under the predicted test distribution than under the training distribution. For example, if $p_{test}(x) = 0.5$ but $p_{train}(x) = 0.01$, then with $d = 2$ the system may use the classifier on $x$ with probability at most $\pi(x) \leq 0.04$.
How can the system maximize $\mathbb{E}_{p_{test}}[\pi(X)]$ subject to this constraint on $\pi$? Since the constraint is a conjunction of independent constraints for each $x$, and the quantity to be maximized is monotonic in each $\pi(x)$, it is sufficient to maximize each $\pi(x)$ subject to the constraint that $\pi(x) \leq d\,\frac{p_{train}(x)}{p_{test}(x)}$. But this is achieved by setting $\pi(x) = \min\left\{1, d\,\frac{p_{train}(x)}{p_{test}(x)}\right\}$. Therefore, this policy maximizes $\mathbb{E}_{p_{test}}[\pi(X)]$ subject to the constraint.
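As a quick illustration, here is a minimal Python sketch of this capped policy over a small finite $\mathcal{X}$; the particular distributions and the value of $d$ are made-up choices of mine, not from the post:

```python
import numpy as np

# Toy finite input space: made-up training and predicted test distributions.
p_train = np.array([0.5, 0.3, 0.15, 0.05])
p_test  = np.array([0.1, 0.1, 0.3, 0.5])
d = 2.0  # illustrative value of the constraint parameter

# The derived policy: use the classifier with probability
# pi(x) = min(1, d * p_train(x) / p_test(x)).
pi = np.minimum(1.0, d * p_train / p_test)

# Sanity check: the constraint p_test(x) * pi(x) / p_train(x) <= d holds.
assert np.all(p_test * pi / p_train <= d + 1e-9)
print(pi)  # points far likelier under p_test than under p_train get capped
```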
Now let's step back and use this fact to solve the original optimization problem. This policy $\pi$ achieves worst-case expected cost
$$\max_{c :\ \mathbb{E}_{p_{train}}[c(X)] \leq 1} \mathbb{E}_{p_{test}}[\pi(X)\,c(X) + (1 - \pi(X))\,k]$$
$$= k - k\,\mathbb{E}_{p_{test}}[\pi(X)] + \max_{c :\ \mathbb{E}_{p_{train}}[c(X)] \leq 1} \mathbb{E}_{p_{test}}[\pi(X)\,c(X)]$$
$$= k - k\,\mathbb{E}_{p_{test}}\left[\min\left\{1, d\,\frac{p_{train}(X)}{p_{test}(X)}\right\}\right] + d$$
$$= k\,\mathbb{E}_{p_{test}}\left[\max\left\{0, 1 - d\,\frac{p_{train}(X)}{p_{test}(X)}\right\}\right] + d$$
(the last step uses the identity $1 - \min\{1, r\} = \max\{0, 1 - r\}$).
To minimize worst-case expected cost, choose $d$ to minimize this quantity, and then set $\pi(x) = \min\left\{1, d\,\frac{p_{train}(x)}{p_{test}(x)}\right\}$.
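Continuing the sketch, the minimizing $d$ can be found numerically, e.g. by scanning a grid over the worst-case cost expression above (the grid bounds, the value of $k$, and the distributions are again illustrative assumptions):

```python
import numpy as np

p_train = np.array([0.5, 0.3, 0.15, 0.05])
p_test  = np.array([0.1, 0.1, 0.3, 0.5])
k = 5.0  # assumed cost of aborting (e.g. asking a human)

def worst_case_cost(d):
    # k * E_test[max(0, 1 - d * p_train(X) / p_test(X))] + d
    slack = np.maximum(0.0, 1.0 - d * p_train / p_test)
    return k * np.dot(p_test, slack) + d

# The expression is piecewise linear in d, so a fine grid scan suffices here.
grid = np.linspace(0.0, 20.0, 2001)
d_star = min(grid, key=worst_case_cost)

pi = np.minimum(1.0, d_star * p_train / p_test)
print(d_star, worst_case_cost(d_star), pi)
```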
Classifying multiple points
The previous section analyzed the case where we classify a single point $X$. We can apply this strategy repeatedly to classify multiple data points, but doing so is not necessarily optimal. Instead, the optimal policy rejects points whose expected number of appearances in the test data is large compared to their probability under the training distribution. The derivation of this rule follows.
Suppose we will classify multiple data points $X_1, ..., X_n$, with joint distribution $p_{test}(X_1, ..., X_n)$. We will choose policies $\pi_i : \mathcal{X}^i \to [0, 1]$, where $\pi_i(X_1, ..., X_{i-1}, X_i)$ is the probability of using the classification of point $X_i$ given the previous points $X_1, ..., X_{i-1}$. All $\pi_i$ will be chosen to jointly minimize the maximum possible total expected cost
$$\max_{c :\ \mathbb{E}_{p_{train}}[c(X)] \leq 1} \mathbb{E}_{p_{test}}\left[\sum_{i=1}^n \left(\pi_i(X_1, ..., X_i)\,c(X_i) + (1 - \pi_i(X_1, ..., X_i))\,k\right)\right]$$
Due to linearity of expectation, this can be decomposed:
$$\max_{c :\ \mathbb{E}_{p_{train}}[c(X)] \leq 1} \sum_{i=1}^n \mathbb{E}_{p_{test}}\left[\pi_i(X_1, ..., X_i)\,c(X_i) + (1 - \pi_i(X_1, ..., X_i))\,k\right]$$
$$= nk - k \sum_{i=1}^n \mathbb{E}_{p_{test}}[\pi_i(X_1, ..., X_i)] + \max_{c :\ \mathbb{E}_{p_{train}}[c(X)] \leq 1} \sum_{i=1}^n \mathbb{E}_{p_{test}}[\pi_i(X_1, ..., X_i)\,c(X_i)]$$
$$= nk - k \sum_{i=1}^n \sum_{x \in \mathcal{X}} p_{test}(X_i = x)\,\mathbb{E}_{p_{test}}[\pi_i(X_1, ..., X_{i-1}, x)] + \max_{c :\ \mathbb{E}_{p_{train}}[c(X)] \leq 1} \sum_{i=1}^n \sum_{x \in \mathcal{X}} p_{test}(X_i = x)\,\mathbb{E}_{p_{test}}[\pi_i(X_1, ..., X_{i-1}, x)]\,c(x)$$
$$= nk - k \sum_{x \in \mathcal{X}} \sum_{i=1}^n p_{test}(X_i = x)\,\mathbb{E}_{p_{test}}[\pi_i(X_1, ..., X_{i-1}, x)] + \max_{c :\ \mathbb{E}_{p_{train}}[c(X)] \leq 1} \sum_{x \in \mathcal{X}} c(x) \sum_{i=1}^n p_{test}(X_i = x)\,\mathbb{E}_{p_{test}}[\pi_i(X_1, ..., X_{i-1}, x)]$$
In this expression, each use of $\pi$ appears in a term of the form $\sum_{i=1}^n p_{test}(X_i = x)\,\mathbb{E}_{p_{test}}[\pi_i(X_1, ..., X_{i-1}, x)]$, so these sums are the only features of $\pi$ relevant to the expected cost. Define $\alpha(x) = \sum_{i=1}^n p_{test}(X_i = x)$. Each such sum must be between $0$ and $\alpha(x)$, and any value in this interval can be achieved using a $\pi_i$ which ignores both $i$ and its first $i - 1$ inputs (i.e. $\pi_i(x_1, ..., x_i) = \beta(x_i)$ for some $\beta : \mathcal{X} \to [0, 1]$; $\beta(x)$ is the probability of using the classification of a point $x$, regardless of which iteration it is encountered on). Therefore, an optimal policy of this form exists, and we can rewrite the expected total cost as
$$nk - k \sum_{x \in \mathcal{X}} \alpha(x)\,\beta(x) + \max_{c :\ \mathbb{E}_{p_{train}}[c(X)] \leq 1} \sum_{x \in \mathcal{X}} c(x)\,\alpha(x)\,\beta(x)$$
As before, suppose we have a hard constraint that
$$\forall c.\ \mathbb{E}_{p_{train}}[c(X)] \leq 1 \to \sum_{x \in \mathcal{X}} c(x)\,\alpha(x)\,\beta(x) \leq d$$
and we want to maximize $\sum_{x \in \mathcal{X}} \alpha(x)\,\beta(x)$.
Just as the original constraint
$$\forall c.\ \mathbb{E}_{p_{train}}[c(X)] \leq 1 \to \mathbb{E}_{p_{test}}[\pi(X)\,c(X)] \leq d$$
could be rewritten as
$$\forall x \in \mathcal{X}.\ p_{test}(x)\,\pi(x) \leq d\,p_{train}(x)$$
this constraint can be rewritten as
$$\forall x \in \mathcal{X}.\ \alpha(x)\,\beta(x) \leq d\,p_{train}(x)$$
To maximize $\sum_{x \in \mathcal{X}} \alpha(x)\,\beta(x)$ subject to this constraint, just maximize each term $\alpha(x)\,\beta(x)$ subject to $\alpha(x)\,\beta(x) \leq d\,p_{train}(x)$ for each $x$. This is achieved by setting $\beta(x) = \min\left\{1, d\,\frac{p_{train}(x)}{\alpha(x)}\right\}$.
Now this policy achieves maximum expected cost
$$nk - k \sum_{x \in \mathcal{X}} \alpha(x)\,\beta(x) + d = nk - k \sum_{x \in \mathcal{X}} \alpha(x) \min\left\{1, d\,\frac{p_{train}(x)}{\alpha(x)}\right\} + d = nk - k \sum_{x \in \mathcal{X}} \min\{\alpha(x),\ d\,p_{train}(x)\} + d$$
So to minimize the original worst-case expected cost, it is sufficient to choose $d$ to minimize this quantity, and then choose $\pi_i(x_1, ..., x_i) = \min\left\{1, d\,\frac{p_{train}(x_i)}{\alpha(x_i)}\right\}$.
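The same kind of sketch works for the multi-point recipe. Note that only the marginals $p_{test}(X_i = x)$ enter, through $\alpha$; the marginals below (and $k$, and the grid) are made-up values for illustration:

```python
import numpy as np

p_train = np.array([0.5, 0.3, 0.15, 0.05])
k = 5.0  # assumed cost of aborting

# Made-up marginals p_test(X_i = x) for n = 3 test points (row i is point i).
marginals = np.array([
    [0.1, 0.1, 0.3, 0.5],
    [0.2, 0.2, 0.3, 0.3],
    [0.4, 0.3, 0.2, 0.1],
])
n = len(marginals)
alpha = marginals.sum(axis=0)  # alpha(x) = sum_i p_test(X_i = x)

def worst_case_cost(d):
    # nk - k * sum_x min(alpha(x), d * p_train(x)) + d
    return n * k - k * np.sum(np.minimum(alpha, d * p_train)) + d

grid = np.linspace(0.0, 30.0, 3001)
d_star = min(grid, key=worst_case_cost)

# beta(x) = min(1, d * p_train(x) / alpha(x)): per-point acceptance probability.
beta = np.minimum(1.0, d_star * p_train / alpha)
print(d_star, beta)
```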
Extensions and questions for future research
These bounds rely on the cost decomposing as a sum of costs for each iteration. How problematic is this assumption, and how might we drop it? This problem is also discussed in the post about quantilizers.
Here, the uniform convergence guarantee is only about a single distribution, $p_{train}$. In practice, the same classifier may have good performance on many different distributions. It should be possible to exploit these additional bounds to use the classifier on a wider range of test points.
The analysis does not handle the case where the policy $\pi_i$ affects the distribution of $X_j$ for $j > i$.
It is somewhat worrying that the system uses the classification of every point with some probability. This is justified under certain adversarial assumptions, but it indicates some flaw in this approach, and fixing it in a principled manner might yield a better system.
This system requires an accurate Bayesian prediction module. It should be possible to adapt this system to work under pessimistic assumptions about predictions (e.g. choosing a minimax policy if we have many predictors and only one is guaranteed to be accurate).
If the data points contain timestamps, then the test data will always be identifiably different from the training data (since they have higher timestamps). This is a problem, because then the system will almost always reject new points. There might be some additional constraints we could place on the classifier to prevent this from being a problem.
If we predict that we will see some class of points in the test data, and we do not have enough training data to know how to classify these points, perhaps it would be good to ask the user about some points in this class (i.e. active learning). It might be interesting to look at active learning in this framework.