Fair upfront warning: This is not a particularly readable proof section (though much better than Section 2 about belief functions). There's dense notation, logical leaps due to the illusion of transparency since I've spent a month getting fluent with these concepts, and a relative lack of editing since it's long. If you really want to read this, I'd suggest PM-ing me to get a link to MIRIxDiscord, where I'd be able to guide you through it and answer questions.

 

Proposition 1: If  then  is a positive functional on .

Proof Sketch: We just check three conditions: linearity, being nonnegative on , and continuity.

Linearity proof. Using  for constants,

So we have verified that  and we have linearity.

Positivity proof: An sa-measure , writeable as , has  uniquely writeable as a pair of finite measures  (all the positive regions) and a  (all the negative regions) by the Jordan Decomposition Theorem, and . So,

The first  is by , so the expectation of  is positive, and  is negative, so taking the expectation of 1 is more negative. The second  is by the condition on how  relates to .
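For concreteness, here is a hedged reconstruction of that computation, assuming the conventions the text suggests: the functional is (m, b) ↦ m(f) + b for some f in C(X, [0,1]), and the sa-measure condition relating b to the negative part is b ≥ m⁻(1). The exact notation may differ from the original.

```latex
% Hedged reconstruction (conventions inferred from the text):
% f \in C(X,[0,1]), m = m^+ - m^- is the Jordan decomposition, and b \ge m^-(1).
\begin{align*}
m(f) + b &= m^+(f) - m^-(f) + b \\
         &\ge 0 - m^-(1) + b && \text{(since } 0 \le f \le 1\text{)} \\
         &\ge 0              && \text{(since } b \ge m^-(1)\text{)}.
\end{align*}
```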

Continuity proof: Fix a sequence  converging to . Obviously the  part converges, so now we just need to show that  converges to . The metric we have on the space of finite signed measures is the KR-metric, and convergence in the KR-metric implies convergence of the integrals of any fixed continuous function, which is exactly what we want. This only works for continuous , not general .
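For reference, one common formulation of the KR (Kantorovich-Rubinstein, or bounded-Lipschitz) norm, and the bound that gives the convergence used here; treat the exact convention as an assumption on my part.

```latex
% One common convention for the KR / bounded-Lipschitz norm on finite signed measures:
\|m\|_{KR} := \sup\{\, m(f) \;:\; f : X \to \mathbb{R},\ \mathrm{Lip}(f) \le 1,\ \|f\|_\infty \le 1 \,\}.
% For any fixed Lipschitz f this immediately gives
|m_n(f) - m(f)| \;\le\; \max\big(\mathrm{Lip}(f), \|f\|_\infty\big)\,\|m_n - m\|_{KR} \;\to\; 0,
% and continuous f on compact X are handled by uniform approximation with Lipschitz
% functions, using the mass bounds available in this setting.
```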

 

Theorem 1: Every positive functional on  can be written as , where , and 

Proof Sketch: The first part is showing that it's impossible to have a positive functional where the  term doesn't matter, without the positive functional being the one that maps everything to 0. The second part of the proof is recovering our  by applying the positive functional to Dirac-delta measures , to see what the function must be on point .

Part 1:  Let's say  isn't 0, i.e., there's some nonzero  pair where , and yet  (which, by linearity, means that  for all ). We'll show that this situation is impossible.

Then,  by our starting assumption, and Jordan decomposition of , along with linearity of positive functionals. Now,  because positive functionals are linear, and everything in that above equation is an sa-measure (flipping a negative measure makes a positive measure, which doesn't impose restrictions on the  term except that it be ).  And so, by nonnegativity of positive functionals on sa-measures, . Using this, we get

Another use of linearity was invoked for the first  in the second line, and then the second  made use of our assumption that  for all .

At this point, we have derived that . Both of these are positive measures. So, there exists some positive measure  where .

Now, observe that, for all 

Let b be sufficiently huge to make  into an sa-measure. Also, since , which is impossible because positive functionals are nonnegative on all sa-measures. Contradiction. Due to the contradiction, if there's a nonzero positive functional, it must assign , so let  be our  term.

Proof part 2: Let's try to extract our f. Let  This is just recovering the value of the hypothesized  on  by feeding our positive functional the measure  that assigns 1 value to  and nothing else, and scaling. Now, we just have to verify that this  is continuous and in .
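Concretely, the extraction step is presumably something like the following (a sketch; the scaling by the constant from Part 1 is inferred from the surrounding text, with a denoting that constant).

```latex
% Sketch, with a := \phi(0,1) (the constant from Part 1) and \delta_x the Dirac measure at x:
f(x) := \frac{\phi(\delta_x, 0)}{\phi(0,1)},
\qquad \text{aiming for} \qquad
\phi(m,b) = a\,\big(m(f) + b\big).
```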

For continuity, let  limit to . By the KR-metric we're using,  limits to . By continuity of  limits to . Therefore,  limits to  and we have continuity.

For a lower bound, , because  is a ratio of two nonnegative numbers, and the denominator isn't 0.

Now we just have to show that . For contradiction, assume there's an  where . Then , so , and in particular, .

But then, , so 

However,  is an sa-measure, because , and must have nonnegative value, so we get a contradiction. Therefore, .

To wrap up, we can go:

And , and , so we're done.

 

Lemma 1: Compactness Lemma: Fixing some nonnegative constants  and , the set of sa-measures where , is compact. Further, if a set lacks an upper bound on  or on , it's not compact.

Proof Sketch: We fix an arbitrary sequence of sa-measures, and then use the fact that closed intervals are compact-complete and the space  is compact-complete to isolate a suitable convergent subsequence. Since all sequences have a limit point, the set is compact. Then, we go in the other direction, and get a sequence with no limit points assuming either a lack of upper bounds on , or a lack of upper bounds on .

Proof: Fix some arbitrary sequence  wandering about within this space, which breaks down into , and then, since all measures are just a probability distribution scaled by the constant , it further breaks down into . Since  must be bounded in .

Now, what we can do is extract a subsequence where , , and  all converge, by Tychonoff's Theorem (finite product, no axiom of choice required). Our three number sequences are all confined to a bounded interval, and our two probability sequences are wandering around within , which is a compact complete metric space if  is. The limit of this subsequence is a limit point of the original sequence, since all its components are arbitrarily close to the components that make up  for large enough n in our subsequence.

The limiting value of  and  both obey their respective bounds, and the cone of sa-measures is closed, so the limit point is an sa-measure and respects the bounds too. Therefore the set is compact, because all sequences of points in it have a limit point.

In the other direction, assume a set  has unbounded  values. Then we can fix a sequence  where  increases without bound, so the a-measures can't converge. The same applies to all subsequences, so there's no limit point, so  isn't compact.

Now, assume a set  has bounded  values, call the least upper bound , but the value of  is unbounded. Fix a sequence  where  is unbounded above. Assume a convergent subsequence exists. Since  must be bounded in . Then because , and the latter quantity is finite,  must be unbounded above. However, in order for the  to limit to some , which results in a contradiction. Therefore, said convergent subsequence doesn't exist, and  is not compact.

Put together, we have a necessary-and-sufficient condition for a closed subset of  to be compact. There must be an upper bound on  and , respectively.

 

Lemma 2: The upper completion of a closed set of sa-measures is closed.

Proof sketch: We'll take a convergent sequence  in the upper completion of  that limits to , and show that, in order for it to converge, the same sorts of bounds as the Compactness Lemma uses must apply. Then, breaking down  into , where , and  is an sa-measure, we'll transfer these Compactness-Lemma-enabling bounds to the sequences  and , to get that they're both wandering around in a compact set. Then, we just take a convergent subsequence of both, add the two limit points together, and get our limit point , witnessing that it's in the upper completion of .

Proof: Let  limit to some . A convergent sequence (plus its one limit point) is a compact set of points, so, by the Compactness Lemma, there must be a  and  that are upper bounds on the  and  values, respectively.

Now, for all n, break down  as , where , and  is an sa-measure.

Because , we can bound the  and  quantities by . This transfers into a  lower bound on  and , respectively.

Now, we can go:

Using worst-case values for  and , we get:

So, we have upper bounds on  and  of , respectively.

Due to the sequences  and  respecting bounds on  and  ( and  respectively), and wandering around within the closed sets  and  respectively, we can use the Compactness Lemma and Tychonoff's theorem (finite product, no axiom of choice needed) to go "hey, there's a subsequence where both  and  converge, call the limit points  and . Since  and  are closed, , and ."

Now, does ? Well, for any , there's some really large n where , and . Then, we can go:

So, regardless of , so . So, we've written  as a sum of an sa-measure in  and an sa-measure, certifying that , so  is closed.

 

Proposition 2: For closed convex nonempty ,

Proof sketch: Show both subset inclusion directions. One is very easy, then we assume the second direction is false, and invoke the Hahn-Banach theorem to separate a point in the latter set from the former set. Then we show that the separating functional is a positive functional, so we have a positive functional where the additional point underperforms everything in , which is impossible by the definition of the latter set.

Easy direction: We will show that 

This is because a  can be written as . Let  be our  of interest. Then, it is indeed true that for all 
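As a hedged sketch of the inequality this is pointing at (letter choices mine, with H the original set, M* ∈ H, and M' an sa-measure; the right-hand set is characterized by positive functionals as in Proposition 1):

```latex
% \phi linear, and nonnegative on sa-measures (Proposition 1); M = M^* + M' with M^* \in H:
\phi(M) = \phi(M^*) + \phi(M') \;\ge\; \phi(M^*) \;\ge\; \inf_{M'' \in H} \phi(M''),
% which is exactly the defining condition of the right-hand set.
```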

Hard direction: Assume by contradiction that

Then there's some  where  and  is the upper completion of a closed set, so by Lemma 2, it's closed, and since it's the Minkowski sum of convex sets, it's convex.

Now, we can use the variant of the Hahn-Banach theorem from the Wikipedia article on "Hahn-Banach theorem", in the "separation of a closed and compact set" section. Our single point  is compact, convex, nonempty, and disjoint from the closed convex set . Banach spaces are locally convex, so we can invoke Hahn-Banach separation.

Therefore, there's some continuous linear functional  s.t. 

We will show that this linear functional is actually a positive functional!

Assume there's some sa-measure  where . Then we can pick a random , and consider , where  is extremely large.  lies in , but it would also produce an extremely negative value for ϕ, which undershoots , which is impossible. So  is a positive functional.
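In symbols, the scaling argument looks roughly like this (letter choices mine; B stands for the closed convex upper-complete set being separated):

```latex
% Suppose the separating functional \phi had \phi(M') < 0 for some sa-measure M'.
% Pick any M^* \in B; by upper completeness, M^* + c M' \in B for every c \ge 0. Then
\phi(M^* + c M') = \phi(M^*) + c\,\phi(M') \;\longrightarrow\; -\infty \quad (c \to \infty),
% contradicting that \phi is bounded below on B by the Hahn-Banach separation.
```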

However, , so . But also,  fulfills the condition , because of the set it came from. So, there must exist some  where . But, we have a contradiction, because .

So, there cannot be any point in  that isn't in . This establishes equality.

 

Lemma 3: For any closed set  and point , the set  is nonempty and compact.

Proof: It's easy to verify nonemptiness, because  is in the set. Also, it's closed because it's the intersection of two closed sets.  was assumed closed, and the other part is the Minkowski sum of  and , which is closed if  is, because it's just a shift of  (via a single point).  is closed because it's -1 times a closed set.

We will establish a bound on the  and  values of anything in the set, which lets us invoke the Compactness Lemma to show compactness, because it's a closed subset of a compact set.

Note that if , then , so . Rewrite this as 

Because , we can bound  and  by . This transfers into a  lower bound on  and . Now, we can go:

Using worst-case values for  and , we get:

So, we have an upper bound of  on , and an upper bound of  on . Further,  was arbitrary in , so we have our bounds. This lets us invoke the Compactness Lemma, and conclude that said closed set is compact.

 

Lemma 4: If  is a partial order on  where  iff there's some sa-measure  where , then

Proof: 

Also, 

Also, 

Putting all this together, we get

And we're halfway there. Now for the second half.

Also, 

Putting this together, we get

And the result has been proved.

 

Theorem 2: Given a nonempty closed set , the set of minimal points  is nonempty and all points in  are above a minimal point.

Proof sketch: First, we establish a partial order that's closely tied to the ordering on , but flipped around, so minimal points in  are maximal elements. We show that it is indeed a partial order, letting us leverage Lemma 4 to translate between the partial order and the set . Then, we show that every chain in the partial order has an upper bound via Lemma 3 and compactness arguments, letting us invoke Zorn's lemma to show that everything in the partial order is below a maximal element. Then, we just do one last translation to show that minimal points in  perfectly correspond to maximal elements in our partial order.

Proof: First, impose a partial order on , where  iff there's some sa-measure  where . Notice that this flips the order. If an sa-measure is "below" another sa-measure in the sa-measure addition sense, it's above that sa-measure in this ordering. So a minimal point in  would be maximal in the partial order. We will show that it's indeed a partial order.

Reflexivity is immediate. , so .

For transitivity, assume . Then there's some  and  s.t. , and . Putting these together, we get , and adding sa-measures gets you an sa-measure, so .

For antisymmetry, assume  and . Then , and . By substitution, , so . For all positive functionals, , and since positive functionals are always nonnegative on sa-measures, the only way this can happen is if  and  are 0, showing that .

Anyways, since we've shown that it's a partial order, all we now have to do is show that every chain has an upper bound in order to invoke Zorn's lemma to show that every point in  lies below some maximal element.

Fix some ordinal-indexed chain , and associate each of them with the set , which is compact by Lemma 3 and always contains .

The collection of  also has the finite intersection property, because, fixing finitely many of them, we can consider a maximal , and  is in every associated set by:

Case 1: Some other  equals , so  and .

Case 2: , and by Lemma 4, .

Anyways, since all the  are compact, and have the finite intersection property, we can intersect them all and get a nonempty set containing some point , which lies in  because all the sets we intersected were subsets of . Also, because  for all  in our chain, then if , Lemma 4 lets us get , and if , then . Thus,  is an upper bound for our chain.

By Zorn's Lemma, because every chain has an upper bound, there are maximal elements in , and every point in  has a maximal element above it.

To finish up, use Lemma 4 to get: 

 

Proposition 3: Given a , and a  that is nonempty closed, 

Direction 1: Since  is a subset of , we get one direction easily, that

Direction 2: Take a . By Theorem 2, there is a  s.t. . Applying our positive functional  (by Proposition 1), we get that . Because every point in  has a point in  which scores as low or lower according to the positive functional,

And this gives us our desired equality.

 

Proposition 4: Given a nonempty closed convex  and 

Proof: First, we'll show . We'll use the characterization in terms of the partial order  we used for the Zorn's Lemma proof of Theorem 2. If a point  is in , then it can be written as , so . Since all points added in  lie below a preexisting point in  (according to the partial order from Theorem 2) the set of maximals (ie, set of minimal points) is completely unchanged when we add all the new points to the partial order via upper completion, so .

For the second part, one direction is immediate. , so . For the reverse direction, take a point . It can be decomposed as , and then by Theorem 2,  can be decomposed as , so , so it lies in , and we're done.

 

Theorem 3: If the nonempty closed convex sets  and  have , then there is some  where 

Proof sketch: We show that upper completion is idempotent, and then use that to show that the upper completions of A and B are different. Then, we can use Hahn-Banach to separate a point of  from  (or vice-versa), and show that the separating functional is a positive functional. Finally, we use Theorem 1 to translate from a separating positive functional to different expectation values of some 

Proof: Phase 1 is showing that upper completion is idempotent. . One direction of this is easy, . In the other direction, let . Then we can decompose  into , where , and decompose that into  where , so  and .
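Spelled out with my own letter choices (writing the upper completion of a set H as H^{uc}), the nontrivial inclusion is:

```latex
% Take M \in (H^{uc})^{uc}. Then M = M' + M'' with M' \in H^{uc}, M'' an sa-measure,
% and M' = M^* + M''' with M^* \in H, M''' an sa-measure. So
M = M^* + (M'' + M''') \;\in\; H^{uc},
% since the sum of two sa-measures is an sa-measure. Hence (H^{uc})^{uc} \subseteq H^{uc}.
```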

Now for phase 2, we'll show that the minimal points of one set aren't in the upper completion of the other set. Assume, for contradiction, that this is false, so  and . Then, by idempotence, Proposition 4, and our subset assumption,

Swapping the A and B, the same argument holds, so , so .

Now, using this and Proposition 4, .

But wait, we have a contradiction: we said that the minimal points of  and  weren't the same! Therefore, either , or vice-versa. Without loss of generality, assume that .

Now for phase 3, Hahn-Banach separation to get a positive functional with different inf values. Take a point  in  that lies outside . Now, use the Hahn-Banach separation of  and  used in the proof of Proposition 2, to get a linear functional  (which can be demonstrated to be a positive functional by the same argument as the proof of Proposition 2) where: . Thus, , so 

Said positive functional can't be 0, otherwise both sides would be 0. Thus, by Theorem 1,  where , and . Swapping this out, we get:

and then this is  So, we have crafted our  which distinguishes the two sets and we're done.

 

Corollary 1: If two nonempty closed convex upper-complete sets  and  are different, then there is some  where 

Proof: Either , in which case we can apply Theorem 3 to separate them, or their sets of minimal points are the same. In that case, by Proposition 4 and upper completion,  and we have a contradiction because the two sets are different.

 

Theorem 4: If  is an infradistribution/bounded infradistribution, then  is concave in , monotone, uniformly continuous/Lipschitz, , and if 

Proof sketch:  is trivial, as is uniform continuity from the weak-bounded-minimal condition. For concavity and monotonicity, it's just some inequality shuffling, and for  if , we use upper completion to have its worst-case value be arbitrarily negative. Lipschitzness is much more difficult, and comprises the bulk of the proof. We get a duality between minimal points and hyperplanes in , show that all the hyperplanes we got from minimal points have the same Lipschitz constant upper bound, and then show that the chunk of space below the graph of  itself is the same as the chunk of space below all the hyperplanes we got from minimal points. Thus,  has the same (or lesser) Lipschitz constant as all the hyperplanes chopping out stuff above the graph of .

Proof: For normalization,  and  by normalization for . Getting the uniform continuity condition from the weak-bounded-minimal condition on an infradistribution  is also trivial, because the condition just says  is uniformly continuous, and that's just  itself.

Let's show that  is concave over , first. We're shooting for . To show this,

And concavity has been proved.
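Assuming the expectation convention h(f) = inf over (m, b) in H of m(f) + b (as used elsewhere in this sequence), the concavity computation runs roughly:

```latex
% For p \in [0,1] and any (m,b) \in H, linearity of m gives
m(p f_1 + (1-p) f_2) + b
  = p\,(m(f_1) + b) + (1-p)\,(m(f_2) + b)
  \;\ge\; p\,h(f_1) + (1-p)\,h(f_2),
% and taking the infimum over (m,b) \in H on the left yields
h(p f_1 + (1-p) f_2) \;\ge\; p\,h(f_1) + (1-p)\,h(f_2).
```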

Now for monotonicity. By Proposition 3 and Proposition 1,

Now, let's say . Then:

And we're done. The critical inequality in the middle came from all minimal points in an infradistribution having no negative component by positive-minimals, so swapping out a function for a greater function produces an increase in value.
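Under the same convention, a sketch of that monotonicity step: by Proposition 3 the infimum can be taken over minimal points only, and by positive-minimals those have unsigned measure components (notation H^min is mine).

```latex
% If f' \ge f pointwise and (m,b) is a minimal point (so m is a nonnegative measure):
m(f') + b \;\ge\; m(f) + b \;\ge\; \inf_{(m'',b'') \in H^{\min}} \big(m''(f) + b''\big) = h(f),
% and taking the infimum over minimal points (m,b) on the left gives h(f') \ge h(f).
```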

Time for . Let's say there exists an  s.t. . We can take an arbitrary sa-measure , and consider , where  is the point measure that's 1 on , and  is extremely huge. The latter part is an sa-measure. But then,. Since , and  is extremely huge, this is extremely negative. So, since there's sa-measures that make the function as negative as we wish in  by upper-completeness,  A very similar argument can be done if there's an  where , we just add in  to force arbitrarily negative values.
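A sketch of that upper-completion argument, with my own names for the point and the scaling constant:

```latex
% Suppose f(x_0) < 0. Fix any (m,b) \in H. For c \ge 0, (c\,\delta_{x_0}, 0) is an sa-measure,
% so by upper completeness (m + c\,\delta_{x_0}, b) \in H, and
(m + c\,\delta_{x_0})(f) + b = m(f) + b + c\,f(x_0) \;\longrightarrow\; -\infty \quad (c \to \infty).
% If instead f(x_0) > 1, add the sa-measure (-c\,\delta_{x_0}, c): the value changes by
% c\,(1 - f(x_0)) \to -\infty, so in either case the infimum is -\infty.
```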

Now for Lipschitzness, which is by far the worst of all. A minimal point  induces an affine function  (kinda like a hyperplane) of the form . Regardless of , as long as it came from a minimal point in  for functions with range in , because

Ok, so if a point is on-or-below the graph of  over , then it's on-or-below the graph of  for all .

What about the other direction? Is it possible for a point  to be strictly above the graph of  and yet  all the graphs of ? Well, no. Invoking Proposition 3,

So, there exists a minimal point  where , so  lies above the graph of .

Putting these two parts together, 's hypograph over  is the same as the intersection of the hypographs of all these . If we can then show all the  have a Lipschitz constant bounded above by some constant, then we get that  itself is Lipschitz with the same constant.

First, a minimal  must have  with no negative parts, so it can be written as , and by bounded-minimals (since we have a bounded infradistribution), . Now,

So, we get that: 

Note that  is our distance metric between functions in . This establishes that regardless of which minimal point we picked,  is Lipschitz with Lipschitz constant , and since , then  itself has the same bound on its Lipschitz constant.
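In symbols (hedged; I'm writing the bounded-minimals constant as λ°, and the metric on functions as the sup metric):

```latex
% For a minimal point (m,b) with m a nonnegative measure of total mass \lambda = m(1) \le \lambda^{\circ},
% and any f, f' \in C(X,[0,1]):
|\phi_{(m,b)}(f) - \phi_{(m,b)}(f')|
  = |m(f - f')|
  \;\le\; \lambda\,\|f - f'\|_\infty
  \;\le\; \lambda^{\circ}\,\|f - f'\|_\infty,
% so each affine functional \phi_{(m,b)}(f) = m(f) + b is \lambda^{\circ}-Lipschitz, and the
% pointwise infimum h inherits the same Lipschitz constant where it is finite.
```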

 

Lemma 5: 

Proof sketch: We'll work in the Banach space  of  measurable functions w.r.t. the absolute value of the signed measure . Then, we consider the discontinuous (but ) function that's 1 everywhere where  is negative. Continuous functions are dense in  measurable functions, so we can fix a sequence of continuous functions limiting to said indicator function. Then we just have to check that  is a bounded linear functional, and we get that there's a sequence of continuous functions  where  limits to the measure of the indicator function that's 1 where everything is negative, which is the same as the measure of the "always 1" function, but only on the negative parts, and we're done.

Consider the Banach space  of measurable functions w.r.t. the absolute value of the signed measure , i.e., , which is a measure. It has a norm given by . To begin with, we can consider the  indicator function  that's 1 where the measure is negative. Note that

Because continuous functions are dense in , we can fix a sequence of continuous functions  limiting to . Then, just clip those continuous functions to , making a continuous function . They'll get closer to  that way, so the sequence  of continuous functions  limits to  too.

We'll take a detour and show that  is a bounded linear functional , with a Lipschitz constant of 1 or less.

First, , trivially, establishing linearity.

As for the boundedness, if , then , so:

So, . An  having a norm of 1 or less gets mapped to a number with a norm of 1 or less, so the Lipschitz constant of  is 1 or less. This implies continuity.
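The boundedness computation, spelled out (this is the standard bound on integrals against a signed measure):

```latex
% For g \in L^1(|m|) with \|g\|_{L^1(|m|)} = \int |g|\,d|m| \le 1:
|m(g)| = \left|\int g\,dm\right| \;\le\; \int |g|\,d|m| \;\le\; 1,
% so g \mapsto m(g) has operator norm (Lipschitz constant) at most 1 on L^1(|m|).
```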

Now that we have all requisite components, fix some . There's some n where, for all greater n, . Mapping them through , due to having a Lipschitz constant of 1 or less, then means that  because the value of 1-but-only-on-negative-parts is as-or-more negative than  on the measure, due to  being bounded in . Summarizing,  for all n beyond a certain point, so, for all n beyond a certain point, 

So we have a sequence of functions in  where  limits to , and our signed measure was arbitrary. Therefore, we have our result that .

 

Theorem 5: If  is a function  that is concave, monotone, uniformly-continuous/Lipschitz, , and , then it specifies an infradistribution/bounded infradistribution by: , where  is the function given by , and  is the convex conjugate of . Also, going from an infradistribution to an  and back recovers exactly the infradistribution, and going from an  to an infradistribution and back recovers exactly .

Proof sketch: This is an extremely long one. Phases 1 and 2 show isomorphism. One direction is reshuffling the definition of  until we get the definition of the set built from  via convex conjugate, showing that going from  to  and back recovers your original set. In the other direction, we show that expectations w.r.t. the set we built from  match up with  exactly.

Phase 3 is cleanup of the easy conditions. Nonemptiness is pretty easy to show, the induced set being a set of sa-measures is harder to show and requires moderately fancier arguments, and closure and convexity require looking at basic properties of functions and the convex conjugate. Upper completeness takes some equation shuffling to show but isn't too bad. The weak-minimal bound property is immediate, and normalization is fairly easy. 

That just leaves the positive-minimal and bounded-minimal properties, which are nightmares. A lesser nightmare and a greater nightmare. In phase 4, to lay the groundwork for these, we establish an isomorphism between points in H and hyperplanes which lie above the graph of h, as well as a way of certifying that a point in H isn't minimal by what its hyperplane does.

Phase 5, for showing positive-minimals, is showing that we can tell whether a hyperplane corresponds to an a-measure, and that, given any hyperplane above the graph of , we can construct a lower one that corresponds to a lower point in  that does correspond to an a-measure.

Phase 6, for bounded-minimals, is where we take a hyperplane that may correspond to a minimal point, but which is too steep in certain directions. Then, we make an open set that fulfills two roles: if you enter it, either you're too steep or you overshoot the hyperplane of interest that you're trying to undershoot. Some fancy equation crunching and one application of Hahn-Banach later, we get a hyperplane that lies above  and doesn't enter the open set we crafted. So, in particular, it undershoots our hyperplane of interest, and isn't too steep. This certifies that our original "too steep" hyperplane didn't actually correspond to a minimal point, so all minimal points must have a bound on their  values by the duality between hyperplanes above  and points in .

Fix the convention that  or  is assumed to mean ; we'll explicitly specify when  has bounds.

Phase 1: Let's show isomorphism. Our first direction is showing  to  and back is  exactly. By upper completion, and Proposition 2, we can also characterize  as

Using Theorem 1 to express all positive functionals as arising from an , and observing that the a constant in front doesn't change which stuff scores lower than which other stuff, so we might as well characterize everything in terms of  can also be expressed as

We can swap out  for , because, from the  argument in Theorem 4,  going outside  means that . And then, our  can further be reexpressed as

Also, , so we can rewrite this as:

and, by the definition of the convex conjugate (sup characterization) and the space of finite signed measures being the dual space of , and  being a functional applied to an element, this is  So, our original set  is identical to the convex-conjugate set, when we go from  to  and back to a set of sa-measures.

Proof Phase 2: In the reverse direction for isomorphism, assume that  fulfills the conditions. We want to show that , so let's begin.

Given an , we have a natural candidate for minimizing the , just set it equal to . So then we get 

And this is just...  (proof by Wikipedia article, check the inf characterization), and, because  is continuous over , and concave, and  everywhere outside the legit functions,  is continuous over , and convex, and  everywhere outside the legit functions, so in particular,  is convex and lower-semicontinuous and proper, so  by the Fenchel-Moreau Theorem. From that, we get

and we're done with isomorphism. Now that isomorphism has been established, let's show the relevant conditions hold. Namely: nonemptiness, closure, convexity, upper completion, normalization, and weak-bounded-minimals (phase 3), then positive-minimals (phase 5) and bounded-minimals (assuming h is Lipschitz) (phase 6) to finish off. The last two will be extremely hard.

Begin phase 3. Weak-bounded-minimals is easy by isomorphism. For the  we constructed, if  wasn't uniformly continuous, then because  equals , we'd get a failure of uniform continuity for , contradicting our assumption.

By the way, the convex conjugate, , can be expressed as (by Wikipedia, sup characterization)  We can further restrict  to functions with range in , because if it was anything else, we'd get . We'll be using  (or the  variant) repeatedly.
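For reference, the standard definitions being invoked, stated for a function g on a Banach space V with dual V* (here V = C(X) and V* is the space of finite signed measures); the post's sign conventions for the concave h may attach minus signs to these.

```latex
% Convex conjugate (sup characterization) and biconjugate:
g^*(m)    := \sup_{f \in V} \big( m(f) - g(f) \big), \qquad
g^{**}(f) := \sup_{m \in V^*} \big( m(f) - g^*(m) \big).
% Fenchel-Moreau theorem: if g is proper, convex, and lower semicontinuous, then g^{**} = g.
```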

For nonemptiness, observe that  is present in  because, fixing an arbitrary ,

This is from our format of the convex conjugate, and  being normalized and monotone, so the highest it can be is  and it attains that value. Therefore, , so  is in the  we constructed.

For showing that our constructed set  lies in , we have that, for a random , it has (by our characterization of )

This is by the lower bound on  being  and unpacking the convex conjugate,  by monotonicity and normalization, a reexpression of , and Lemma 5, respectively.  so it's an sa-measure.

For closure and convexity, by monotonicity of , we have   and  is continuous on , concave, and  everywhere else by assumption, so  is proper, continuous on , convex, and lower-semicontinuous in general because of the  everywhere else, so, by the Wikipedia page on "Closed Convex Function",  is a closed convex function, and then by the Wikipedia page on "Convex Conjugate" in the Properties section,  is convex and closed. From the Wikipedia page on "Closed Convex Function", this means that the epigraph of  is closed, and also the epigraph of a convex function is convex. This takes care of closure and convexity for our 

Time for upper-completeness. Assume that  lies in the epigraph. Our task now is to show that  lies in the epigraph. This is equivalent to showing that . Note that , because  is an sa-measure. Let's begin.

This was done by unpacking the convex conjugate, splitting up  into  and , locking two of the components in the sup to be an upper bound (which also gives the sup more flexibility on maximizing the other two components, so this is greater), packing up the convex conjugate, and using that  because 

Normalization of the resulting set is easy. Going from  to a (maybe)-inframeasure  back to  is identity as established earlier, so all we have to do is show that a failure of normalization in a (maybe)-inframeasure makes the resulting  not normalized. Thus, if our  is normalized, and it makes an  that isn't normalized, then going back makes a non-normalized , which contradicts isomorphism. So, assume there's a failure of normalization in . Then , or , so either  or  and we get a failure of normalization for  which is impossible. So  must be normalized.

Begin phase 4. First, continuous affine functionals  that lie above the graph of  perfectly correspond to sa-measures in . This is because the continuous dual space of  is the space of finite signed measures, so we can interpret  as a finite signed measure, and  as the  term. In one direction, given an ,

so every point in  induces a continuous affine functional  whose graph is above .

In the other direction, from earlier, we can describe  as: 

and then, for ,

because . So continuous affine functionals whose graph lies above the graph of  correspond to points in .

So, we have a link between affine functionals that lie above the graph of , and points in . What would a minimal point correspond to? Well, a non-minimal point corresponds to , where the latter component is nonzero. There's some  where  due to the latter component being nonzero, and for all . Using Theorem 1 to translate positive functionals to , this means that the  induced by  lies below the affine functional induced by  over the . So, if there's a different affine functional  s.t. , then  must correspond to a nonminimal point.

Further, we can characterize whether  corresponds to an a-measure or not. For a measure, if you increase the function you're feeding in, you increase the value you get back out, . For a signed measure with some negative component, Lemma 5 says we can find some  that attain negative value, so you can add one of those  to your  and get . So, a  corresponds to an a-measure exactly when it's monotone.

Phase 5: Proving positive-minimals. With these links in place, this means we just have to take any old point that's an sa-measure in , get a  from it (it'll fulfill certain properties), and use those properties to find a  that lies below  and above  on  and is monotone, certifying that  corresponds to a point below our minimal point of interest that's still in  but is an a-measure, so we have a contradiction.

To that end, fix a  that corresponds to some point in  that's not an a-measure (in particular, it has a negative component); it lies above the graph of .

Now, translate  to a , where , and  is minimized at some . Since our  corresponds to something that's not an a-measure,  

Let our affine continuous functional  be defined as 

In order to show that  corresponds to an a-measure below  in , we need three things. One is that  is monotone (is an a-measure), two is that it lies below  over  and three is that it lies above . Take note of the fact that , because .

For monotonicity of , it's pretty easy. If , then

and we're done with that part.

For being less than or equal to  over  (we know it's not the same as  because  isn't monotone and  is),

For being  over , it takes a somewhat more sophisticated argument. By Lemma 5, regardless of , there exists a  where . Then, we can go:

The last steps were done via the definition of , and  being monotonic.

So,  for all  and all  getting  for  (because  is  everywhere else)

Thus,  specifies an a-measure ( being monotone) that is below the sa-measure encoded by  (by  over ), yet , so said point is in . This witnesses that there can be no minimal points in  that aren't a-measures. That just leaves getting the slope bound from Lipschitzness, the worst part of this whole proof.

Phase 6: Let  be the Lipschitz constant for . Fix a  that corresponds to a minimal point with . This violates the Lipschitz bound when traveling from 0 to 1, so the Lipschitz bound is violated in some direction. Further, the graph of  touches the graph of  at some point , because if it didn't, you could shift  down further until it did touch, witnessing that the point  came from wasn't minimal (you could sap more from the  term).

Now, if this point is minimal, it should be impossible to craft a  which is  over , and different from . We shall craft such a , witnessing that said point isn't actually minimal. Further, said  won't violate the Lipschitz bound in any direction. Thus, all affine functionals corresponding to minimal points must obey the Lipschitz bound and be monotone, so they're a-measures with .

In order to do this, we shall craft three sets in , and .

Set  is . Pretty much, this set is the hypograph of . It's obviously convex because  is concave, and the hypograph of a concave function is convex. It's closed because  is continuous.

Set  is . This could be thought of as the interior of the epigraph of  restricted to . Undershooting this means you never exceed  over . First, it's open. This is because, due to  being continuous over a compact set , the maximum and minimum are attained, so any  is bounded below 1 and above 0, so we've got a little bit of room to freely wiggle  in any direction. Further, since  is a continuous linear functional on , which is a Banach space, it's a bounded linear functional and has some Lipschitz constant (though it may exceed ), so we have a little bit of room to freely wiggle  as well. So  is open.

Also,  is convex, because a mixture of  and  that are bounded away from 0 and 1 is also bounded away from 0 and 1, and .

Set  is . This could be thought of as an open cone with a point (it's missing that exact point, though) at , that opens straight up, and certifies a failure of the  bound on slope. If an affine function includes the point  in its graph, then if it increases faster than  in any direction, it'll land in this set. It's open because, given a point in it, we can freely wiggle the  and  values around a little bit in any direction, and stay in the set. Now we'll show it's convex. Given an  and  in it, due to  being a Banach space (so it has a norm), we want to check whether .

Observe that (using the defining axioms for a norm)

So,  is convex.
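A hedged sketch of that computation, assuming the third set has the form {(f, b) : b − h(f₀) > λ°·‖f − f₀‖} (an open cone over the touching point, as described above; symbols λ° and f₀ are my names for the Lipschitz bound and the touching point):

```latex
% Take (f_1, b_1), (f_2, b_2) in the set and p \in [0,1]. The norm axioms give
\lambda^{\circ}\,\|p f_1 + (1-p) f_2 - f_0\|
  \;\le\; p\,\lambda^{\circ}\|f_1 - f_0\| + (1-p)\,\lambda^{\circ}\|f_2 - f_0\|
  \;<\; p\,(b_1 - h(f_0)) + (1-p)\,(b_2 - h(f_0)),
% so the mixture (p f_1 + (1-p) f_2,\; p b_1 + (1-p) b_2) lies in the set as well.
```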

Ok, so we've got a convex closed set and two convex opens. Now, consider . The convex hull of an open set is open. We will show that .

Assume this is false, and that they overlap. The point where they overlap can then be written as a convex mixture of points from . However,  and  are both convex, so we can reduce it to a case where we're mixing one point  from  and one point  in . And .

If , then we've just got a single point in . Also, .

This is because  and  has a Lipschitz constant of , so it can't increase as fast as we're demanding as we move from  to , which stays in . So .

If , then we've just got a single point in . Then , so again, .

For the case where  isn't 0 or 1, we need a much more sophisticated argument. Remembering that , and , we will show that  lies strictly above the graph of . Both  and  lie in , so their mix lies in the same set, so we don't have to worry about  being undefined there. Also, remember that  over . Now,

The critical > is by the definition of , and . So, the  term is strictly too high for this point (different than the one we care about) to land on the graph of .

With the aid of this, we will consider "what slope do we have as we travel from  to "? Said slope is

That critical > is by  and the definition of 

So, if we start at  (and  lies in ), we're above the graph of . Then, we travel to , where  by assumption that this point is in , but while doing this, we ascend faster than , the Lipschitz constant for . So, our point of interest  lies above the graph of  and can't lie in , and we have a contradiction.

Putting all this together, . Since  is open, and they're both convex and nonempty, we can invoke Hahn-Banach (first version of the theorem in the "Separation of Sets" section) and conclude they're separated by some continuous linear functional . Said linear functional must increase as  does, because , and  (for some sufficiently large ) lies in , thus in . This means that given any  and  to specify a level, we can find a unique  where .

So, any level set of this continuous linear functional we crafted can also be interpreted as an affine functional. There's a critical value of the level set that achieves the separation, . This is because , but  is in , thus in , for all . So we've uniquely pinned down which affine function  we're going for. Since the graph of  is a hyperplane separating  and  (It may touch the set , just not cut into it, but it doesn't touch ), from looking at the definitions of  and  and , we can conclude:

From the definition of , so  over .

From the definition of  over , and they're both continuous, so we can extend  to  by continuity, so  over .

Also, , so , and this, paired with the ability of  to detect whether an affine function exceeds the  slope bound (as long as the graph of said function goes through ), means that the graph of  not entering  certifies that its Lipschitz constant is  or less. Since ϕ does enter  due to violating the Lipschitz constant bound, this also certifies that .

Putting it all together, given a  which corresponds to a minimal point and violates the Lipschitz bound, we can find a  below it that's also above , so said minimal point isn't actually minimal.

Therefore, if you were to translate a minimal point in the induced  into an affine function above h, it'd have to A: not violate the Lipschitz bound (otherwise we could undershoot it) and B: be monotone (otherwise we could undershoot it). Being monotone certifies that it's an a-measure, and having a Lipschitz constant of  or less certifies that the  of the a-measure is  or less. We're finally done!

The next proofs are here.
