A quick note here is that whenever a proposition or theorem or something is cited with a complicated-looking number, like, not Proposition 3, but "Proposition 3.4.2.7", it's being cited from this book.
Proposition 2: If X is a metrizable space, then the pointwise limit of a sequence of lower-bounded lower-semicontinuous functions fn:X→(−∞,∞] with fn+1≥fn is lower-bounded and lower-semicontinuous.
Proof: Lower-boundedness of f∗(the pointwise limit of the fn) is trivial because f∗≥f1, and f1 is lower-bounded. For lower-semicontinuity, let xi limit to x, and n be some arbitrary natural number. We have
$$\liminf_{i\to\infty} f_*(x_i) \geq \liminf_{i\to\infty} f_n(x_i) \geq f_n(x)$$
Because f∗≥fn and fn is lower-semicontinuous. And, since this works regardless of n, we have
$$\liminf_{i\to\infty} f_*(x_i) \geq \lim_{n\to\infty} f_n(x) = f_*(x)$$
Since f∗ is the pointwise limit of the fn, showing that f∗ is lower-semicontinuous.
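Just to make the setting concrete before moving on, here's a quick numerical sketch with made-up example functions (nothing from the proofs themselves): an increasing sequence of continuous, hence lower-semicontinuous, functions on [0,1] whose pointwise limit is lower-semicontinuous but not continuous.

```python
# Made-up example: f_n(x) = min(n*x, 1) is continuous (hence lower-semicontinuous) and
# increasing in n; its pointwise limit f_star is the indicator of (0,1], which is
# lower-semicontinuous but discontinuous at 0.
import numpy as np

def f_n(n, x):
    return np.minimum(n * x, 1.0)

def f_star(x):
    return np.where(x > 0, 1.0, 0.0)

xs = np.array([0.0, 1e-3, 1e-2, 0.5, 1.0])
print(f_n(5, xs))       # [0.    0.005 0.05  1.    1.   ]
print(f_n(1000, xs))    # [0. 1. 1. 1. 1.] -- matches f_star at these sample points
print(f_star(xs))       # [0. 1. 1. 1. 1.]
# Lower-semicontinuity of the limit at 0: along any x_i -> 0 with x_i > 0,
# liminf f_star(x_i) = 1 >= 0 = f_star(0).
```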
Proposition 3: If X is a metrizable space, then given any lower-bounded lower-semicontinuous function f∗:X→(−∞,∞], there exists a sequence of bounded continuous functions fn:X→R s.t. fn+1≥fn and the fn limit pointwise to f∗.
First, fix X with an arbitrary metric, and stipulate that f∗ is lower-bounded and lower-semicontinuous. We'll be defining some helper functions for this proof. Our first one is the gn family, defined as follows for n>0:
$$g_n(x) := \inf\{f_*(x') \mid d(x,x') < \tfrac{1}{n}\}$$
Ie, we take a point, and map it to the worst-case value according to f∗ produced in an open ball of size 1n around said point. Obviously, regardless of n, gn≤f∗.
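As a quick sanity check on this definition, here's a brute-force sketch on a grid, assuming X=[0,1] with the usual metric and a made-up lower-semicontinuous f∗; it confirms gn≤f∗ and shows gn(x) climbing toward f∗(x) as n grows (a fact we'll prove properly later).

```python
# Brute-force sketch of g_n on a grid, assuming X = [0,1] with the usual metric and a
# made-up lower-semicontinuous target f_star (the indicator of (0.3, 1]).
import numpy as np

xs = np.linspace(0.0, 1.0, 201)
f_star = np.where(xs > 0.3, 1.0, 0.0)

def g(n):
    # g_n(x): infimum of f_star over the open ball of radius 1/n around x (on the grid)
    return np.array([f_star[np.abs(xs - x) < 1.0 / n].min() for x in xs])

for n in (2, 5, 50):
    assert np.all(g(n) <= f_star)       # g_n <= f_star, since the ball contains x itself

i = np.argmin(np.abs(xs - 0.35))        # a point just above the jump of f_star
print(g(2)[i], g(50)[i], f_star[i])     # 0.0 1.0 1.0 -- g_n(x) climbs up to f_star(x)
```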
Now we must show that gn, regardless of n, is upper-semicontinuous, for later use. Let ϵ be arbitrary, and x be some arbitrary point in X, and xm be some sequence limiting to x. Our task is to show limsupm→∞gn(xm)≤gn(x), that's upper-semicontinuity.
First, we observe that there must exist an x′ s.t. d(x,x′)<1/n, and f∗(x′)≤gn(x)+ϵ. Why? Well, gn(x) is "what's the worst-case value of f∗ in the 1/n-sized open ball", and said worst-case value may not be exactly attained, but we can find points in said open ball which get arbitrarily close to the infimum or attain it.
Now, since xm converges to x, there must be some m∗ where, forever afterward, d(xm,x)<1/n−d(x,x′) (the right-hand side is positive by how we selected x′). For any m in that tail, we can rearrange this and get that d(xm,x)+d(x,x′)<1/n, so by the triangle inequality, we have that d(xm,x′)<1/n. Ie, x′ is in the open ball around xm for the tail of the convergent sequence.
Now, since x′ is in all those open balls, the definition of gn means that for our tail of sufficiently late xm, gn(xm)≤f∗(x′), and we already know that f∗(x′)≤gn(x)+ϵ. Putting these together, for all sufficiently late xm, gn(xm)≤gn(x)+ϵ. ϵ was arbitrary, so we have
$$\limsup_{m\to\infty} g_n(x_m) \leq g_n(x)$$
(the limit might not exist, but limsup always does). And now we know that our gn auxiliary functions are upper-semicontinuous.
Next up: We must show that, if fn is some continuous function, sup(fn,gn) is upper-semicontinuous. For this, we do:
$$\limsup_{m\to\infty} \max(f_n(x_m), g_n(x_m)) = \max\Big(\limsup_{m\to\infty} f_n(x_m),\ \limsup_{m\to\infty} g_n(x_m)\Big)$$
$$= \max\Big(f_n(x),\ \limsup_{m\to\infty} g_n(x_m)\Big) \leq \max(f_n(x), g_n(x))$$
For this, we distributed the limsup inside the sup, used that fn is continuous, and then used that gn is upper-semicontinuous for the inequality.
Now, here's what we're going to do. We're going to inductively define the continuous fn as follows. Abbreviate $\inf(0, \inf_{x'\in X} f_*(x'))$ (which is finite) as $f^{\min}_*$, and then consider the set-valued functions, for n>0, $\Phi_n: X\to \mathcal{P}(\mathbb{R})$ inductively defined as follows:
$$\Phi_1(x) := [f^{\min}_*,\ \min(f_*(x),\ f^{\min}_*+1)]$$
$$\Phi_{n+1}(x) := [\min(\max(f_n(x), g_n(x)),\ f^{\min}_*+n),\ \min(f_*(x),\ f^{\min}_*+n+1)]$$
where fn is a continuous selection from Φn.
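Before worrying about continuity, here's a purely numerical sketch of what this recursion does, on the same kind of grid setup as above. The "selection" below just takes the midpoint of each interval Φn(x); that is emphatically not the continuous selection the rest of the proof constructs, it only illustrates the sandwich fn≤fn+1≤f∗ that any selection satisfies.

```python
# Numerical sketch of the Phi_n recursion on a grid, assuming X = [0,1] and a made-up
# lower-semicontinuous f_star. The midpoint "selection" is NOT a continuous selection;
# it only shows the squeeze f_n <= f_{n+1} <= f_star that any selection satisfies.
import numpy as np

xs = np.linspace(0.0, 1.0, 201)
f_star = np.where(xs > 0.3, 1.0, 0.0)
f_min = min(0.0, float(f_star.min()))          # the constant abbreviated f^min_* above

def g(n):
    # same brute-force helper as in the earlier sketch
    return np.array([f_star[np.abs(xs - x) < 1.0 / n].min() for x in xs])

# Base case: take the midpoint of Phi_1(x) = [f_min, min(f_star(x), f_min + 1)].
fs = [(f_min + np.minimum(f_star, f_min + 1.0)) / 2.0]
for n in range(1, 6):
    lo = np.minimum(np.maximum(fs[-1], g(n)), f_min + n)    # lower endpoint of Phi_{n+1}(x)
    hi = np.minimum(f_star, f_min + n + 1.0)                # upper endpoint of Phi_{n+1}(x)
    fs.append((lo + hi) / 2.0)                              # midpoint "selection"

for f_a, f_b in zip(fs, fs[1:]):
    assert np.all(f_a <= f_b) and np.all(f_b <= f_star)     # ascending, and below f_star
```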
The big unknown is whether we can even find a continuous selection from all these set-valued functions to have the induction work. However, assuming that part works out, we can easily clean up the rest of the proof. So let's begin.
We'll be applying the Michael Selection Theorem. R is a Banach space, and X being metrizable implies that X is paracompact, so those conditions are taken care of. We need to show that, for all n and x, Φn(x) is nonempty, closed, and convex, and Φn is lower-hemicontinuous. This will be done by induction, we'll show it for n=1, establish some inequalities, and use induction to keep going.
For n=1, remember,
$$\Phi_1(x) = [f^{\min}_*,\ \min(f_*(x),\ f^{\min}_*+1)]$$
This is obviously nonempty, closed, and convex for all x. That just leaves establishing lower-hemicontinuity. Lower-hemicontinuity is, if y∈Φ1(x), and there's a sequence xm limiting to x, there's a (sub)sequence ym∈Φ1(xm) that limits to y. We can let our ym:=min(f∗(xm),f∗min+1,y). Since y∈Φ1(x), y≥f∗min, so
$$y_m = \min(f_*(x_m),\ f^{\min}_*+1,\ y) \in [f^{\min}_*,\ \min(f_*(x_m),\ f^{\min}_*+1)] = \Phi_1(x_m)$$
So it's an appropriate sequence to pick. Now, we can go:
$$\limsup_{m\to\infty} \min(f_*(x_m),\ f^{\min}_*+1,\ y) \leq \limsup_{m\to\infty} y = y$$
and
$$\liminf_{m\to\infty} \min(f_*(x_m),\ f^{\min}_*+1,\ y) = \min\Big(\liminf_{m\to\infty} f_*(x_m),\ f^{\min}_*+1,\ y\Big)$$
$$= \min\Big(\liminf_{m\to\infty} f_*(x_m),\ y\Big) \geq \min(f_*(x),\ y) = y$$
The first equality was distributing the liminf in and neglecting it for constants, the second holds because y∈Φ1(x) implies y≤f∗min+1, the inequality is from lower-semicontinuity of f∗, and the final equality is because y∈Φ1(x), so y≤f∗(x).
Since the limsup of this sequence is below y and the liminf is above y, we have:
$$\lim_{m\to\infty} y_m = \lim_{m\to\infty} \min(f_*(x_m),\ f^{\min}_*+1,\ y) = y$$
And bam, lower-hemicontinuity is proved for the base case, we can get off the ground with the Michael selection theorem picking our f1.
Now for the induction step. The stuff we'll be assuming for our induction step is that fn≤f∗ (we have this from the base case), and that fn is continuous (we have this from the base case). Now,
$$\Phi_{n+1}(x) := [\min(\max(f_n(x), g_n(x)),\ f^{\min}_*+n),\ \min(f_*(x),\ f^{\min}_*+n+1)]$$
Because fn≤f∗ by induction assumption, and gn≤f∗, because the definition of gn was
$$g_n(x) := \inf\{f_*(x') \mid d(x,x') < \tfrac{1}{n}\}$$
we have nonemptiness, and closure and convexity are obvious, so we just need to verify lower-hemicontinuity. Lower-hemicontinuity is, if y∈Φn+1(x), and there's a sequence xm limiting to x, there's a (sub)sequence ym∈Φn+1(xm) that limits to y. We can define
$$y_m := \max\Big(\min(f_*(x_m),\ y,\ f^{\min}_*+n+1),\ \min(\max(f_n(x_m), g_n(x_m)),\ f^{\min}_*+n)\Big)$$
Now,
$$y_m = \max\Big(\min(f_*(x_m),\ y,\ f^{\min}_*+n+1),\ \min(\max(f_n(x_m), g_n(x_m)),\ f^{\min}_*+n)\Big)$$
$$\geq \min(\max(f_n(x_m), g_n(x_m)),\ f^{\min}_*+n)$$
And also,
$$\max\Big(\min(f_*(x_m),\ y,\ f^{\min}_*+n+1),\ \min(\max(f_n(x_m), g_n(x_m)),\ f^{\min}_*+n)\Big)$$
$$\leq \max\Big(\min(f_*(x_m),\ y,\ f^{\min}_*+n+1),\ \min(f_*(x_m),\ f^{\min}_*+n)\Big)$$
because f∗≥fn and f∗≥gn, by our induction assumption and the definition of gn respectively. Then, we can observe that because
$$f_*(x_m) \geq \min(f_*(x_m),\ y,\ f^{\min}_*+n+1)$$
$$f_*(x_m) \geq \min(f_*(x_m),\ f^{\min}_*+n)$$
we have
$$f_*(x_m) \geq \max\Big(\min(f_*(x_m),\ y,\ f^{\min}_*+n+1),\ \min(f_*(x_m),\ f^{\min}_*+n)\Big)$$
and also we have
$$f^{\min}_*+n+1 \geq \min(f_*(x_m),\ y,\ f^{\min}_*+n+1)$$
$$f^{\min}_*+n+1 \geq \min(f_*(x_m),\ f^{\min}_*+n)$$
so we have
$$f^{\min}_*+n+1 \geq \max\Big(\min(f_*(x_m),\ y,\ f^{\min}_*+n+1),\ \min(f_*(x_m),\ f^{\min}_*+n)\Big)$$
Putting all this together, our net result is that
$$\min(f_*(x_m),\ f^{\min}_*+n+1) \geq \max\Big(\min(f_*(x_m),\ y,\ f^{\min}_*+n+1),\ \min(f_*(x_m),\ f^{\min}_*+n)\Big)$$
Combining this with the previous results (the right-hand side here is an upper bound on our ym once unpacked), we have:
$$\max\Big(\min(f_*(x_m),\ y,\ f^{\min}_*+n+1),\ \min(\max(f_n(x_m), g_n(x_m)),\ f^{\min}_*+n)\Big)$$
$$\leq \min(f_*(x_m),\ f^{\min}_*+n+1)$$
Putting the upper and lower bounds together, we have:
$$y_m = \max\Big(\min(f_*(x_m),\ y,\ f^{\min}_*+n+1),\ \min(\max(f_n(x_m), g_n(x_m)),\ f^{\min}_*+n)\Big)$$
$$\in [\min(\max(f_n(x_m), g_n(x_m)),\ f^{\min}_*+n),\ \min(f_*(x_m),\ f^{\min}_*+n+1)] = \Phi_{n+1}(x_m)$$
So it's an appropriate sequence of ym to pick. I will reiterate that, since fn is continuous (induction assumption), all the gn are upper-semicontinuous, and the sup of a continuous function and an upper-semicontinuous function is upper-semicontinuous (as shown earlier), sup(fn,gn) is upper-semicontinuous. Remember that. Now, we can go:
$$\limsup_{m\to\infty} \max\Big(\min(f_*(x_m),\ y,\ f^{\min}_*+n+1),\ \min(\max(f_n(x_m), g_n(x_m)),\ f^{\min}_*+n)\Big)$$
$$= \max\Big(\limsup_{m\to\infty} \min(f_*(x_m),\ y,\ f^{\min}_*+n+1),\ \limsup_{m\to\infty} \min(\max(f_n(x_m), g_n(x_m)),\ f^{\min}_*+n)\Big)$$
$$\leq \max\Big(y,\ \limsup_{m\to\infty} \min(\max(f_n(x_m), g_n(x_m)),\ f^{\min}_*+n)\Big)$$
Up to this point, what we did is move the limsup into the max for the equality, and then swapped our min of three components for one of the components (the y), producing a higher value.
Now, we can split into two exhaustive cases. Our first possible case is one where f∗min+n≤max(fn(x),gn(x)). In such a case, we can go:
$$\max\Big(y,\ \limsup_{m\to\infty} \min(\max(f_n(x_m), g_n(x_m)),\ f^{\min}_*+n)\Big)$$
$$\leq \max(y,\ f^{\min}_*+n) = \max\Big(y,\ \min(\max(f_n(x), g_n(x)),\ f^{\min}_*+n)\Big) = y$$
This occurred because we swapped out our min of two components for one of the components, the constant lower bound, producing a higher value. Then, the equality was because, by assumption, f∗min+n≤max(fn(x),gn(x)), so we can rewrite the second term. Finally, we observe that because y∈Φn+1(x), and min(max(fn(x),gn(x)),f∗min+n) is the lower bound of that set, y is the larger of the two.
Our second possible case is one where f∗min+n≥max(fn(x),gn(x)). In such a case, we can go:
$$\max\Big(y,\ \limsup_{m\to\infty} \min(\max(f_n(x_m), g_n(x_m)),\ f^{\min}_*+n)\Big)$$
$$\leq \max\Big(y,\ \limsup_{m\to\infty} \max(f_n(x_m), g_n(x_m))\Big) \leq \max(y,\ \max(f_n(x), g_n(x)))$$
$$= \max\Big(y,\ \min(\max(f_n(x), g_n(x)),\ f^{\min}_*+n)\Big) = y$$
The first inequality was swapping out our min of two components for one of the components, producing a higher value regardless of m. The second inequality was because sup(fn,gn) is upper-semicontinuous, as established before. The equality then is just because we're in the case where f∗min+n≥max(fn(x),gn(x)). Finally, we just observe that the latter term is the lower bound for Φn+1(x), which is the set that y lies in, so y is greater. These cases were exhaustive, so we have a net result that
$$\limsup_{m\to\infty} \max\Big(\min(f_*(x_m),\ y,\ f^{\min}_*+n+1),\ \min(\max(f_n(x_m), g_n(x_m)),\ f^{\min}_*+n)\Big) \leq y$$
Now for the other direction.
$$\liminf_{m\to\infty} \max\Big(\min(f_*(x_m),\ y,\ f^{\min}_*+n+1),\ \min(\max(f_n(x_m), g_n(x_m)),\ f^{\min}_*+n)\Big)$$
$$\geq \liminf_{m\to\infty} \min(f_*(x_m),\ y,\ f^{\min}_*+n+1)$$
$$= \min\Big(\liminf_{m\to\infty} f_*(x_m),\ y,\ f^{\min}_*+n+1\Big)$$
$$\geq \min(f_*(x),\ y,\ f^{\min}_*+n+1) = \min(y,\ \min(f_*(x),\ f^{\min}_*+n+1)) = y$$
The first inequality was swapping out the max for just one of the components, as that reduces the value of your liminf. Then, for the equality, we just distribute the liminf in, and neglect it on the constants. The next inequality after that is lower-semicontinuity of f∗, then we just regroup the mins in a different way. Finally, we observe that min(f∗(x),f∗min+n+1) is the upper bound on Φn+1(x), which y lies in, so y is lower and takes over the min.
Since the limsup of this sequence is below y and the liminf is above y, we have:
$$\lim_{m\to\infty} y_m = y$$
And we've shown lower-hemicontinuity, and the Michael selection theorem takes over from there, yielding a continuous fn+1. Now, let's verify the following facts in order to show A: that the induction proceeds all the way up, and B: that the induction produces a sequence of functions fn fulfilling the following properties.
First: fn+1≤f∗. This is doable by fn+1≤inf(f∗,f∗min+n+1)≤f∗, from our Φn+1 upper bound. It holds for our base case as well, since the upper bound there was inf(f∗,f∗min+1).
Second: fn+1≤f∗min+n+1, which is doable by the exact same sort of argument as our first property.
Third: fn+1≥fn. This is doable by
$$f_{n+1} \geq \inf(\sup(g_n, f_n),\ f^{\min}_*+n) \geq \inf(f_n,\ f^{\min}_*+n) = f_n$$
Where the first inequality uses that fn+1 is a selection of Φn+1, together with the lower bound on Φn+1. Then we just swap out some contents for lower contents, and use our second fact (which inducts up the tower) to establish that f∗min+n is an upper bound on the function fn, so the final inf is just fn. This is our one missing piece to show that our induction proceeds through all the n.
Fourth: fn+1≥inf(gn,f∗min+n). Same argument as before, except we swap sup(gn,fn) out for gn instead.
At this point, we've built a sequence of continuous functions fn where you always have fn+1≥fn, and fn+1≥inf(gn,f∗min+n) and fn≤f∗, regardless of n.
Our last step is to show that gn limits to f∗ pointwise, ie, regardless of x, limn→∞gn(x)=f∗(x).
We recall that the definition of gn was
$$g_n(x) := \inf\{f_*(x') \mid d(x,x') < \tfrac{1}{n}\}$$
Obviously, as n goes up, gn(x) goes up; it's monotonically increasing, since we're taking the infimum over fewer and fewer points each time (the open ball is shrinking). So the limit exists. And we also know that all the gn lie below f∗. So, to show that gn limits to f∗ pointwise, we just have to rule out the case where limn→∞gn(x)<f∗(x).
Assume this was the case. Then, for each n we can pick a point xn less than 1/n away from x which comes within 1/n of attaining the infimum value gn(x). These xn get closer and closer to x; they limit to it. So, we'd have:
$$\lim_{n\to\infty} g_n(x) = \lim_{n\to\infty} f_*(x_n) < f_*(x)$$
But the definition of lower-semicontinuity is that
$$\liminf_{n\to\infty} f_*(x_n) \geq f_*(x)$$
So we have a contradiction, and this can't be the case. Thus, the gn limit to f∗ pointwise.
Now, we will attempt to show that the fn limit to f∗ pointwise. Let x be arbitrary, and we have
$$f_*(x) \geq \lim_{n\to\infty} f_n(x) = \lim_{n\to\infty} f_{n+1}(x) \geq \lim_{n\to\infty} \min(g_n(x),\ f^{\min}_*+n)$$
The first inequality was f∗≥fn for all n, the equality was just a reindexing, and the second inequality was fn+1≥inf(gn,f∗min+n) (from our induction). Now, we can split into two cases. In case 1, gn(x) diverges to ∞. Since f∗min+n diverges as well, we have
$$\lim_{n\to\infty} \min(g_n(x),\ f^{\min}_*+n) = \lim_{n\to\infty} g_n(x)$$
In case 2, gn(x) doesn't diverge, but since f∗min+n does, we have
$$\lim_{n\to\infty} \min(g_n(x),\ f^{\min}_*+n) = \lim_{n\to\infty} g_n(x)$$
So, in either case, we can continue by
$$= \lim_{n\to\infty} g_n(x) = f_*(x)$$
Because the gn limit to f∗ pointwise. So the whole chain starts at f∗(x) and ends at f∗(x), meaning all the inequalities in it must be equalities, and we have
$$f_*(x) = \lim_{n\to\infty} f_n(x)$$
And now we have our result, that any lower-bounded lower-semicontinuous function f∗ can be written as the pointwise limit of an increasing sequence of bounded continuous functions (since all the fn were bounded above by some constant and bounded below by f∗min.)
Theorem 1/Monotone Convergence Theorem For Inframeasures: Given X a Polish space, Ψ an inframeasure set over X, f∗ a lower-bounded lower-semicontinuous function X→(−∞,∞], and {fn}n∈N an ascending sequence of lower-bounded lower-semicontinuous functions X→(−∞,∞] which limit pointwise to f∗, then
$$\lim_{n\to\infty} \mathbb{E}_\Psi[f_n] = \mathbb{E}_\Psi[f_*]$$
We'll need to do a bit of lemma setup first. Said intermediate result is that, if some f∞ is lower-semicontinuous and lower-bounded, then the function from a-measures to (−∞,∞] given by (m,b)↦m(f∞)+b is lower-semicontinuous.
By Proposition 3, we can craft an ascending sequence fn of bounded continuous functions which limit pointwise to f∞.
Now to establish lower-semicontinuity of (m,b)↦m(f∞)+b. Let (mm,bm) limit to (m∞,b∞). Let n be arbitrary. Then
$$\liminf_{m\to\infty}\ (m_m(f_\infty)+b_m) \geq \lim_{m\to\infty}\ (m_m(f_n)+b_m) = m_\infty(f_n)+b_\infty$$
The first inequality is because f∞ is above the fn sequence that limits to it, and the mm are measures, so integrating a larger function gives a larger value. We can then write a limit instead of a liminf because the sequence actually converges: since fn is continuous and bounded, and (mm,bm) converges to (m∞,b∞), mm(fn)+bm must converge to m∞(fn)+b∞.
Anyways, now that we know that for arbitrary n,
$$\liminf_{m\to\infty}\ (m_m(f_\infty)+b_m) \geq m_\infty(f_n)+b_\infty$$
We can get the inequality that
$$\liminf_{m\to\infty}\ (m_m(f_\infty)+b_m) \geq \lim_{n\to\infty}\ (m_\infty(f_n)+b_\infty) = m_\infty(f_\infty)+b_\infty$$
Where the equality comes from Beppo Levi's monotone convergence theorem, since the fn sequence is an ascending sequence of lower-bounded functions. Accordingly, we now know that if f∞ is lower-bounded and lower-semicontinuous, then (m,b)↦m(f∞)+b is a lower-semicontinuous function. Let's proceed.
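As a tiny illustration of that Beppo Levi step (with a made-up discrete measure, not the general a-measure setting), the values m(fn)+b really do climb up to m(f∞)+b:

```python
# Made-up discrete measure m on three points, with b = 0.1: the values m(f_n) + b climb
# up to m(f_inf) + b as the ascending continuous f_n converge pointwise to the
# lower-semicontinuous f_inf. This only illustrates the Beppo Levi step.
import numpy as np

points  = np.array([0.0, 0.5, 1.0])
weights = np.array([0.2, 0.3, 0.5])        # the measure component m
b = 0.1

f_inf = np.where(points > 0, 1.0, 0.0)
for n in (1, 2, 10, 1000):
    f_n = np.minimum(n * points, 1.0)      # continuous, increasing in n
    print(n, weights @ f_n + b)            # 0.75, then 0.9 from n = 2 onward
print("limit:", weights @ f_inf + b)       # 0.9
```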
Remember, the thing we're trying to show is that
$$\lim_{n\to\infty} \mathbb{E}_\Psi[f_n] = \mathbb{E}_\Psi[f_*]$$
Where all the fn are lower-semicontinuous and lower-bounded, and limit to f∗ pointwise. One direction of this is pretty easy to show.
$$\lim_{n\to\infty} \mathbb{E}_\Psi[f_n] = \lim_{n\to\infty} \inf_{(m,b)\in\Psi}\ (m(f_n)+b) \leq \lim_{n\to\infty} \inf_{(m,b)\in\Psi}\ (m(f_*)+b)$$
$$= \inf_{(m,b)\in\Psi}\ (m(f_*)+b) = \mathbb{E}_\Psi[f_*]$$
The first equality is just unpacking what expectation means, then we use that f∗≥fn regardless of n to show that all the points in Ψ are like "yup, the value increased" to get the inequality. Then we just observe that it doesn't depend on n anymore and pack up the expectation, yielding
$$\lim_{n\to\infty} \mathbb{E}_\Psi[f_n] \leq \mathbb{E}_\Psi[f_*]$$
So, that's pretty easy to show. The reverse direction, that
$$\mathbb{E}_\Psi[f_*] \leq \lim_{n\to\infty} \mathbb{E}_\Psi[f_n]$$
is much trickier to show.
We'll start with splitting into cases. The first case is that Ψ is the empty set, in which case, everything is infinity, and the reverse inequality holds and we're done.
The second case is that Ψ isn't empty. In such a case, for each fn, we can pick (mn,bn) a-measures from the minimal points of Ψ s.t.
$$m_n(f_n)+b_n \simeq \inf_{(m,b)\in\Psi}\ (m(f_n)+b)$$
coming as close to minimizing said functions within Ψ as we like!
Now, we can again split into two cases from here. In our first case, the bn sequence from the (mn,bn) sequence diverges. Then, in such a case, we have
$$\lim_{n\to\infty} \mathbb{E}_\Psi[f_n] = \lim_{n\to\infty}\ (m_n(f_n)+b_n) \geq \lim_{n\to\infty}\ \Big(\inf\big(0,\ \lambda^{\odot}\cdot \inf_{x} f_1(x)\big)+b_n\Big) = \infty \geq \mathbb{E}_\Psi[f_*]$$
Why does this work? Well, the (mn,bn) approximately minimize the expectation value of fn. Then, the worst-case value of mn(fn) is either 0, or the amount of measure in mn times the lowest value of fn. Since it's an ascending sequence of functions, we can lower-bound this by (amount of measure in mn) times (worst-case value of f1). And, since we're picking minimal points, and there's an upper bound on the amount of measure present in minimal points of an inframeasure set from the Lipschitz criterion (our finite λ⊙ value), the worst-case value we could possibly get for mn(fn) is either 0 or the maximum amount of measure possible times a lower bound on fn. Both of these quantities are finite, so we get a finite lower bound. But the bn are assumed to diverge, so the expectation values head off to infinity.
Ok then, we're on to our last case. What if Ψ isn't empty, and our sequence (mn,bn) of a-measures has a subsequence where the bn doesn't diverge? Well, the only possible way a sequence of a-measures in a nonempty inframeasure set can fail to have a convergent subsequence is for the b values to diverge, from the compact-projection property for an inframeasure set.
So, in our last case, we can isolate a convergent subsequence of our (mn,bn) sequence of a-measures, which converges to the a-measure (m∞,b∞). Let's use the index i to denote this subsequence. At this point, the argument proceeds as follows. Let j be an arbitrary natural number.
$$\lim_{n\to\infty} \mathbb{E}_\Psi[f_n] = \lim_{i\to\infty} \mathbb{E}_\Psi[f_i]$$
$$= \lim_{i\to\infty}\ (m_i(f_i)+b_i) \geq \liminf_{i\to\infty}\ (m_i(f_j)+b_i) \geq m_\infty(f_j)+b_\infty$$
In order, this is just "we went to a subsequence of an ascending sequence, so it's got the same limit", then swapping out the worst-case value for the actual (approximately) minimizing point. Then we use that, since the sequence of functions is ascending, once i≥j we have mi(fi)+bi≥mi(fj)+bi, because j is a fixed constant. Then we just apply that fj is lower-bounded and lower-semicontinuous, so the function (m,b)↦m(fj)+b is lower-semicontinuous, getting the inequality as we pass to (m∞,b∞).
Since this holds for arbitrary j, this then means we have
$$\lim_{n\to\infty} \mathbb{E}_\Psi[f_n] \geq \lim_{j\to\infty}\ (m_\infty(f_j)+b_\infty)$$
And then we can go
$$\lim_{j\to\infty}\ (m_\infty(f_j)+b_\infty) = m_\infty(f_*)+b_\infty \geq \inf_{(m,b)\in\Psi}\ (m(f_*)+b) = \mathbb{E}_\Psi[f_*]$$
And we have the inequality going the other way, proving our result! This last stretch was done via Beppo Levi's Monotone Convergence Theorem for the equality, then the inequality is because nonempty inframeasure sets Ψ are closed, so the limit point (m∞,b∞) also lies in Ψ, and then packing up definitions.
Since we've proven both directions of the inequality, we have
$$\lim_{n\to\infty} \mathbb{E}_\Psi[f_n] = \mathbb{E}_\Psi[f_*]$$
and we're done!
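Here's a finite toy check of the statement, assuming a three-point space and an "inframeasure set" modeled as just a finite list of a-measures with worst-case expectations. This is only a sanity check of the equality, not the actual Polish-space setting of the theorem.

```python
# Finite toy check: a three-point "space", an "inframeasure set" modeled as two a-measures,
# and worst-case expectations E_Psi[f] = min over (m,b) of m(f) + b. All numbers made up.
import numpy as np

X = np.array([0.0, 0.5, 1.0])
Psi = [(np.array([0.5, 0.3, 0.2]), 0.0),    # (measure weights, b)
       (np.array([0.1, 0.1, 0.1]), 0.4)]

def E(f_vals):
    return min(m @ f_vals + b for m, b in Psi)

f_star = np.where(X > 0, 1.0, 0.0)          # lower-semicontinuous target
for n in (1, 2, 5, 50):
    f_n = np.minimum(n * X, 1.0)            # continuous functions ascending to f_star
    print(n, E(f_n))                        # 0.35, then 0.5 from n = 2 onward
print("E[f_star] =", E(f_star))             # 0.5, matching the limit of the E[f_n]
```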
Proposition 4: All compact Polish spaces are compact second-countable LHC spaces.
Proof: Compactness is obvious. All Polish spaces are second-countable. So that just leaves the LHC property. Our compact hull operator K will be taken to just be the closure of the set. Since the space X is compact, all closed subsets of it are compact as well. The two properties
$$\forall U \in B(X): U \subseteq K(U)$$
$$\forall U_1, U_2 \in B(X): U_1 \subseteq U_2 \to K(U_1) \subseteq K(U_2)$$
are trivially fulfilled when K is interpreted as set closure.
That leaves the LHC property. Since all Polish spaces are Hausdorff, the various definitions of local compactness coincide, and the space is compact, so all definitions of local compactness hold. So, given some open O and x∈O, by local compactness of X, we can find a compact set C and open set O′ s.t. x∈O′⊆C⊆O. Since O′ is open, it can be written as a union of sets from the topology base. Now, pick a set from the base in that union which contains x, and call it U. We have x∈U⊆O′⊆C. In Hausdorff spaces, all compact sets are closed, so C is a closed superset of U, and so K(U) (the closure of U) is a subset of C. Since K(U)⊆C⊆O, and U⊆K(U) (since K is just closure), we have x∈U⊆K(U)⊆O, as desired. This works for any x and O, so the space fulfills the LHC property.
Proposition 5: All open subsets of ω-BC domains are second-countable LHC spaces.
Given D, an ω-BC domain, it has a countable basis. You can close that countable basis under finite suprema (when they exist), and the set will stay countable and stay a basis. Do that, to get a countable basis B which is closed under finite suprema.
Our attempted base for the open subset O⊆D will be the sets of the form x⇈ for x∈B∩O, ie, the set of all points y where x≪y, where x is in the countable basis for the domain and also in O. Plus the empty set. This is clearly countable, because B is.
To show it's a topology base, we need to check closure under finite intersection, check that all these are open, and check that every open set in O can be made by unioning these things together.
First, closure under finite intersection. In order to do this, we'll need to show a lemma that if x≫xi for finitely many xi, then x≫⊔i≤nxi. This occurs because, given any directed set with a supremum of x or higher, since x≫xi, you can find a yi in the directed set which is above xi, and then take an upper bound for the finitely many yi which exists within your directed set, and it'll exceed ⊔i≤nxi. So, every directed set with a supremum of x or higher has an element which matches or exceeds the supremum of the xi, so the supremum of the xi approximates x.
Now that we have this result, we'll show that
$$\bigcap_{i\leq n} x_i^{\uparrow\uparrow} = \Big(\bigsqcup_{i\leq n} x_i\Big)^{\uparrow\uparrow}$$
In the two directions: anything which is approximated by $\bigsqcup_{i\leq n} x_i$ is also approximated by anything below it, ie, all the xi, so we have
$$\bigcap_{i\leq n} x_i^{\uparrow\uparrow} \supseteq \Big(\bigsqcup_{i\leq n} x_i\Big)^{\uparrow\uparrow}$$
And, for the reverse direction, we've already established that anything which all the xi approximate will also have the supremum of the xi approximate it. Since our countable basis B was made to be closed under finite suprema, the open set associated with the supremum of the xi is also in our basis, so our base is closed under finite intersection.
Second, checking that they're open. Sets of the form x⇈ are always open in the Scott-topology, and every Scott-open set is closed upwards, so all the sets in this base are open in the original domain D, and so remain open when we restrict to the open subset of D and equip it with the subspace topology.
All that remains to show that this is a base is that it can make any Scott-open subset of O by union. By Proposition 2.3.6.2, a set U is open in the Scott-topology on D iff
$$U = \bigcup_{x\in U\cap B} x^{\uparrow\uparrow}$$
Since all of our open subsets of O are open in D, they can be written as a union of sets from our countable collection by this result.
We have crafted a countable base for our open subset of an ω-BC domain D, so it's indeed second-countable.
Now for the LHC property. The compact hull operator K maps the set x⇈ to (⊓x⇈)↑. This is compact because all Scott-opens are closed upwards, so an open cover of (⊓x⇈)↑ must have an open which includes ⊓x⇈ itself, and this single open covers the entire set. This set is a subset of x↑ because everything that x approximates lies above x, so x is a lower bound, and thus must equal or lie below ⊓x⇈. Since x is in our open subset O and lies below ⊓x⇈, that point is in O as well, because Scott-open sets are closed up.
For our first property of a compact hull operator, we need to show that x⇈⊆(⊓x⇈)↑. This is easy: anything which lies in x⇈ must lie above the infimum of that set.
For our second property of a compact hull operator, we need to show that x⇈⊆y⇈ implies (⊓x⇈)↑⊆(⊓y⇈)↑. For this, assuming the starting assumption, by basic properties of inf, we have ⊓x⇈⊒⊓y⇈, so then anything above that first point must also be above the second point, so we have our result.
Now, we just need to check the LHC property. Let U be an open subset of O, and let x∈U. We can consider the directed set of approximants to x from the basis of our domain D, called Bx. Scott-opens have the feature that for any directed set with a supremum in the open set (and Bx is directed and has a supremum of x, which lies in U), there's an element in the directed set which also lies in the open set. So, we can find a y which lies in the basis, and y≪x, and y∈U. Then just let your open set from the base be y⇈. This is an open from the base. It contains x. And, K(y⇈)⊆y↑⊆U, because y∈U and Scott-opens are closed upwards.
And thus, our result is shown.
Corollary 1: All ω-BC domains are compact second-countable LHC spaces.
Just use Proposition 5 when your open set is all of your domain to get second-countability and LHC-ness. For compactness, any open cover of the domain D must have an open set which includes the bottom point ⊥, and Scott-opens are closed upwards, so we can isolate a particular open set which covers the entire domain.
Theorem 2: If X is second-countable and locally compact, then [X→[0,1]] is an ω-BC domain, as is [X→[−∞,∞]].
To begin, [−∞,∞] is topologically identical to [0,1], so we can just prove it for the [0,1] case to get both results.
As a recap, the Scott-topology on [0,1] is the topology with the open sets being ∅,[0,1], and all sets of the form (q,1] for q∈[0,1).
The space [X→[0,1]] is the space of all continuous functions X→[0,1], where [0,1] is equipped with the Scott-topology; the function space itself is equipped with a partial order via
$$f \sqsubseteq g \leftrightarrow \forall x\in X: f(x) \leq g(x)$$
We will actually show a stronger result, that this space is a continuous complete lattice with a countable basis, as this implies that the set is an ω-BC domain.
To show this, we'll need to show that the space is closed under arbitrary suprema. This is sufficient to show that the space is a complete lattice because, to get the infimum of the empty set, we can get a top element by taking the supremum of everything, and to get the infimum of any nonempty set of elements, we can take the set of lower bounds (which is nonempty because a bottom element exists, the supremum of the empty set), and take the supremum of that. Then we just need to find a countable basis, and we'll be done. Most of the difficulty is in finding the countable basis for the function space.
Showing there's a bottom element (sup of the empty set) is easy, just consider the function which maps everything in X to 0.
Now, we'll show that suprema exist for all nonempty sets of functions. Let F be our nonempty set of functions.
The natural candidate for the supremum is x↦supf∈Ff(x); call this function ⊔F. It's a least upper bound of all the functions f∈F; the only fiddly part is showing that this function is a continuous function X→[0,1], in order for it to appear in the space [X→[0,1]].
Fix an arbitrary open set in [0,1]; we'll show that the preimage is open in X. If the set is the empty set or [0,1] itself, then the preimage will be the empty set or all of X, so those two cases are taken care of. Otherwise, the open set is of the form (q,1] for q∈[0,1). Now, we have
$$(\sqcup F)^{-1}((q,1]) = \{x\in X \mid (\sqcup F)(x) > q\}$$
$$= \{x\in X \mid \sup_{f\in F} f(x) > q\} = \{x\in X \mid \exists f\in F: f(x) > q\}$$
$$= \bigcup_{f\in F}\{x\in X \mid f(x) > q\} = \bigcup_{f\in F} f^{-1}((q,1])$$
And now we see that this preimage can be written as a union of preimages of open sets, which are open since all the f are continuous, so the set (⊔F)−1((q,1]) is open. Since q was arbitrary in [0,1), we have that the preimage of all open sets in the unit interval is open in X, so ⊔F is indeed continuous and exists in [X→[0,1]], to serve as the least upper bound there.
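That preimage bookkeeping is easy to check on a finite grid; the family F below is a made-up example.

```python
# Finite-grid check of the identity above: the preimage of (q,1] under the pointwise sup
# of a family F is the union of the individual preimages. The family F is made up.
import numpy as np

xs = np.linspace(0.0, 1.0, 101)
F = [np.clip(3 * xs - 1, 0, 1), xs ** 2, np.where(xs > 0.5, 0.8, 0.0)]
q = 0.6

sup_F = np.maximum.reduce(F)                            # pointwise supremum of the family
pre_sup = sup_F > q                                     # preimage of (q,1] under sup F
pre_union = np.logical_or.reduce([f > q for f in F])    # union of the individual preimages
assert np.array_equal(pre_sup, pre_union)
```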
All that remains is coming up with a countable basis for [X→[0,1]], which is the hard part. Let B(X) be a countable base of X (this can be done since X is second-countable). Given an open set U∈B(X) and rational number q∈Q∩[0,1], we define the atomic step function (U↘q) as:
$$x\in U \to (U\searrow q)(x) = q$$
$$x\notin U \to (U\searrow q)(x) = 0$$
First things first is showing these are continuous. Clearly, any preimage will either be the empty set, all of X, or the open set U (because there's no open set in [0,1] with the Scott-topology that contains 0 without containing q), so they're indeed continuous.
Our attempted countable basis B will be all of these atomic step functions, and all finite suprema of such. Suprema of arbitrary sets of functions always exist as we've shown, so this is well-defined. It's countable because there's countably many choices of open set, countably many choices of rational number, and we're considering only the finite subsets of this countable set.
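Here's a small sketch of these atomic step functions, plus a finite supremum of two of them (the other kind of element in this candidate basis), with U modeled as an open interval and all numbers made up.

```python
# Sketch of the atomic step functions (U ↘ q) and a finite supremum of two of them,
# with the basic open set U modeled as an open interval. All numbers are made up.
def step(U, q):
    lo, hi = U
    return lambda x: q if lo < x < hi else 0.0

def sup_of(*fs):
    # finite pointwise supremum: the other kind of element in the candidate basis
    return lambda x: max(f(x) for f in fs)

f = step((0.2, 0.7), 0.5)
h = sup_of(step((0.2, 0.7), 0.5), step((0.5, 1.0), 0.9))

for x in (0.1, 0.3, 0.65, 0.9):
    print(x, f(x), h(x))       # f is 0.5 on (0.2, 0.7) and 0 elsewhere; h takes the max
# The preimage of a Scott-open (t,1] under f is (0.2, 0.7) when t < 0.5 and empty when
# t >= 0.5, so every preimage of an open set is open.
```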
The hard part is showing that for any f, it can be built as the supremum of stuff from this basis which approximates f. First up, as a preliminary, is showing that Bf is a directed set for some arbitrary function f. Bf is the set of all elements of the basis which approximate f. What we can do is pick two arbitrary functions from the basis, g1 and g2 s.t. g1≪f and g2≪f. From back in Proposition 5, we know that g1⊔g2≪f. Since our basis is closed under finite suprema, we now know that the intersection of the basis and the approximants to f is closed under finite suprema, so it's directed.
So, now that we know that Bf is a directed set for arbitrary f, we must ask, does f=⨆↑Bf? For one direction, we have that every function in Bf approximates f, so it must be less than f itself, so we have ⨆↑Bf⊑f trivially. So let's get the other direction going, by showing ∀x∈X:⨆↑Bf(x)≥f(x)
First, let's solve the case where f(x)=0. We'd have ⨆↑Bf(x)≥0=f(x), so this case is handled.