Previous proof post is here.
Theorem 2: Isomorphism Theorem: For (causal, pseudocausal, acausal, surcausal) or which fulfill finitary or infinitary analogues of all the defining conditions, and are (causal, pseudocausal, acausal, surcausal) hypotheses. Also, and define an isomorphism between and , and and define an isomorphism between and .
Proof sketch: The reason this proof is so horrendously long is that we've got almost a dozen conditions to verify, and some of them are quite nontrivial to show and will require sub-proof-sketches of their own! Our first order of business is verifying all the conditions for a full belief function for . Then, we have to do it all over again for . That comprises the bulk of the proof. Then, we have to show that taking a full belief function and restricting it to the infinite/finite levels fulfills the infinite/finite analogues of all the defining conditions for a belief function on policies or policy-stubs, which isn't quite as bad. Once we're done with all the legwork showing we can derive all the conditions from each other, showing the actual isomorphism is pretty immediate from the Consistency condition of a belief function.
Part 1: Let's consider . This is defined as:
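The displayed definition appears to have been lost in transcription. Going by how the rest of the proof repeatedly invokes "the intersection of preimages used to define" this set, it presumably takes the following form (a reconstruction with assumed notation, where pr denotes projection and the intersection ranges over the stubs below the partial policy):

```latex
\Theta(\pi_{pa}) := \bigcap_{\pi_{st} \sqsubseteq \pi_{pa}} \left(pr^{\pi_{pa}}_{\pi_{st}}\right)^{-1}\left(\Theta(\pi_{st})\right)
```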
We'll show that all 9+2 defining conditions for a belief function are fulfilled for . The analogue of the 9+2 conditions for a is:
1: Stub Nirvana-free Nonemptiness:
2: Stub Closure:
3: Stub Convexity.
4: Stub Nirvana-free Upper-Completion.
5: Stub Restricted Minimals:
6: Stub Normalization: and
7: Weak Consistency:
8: Stub Extreme Point Condition: for all :
9: Stub Uniform Continuity: The function is uniformly continuous.
C: Stub Causality:
where the outcome function is defined over all stubs.
P: Stub Pseudocausality:
Let's begin showing the conditions. But first, note that since we have weak consistency, we can invoke Lemma 6 to reexpress as , where is the n'th member of the fundamental sequence of .
Also note that, for all stubs, . We'll be casually invoking this all over the place and won't mention it further.
Proof: By Lemma 6 with weak consistency, Now, m can be anything we like, as long as it's finite. Set m to be larger than the maximum timestep that the stub is defined for. Then no matter what n is (since it's above m) and projection from a stub to itself is identity, so the preimage is exactly our original set .
We'll also be using another quick result. For all stubs , given stub causality,
Proof: Fix an arbitrary point . By causality, we get an outcome function which includes , giving us that there's something in that projects down onto . Use weak consistency to get the other subset direction.
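Written out under assumed notation, the claim being proved here is:

```latex
\forall \pi'_{st} \sqsupseteq \pi_{st}:\quad \Theta(\pi_{st}) = pr^{\pi'_{st}}_{\pi_{st}}\left(\Theta(\pi'_{st})\right)
```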
Condition 1: Nirvana-free Nonemptiness.
Invoking Stub Nirvana-free Nonemptiness, so we get nirvana-free nonemptiness for .
Now, assume is not a stub. By stub-bounded-minimals, there is some bound on the set of minimal points, regardless of stub. Let be
This contains all the minimal nirvana-free points for . This set is nonempty because we have stub nirvana-free nonemptiness, so a nirvana-free point exists. We have stub-closure and stub minimal-boundedness, so we can step down to a minimal nirvana-free point below , and it obeys the bound.
Further, by weak consistency and projection preserving and and nirvana-freeness,
Invoking Lemma 9, the intersection of preimages of these is nonempty. It's also nirvana-free, because if there's nirvana somewhere, it occurs after finite time, so projecting down to some sufficiently large finite stage preserves the presence of Nirvana, but then we'd have a nirvana-containing point in a nirvana-free set , which is impossible. This is also a subset of the typical intersection of preimages used to define . Pick an arbitrary point in said intersection of preimages of clipped subsets.
Bam, we found a nirvana-free point in and we're done.
Time for conditions 2 and 3, Closure and Convexity. These are easy.
The preimage of a closed set (stub-closure) is a closed set, and the intersection of closed sets is closed, so we have closure.
Also, is linear, so the preimage of a convex set (stub-convexity) is convex, and we intersect a bunch of convex sets so it's convex as well.
Condition 4: Nirvana-free upper completion.
Let . Let's check whether (assuming that's an a-measure and is nirvana-free) also lies in the set. A sufficient condition on this given how we defined things is that for all , , as that would certify that is in all the preimages.
is linear, so
The first component is in , obviously. And then, by stub nirvana-free-upper-completion, we have a nirvana-free a-measure plus a nirvana-free sa-measure (projection preserves nirvana-freeness), making a nirvana-free a-measure (projection preserves a-measures), so is in , and we're done.
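Spelling out the linearity step with assumed notation (θ the original point, η the nirvana-free sa-measure being added):

```latex
pr^{\pi_{pa}}_{\pi_{st}}(\theta + \eta) = pr^{\pi_{pa}}_{\pi_{st}}(\theta) + pr^{\pi_{pa}}_{\pi_{st}}(\eta)
```

and the right-hand side lies in the stub-level set by stub nirvana-free upper-completion, as argued above.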
Condition 5: Bounded-Minimals
So, there is a critical value by restricted-minimals for
Fix a , and assume that there is a minimal point in with a value that exceeds the bound. Project down into each . Projection preserves and so each of these projected points lie above some .
Now, invoke Lemma 7 to construct a (or the nirvana-free variant) that lies below , and projects down to . Repeat this for all n. All these points are a-measures and have the standard bound so they all lie in a compact set and we can extract a convergent subsequence, that converges to , which still obeys the bound.
is below because (or the nirvana-free variant) is a closed set. Further, by Lemma 10, is in the defining sequence of intersections for . This witnesses that isn't minimal, because we found a point below it that actually obeys the bounds. Thus, we can conclude minimal-point-boundedness for .
Condition 6: Normalization. We'll have to go out of order here, this can't be shown at our current stage. We're going to have to address Hausdorff continuity first, then consistency, and solve normalization at the very end. Let's put that off until later, and just get extreme points.
Condition 8: Extreme point condition:
The argument for this one isn't super-complicated, but the definitions are, so let's recap what condition we have and what condition we're trying to get.
Condition we have: for all :
Condition we want: for all ,
Ok, so by the stub extreme point condition, there's a , where, for all that fulfill , there's a , where .
Lock in the we have. We must somehow go from this to a that projects down to our point of interest. To begin with, let be the n'th member of the fundamental sequence for . Past a certain point m, these start being greater than . The which projects down to that we get by the stub-extreme-point condition will be called . Pick some random-ass point in and call it .
all obey the and values of , because they project down to . We get a limit point of them, , and invoking Lemma 10, it's also in . It also must be nirvana-free, because it's a limit of points that are nirvana-free for increasingly late times. It also projects down to because the sequence was wandering around in the preimage of , which is closed.
Condition 9: Hausdorff Continuity:
Ok, this one is going to be fairly complicated. Remember, our original form is:
"The function is uniformly continuous"
And the form we want is:
"The function is uniformly continuous"
Uniform continuity means that if we want an Hausdorff-distance between two preimages, there's a distance between partial policies that suffices to produce that. To that end, fix our . We'll show that the we get from uniform continuity on stubs suffices to tell us how close two partial policies must be.
So, we have an . For uniform continuity, we need to find a where, regardless of which two partial policies and we select, as long as they're or less apart, the sets (and likewise for ) are only apart. So, every point in the first preimage must have a point in the second preimage only distance away, and vice-versa. However, we can swap and (our argument will be order-agnostic) to establish the other direction, so all we really need to do is to show that every point in the preimage associated with is within of a point in the preimage associated with .
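As an ε–δ statement (a hedged rendering, with d_H for Hausdorff distance and pre(π) abbreviating the preimage set the condition refers to):

```latex
\forall \epsilon > 0\;\exists \delta > 0\;\forall \pi_1, \pi_2:\quad d(\pi_1, \pi_2) \le \delta \implies d_H\left(\mathrm{pre}(\pi_1), \mathrm{pre}(\pi_2)\right) \le \epsilon
```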
First, is the time at which two partial policies apart may start differing. Conversely, any two partial policies which only disagree at-or-after m are apart or less. Let be the policy stub defined as follows: take the inf of and (the partial policy which is everything they agree on, which is going to perfectly mimic both of them up till time m), and clip things off at time m to make a stub. This is only apart from and , because it perfectly mimics both of them up till time m, and then becomes undefined (so there's a difference at time m). Both and are .
Let be some totally arbitrary point in . is also in , because projects down to some point in that's nirvana-free.
Let , where , be the n'th stub in the fundamental sequence for . These form a chain starting at and ascending up to , and are all distance from .
Anyways, in , we can make a closed ball of size around . This restricts and to a small range of values, so we can use the usual arguments to conclude that is compact.
Further, because is or less away from , the two sets and are within of each other, so there's some point of the latter set that lies within our closed -ball.
Consider the set
The inner intersection is an intersection of closed and compact sets, so it's compact. Thus, this is an intersection of an infinite family of nonempty compact sets. To check the finite intersection property, just observe that the preimages of the sets get smaller and smaller as n increases due to weak-consistency, but are always nonempty.
Pick some arbitrary point from the intersection. It's away from since it's in the -ball. However, we still have to show that is in to get Hausdorff-continuity to go through.
To begin with, since lies in our big intersection, we can project it down to any . Projecting it down to stage n makes . Let be the point in defined by:
Well, we still have to show that this set is nonempty, contains only one point, and that it's in , and is nirvana-free, to sensibly identify it with a single point.
Nonemptiness is easy, just invoke Lemma 9. It lies in the usual intersections that define , so we're good there. If it had nirvana, it'd manifest at some finite point, but all finite projections are nirvana-free, so it's nirvana-free. If it had more than one point in it, they differ at some finite stage, so we can project to a finite to get two different points, but they both project to , so this is impossible. Thus, is a legit point in the appropriate set. If the projection of didn't equal , then we'd get two different points, which differ at some finite stage, so we could project down to separate them, but they both project to for all n so this is impossible.
So, as a recap, we started with an arbitrary point in , and got another point that's only or less away and lies in . This argument also works if we flip and , so the two preimages are only or less apart in Hausdorff-distance.
So, given some , there's some where any two partial policies which are only apart have preimages only apart from each other in Hausdorff-distance. And thus, we have uniform continuity for the function mapping to the set of a-measures over infinite histories which project down to . Hausdorff-continuity is done.
Condition 7: Consistency.
Ok, we have two relevant things to check here. The first, very easy one, is that
From earlier, we know that , and from how is defined, this is a tautology.
The other, much more difficult direction, is that
We'll split this into four stages. First, we'll show one subset direction holds in full generality. Second, we'll get the reverse subset direction for causal/surcausal. Third, we'll show it for policy stubs for pseudocausal/acausal, and finally we'll use that to show it for all partial policies for pseudocausal/acausal.
First, the easy direction.
If we pick an arbitrary , it projects down to for all stubs below . Since , it projects down to all stubs beneath . Since projections commute, projected down into makes a point that lies in the preimage of all the where , so it projects down into .
This holds for all points in , so . This works for all , so it holds for the union, and then due to closure and convexity which we've already shown, we get that the closed convex hull of the projections lies in too, establishing one subset direction in full generality.
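Summing up the easy direction in symbols (assumed notation, with the union taken over policies above the partial policy and the closed convex hull on the outside):

```latex
\overline{\mathrm{ch}}\left(\bigcup_{\pi \sqsupseteq \pi_{pa}} pr^{\pi}_{\pi_{pa}}\left(\Theta(\pi)\right)\right) \subseteq \Theta(\pi_{pa})
```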
Now, for phase 2, deriving in the causal/surcausal case.
First, observe that if , then . Fix some and arbitrary . We'll establish the existence of a that projects down to .
To begin with, projects down to in . Lock in a value for n, and consider the sequence that starts off , and then, by causality for stubs and , you can find something in that projects down onto , and something in that projects down onto that, and complete your sequence that way, making a sequence of points that all project down onto each other that climb up to . By Lemma 10, we get a . You can unlock n now. All these have the same and value because projection preserves them, so we can isolate a convergent subsequence converging to some .
Assume . Then we've got two different points. They differ at some finite stage, so there's some n where can project down onto to witness the difference, but from our construction process for , both and project down to , and we get a contradiction.
So, since , this establishes the other direction, showing equality, and thus consistency, for causal/surcausal hypotheses.
For part 3, we'll solve the reverse direction for pseudocausal/acausal hypotheses in the case of stubs, getting
Since we're working in the Nirvana-free case and are working with stubs, we can wield Lemma 3
So, if we could just show that the union of the projections includes all the extreme minimal points, then when we take convex hull, we'd get the convex hull of the extreme minimal points, which by Lemma 3, would also nab all the minimal points as well. By Lemmas 11 and 12, our resulting convex hull of a union of projections from above would be upper-complete. It would also get all the minimal points, so it'd nab the entire within it, and this would show the other set inclusion direction for pseudocausal/acausal stubs. Also, we've shown enough to invoke Lemma 20 to conclude that said convex hull is closed. Having fleshed out that argument, all we need is that all extreme minimal points are captured by the union of the projections.
By our previously proved extreme minimal point condition, for every extreme minimal point in , there's some and in that projects down to , which shows that all extreme points are included, and we're good.
For part 4, we'll show that in the nirvana-free pseudocausal/acausal setting, we have
Fix some arbitrary . Our task is to express it as a limit of some sequence of points that are mixtures of stuff projected from above.
For this, we can run through the same exact proof path that was used in the part of the Lemma 21 proof about how the nirvana-free part of is a subset of closed convex hull of the projections of the nirvana-free parts of , . Check back to it. Since we're working in the Nirvana-free case, we can apply it very straightforwardly. The stuff used in that proof path is the ability to project down and land in (we have that by how we defined ), Hausdorff-continuity (which we have), and stubs being the convex hull, not the closed convex hull, of projections of stuff above them (which we mentioned recently in part 3 of our consistency proof).
Thus, consistency is shown.
Condition 6: Normalization.
Ok, now that we have all this, we can tackle our one remaining condition, normalization. Then move on to the two optional conditions, causality and pseudocausality.
A key thing to remember is, in this setting, when you're doing , it's actually , because if Murphy picked a thing with nirvana in it, you'd get infinite value, which is not ok, so Murphy always picks a nirvana-free point.
Let's show the first part, that . This unpacks as: and we have that
Projection preserves values, so we can take some nirvana-free point with a value of nearly 0 and project it down to (the belief function of the empty policy-stub), where there's no nirvana possible because no events happen.
So, we've got values of nearly zero in there. Do we have a point with a value of exactly zero? Yes: it's closed and has bounded minimals, so we can go "all positive functionals are minimized by a minimal point" to get a point with a value of exactly zero. Then, we can invoke Lemma 21 (we showed consistency, extreme points, and all else required to invoke it) to decompose our point into the projections of nirvana-free stuff from above, all of which must have a value of 0. So, there's a nirvana-free point in some policy with 0 value.
Now for the other direction. Let's show that .
This unpacks as:
We'll show this by disproving that the sup is <1, and disproving that the sup is >1.
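Under the usual normalization convention for belief functions, the condition being established is presumably of the following form (a reconstruction; the exact functional may differ, with the minimum taken over the nirvana-free part):

```latex
\sup_{\pi} \;\min_{(\lambda\mu, b) \in \Theta^{NF}(\pi)} (\lambda + b) = 1
```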
First, assume that, regardless of , . Then, regardless of , we can pick some totally arbitrary , and there's a nirvana-free point with a value of or less. By consistency, we can project it down into , to get a nirvana-free point with a value of or less. Thus, regardless of the stub we pick, there are nirvana-free points where Murphy can force a value of or less, which contradicts
What if it's above 1? Assume there's some where
From uniform-continuity-Hausdorff, pick some to get a Hausdorff-distance or lower (for stuff obeying the bound, which all minimal points of do for all ). This specifies some extremely large n; consider . Now, consider the set of every policy above . All of these are or less away from . Also, remember that the particular sort of preimage-to-infinity that we used for Hausdorff-continuity slices away all the nirvana.
So, Murphy, acting on , can only force a value of or higher. Now, there can be no nirvana-free point in with . The reason for this is that, since is or less away from , there's a nirvana-free point in that's away, and thus has , which is impossible.
Ok, so all the nirvana-free points in where have .
Now, since we have Lemma 21, we can go "hm, equals the convex hull of the projections of . Thus, any minimal point with is a finite mix of nirvana-free stuff from above, one of which must have . But we get a contradiction with the fact that there's no nirvana-free point from above with a value that low, they're all "
So, since we've disproved both cases, . And we're done with normalization! On to causality and pseudocausality.
Condition C: Causality.
An "outcome function" for is a function that maps a to a point in , s.t. for all .
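In symbols (reconstructing the missing formula under assumed notation), an outcome function σ assigns to each partial policy a point in its belief set, compatibly with projections:

```latex
\sigma(\pi_{pa}) \in \Theta(\pi_{pa}), \qquad \pi_{pa} \sqsubseteq \pi'_{pa} \implies pr^{\pi'_{pa}}_{\pi_{pa}}\left(\sigma(\pi'_{pa})\right) = \sigma(\pi_{pa})
```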
Causality is, if you have a , you can always find an outcome function where . Sadly, all we have is causality over stubs. We'll be using the usual identification between and .
Anyways, fix a and a point in . Project down to get a sequence . By causality for stubs, we can find an where, for all , . Observe that there are countably many stubs, and no matter the n, all the and values are the same because projection preserves those. We can view as a sequence in
By stub closure, and a and bound, this is a product of compact sets, and thus compact by Tychonoff (no axiom of choice needed; it's just a countable product of compact metric spaces), so we can get a limiting (because it's only defined over stubs).
An outcome function for stubs fixes an outcome function for all partial policies, by
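The formula here seems to have dropped out. Presumably the stub-level outcome function (written σ_st below, a name introduced for illustration) determines the full one by taking the unique point in the intersection of preimages (a reconstruction):

```latex
\sigma(\pi_{pa}) := \text{the unique point of } \bigcap_{n}\left(pr^{\pi_{pa}}_{(\pi_{pa})_n}\right)^{-1}\left(\sigma_{st}\left((\pi_{pa})_n\right)\right)
```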
We've got several things to show now. We need to show that is an outcome function over stubs, that is well-defined, that , and that it's actually an outcome function over partial policies.
For showing that is an outcome function, observe that projection is continuous, and, letting n index our convergent subsequence of interest, regardless of stub , . With this,
Now, let's show that is well-defined. Since is an outcome function, all the points project down onto each other, so we can invoke Lemma 9 to show that the preimage is nonempty. If the preimage had multiple points, we could project down to some finite stage to observe their difference, but nope, they always project to the same point. So it does pick out a single well-defined point, and it lies in by being a subset of the defining intersection of preimages.
Does ? Well, projected down to all the . If , then . So, the limit specification has for all n. The only thing that projects down to make all the is itself, so .
Last thing to check: Is an outcome function over partial policies? Well, if , then for all n, . Assume . Then, in that case, we can project down to some and they'll still be unequal. However, since projections commute, it doesn't matter whether you project down to and then to , or whether you project down to (making ), and then project down to (making ). Wait, hang on, this is the exact point that projects down to, contradiction. Therefore it's an outcome function.
And we're done, we took an arbitrary and , and got an outcome function with , showing causality.
Condition P: Pseudocausality: If , and 's support is on , then .
But all we have is, if and 's support is on , then .
There's a subtlety here. Our exact formulation of pseudocausality we want is the condition , so if the measure is 0, then support is the empty set, which is trivially a subset of everything, then pseudocausality transfers it to all partial policies.
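Writing the target condition out for an a-measure (λμ, b) (a hedged reconstruction; supp denotes the support of the measure, and F(·) is assumed notation for the set of histories compatible with a partial policy):

```latex
(\lambda\mu, b) \in \Theta(\pi_{pa}) \;\wedge\; \mathrm{supp}(\mu) \subseteq F(\pi'_{pa}) \implies (\lambda\mu, b) \in \Theta(\pi'_{pa})
```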
Ok, so let's assume that , and the measure part has its support being a subset of but yet is not in . Then, since this is an intersection of preimages from below, there should be some finite level that you can project down to (it's present in , just maybe not in ) where the projection of (call it ) lies outside (lying outside the intersection of preimages).
This is basically "take , chop it off at height n". However, since , you can project it down to . Which does the exact same thing of chopping off at height n, getting you exactly. We can invoke stub-pseudocausality (because with full measure, the history will land in , then with full measure, the truncated history will land in as the latter is prefixes of the former, or maybe the full measure is 0 in which case pseudocausality transfer still works) to conclude that actually lies inside , getting a contradiction. This establishes pseudocausality in full generality.
Ok, so we have one direction: is a hypothesis if fulfills analogues of the hypothesis conditions for the finitary stub case. Our proof doesn't distinguish between causal and surcausal, and the arguments work for all types of hypotheses: causal, surcausal, pseudocausal, or acausal. Ok, we're 1/4 of the way through. Now we do the same thing, but building everything from infinitary hypotheses.
PART 2: Infinitary hypotheses. We now consider , defined as