
Now, what about updates? We'll use  (and suppress the  that should be there) as shorthand for the function that maps  over  to  in  (or the nirvana-free or sur variant of this), and also use  as a function from belief functions to belief functions (just map all the sets through)

 

Lemma 27: When updating, the closure adds no nirvana-free points that weren't present originally, provided that Nonemptiness, Nirvana-free upper-completion, and closure hold originally (this works in the sur-case too).

Proof sketch: We take a sequence  limiting to , and then take a preimage point of , go to a minimal point below it, find a limit point in our original set by Compactness, and map it back through the update, getting a point below . Then, we find what we need to add to that to get , and find something above our limit point that maps to , so we didn't actually need closure after all, because we obtained  as the image of a nirvana-free point present in the original set.

Proof: Fix a sequence  in  (but without the defining closure part at the end) that limits to , which is nirvana-free.

Every  has a preimage point  with no nirvana off-h. For each , find a minimal point  below it; these have a  bound by bounded-minimals, so we can find a convergent subsequence limiting to  (which might not actually be minimal, but is still a limit of minimal points). Shoving the  (and the limit point) back through the update (which is a continuous function), we get a sequence  limiting to  (the thing you get from pushing  through the update).

Since  lies above  (in the upper-completion ordering), and the update function is linear, updating preserves that property. Thus, all the  lie below their corresponding . Now, we can invoke Lemma 16 to conclude that  lies below . It lies below a nirvana-free point, so  is nirvana-free as well. Now, we just need nirvana-free upper-completion, because .

We can take  and add on  (extend the measure back to the original domain by sticking an h prefix on everything, and saying that the measure is 0 everywhere else), making an a-measure that's in , by nirvana-free upper-completion there. By linearity, and the update not affecting  (it's 0 outside of h, so the  outside of h doesn't get to contribute anything to the b term when we update), updating  makes . So, if a nirvana-free point appears post-update (with closure), then it'll appear post-update (without closure).
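To make the key linearity step concrete, here's a sketch in notation assumed purely for illustration (the post's own symbols didn't survive formatting): write $u_{h,g}$ for the raw update on observation h with off-h utility g, $M$ for the limit point we found in the original set, and $(m^*,b^*)$ for the nirvana-free piece we need to add to $u_{h,g}(M)$ to reach the target point, with $h\cdot m^*$ denoting $m^*$ extended back to the original domain by prefixing every history with h (and 0 elsewhere). Then

$$u_{h,g}\bigl(M + (h\cdot m^*,\,b^*)\bigr) \;=\; u_{h,g}(M) + u_{h,g}\bigl((h\cdot m^*,\,b^*)\bigr) \;=\; u_{h,g}(M) + (m^*,\,b^*)$$

The first equality is linearity of the update, and the second holds because $h\cdot m^*$ has no off-h mass, so nothing extra gets folded into the b term; the prefix just gets stripped back off.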

 

Lemma 28: raw-update-then-project equals project-then-raw-update.

Take some . We want to show that:

First,  This is because the projection down doesn't change the measure at all outside of h, and we're evaluating a function that's 0 inside h. So, that takes care of the b term. Also, projection preserves the b term, so our desired equality is:

For the measure term, the first is "restrict to on-h histories, clip off the h prefixes, and project down", and the second is "project down the on-h histories accordingly, then restrict to on-h histories and clip off the h prefix", which are obviously equal.
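In symbols (notation assumed purely for illustration: $\mathrm{pr}$ for the projection, $m\!\upharpoonright_h$ for the restriction of $m$ to histories with h as a prefix, and $h^{-1}\cdot$ for stripping that prefix off), the measure-component claim is roughly

$$\mathrm{pr}_*\bigl(h^{-1}\!\cdot(m\!\upharpoonright_h)\bigr) \;=\; h^{-1}\!\cdot\bigl((\mathrm{pr}_*\,m)\!\upharpoonright_h\bigr)$$

Both sides depend only on the on-h part of $m$, and restricting to on-h histories commutes with projecting down, since projection is a pushforward along a truncation map on histories that doesn't affect whether h is a prefix.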

 

Proposition 9: For causal, surcausal, pseudocausal and acausal hypotheses, updating them produces a causal, surcausal, pseudocausal or acausal hypothesis as long as renormalization doesn't fail.

Proof sketch: What we can do is consider the "raw update", show that it preserves all nice properties except for renormalization, and then show that the renormalization terms in the update are the proper renormalization terms to use. Thus, we'll define our raw update  via:  is: , mapped through the following function:  And then you take the closure of the resulting set at the end.

So, we take our partial policy  and glue it to the off-h-partial policy , go to that part of the original belief function, strip off everything that has nirvana off-h (for Murphy shall not select those, and properly, that should make the b term infinite post-update), slice off the part of the measure off-h, strip off the h prefix from those histories, and go "ok, our utility function is , let's take the expected utility off-h and fold it into the b term"
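As a sketch of what the raw update does to a single a-measure, in notation assumed purely for illustration ($m\!\upharpoonright_h$ and $m\!\upharpoonright_{\neg h}$ for the parts of the measure on and off histories extending h, $h^{-1}\cdot$ for stripping the h prefix, and $g$ for the off-h utility function):

$$(m,\,b)\;\longmapsto\;\bigl(h^{-1}\!\cdot(m\!\upharpoonright_h),\;\;b + m\!\upharpoonright_{\neg h}(g)\bigr)$$

The off-h part of the measure disappears as a measure and reappears as its expected utility under $g$, folded into the b term, which is why the raw update is a linear map on a-measures.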

If we can get all the nice properties except renormalization, we're good; just appeal to Lemma 24. As for showing the conditions: we're in for something almost as bad as one of the directions of the Isomorphism theorem. Nonemptiness, closure, and convexity are trivial; upper completion, pseudocausality, and bounded-minimals are easy; and the extreme point condition and causality are moderately tricky.

For the extreme point condition, Step 1 is establishing that  equals the closed convex hull of nirvana-free projections from above, by an argument that's easy to follow with a sketch in hand but hard without one. Step 2 is using Hausdorff-continuity and Lemma 20 to turn it into an ordinary convex hull, and finally, we argue that a nirvana-free extreme point must have come from a nirvana-free point from above, via Step 2.

For causality, we can (for the most part) just go back to , get an outcome function there, and map it back through the update to get an outcome function; the hard part is netting the limit points, which requires a limit of outcome functions. But because we want a countable product of sets to get sequential compactness from Tychonoff, we have to work with stubs, which adds some extra complexity.

Hausdorff-continuity is just hellishly hard: we need to show that the preimages of the sets post-update are the updates of the preimages of the sets pre-update, and then combine that with some fancy work with minimal points and upper completions, using two different characterizations of uniform continuity at once via Lemma 15, and a touch of functional analysis. There are way too many interacting points and sets in this one.

But easily the biggest grind is Consistency. We have 4 subset directions to show, each of which requires its own separate fancy argument, and two of them require splitting into a nirvana-containing/causal case and a nirvana-free case, so it's a 6-part proof. A good chunk of complexity arises because we have to take the closure in the nirvana-containing case, an issue which goes away if we just let Nirvana be 1 reward forever. Let's begin.

Condition 1: Nirvana-free Nonemptiness: 

This is trivial. Just pick a nirvana-free point in  by nirvana-free nonemptiness, and update, to get one in .

Conditions 2,3: Closure, Convexity: 

Closure is a tautology since we took the closure. For convexity, the closure of a convex set is convex, and  is a linear function, so it maps convex sets to convex sets.

Condition 4: Nirvana-free Upper-Completeness: 

First, invoke Lemma 27 to see that all nirvana-free points must have been present in the raw  set to begin with, without the closure. What we want is that, if , and  lies in the raw updated set and is nirvana-free, and  is a nirvana-free a-measure, then  lies in the updated set as well.

Find a  that maps to  after updating. It must be nirvana-free, because the nirvana either occurs without h as a prefix (which is forbidden because all that stuff gets clipped off and doesn't get pushed through the update), or the nirvana occurs with h as a prefix, but then it'd show up in the measure component of  post-update, contradicting its nirvana-freeness. Now, we can consider  (basically, , but we take the measure back by sticking an h prefix on everything, and saying that it's 0 off-h). This is present in , by nirvana-free upper completion. By linearity of updating, and  having no measure in any off-h area where it'd get picked up by , this updates to , witnessing that  lies in the image of the update, so we get nirvana-free upper completion.

Condition 5: Bounded-Minimals:

For bounded-minimals, we can pull the Lemma 16 trick of taking our  of interest that  limit to, taking a preimage  for each , finding a minimal  below each  (which obeys a  bound and also has no nirvana off-h), getting a limit point  (still no nirvana off-h) by compactness, and pushing the sequence through the update, to get a sequence  below  limiting to , which is below  (Lemma 16). Now, we just have to check up on the  values of our  sequence, and show that they respect the  bound, to transfer this to . The raw update deletes measure from off-h, and assigns it the value that  does, which is 1 or less, so any increase in b corresponds to an equal-or-greater decrease in , so the  all obey the  bound as well. Thus, the limit point  obeys the bound, and it's below our original , so any minimal must obey the  bound.
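Spelled out in the same assumed notation as the earlier sketches (splitting $m$ into its on-h and off-h parts, with $\lambda^u, b^u$ for the post-update values; symbols are illustrative), the bound transfer is just

$$\lambda^u + b^u \;=\; m\!\upharpoonright_h(1) + b + m\!\upharpoonright_{\neg h}(g)\;\le\; m\!\upharpoonright_h(1) + m\!\upharpoonright_{\neg h}(1) + b \;=\; \lambda + b$$

using $g \le 1$, so if the pre-update minimal points obey a $\lambda^{\odot}+b^{\odot}$ bound, their images under the raw update obey it too.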

Condition 7: Consistency: 

This is going to be extremely tedious and difficult to show; it's a 6-part proof. The first 3 parts are devoted to showing that 

Part 1 is showing  in the nirvana-free pseudocausal/acausal case.

Let  be in . By Lemma 27, since we're working in the nirvana-free case, we didn't need to take the closure; it won't add any points that aren't there anyway. So,  has a preimage point  that maps to it. By consistency for , our preimage point lies in the closed convex hull of projections of policies down from above, so there are points in the convex hull of projections of policies that are arbitrarily close to . Fix some sequence  of points in the convex hull of projections down from policies above that limits to . Mapping these through the raw update (which is continuous), we get a sequence  of points in  that limits to .

All these policies above  have the form . So,  can be written as a mix of finitely many , which are the projections of  from above, in policies. Update those, getting points  in . These project down to , which mix back together to make... . This is because of Lemma 28, that update-then-project equals project-then-update. Also, mix-then-project equals project-then-mix. Remember,  is made by: "Project  down to make , then mix to make , then update."

So, we can go project-mix-update equals mix-project-update equals mix-update-project equals update-mix-project equals update-project-mix, which is the process "update the  to , project down to , mix to "

The first equality is linearity of projection, the second equality is Lemma 28, the third equality is linearity of updating, the final equality is linearity of projection again.
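Written out with assumed operator names ($u$ for the raw update, $\mathrm{pr}$ for projection, and $\sum_i \zeta_i(\cdot)$ for mixing with weights $\zeta_i$; the $M_i$ are the points up in the policy sets), the chain reads

$$u\Bigl(\sum_i \zeta_i\,\mathrm{pr}(M_i)\Bigr) = u\Bigl(\mathrm{pr}\Bigl(\sum_i \zeta_i M_i\Bigr)\Bigr) = \mathrm{pr}\Bigl(u\Bigl(\sum_i \zeta_i M_i\Bigr)\Bigr) = \mathrm{pr}\Bigl(\sum_i \zeta_i\,u(M_i)\Bigr) = \sum_i \zeta_i\,\mathrm{pr}\bigl(u(M_i)\bigr)$$

matching the four equalities just listed.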

Anyways, taking stock of what we've done, we have a sequence  limiting to our  of interest, and every  is crafted by taking points from finitely many , projecting them down, and mixing them. Therefore, our  lies in the closed convex hull of projections down from above.

Part 2: we'll show this again, but in the nirvana-containing case, where we'll leverage causality.

Fix a  (with closure). There's a sequence  that limits to it, that lies in the same set, but without closure, so we can take preimage points  that update to make . By causality, fix some arbitrary policy above , which can be expressed as , where . Anyways, we can take , and use causality to get an outcome function , to get a  that projects down to . We don't have to worry about nirvana off-h, because  already specifies everything that happens off-h and it says no nirvana occurs in that case. So,  can be updated to make a  in . By Lemma 28, this must project down to . So, all our  lie in the projection of , and since  is a limit point of that sequence, it must lie in the closed convex hull of projections.

And we've shown that 

And have taken care of 2 of our 6 parts. Now for the reverse direction, that

Thankfully, this can be done with a general argument that isn't sensitive to the presence of Nirvana.

Part 3: Fix a  in the closed convex hull, which has a sequence  limiting to it that's in the convex hull of projections down from above. The  shatter into finitely many , which are projections of . Now, these aren't necessarily preimage points; they may have been added in the closure. Thus, we can perturb by  or less if needed to make a  which does have a preimage point. Projecting these down to  and mixing crafts a  point that is within  of  (remember, projection doesn't expand distance), so the sequence  still has  as a limit point (it gets arbitrarily close to a sequence that gets arbitrarily close to ). If we can show that all the  lie in , then by closure, we'll get that  lies in the same set, so we're done.

Ok, so we have , that project down and mix to make , and importantly, we crafted them so they're producible without closure. Thus, they have preimage points  (that lack nirvana off-h). Project them down to make , and mix them to make a  in the same set (which still lacks nirvana off-h), and this updates to make  via Lemma 28, as we'll show shortly.

Starting with the , we know that update, project, mix equals  via going . Then, update-project-mix equals project-update-mix equals project-mix-update, which is the path we took. Therefore, all the  lie in , which is closed, so  (an arbitrary point in the closed convex hull of projections) lies in the same set, establishing the reverse subset direction and thus equality.

Part 4: Now that we're halfway done, let's look at the "intersection of preimages of stubs from below" direction of consistency, . If we ignore the closure part and work with the raw update set sans closure, we can fix a  in , take a preimage point in , project it down to  by consistency, then update it to get exactly the projection of  (again, Lemma 28). Then, when we take the closure, we can just take our  in the closure, fix a sequence in the raw update set sans closure  that limits to , project down, getting  in the raw update set  sans closure, and then the limit point  lies in  by closure, and by continuity of projection,  is the projection of .

Since the sets get bigger as you go down, we can invoke Lemma 6 to swap out the intersection of preimages of all stubs below you for the intersection of preimages of stubs of the form ; this will be important later.

Now, it's trivial to show that  because we've established that projecting down makes a subset, and projection commutes, so any  projects down into  for all n.

All that's left now is the reverse subset direction,

Sadly, this one will require splitting into the nirvana-containing (and thus causal) cases and the nirvana-free cases, and it's a really difficult one to show.

Part 5: Let's address the nirvana-free case; we'll use a nifty trick to control the size of the preimage points we select.

Ok, let's say you have a  with some  and  value. And you take  that's a preimage point, but its  and  values are just... waaay too high. We want to have a preimage point with reasonable values, in order to apply bounding arguments. What you do is find a minimal point  below , so . Now, swap out , i.e. , for . This is an sa-measure, because

Now, consider updating  instead (it's an a-measure, it has less negative parts than , and is present by nirvana-free upper-completion). This gets you the update of , plus...  (remember, 0 measure off-h). Which is the exact same thing you'd get by updating , so when we updated our new sum, we hit  exactly.

However, this sum is special, because we can stick some decent bounds on its  and  value! For starters, its  value is less than  (updating only adds on b-mass, and it updates to ). And as for the  value... well,  has its  bounded above by  (of the original ) due to being a minimal point. And in the worst-case, all of the measure in  came from the thing we added, so  has a measure of  or less. So our bound on the  value is .
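Collecting the bounds in assumed notation ($\lambda$ and $b$ for the values of the post-update point we're hitting, $M^{\min}$ for the minimal point, and $\lambda^{\odot}, b^{\odot}$ for the bound on minimal points; symbols are illustrative):

$$b(\text{new preimage point}) \;\le\; b \qquad\qquad \lambda(\text{new preimage point}) \;\le\; (\lambda^{\odot} + b^{\odot}) + \lambda$$

The first holds because updating only ever adds b-mass and the sum updates to hit the target exactly; the second because $M^{\min}$ contributes at most $\lambda^{\odot} + b^{\odot}$ of measure, and the added on-h piece contributes at most $\lambda$.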

Armed with this knowledge, we can begin to prove the last bit of consistency in the nirvana-free case. Take a  in the intersection of preimages. It projects down to make  in . Projection preserves  and , so they all have the same  bounds. Because we don't need to take the closure in the nirvana-free case, we get a preimage point of  in . From our earlier considerations, we can always pick  such that its  is , and its  is , although we'll be using a bound of .

Now, we're going to have to be extremely careful here. Let the point  be defined as:

If , then  is some arbitrary point in , with  equal to or below , and  equal to or below , which always exists by all minimal points obeying the   bound.

If , then .

If , then 

Then, the tuple of  for all n is a point in:

Equipped with the product topology. In particular, this is a product of compact sets, so by Tychonoff's theorem, it's compact. Thus, we can get a convergent subsequence of the tuples. On this subsequence, all the  converge to a limit point , regardless of n.

Also,  projects down to  if , because for large enough j, the projection of  will always be , and by continuity of projection, the projection of  must be 

Ok, so we've got an infinite sequence of  for all n that all project down onto each other. Another nice feature is that updating  produces . This is because, when j climbs high enough,  projects down to , and  is just  which updates to , which projects down to . By Lemma 28, update-then-project equals project-then-update, so  must update to , for all sufficiently high j. The preimage of a single point is closed, so past a certain point, the  are wandering around in the preimage of , so  also updates to .

Now, our next step is, does the  sequence in  pick out a single point  in  that projects down accordingly? Yes it does. Just intersect all the preimages of single points, they're nested in each other and compact so the finite intersection property holds, and if the intersection wasn't composed of a single point, you'd have two distinct points with a difference at some finite time, but projecting down to any finite time the two distinct points are identical, so there can only be a single point in the intersection. Further, it must lie in , because you can project it down to  in  for any n, which, by consistency for , you can also project down to  (projecting down further), so it's present in the intersection of all the preimages, certifying that it's in the appropriate set.
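In assumed notation (with $\pi_n$ for the finite stages, $M_n$ for the points picked at each stage, and $\mathrm{pr}_{\pi \to \pi_n}$ for projection; symbols are illustrative), the point being picked out is the sole element of

$$\bigcap_{n} \mathrm{pr}_{\pi \to \pi_n}^{-1}\bigl(\{M_n\}\bigr)$$

which is nonempty because the preimages are nested, nonempty, and compact (finite intersection property), and a singleton because two distinct points at the infinite level would have to differ at some finite time, contradicting that all their finite projections agree.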

Now, finally... does , when updated, produce , certifying that the point in the intersection of preimages is also in the raw update set? Well, let's say it didn't. Then we get a  that's not equal to , so projecting down to some finite n should suffice to observe that. However, projecting  and  down produces... . This is because of Lemma 28, update-then-project equals project-then-update. Projecting  down makes , which updates to .

So, no finite stage suffices to observe the difference between the updated form of  and  itself, so they must be identical, certifying  for the nirvana-free case.

Part 6: Let's move to the nirvana case , where we can leverage causality. We'll be showing this in a rather nonstandard way. We're going to pick a , and show that our  of interest in the intersection of preimages can be written as a limit of points projected down from , establishing that  lies in the closed convex hull of points from above, which we've already shown equals .

Ok, so  is in the intersection of preimages. Project it down to all the , getting a batch of points  from them. This is the raw update set, so within  or less distance from , there's a  in the raw update sans closure, which has a preimage point  that lies in .

Now, pick some arbitrary policy above , which can be written as . Moving on even further, by causality, we can get a point  that projects down to . Update  to get a , which then (by our earlier thing about how a set equaled the closed convex hull of projections down from above), projects down to a .

Now, we can ask whether the sequence  limits to  itself.  is closed, so this would certify that  lies in the appropriate set.

First, observe that the projection of  down to  is . This is by Lemma 28, update-then-project equals project-then-update.  projects down to , which updates to , so  must be what you get by updating  to , and projecting down to  (making ), and projecting that down to .

Now, because projection preserves the b term, and  projects down to  which is within  of  (not much of a difference in the b terms), and  has the same b term as , we can certify convergence of the b term at least. Now for convergence of the measure components. Again,  projects down to  which is within  of  (not much difference before timestep n, shrinking increasingly low), and  perfectly mimics what  does before timestep n. So,  behaves increasingly closely to  for everything before time n, which increases without bound. Increasingly close matches on increasingly large initial segments of what happens mean that  must limit to  itself, certifying that  lies in  for the causal cases.

That's the last bit we needed! We're finally done with consistency now. This just leaves the Hausdorff condition, the extreme point condition, pseudocausality, and causality.

Condition 9: Hausdorff-continuity: 

What we need to do for our setup to even approach this is to show that updating the preimage of the nirvana-free part of  produces exactly the preimage of the nirvana-free part of .

One direction, we can get easily. If you fix a  in the preimage of the nirvana-free part of , it projects down to a , that updates to a , then by Lemma 28, project-then-update equals update-then-project, so  must update to a  that projects down to , certifying that updating the preimage of the nirvana-free part of  produces a subset of the preimage of the nirvana-free part of .

In the other direction, fix a  in the preimage of the nirvana-free part of . It projects down to a  in , and by Lemma 27,  wasn't introduced in the closure, so it has a preimage point .

Now, how do we extend  to craft a  that updates to ? Well, we can split into two parts: what happens on-h, and what happens off-h. For the off-h part, post-update, everything is folded into the b term, while pre-update there's an actual measure specified everywhere. Thus, our  should have the same off-h part as  to project down accordingly, so updating folds it into the same b term as  has.

Now, for the on-h part, it's a bit more complicated.  specified what happens for all infinite histories with h as a prefix. However,  and  only specify part of that data, though fortunately they agree on that part. Thus, for , you can just extend with the conditional probabilities of , to perfectly mimic it on-h. This makes a  in the preimage that updates to .

Ok, so the appropriate preimages for Hausdorff-continuity (post-update) are made exactly by updating the preimages for Hausdorff-continuity (pre-update). Now, updating is a continuous linear operator. We're mapping from the Banach space  to the Banach space .

Well, this isn't quite right; your actions and observations may vary depending on where you are in history, but the general thing of "restrict to signed measures over infinite histories with h as a prefix" still checks out. Updating is still a continuous linear operator between Banach spaces, by Lemma 8 of Section 1.

Also, all continuous linear operators between Banach spaces are bounded, and thus Lipschitz-continuous at 0, and thus Lipschitz-continuous everywhere due to linearity. So, when we push two points that are only  apart through the update, they're now  apart at most, where  is a finite constant.
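Concretely, writing $K$ for the operator norm of the update map $u$ ($K$ is just a name introduced here for convenience), this is the standard estimate

$$\|u(x) - u(y)\| \;=\; \|u(x - y)\| \;\le\; K\,\|x - y\|$$

so $u$ is $K$-Lipschitz everywhere, not only at 0.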

We're going to have a lot of points. Unusually enough, we'll be using the standard formulation of Hausdorff-continuity for our original , that for all , there's a  where two partial policies  and  that are  or less apart have  (and the analogous set for ) being only  apart in Hausdorff-distance.

Fixing your , you're gonna want  to be low enough to force a  difference between the clipped preimages, and . It's highly advised to sketch out how our points interact and what sets they're in. A superscript of infinity will be used to denote points in the preimages of the  sets (or ) (i.e., at the infinite levels), and a superscript of "u" indicates post-update, while its absence indicates pre-update.

Anyways, here's our points.

 lies in the preimage of , and it's the point we want to find a nearby point for.  will refer to the  value of this thing.

Projecting  down to  makes .

We can find a minimal point below  in .

A nirvana-free point wasn't introduced by the closure, and it has a minimal point in its preimage, so there's a  in  that updates to , and respects the  bound of .

Let  be defined as . We're extending the negative-measure part of  back to its original domain by sticking an h prefix on everything, and saying it's 0 everywhere else. This is an a-measure that lies in  (because  respects the  bound, and the thing that we added has a  value of 0).

Let  be defined as ; it also lies in the same set. Updating  makes , because, unpacking , it's , which updates to , which adds up to make .

Our goal now is to explicitly construct a  and  in the preimage of  s.t. they project down onto  and  lies below , and  updates to .

A sufficient way to do this is to make  and  by, after h, extending the measures further with the conditional probabilities of the measure component of . Extending  with the conditional probabilities of  witnesses that  lies below . They obviously project down onto  and .

As for  updating to , the b term and the fragment of the measure that doesn't get ignored by projection down matches because  projects to  which updates to  which is the projection of . And, for the fragment of the measure that isn't defined in , but that must be present on the infinite levels, we copied the conditional probabilities of the measure component , so we've got a match there.

Taking a break from setting up all our damn points for a brief recap, we have a  that lies in the preimage of , and a  that lies above it (in the preimage of ), and it updates to hit  (our original point in the preimage of ). Now, we can proceed.

So...  lies in the preimage of . By Hausdorff-continuity for , and the distance between  and  being below  (because the distance between  and  is below ), and using our earlier observation about how a  distance means a  difference between the clipped preimages, we can find a point  in the preimage of  that's that close to . To go up from  to  requires adding  (with the measure component extended with the conditional probabilities of the measure component of , obviously).

Also, because the  value of  is the  value of , which was made by adding  to an a-measure, an upper bound on the  value of that a-measure we added onto  is... , corresponding to the extreme case where all the measure of  came from .

Now, we can craft a point  which lies in the preimage of  that's only  away from . Why? Well, we can start with , which is only  away from , and take that positive-measure-thingy we added, and reshuffle the measure on it. With earthmover distance, the  distance between  and  corresponds to a time-threshold where they start to differ at , and you're moving dirt a  difference to account for having to land in the right preimage, and you've got  at most dirt to move. Then, you just add  and your reshuffled measure, to get your point . Which is the sum of two components that only differ by  and  from the components which sum to make .

Ok, so we have a point  in the preimage of , which updates to  that lies in the preimage of . And a point  in the preimage of  which is (taking into account that ) only  distance away from .

And now we can finish up, because the preimage of  is the update of the preimage of . So, we just update  to get a point  in the preimage of . And further, the distance between  and  is only  at most.  updates to , and  updates to . And we know that  has a Lipschitz constant of  (by being a continuous linear operator between Banach spaces), so  only has a distance of  from a point in the preimage of .

So, we get Hausdorff-continuity (the Lemma 15 variant).

Condition 8: Extreme Point Condition: 

We had to defer this because  isn't a stub, so we can't use the extreme point condition we had, and instead must regenerate it completely from scratch.

Our first step in this is showing 

One subset direction is easy: the closed convex hull of projections of nirvana-free stuff must all be in  by consistency (which we've shown), and all of it must be nirvana-free. Now for the reverse direction. Let . By Lemma 27, this point wasn't added in the closure, so it has a preimage point . Using all our nice conditions for , we can invoke Lemma 21 to get that , so we can fix a sequence  limiting to , where each  shatters into  that came from some  that's nirvana-free and lies in the associated set of a full policy above .

Updating the  produces a sequence  which is nirvana-free, in , and limits to  by continuity.

Updating the  into  which lie in , projecting down to get , and mixing them, produces , by our usual Lemma 28 argument.

This witnesses that all the  lie in 

Thus,  lies in the closed convex hull of projections of nirvana-free stuff from above. What do we do with this? Well, now we can invoke Lemma 20, since we have Hausdorff-continuity proved, to conclude that  is closed, so we didn't really need the closed convex hull (which we've already shown is the same as )

And we now know that 

Now, we can take a minimal extreme nirvana-free point  in . It must be minimal and extreme and nirvana-free in the original set. If it weren't minimal in the original set, all minimals below it would be nirvana-free too, witnessing its non-minimality in the restricted set. And if it weren't extreme in the original set, then the points that mix to make it must all be nirvana-free too, since it's nirvana-free, so we'd have a witness of non-extremeness in .

Ok, so it's extreme and nirvana-free. It must also be extreme in the convex hull set, but, since it can't be produced by mixtures, there's a  in some  that projects down to , establishing the extreme point condition.

That just leaves causality and pseudocausality.

Condition C: Causality

Ok, we pick a  and a point  in . Can we make an outcome function for everything that includes our point? By our proof of full causality in the first part of the Isomorphism theorem (finite-to-full direction), this can be done as long as all the other conditions are met and we can make an outcome function for any point in any . So, let's just establish finitary causality. Fix some  and some .

Since  is in the updated set, there's a sequence  that limits to  that we don't need closure to get. There's a  and  bound on this sequence because it converges; call those bounds  and . Now, we can take a  that updates to . We can use causality for  to get an outcome function for .

We don't have to worry about nirvana-off-h, because  has no nirvana off-h, and the projection of  down to  preserves the off-h part, and is nirvana-free off-h, and everything above that (which is the only thing that determines the update) must also match the off-h part and certify that it's nirvana-free.

Updating an outcome function back in produces an outcome function for  by Lemma 28 (update then project equals project then update). Said outcome function for  maps  to . We can restrict it to just stubs, to get an outcome function over stubs.
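Here's a sketch of why this works, in assumed notation ($\mathrm{of}$ for the outcome function obtained for the pre-update hypothesis, $\pi^{\neg h}$ for the fixed off-h policy, $u$ for the raw update, and $\mathrm{pr}$ for projection; names are purely illustrative). Define the candidate post-update outcome function by

$$\mathrm{of}^u(\pi_{pa}) \;:=\; u\bigl(\mathrm{of}(\pi_{pa} \cup \pi^{\neg h})\bigr)$$

Then, for partial policies $\pi'_{pa} \ge \pi_{pa}$,

$$\mathrm{pr}\bigl(\mathrm{of}^u(\pi'_{pa})\bigr) = \mathrm{pr}\bigl(u(\mathrm{of}(\pi'_{pa} \cup \pi^{\neg h}))\bigr) = u\bigl(\mathrm{pr}(\mathrm{of}(\pi'_{pa} \cup \pi^{\neg h}))\bigr) = u\bigl(\mathrm{of}(\pi_{pa} \cup \pi^{\neg h})\bigr) = \mathrm{of}^u(\pi_{pa})$$

using Lemma 28 for the middle equality and the projection-coherence of $\mathrm{of}$ for the next, so the candidate has the coherence an outcome function needs.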

So, proceeding in this way, we get a sequence  of outcome functions for the stubs of . Remember, outcome functions must match  and