This post is an appendix to "Infra-Bayesian physicalism: a formal theory of naturalized induction".

Proposition 2.16: Consider some , , and . Define by . Then,

We can prove the second subset inclusion directly as a corollary of Proposition 2.10: just let the of Proposition 2.10 be the projection function , so that just leaves the first subset inclusion direction. If you've seen the proofs so far, you know the pattern: we show subset inclusion via expectations of functions and inequalities instead, and the proofs all proceed by transforming the expectations until we get a maximum over contribution expectation values, which is always where the hard part of proving the inequalities shows up. So, let's just get that part over with; an interested reader can work out the rest with the previous proofs as a guide. Unusually, we'll be keeping track of identity functions here.
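(For anyone who hasn't seen the trick before, here's a schematic sketch of why subset inclusion of ultradistributions is equivalent to an inequality of expectations. This suppresses the precise definitions from the parent post; the content is just convexity plus separation.)

```latex
% Schematic, assuming ultradistributions are closed convex sets of
% contributions and \mathbb{E}_{\Phi}[f] := \max_{\theta \in \Phi} \mathbb{E}_{\theta}[f].
\[
  \Phi \subseteq \Psi
  \iff
  \forall f \colon \mathbb{E}_{\Phi}[f] \le \mathbb{E}_{\Psi}[f].
\]
% Left to right: a max over a subset can only be smaller. Right to left:
% if some \theta \in \Phi \setminus \Psi existed, Hahn-Banach separation
% would give a function f with \mathbb{E}_{\theta}[f] > \mathbb{E}_{\Psi}[f],
% contradicting the right-hand side.
```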

Plugging in some , and doing our usual activities to get every term into the appropriate form, we can get this result if we manage to show that So, to establish this, we'll show that, given some , we have , and that because, if we show that, then it means that is a rich enough set for the right-hand side of the equation to match anything the left-hand side can put out.

First off, is pretty trivial to show. The only difference between the two processes is that the coordinate of is discarded immediately on the right-hand side, and it's preserved for one step and then discarded on the second step for the left-hand side.

Now for our inequality of interest. Let , and we're trying to show that First off, we show the support condition for , which is somewhat nontrivial this time around. We start off with a guarantee that . This happens iff And so, we get that is guaranteed for that pushforward, support condition established.

Endofunction condition time. It's important to remember that we want to treat as the computation side of things, and as the environment side of things, for the bridge transform we're working with. and . Begin. Let's unpack precisely what that set is. And we can rewrite the endofunction a little bit And finally apply our endofunction condition, since we've now got the function in a form that's treating as part of the computational universe... And we're done, this establishes our desired result.

Proposition 2.17: is a continuous function of .

The way this proof will work is by describing a composition of functions that makes from , and then showing that each of these functions is continuous, if is a finite set.

Claim: The bridge transform of some is equal to (using to denote restricting an ultradistribution to the event and to denote the inverse of said function, mapping an ultradistribution on to the largest ultradistribution that could have produced it via restriction) Breaking down the unfamiliar notation, the type of is , just the usual projection. That asterisk up top is pullback along that function. The type of is . And is pullback along the function given by .

Let's unpack the exact conditions that cause a to lie in the set First off, a is in this set iff it is supported over the event , and it lies in the set Which occurs iff is supported over the event , and for all , lies in the set Which occurs iff is supported over the event , and for all , lies in the set Which occurs iff is supported over the event , and for all , lies in the set

Now, is just doing a little bit of type conversion, so we're justified in ignoring it. Anyway, the previous thing occurs iff is supported over the event , and for all , .

Which happens iff is supported over the event and for all and , However, unpacking the left-hand side, we get Which is the exact condition for to lie in the bridge transform. So, we have an equivalence.

Now, since we've phrased the bridge transform as We just need to establish that when all the sets are finite, then pullbacks are continuous, pushforwards are continuous, un-restrictions are continuous, intersections are continuous, and restrictions are continuous. Then, this would just be a particularly fancy continuous function, and accordingly, if limits to , then would limit to .

Let's establish that when the sets are finite, pullbacks are continuous. Let , and and be finite sets, and . Then, we have With the convention that maximizing over the empty set produces a value of 0. That is an alternate phrasing of pullback. We can then go Admittedly, this isn't quite what our usual modified KR metric looks like. The reason we can do this is because we're just dealing with functions in , so the norm part of the modified KR metric doesn't matter, and since our sets are finite, we can say that all points are distance 1 from each other, so all functions are 1-Lipschitz, and then the two metrics coincide. So, pullback along any function is continuous.
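To make the finite-set reasoning concrete, here's a small numerical sketch. Everything in it is an invented toy: the representation of an ultradistribution as a finite list of contributions, the specific sets, the map g, and the sampled approximation of the metric are all illustrative assumptions, not taken from the post.

```python
# Toy check that pullback is 1-Lipschitz (hence continuous) on finite sets,
# under assumed conventions: an ultradistribution is a finite list of
# contributions (sub-probability mass functions), E_Psi[f] is the max over
# contributions of the expectation, and pullback along g: X -> Y acts on
# expectations via E_{g*Psi}[f] = E_Psi[y -> max_{x in g^-1(y)} f(x)],
# with the max over an empty fiber taken to be 0.
import random

X = ["x1", "x2", "x3"]
Y = ["y1", "y2"]
g = {"x1": "y1", "x2": "y1", "x3": "y2"}

def expect(psi, f):
    """E_Psi[f]: max over contributions theta of sum_y theta(y) * f(y)."""
    return max(sum(theta[y] * f(y) for y in theta) for theta in psi)

def pullback_expect(psi, f):
    """E_{g*Psi}[f] via the 'max over the fiber' reformulation."""
    def f_pushed(y):
        return max((f(x) for x in X if g[x] == y), default=0.0)
    return expect(psi, f_pushed)

def dist(psi1, psi2, samples=2000):
    """Sampled approximation of the KR distance: sup over f: Y -> [0,1] of
    |E_1[f] - E_2[f]|. (With all points distance 1 apart, every such f is
    1-Lipschitz, so this matches the modified KR metric on [0,1]-functions.)"""
    d = 0.0
    for _ in range(samples):
        h = {y: random.random() for y in Y}
        d = max(d, abs(expect(psi1, h.__getitem__) - expect(psi2, h.__getitem__)))
    return d

# Two nearby ultradistributions on Y:
psi1 = [{"y1": 0.5, "y2": 0.3}, {"y1": 0.1, "y2": 0.8}]
psi2 = [{"y1": 0.45, "y2": 0.33}, {"y1": 0.1, "y2": 0.82}]

f = {"x1": 0.2, "x2": 0.9, "x3": 0.4}.__getitem__
gap = abs(pullback_expect(psi1, f) - pullback_expect(psi2, f))
print(gap, "<=", dist(psi1, psi2))  # pullback moved the two no further apart
```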

For pushforward, it's easy because, if , then we've got For showing restrictions continuous, for the set that we're updating on, For intersections... that will take a bit more work. We'll have to use the equivalent formulation of closeness, that limits to iff the Hausdorff distance between the corresponding sets (according to the generalized KR metric) limits to 0. So, our task is to assume that limits to , and limits to , and show that limits to . The bound we'll manage to prove is that Where is the number of elements in the finite set . Here's the basic argument. For any particular point in the set , there's a nearby point in (since the Hausdorff distance is low) with only measure moved around or deleted. So, in particular, if all the measure moved or deleted was just deleted from instead, then that resulting contribution would be below the nearby contribution in that we picked, and so it would lie in as well due to downwards closure.

So, in particular, if and only have a Hausdorff distance of , then, taking any contribution in and subtracting measure from all points (if possible; if not, just remove measure till you're at 0) is guaranteed to make a point in , and vice-versa.

And a corollary of that is that, given any contribution in , the "subtract measure from each point" contribution is in , also in , and at a maximum distance of from the original contribution. And this argument can be reversed to show that the limit of the intersections is the intersection of the limits (because Hausdorff distance between the two goes to 0), so we do in fact have intersection being continuous.
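Spelled out as a reconstruction (the exact constant lives in the elided display, so treat the shape of the bound, not the constant, as the point):

```latex
% Assume d_H(\Phi,\Phi') \le \epsilon and d_H(\Psi,\Psi') \le \epsilon, with
% d_H the Hausdorff distance w.r.t. the modified KR metric on the finite
% set X. Given \theta \in \Phi \cap \Psi, set
\[
  \theta_{\epsilon}(x) := \max(\theta(x) - \epsilon,\ 0).
\]
% Some \theta' \in \Phi' has d(\theta,\theta') \le \epsilon, i.e. at most
% \epsilon of measure got moved or deleted, so \theta'(x) \ge \theta(x) - \epsilon
% everywhere, so \theta_{\epsilon} \le \theta' pointwise and downwards
% closure puts \theta_{\epsilon} \in \Phi'. Symmetrically \theta_{\epsilon} \in \Psi'. Also
\[
  d(\theta, \theta_{\epsilon})
  \le \sum_{x \in X} \big(\theta(x) - \theta_{\epsilon}(x)\big)
  \le |X|\,\epsilon,
\]
% and running the same argument from the primed side gives a bound of the form
\[
  d_H(\Phi \cap \Psi,\ \Phi' \cap \Psi')
  \le |X| \cdot \max\big(d_H(\Phi,\Phi'),\ d_H(\Psi,\Psi')\big).
\]
```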

And that just leaves un-restricting. Again, this will take a Hausdorff-distance argument. Fixing some contribution in , it can be broken down into an on-E part , and an off-E part . When you restrict to E, then . Since is within of , there's some that's within of . Then, let your point in be (if there's slightly more than 1 measure there, delete measure from , or all the measure if there's less than present). It's close to because is close to , the other component of it is unchanged, and maybe we deleted a little bit of excess measure which didn't do much.

This line of argument shows that being close to the limit is sufficient to establish that the un-restrictions of the two of them are comparably close together. So we have continuity for that, which is the last thing we needed.
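As a reconstruction in symbols (with the same grain-of-salt caveat as the intersection case):

```latex
% Fix \theta in the un-restriction of \Theta_n, and split it into its
% on-E and off-E parts,
\[
  \theta = \theta_E + \theta_{\neg E},
\]
% so restricting to E gives \theta_E \in \Theta_n. If d_H(\Theta_n, \Theta) \le \epsilon,
% there's \theta'_E \in \Theta with d(\theta_E, \theta'_E) \le \epsilon, and the
% candidate point in the un-restriction of \Theta is
\[
  \theta' := \theta'_E + \theta_{\neg E}
\]
% (trimming mass off \theta_{\neg E} if the total now exceeds 1). Then
% d(\theta, \theta') is at most \epsilon plus the trimmed excess, which is
% itself at most \epsilon, so un-restriction moves Hausdorff distance by at
% most a constant factor.
```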

Since we wrote the bridge transform as a sequence of continuous functions, we know it's continuous (as long as all the involved sets are finite).

Proposition 3.1: Let be a finite poset, and downward closed. Define by . Observe that is always non-decreasing. Then, .

Proof: Pick a s.t. . Ie, a maximizing contribution. Let be defined as . Ie, it moves a point down to somewhere below it where it can attain the highest value according to . Now, consider . It's present in because was, by assumption, downwards closed, and we just moved all the measure down.

Now, we have And so, all inequalities must be equalities, proving that . In order, the connectives were: unpacking definitions, using downward closure to conclude that , unpacking pushforwards, unpacking the definition of , using that applying a function to the argmax of inputs to that function just makes the max of the function, folding the definition of back up, using that was selected to maximize , and applying monotonicity. Done!
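Here's a toy verification of the two facts the proof leans on, with an invented poset, function, and contribution; the conventions (contributions as mass functions, expectations as weighted sums) are assumptions matching the informal descriptions above.

```python
# Toy check of the Proposition 3.1 argument: f*(x) = max_{y <= x} f(y), and
# s(x) picks a point below x attaining that max. We verify that s moves mass
# only downward (so the pushforward stays in a downward-closed set) and that
# E_theta[f*] = E_{s_* theta}[f], which is the equality chain in the proof.
from itertools import product

points = list(product([0, 1], repeat=2))               # the poset {0,1}^2
leq = lambda a, b: all(x <= y for x, y in zip(a, b))   # componentwise order

f = {(0, 0): 0.3, (0, 1): 0.9, (1, 0): 0.1, (1, 1): 0.5}
theta = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}  # a contribution

f_star = {x: max(f[y] for y in points if leq(y, x)) for x in points}
s = {x: max((y for y in points if leq(y, x)), key=f.get) for x in points}

assert all(leq(s[x], x) for x in points)   # s only ever moves points down

pushforward = {y: 0.0 for y in points}     # s_* theta
for x, mass in theta.items():
    pushforward[s[x]] += mass

E = lambda th, g: sum(th[x] * g[x] for x in points)
assert abs(E(theta, f_star) - E(pushforward, f)) < 1e-9
print(E(theta, f_star), E(pushforward, f))  # equal, as the proof claims
```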

Proposition 4.1: Consider some , , a relation and a PUCK over . Let . Then,

First off, I'm not terribly picky about variable ordering, so I'll just write our desired proof target as The way we'll do this is by establishing the following result. For all monotone , we have Why does that suffice? Well, assume hypothetically that the result held. Since the inequalities go in a circle, we have equality for all monotone functions. And then, for some non-monotone function , we can go and swap out for to show the other equality, and then we'd have equality of the three ultradistributions on all functions, so they're equal.

For the equalities in the above equation, the first one arose because of Proposition 2.4 (bridge transforms are always downwards closed) and Proposition 3.1 (downwards-closed things let you swap out for and it doesn't affect the value). The second equality arose because is a monotone function and by assumption, we have equality for monotone functions. The third equality would arise because taking the downwards closure doesn't affect the expectation value of monotone functions. If you add a bunch of contributions made by measure flowing down, that's just strictly worse from the perspective of a monotone function and doesn't change expectation value. And the fourth equality arises from Proposition 3.1 again.
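Schematically, the chain for a non-monotone f looks like this (suppressing the downwards-closure bookkeeping):

```latex
% \Phi, \Psi downward closed with equal expectations on monotone functions,
% f arbitrary, f^*(x) := \max_{y \le x} f(y) its monotone envelope.
\[
  \mathbb{E}_{\Phi}[f]
  \overset{(3.1)}{=} \mathbb{E}_{\Phi}[f^{*}]
  \overset{\text{monotone}}{=} \mathbb{E}_{\Psi}[f^{*}]
  \overset{(3.1)}{=} \mathbb{E}_{\Psi}[f],
\]
% so equality on monotone functions plus downward closure forces equality
% on all functions, and hence equality of the ultradistributions.
```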

So, we just need to prove the following three inequalities, for monotone functions . The first one is easily addressable by Proposition 2.7. By Proposition 2.7 and the definition of , we have And so, for monotone functions , we have Done.

Now to show our second inequality. Unpack the definition of the set Unpack the definition of The condition is equivalent to . After all, if , the distribution lies in , so would certify that . And if , then no matter the distribution in or kernel selected from , if gets picked, then the kernel selected from isn't going to be making along with it. Since we have that iff characterization, is the union (and convex hull) of a bunch of for , so its support is equal to the union of the supports of the . Then, since each is a PoCK over , is the restriction of some measure to the set , which will be written as . And now we're about to get an inequality. is monotone, so making the associated set bigger (easier to fulfill the defining condition) should always increase the value of , and by monotonicity, increase the expectation value, so we get Then restate And pack back up as a semidirect product. And we have our second inequality established!

Now, onto the third inequality. Unpack the semidirect products And what top means And as for ... well, each is the convex hull of the various , for . So, the expectation for is the maximum expectation for the various , so we can rewrite as Pick a particular and that attain the maximal value Reexpress a little bit And pack this back up as a semidirect product And then we'll be showing that this contribution lies in . Once we've done that, we can go And we'd be done, having proven the third inequality and the last one to finish up the proof. So, now our proof target switches to showing that . We can show this if we show the support condition and the endofunction condition.

For the support condition, we have And then we use that the are all of the form "take this measure, restrict it to ", to get Unpacking the definitions, we get And so, this contribution is indeed supported on pairs s.t. .

Now for the endofunction condition. As usual, fix an and a . Unpack the semidirect product Plug in the Dirac deltas Reexpress the set membership criterion a bit And the contribution at the start Distribute it in as an indicator function. Pull the other indicator function out. Rewrite with Use an inequality to get rid of the indicator function Rewrite it a bit Swap out for , the latter is larger Swap out for , the latter is larger Abbreviate And bam, endofunction condition is shown, the entire proof goes through now.

Corollary 4.3: Suppose that for any and s.t. , it holds that . That is, the observations predicts to receive from the computer are consistent with the chosen policy. Let be a Cartesian loss function and a policy. Then,

I'm going to be proceeding very cautiously here. First off, make our value visually distinct by swapping it out for Now, by the identifications we made earlier, we can identify with , the space of policies. Using that to unpack the function a little bit, we have Now, we note that intersecting with top of a particular set is equivalent to updating on the indicator function for that set. Using definition 1.5 to unpack , we get Apply that is "what would the agent do on if the agent is copying the behavior of ", so we can rephrase as: Pull off the projection, and use for a destiny in . At this point, we use that , and that is a PUCK over and Proposition 4.1 to go Before we can remove the downwards closure, we'll want to verify the function is monotone. So, we'll want to start unpacking the physicalist loss next. Applying definition 3.1, and using instead of to remember it's a destiny, we have Next up is unpacking . Using definition 3.1, it's At this point, we can, again, treat the same as . And now we need to take a moment to show that gets smaller when gets larger. Applying definition 1.5, the event unpacks as Now, if becomes a larger set, then it gets harder for the first condition to be fulfilled, so the set shrinks. Now, since this happens, it means that if gets bigger, it gets more difficult for the prerequisite of the implication in the indicator function to be fulfilled, so the implication is more likely to hold. Further, the minimization is taking place over a smaller set, so the loss goes up. So our function is monotone in , and we can remove the downwards closure. Unpacking the semidirect product, it is Substituting in the Dirac delta everywhere that is, we get Now, is the set of policies s.t. . The "this policy is consistent with this destiny" relation. Also let's swap out for maximization Now, we're going to try to address that minimum, and show that the only that fulfill the conditions are exactly those . This requires showing that is a sufficient condition to fulfill the relevant properties, and then to show that implies a failure of one of the properties.

So, first up. Assume . Then, for any , and must imply that , that's what policy consistency means. Also, unpacks as the two conditions As for the first condition, clearly, if is consistent with , it's consistent with because , and so it must be consistent with any prefix of , so the first condition holds.

For the second condition, is a valid choice, because we assumed , and occurs always, because always being supported on s.t. was one of our problem assumptions.

So, we have one implication direction down. Now for the reverse implication direction. Assume . Then there are two possibilities. The first possibility is that first diverges from on an observation. The second possibility is that first diverges from on an action.

For the first possibility, it's possible to make two policies which are consistent with but also differ in their actions on history , because isn't a prefix of if first differs from on an observation.

For the second possibility, it's ruled out by either the condition for that goes or the extra condition that applied to the first a-history prefix that deviates from , because implies that must be the action which dictates, not the action that deviates from .

And that establishes the other direction of the iff statement.

Thus, we can swap out our fancy minimization for just minimizing over the . This minimization is attained by selecting itself. So then it turns into At this point, what we'll do is show that an upper bound and a lower bound on the value of this term are the same. Going from upper bound to lower bound, we start with At this point, we'll use that is a PUCK, so there's a set of environments (PoCKs) that is generated from, so we can go: Now pull the indicator function back out. Now we must show that this is a looser constraint than what was previously in our indicator function to proceed further. So our next order of business is showing that, certainly, Let be one of the history prefixes of some in the support of . The two conditions for are fulfilled, because they are For the first condition, if , then , and so if is consistent with , it must take the same action in response to , the action that commands, . So that's fulfilled. For the second condition, let be . holds, and so holds certainly, because is supported on s.t. .

So, for all in the support of , . Since we assumed our forall statement as prerequisite, this means that for all , . And means . Since mimics for all history prefixes of , this means , ie .

So, since this is a looser constraint, when we were previously at we can proceed further to Which is our value we're trying to sandwich. Now, at this point, plug in and get And bam, we've sandwiched our term between on both sides, and so the result follows.

Proposition 4.2: Let , and be finite sets, a relation, a -PoCK over and . Then, there exist and s.t. for all , is a PoCK over , and for all , . Moreover, suppose that and are both as above. Then,

Our first order of business is establishing that there's even a and that have those effects at all. Here's a way to define them. Where is the measure on that is associated with, ie, must be true for some because is a Z-PoCK over Q. And, will be defined as: With those definitions in place, it's easy to establish that . We can just fix an arbitrary pair and go And we're done with showing that such functions exist in the first place. Well, as long as we check that and behave accordingly. First off, being a contribution follows from being a Z-polycontribution, and the definition of Z-polycontributions. Also, to show that is a PoCK over Q, we need to show that there's a s.t. , and that always has measure at most 1.

In order to do this, define Clearly, you get from restricting this to , because we have And we're done. And also, the measure is , because and, skipping over a few routine steps, And we're done, we figured out how to decompose into and .

Now for the second half of the proof. The first thing to establish is that, for all , we have . This occurs because, for all , And then by symmetry, the exact same holds for and , both were declared to be equal to . Now that this result is in place, we can begin. Now, we do something odd. We can reexpress this as Basically, what's going on here is that we can swap out the contribution for the counting measure (1 measure on each distinct point) and just scale down the expectation values accordingly. It's pretty much the same way that you can think of (expectation of w.r.t ) as (expectation of w.r.t the counting measure). Now, since is homogeneous, we can move constants in or out of it, to get Now, at this point, we can use that , to get And just back up and reverse everything. And we're done!
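In the simplest case, the counting-measure trick is just the following identity:

```latex
% For a contribution \theta on a finite set X and any f : X \to \mathbb{R},
\[
  \mathbb{E}_{\theta}[f]
  = \sum_{x \in X} \theta(x)\, f(x)
  = \mathbb{E}_{\#}\big[x \mapsto \theta(x) f(x)\big],
\]
% where \# is the counting measure (mass 1 on each point of X). Swapping
% \theta for \# while folding \theta into the integrand changes nothing,
% which is the rescaling move used above.
```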

Lemma 4: Let , and be finite sets, a relation, -PUCKs over , and . Then, is also a -PUCK over , and

Our first order of business is establishing that the mix of Z-PUCKs over Q is a Z-PUCK over Q. Here's what we'll do. We'll define a family of kernels, show that they're all Z-PoCKs, and that said family makes a Z-PUCK that's equal to the mix of and .

So, let be the set of Z-PoCKs associated with , and be the set of Z-PoCKs associated with . Elements of these sets are and . Define as .

By Definition 4.5, in order to establish that these are Z-PoCKs over Q, we need to make an appropriate choice of . In particular, the associated with is . It fulfills definition 4.5 because By unpacking our definition, using how mixes of kernels work, applying definition 4.5 for and , and then just doing some simple regrouping and packing the definition back up, we get our result.
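Assuming, as in the earlier propositions, that a PoCK's kernel is the restriction of an associated measure to the relevant set (the characterization used in Proposition 4.1's proof), the regrouping is just linearity of restriction:

```latex
% Sketch: kappa^1, kappa^2 Z-PoCKs with associated mu^1, mu^2, so that
% kappa^i(z) = mu^i_z restricted to Q_z, and the candidate for the mixture
% is mu := p mu^1 + (1-p) mu^2. Then
\[
  \big(p\kappa^1 + (1-p)\kappa^2\big)(z)
  = p\,\mu^1_z\big|_{Q_z} + (1-p)\,\mu^2_z\big|_{Q_z}
  = \big(p\,\mu^1_z + (1-p)\,\mu^2_z\big)\big|_{Q_z}
  = \mu_z\big|_{Q_z},
\]
% using only that restriction to the set Q_z is linear in the measure.
```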

But wait, we still need to show that is a Z-Polycontribution on Q. Again, this isn't too hard to show, with Definition 4.4. And bam, we have our inequality demonstrated, everything works out. Now we just need to show that this family of Z-PoCKs makes the Z-PUCK that's the mixture of the two. We'll establish equality by showing equality for all functions and all . Done, we've shown equality of the Z-PUCK with the mixture of other Z-PUCKs, establishing that the mixture of Z-PUCKs is a Z-PUCK.

That leaves establishing our relevant inequality. But before we do that, we'll be wanting a nice handy form for that asterisk operator to manipulate things with. Given some that's a Z-PUCK over Q, remember from the previous proof that a valid choice for and to break down is and, abbreviating things a little bit, we have So, we can get a pleasant-to-manipulate form for as follows. And proceed further And then we move the constant into since it's homogeneous, and then into the sum, and it cancels out with the fraction. This general form will be used whenever we need to unpack . Now, let's get started on the proof of our subset inclusion thingy. As usual, will be the set , and as we've shown, that's the set of Z-PoCKs associated with . Also, as we've already shown, the associated Z-polycontribution for is . This will be implicitly used in the following. Now we use our preferred unpacking of how that asterisk operator works. And unpack and appropriately. At this point, we use convexity of , since it's an ultradistribution. At this point, you can pack up things. Done!

Proposition 4.3: Let , and be finite sets, a relation, -PoCKs over , and . Then, is also a -PoCK over , and

Use Lemma 4, along with -PoCKs being a special case of -PUCKs.

Proposition 4.4: Let , and be finite sets, a relation and a -PUCK over . Denote . Define by . Then

Proof: As usual, when establishing inequalities with downwards closures, we only have to verify the result for monotone functions. So, we may assume that is monotone, and attempt to show that Remember, bridge transforms cash out as a maximum over contributions, so to show the first inequality, we'll need to build a contribution that matches or exceeds that first term, and that lands in the bridge transform of . For the second inequality, it's considerably easier: we just use our Lemma 2 to figure out what sort of sets the bridge transform is supported on, swap out the sets it's supported on for a bigger set upper bound, and bam, monotonicity of takes over from there. Let's unpack that first thing And at this point we unpack what is. And the . And then, can be broken down into some and , and that goes on both sides of as our previous proposition shows. Now we can start filling in some data. There's a maximizing , so we can substitute that in. That gives us a canonical choice for what and are. Making that substitution, And then, let be the function mapping each particular to the which maximizes . This lets us reexpress things as And now, we can start unpacking things a bit. And now we can write things as just a giant semidirect product. Now we'll show that this particular contribution lies in .

Checking the support condition, we want to check for sure that , ie, the event has measure 0. Let's begin. Substitute in the Dirac deltas. Unpack what is. Now, always occurs, so that indicator function is the same as just testing whether . Rephrasing things a little bit, Then, from proposition 4.2, we remember that is a PoCK over . Ie, for any particular , looks like a particular measure () restricted to . So, in particular, must be supported over . Put another way, with full measure, . So, this event failing has 0 measure. And we're done with that support condition.

Now to show the endofunction condition. As usual, we'll let , and let . Actually, for conceptual clarity, since can be viewed as a pair of functions and , we'll be using that formulation in our equation. As usual, we'll substitute in our Dirac deltas to simplify things. Substitute in what is. Now, if that "this pair of points lies in this set" indicator function goes off, then . So, we can substitute that into the term afterwards. And then get an inequality by making the indicator function less strict. And reexpress the indicator function a little bit At this point, we can use that is (ie, fixing and varying it just looks like you're taking one measure and conditioning on various ), so reexpress things as And then, view the indicator function as just more conditioning. And then, relax about what you're conditioning on. Rewrite it as a kernel again Pull out the Dirac delta Throw one more inequality at it Write it as top Write as a semidirect product Reexpress And we're done! Endofunction condition shown. Our relevant contribution is in . Let's see, where were we... ah right, we had shown that for all monotone , For some choice of and . We know this is in , so we get And we're done! One inequality done. That just leaves showing the second inequality, where . It's actually not too bad to show. Start with And then, we recall our Lemma 2, that if had its support entirely on tuples where in (for some ), then all the would be supported on pairs where . And then, swapping out for , by monotonicity of , produces a larger value.

To invoke this argument, our choice of will be , where . We do need to show that is supported on such tuples. And then use that since it's a PoCK in Q, to get Hm, we updated on a set, and are evaluating the indicator function for not being in the set, so that expectation is 0. Ok, so this means we can invoke Lemma 2. We were previously at So now we can invoke monotonicity and go And then invoke our endofunction property for the stuff in , letting be the identity function (and also occurs always) to establish a uniform upper bound of And we're done! Second inequality demonstrated.

Corollary 4.5: Suppose that for any , and s.t. , it holds that . That is, the observations predicts to receive from the computer are consistent with the chosen policy and 's beliefs about computations. Let be a Cartesian loss function and a policy. Define by . Then,

To a large extent, this will follow the proof of the previous corollary. We'll use for and for just the policy component.

I'm going to be proceeding very cautiously here. First off, make our value visually distinct by swapping it out for Now, by the identifications we made earlier, we can identify with , the space of policies and computations. Using that to unpack the function a little bit, we have Now, we note that intersecting with top of a particular set is equivalent to updating on the indicator function for that set. Using definition 1.5 to unpack , we get Throughout we'll be applying that is "what would the agent do on if the agent is copying the behavior of " (remember, part of the math is "what does the agent do in response to this history" and is our term for that chunk of the math), so we'll just always rephrase things like that and won't bother to say every time we do it. Pull off the projection, and use for a destiny in . Applying definition 3.1, and using instead of to remember it's a destiny, we have Next up is unpacking . Using definition 3.1, it's Now we'll show that only depends on , ie, the projection of from to , and that it gets smaller as gets larger, so the function above is monotone. Let's use definition 1.5 to unpack the event . It shouldn't be too hard to tell that getting larger makes this set smaller, and that, since the doesn't matter, this condition is really the same as So, we'll use to remind us that our function only depends on the projection of . And now we can pull that out as a projection! We're now using for our set of policies, instead of for our set of policy/computation pairs. To proceed further, we're going to need to adapt our previous result which gets an upper and lower bound on the bridge transform of suitable . Our first order of business is checking that getting larger makes the function larger, which is easy to check since getting larger cuts down on the options to minimize over (increasing value), and makes the antecedent of the implication harder to fulfill, which makes the implication as a whole easier to fulfill, so the indicator function is 1 more often. Now we can proceed further. Abstracting away a bit from our specific function, which will be swapped out for some which is monotone in that last argument, we have Monotonicity was used to go back and forth between the downwards closure and the raw form. and are as they were in Proposition 4.4, and would be the relation on telling you whether a policy is consistent with a destiny. Now, by Proposition 4.4, since the bridge transform of is sandwiched between those two values, and they're both equal, we have The first equality was just relocating the projection. The second equality was from the bridge transform being sandwiched between two equal quantities, so it equals all the stuff on our previous big list of equalities (we went with the middle one). Then just express as a semidirect product, and you're done. Applying this to our previous point of We can reexpress it as And start unpacking the semidirect product Now, we're going to try to address that minimum, and show that the only that fulfill the conditions are exactly those . This requires showing that is a sufficient condition to fulfill the relevant properties, and then to show that implies a failure of one of the properties.

The proof for this is almost entirely identical to the corresponding proof in the non-Turing-law case; there are no substantive differences besides one issue to clear up. We need to show that always being supported on the relation , for all (as one of our starting assumptions) implies that is supported on as well. Here's how we do it. We have . And then this ultracontribution (by the definition of -PUCKs) can be written as the convex hull of the union of a bunch of ultracontributions of the form , where is a -PoCK. So, if we can show all of these are supported on the relation , then the same holds for the convex hull of their union, ie, . By Proposition 4.2, we can reexpress this ultracontribution as , where , for any policy . Now, let's check the expectation value of the indicator function for being violated. Let's assume that the expectation of that indicator function is not zero. Then there must be some particular in the support of where it is nonzero, and some particular that attains that nonzero expectation value. So, there's a in the support of and s.t. and so this means that we have Because that is assigned nonzero measure, and then this reshuffles to Which, via Proposition 4.2, is But this contradicts that (and so all the ) were supported on the event , so we have a contradiction, and our result follows.

Now that that issue is taken care of, we can swap out our fancy minimization for just minimizing over the . This minimization is attained by selecting itself. So then it turns into At this point, we'll upper-and-lower-bound this quantity by . Let's begin. Here we're using Proposition 4.2 on being able to split up Z-PoCKs (which is) a certain way. This equality happens because you can always just pick as your choice of , and since can only produce destinies consistent with . Now, we can swap out for the measure we're conditioning on an event Reshuffle the indicator function back in Now we must show that this is a looser constraint than what was previously in our indicator function to proceed further. So our next order of business is showing that, certainly, Let be an arbitrary destiny in the support of , and be one of the history prefixes of . The two conditions for are fulfilled, because they are For the first condition, if , then , and so if is consistent with , it must take the same action in response to , the action that commands. So it is fulfilled. For the second condition, let be . holds, and holds certainly, because is supported on , as we've previously shown.

So, certainly, . Since we assumed our forall statement as prerequisite, this means that for all , . And means . Since mimics for all history prefixes of , this means , ie .

So, since this is a looser constraint, when we were previously at we can proceed further to which is the value we wanted to sandwich. Proceed further with And we've got the same upper and lower bound, so our overall quantity is And we're done.
