Finite Factored Sets: Polynomials and Probability

Fixed, Thanks.

Finite Factored Sets: Inferring Time

Yeah, also note that the history of X given Y is not actually a well-defined concept. There is only the history of X given y for y ∈ Y. You could define it to be the union of all of those, but that would not actually be used in the definition of orthogonality. In this case the relevant histories are all independent of the choice of y, but in general, you should be careful about that.

Yeah, fair point. (I did get this right in the summary
[https://www.lesswrong.com/posts/N5Jm6Nj4HkNKySA5Z/finite-factored-sets?commentId=Lu3ghHAHSXr5uQrRt]
; turns out if you try to explain things from first principles it becomes
blindingly obvious what you should and shouldn't be doing.)

Finite Factored Sets: Inferring Time

I think that works; I didn't look very hard. Your histories of X given Y and V given Y are wrong, but it doesn't change the conclusion.

Yeah, both of those should be {X,V}, if I'm not mistaken (a second time).

Finite Factored Sets: Orthogonality and Time

I could do that. I think it wouldn't be useful, and wouldn't generalize to subpartitions.

Garrabrant and Shah on human modeling in AGI

I don't know, the negation of the first thing? A system that can freely model humans, or at least perform computation indistinguishable from modeling humans.

Not modeling vs modeling. Thx.

Finite Factored Sets: Orthogonality and Time

Fixed, Thanks.

Finite Factored Sets

- Given a finite set S of cardinality n, find a computable upper bound on the largest finite factored set model that is combinatorially different from all smaller finite factored set models. (We say that two FFS models are combinatorially different if they disagree about the emptiness of some boolean combination of histories and conditional histories of partitions of S.) (Such an upper bound must exist because there are only finitely many combinatorially distinct FFS models, but a computable upper bound would tell us that tempo

A simple example of conditional orthogonality in finite factored sets

Yeah, this is the point: orthogonality is a stronger notion than just all values being mutually compatible. Any x1 and x2 values are mutually compatible, but we don't call them orthogonal. This is similar to how we don't want to say that x1 and (the level sets of) x1+x2 are orthogonal.

The coordinate system has a collection of surgeries: you can take a point and change the x1 value without changing the other values. When you condition on E, that surgery is no longer well defined. However, the surgery of only changing the x4 value is still well defi... (read more)

A simple example of conditional orthogonality in finite factored sets

Thanks for writing this.

On the finiteness point, I conjecture that "finite dimensional" (|B| is finite) is sufficient for all of my results so far, although some of my proofs actually use "finite" (|S| is finite). The example with real numbers is still finite dimensional, so I don't expect any problems.

Seems right. I still think it's funky that X_1 and X_2 are conditionally
non-orthogonal even when the range of the variables is unbounded.

Finite Factored Sets: Introduction and Factorizations

Fixed, Thanks.

Finite Factored Sets

Looks like you copied it wrong. Your B only has one 4.

Finite Factored Sets

I have not thought much about applying to things other than finite sets. (I looked at infinite sets enough to know there is nontrivial work to be done.) I do think it is good that you are thinking about it, but I don't have any promises that it will work out.

What I meant when I said that I think this can be done in a categorical way is that I think I can define a nice symmetric monoidal category of finite factored sets such that things like orthogonality can be given nice categorical definitions. (I see why this was a confusing thing to say.)

Finite Factored Sets

If I understand correctly, that definition is not the same. In particular, it would say that you can get nontrivial factorizations of a 5 element set: {{{0,1},{2,3,4}},{{0,2,4},{1,3}}}.

That misses element 4, right?
>>> from itertools import product
>>> B = [[{0, 1}, {2, 3, 4}], [{0, 2, 3}, {1, 3}]]
>>> list(product(*B))
[({0, 1}, {0, 2, 3}),
({0, 1}, {1, 3}),
({2, 3, 4}, {0, 2, 3}),
({2, 3, 4}, {1, 3})]
>>> [set.intersection(*tup) for tup in product(*B)]
[{0}, {1}, {2, 3}, {3}]
>>> set.union(*[set.intersection(*tup) for tup in product(*B)])
{0, 1, 2, 3}
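For what it's worth, rerunning the same check with the B from the parent comment copied correctly ({0, 2, 4} rather than {0, 2, 3}) shows what is going on: all five elements are now covered, but one intersection cell has two elements, so the two partitions do not jointly pin down unique elements the way a genuine factorization must. (This is my own sketch, not part of the original exchange.)

```python
from itertools import product

# The partition pair from the parent comment, with {0, 2, 4} copied correctly
B = [[{0, 1}, {2, 3, 4}], [{0, 2, 4}, {1, 3}]]

cells = [set.intersection(*tup) for tup in product(*B)]
print(cells)  # [{0}, {1}, {2, 4}, {3}]

# All five elements are covered...
assert set.union(*cells) == {0, 1, 2, 3, 4}
# ...but the cell {2, 4} is not a singleton, so choosing one part from each
# partition does not determine a unique element: not a factorization.
assert any(len(cell) != 1 for cell in cells)
```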

Finite Factored Sets

When I prove it, I prove and use (a slight notational variation on) these two lemmas.

- If , then for all .
- .

(These are also the two lemmas that I have said elsewhere in the comments look suspiciously like entropy.)

These are not trivial to prove, but they might help.

Finite Factored Sets

I think that you are pointing out that you might get a bunch of false positives in your step 1 after you let a thermodynamical system run for a long time, but they are only approximate false positives.

Finite Factored Sets

I think my model has macro states. In game of life, if you take the entire grid at time t, that will have full history regardless of t. It is only when you look at the macro states (individual cells) that my time increases with game of life time.

Finite Factored Sets

As for entropy, here is a cute observation (with unclear connection to my framework): whenever you take two independent coin flips (with probabilities not 0, 1, or 1/2), their xor will always have higher entropy than either of the individual coin flips.
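A quick numerical sanity check of this observation (my own sketch; the biases 0.3 and 0.4 are arbitrary choices avoiding 0, 1, and 1/2):

```python
from math import log2

def binary_entropy(p):
    """Entropy in bits of a coin that lands heads with probability p."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def xor_prob(p, q):
    """P(xor = 1) for independent coins with heads-probabilities p and q."""
    return p * (1 - q) + q * (1 - p)

p, q = 0.3, 0.4  # arbitrary biases in general position
assert binary_entropy(xor_prob(p, q)) > max(binary_entropy(p), binary_entropy(q))
```

The reason this always works: P(xor = 1) = p + q − 2pq, whose distance from 1/2 is |1 − 2p||1 − 2q|/2, which is strictly smaller than either coin's own distance from 1/2 whenever neither probability is 0, 1, or 1/2; and binary entropy increases as you move toward 1/2.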

Finite Factored Sets

Wait, I misunderstood, I was just thinking about the game of life combinatorially, and I think you were thinking about temporal inference from statistics. The reversible cellular automaton story is a lot nicer than you'd think.

If you take a general reversible cellular automaton (Critters, for concreteness), and have a distribution over computations in general position in which the cells of the initial condition are independent, the cells may not be independent at future time steps.

If all of the initial probabilities are 1/2, you will stay in the uniform distribution,... (read more)

Wait, can you describe the temporal inference in more detail? Maybe that's where
I'm confused. I'm imagining something like this:
1. Check which variables look uncorrelated
2. Assume they are orthogonal
3. From that orthogonality database, prove "before" relationships
Which runs into the problem that if you let a thermodynamical system run for a
long time, it becomes a "soup" where nothing is obviously correlated to anything
else. Basically the final state would say "hey, I contain a whole lot of
orthogonal variables!" and that would stop you from proving any reasonable
"before" relationships. What am I missing?

Finite Factored Sets

Yep, there is an obnoxious number of factorizations of a large game of life computation, and they all give different definitions of "before."

I think your argument about entropy might have the same problem. Since classical
physics is reversible, if we build something like a heat engine in your model,
all randomness will be already contained in the initial state. Total "entropy"
will stay constant, instead of growing as it's supposed to, and the final state
will be just as good a factorization as the initial. Usually in physics you get
time (and I suspect also causality) by pointing to a low probability macrostate
and saying "this is the start", but your model doesn't talk about macrostates
yet, so I'm not sure how much it can capture time or causality.
That said, I really like how your model talks only about information,
without postulating any magical arrows. Maybe it has a natural way to recover
macrostates, and from them, time?

Finite Factored Sets

I don't have a great answer, which isn't a great sign.

I think the scientist can infer things like: "Algorithms reasoning about the situation are more likely to know X but not Y than they are to know Y but not X, because reasonable processes for learning Y tend to learn enough information to determine X, but then forget some of that information." But why should I think of that as time?

I think the scientist can infer things like "If I were able to factor the world into variables, and draw a DAG (without determinism) that is consistent with the distribu... (read more)

Thanks for the response! Part of my confusion went away, but some still remains.
In the game of life example, couldn't there be another factorization where a
later step is "before" an earlier one? (Because the game is non-reversible and
later steps contain less and less information.) And if we replace it with a
reversible game, don't we run into the problem that the final state is just as
good a factorization as the initial?

Finite Factored Sets

I partially agree, which is partially why I am saying time rather than causality.

I still feel like there is an ontological disagreement in that it feels like you are objecting to saying the physical thing that is Alice's knowledge is (not) before the physical thing that is Bob's knowledge.

In my ontology:

1) the information content of Alice's knowledge is before the information content of Bob's knowledge. (I am curious if this part is controversial.)

and then,

2) there is in some sense no more to say about the physical thing that is e.g. Alice's knowledge beyon... (read more)

Not sure we disagree, maybe I'm just confused. In the post you show that if X is
orthogonal to X XOR Y, then X is before Y, so you can "infer a temporal
relationship" that Pearl can't. I'm trying to understand the meaning of the
thing you're inferring - "X is before Y". In my example above, Bob tells Alice a
lossy function of his knowledge, and Alice ends up with knowledge that is
"before" Bob's. So in this case the "before" relationship doesn't agree with
time, causality, or what can be computed from what. But then what conclusions
can a scientist make from an inferred "before" relationship?

Finite Factored Sets

Now on OEIS: https://oeis.org/A338681

Finite Factored Sets

is the event you are conditioning on, so the thing you should expect is that , which does indeed hold.

Thanks, that makes sense! Could you say a little about why the weak union axiom holds? I've been struggling to prove that from your definitions. I was hoping that hF(X|z,w) ⊆ hF(X|z) would hold, but I don't think that hF(X|z) satisfies the second condition in the definition of conditional history for hF(X|z,w).

Finite Factored Sets

I think I (at least locally) endorse this view, and I think it is also a good pointer to what seems to me to be the largest crux between my theory of time and Pearl's theory of time.

Finite Factored Sets

Hmm, I am not sure what to say about the fundamental theorem, because I am not really understanding the confusion. Is there something less motivating about the fundamental theorem than the analogous theorem about d-separation being equivalent to conditional independence in all distributions compatible with the DAG?

Maybe this helps? (Probably not.) I am mostly imagining interacting with only a single distribution in the class, and the claim about independence in all probability distributions compatible with the structure can be replaced with instead indep... (read more)

Finite Factored Sets

Makes sense. I think a bit of my naming and presentation was biased by my surprise at the fact that the sequence was not on OEIS.

I think I disagree about the bipartite graph thing. It only feels more natural when comparing to Pearl. The talk frames everything in comparison to Pearl, but if you are not looking at Pearl, I don't think graphs feel like the right representation here. Comparing to Pearl is obviously super important, and maybe the first introduction should just be about the path from Pearl to FFS, but once we are working within the FFS ontology... (read more)

I agree that bipartite graphs are only a natural way of thinking about it if you
are starting from Pearl. I'm not sure anything in the framework is really
properly analogous to the DAG in a causal model.

Finite Factored Sets

I was originally using the name Time Cube, but my internal PR center wouldn't let me follow through with that :)

Finite Factored Sets

Thanks Paul, this seems really helpful.

As for the name, I feel like "FFS" is a good name for the analog of "DAG", which also doesn't communicate that much of the intuition, but maybe doesn't make as much sense as the name of the framework.

I think FFS makes sense as an analog of DAG, and it seems reasonable to think of
the normal model as DAG time and this model as FFS time. I think the name made
me a bit confused by calling attention to one particular diff between this model
and Pearl (factored sets vs variables), whereas I actually feel like that diff
was basically a red herring and it would have been fastest to understand if the
presentation had gone in the opposite direction by deemphasizing that diff (e.g.
by presenting the framework with variables instead of factors).
That said, even the DAG/FFS analogy still feels a bit off to me (with the caveat
that I may still not have a clear picture / don't have great aesthetic
intuitions about the domain).
Factorization seems analogous to describing a world as a set of variables (and
to the extent it's not analogous it seems like an aesthetic difference about
whether to take the world or variables as fundamental, rather than a substantive
difference in the formalism) rather than to the DAG that relates the variables.
The structural changes seem more like (i) replacing a DAG with a bipartite
graph, (ii) allowing arrows to be deterministic (I don't know how typically this
is done in causal models). And then those structural changes lead to
generalizing the usual concepts about causality so that they remain meaningful
in this setting.
All that said, I'm terrible at both naming things and presenting ideas, and so
don't want to make much of a bid for changes in either department.

Finite Factored Sets

Here is a more concrete example of me using FFS the way I intend them to be used outside of the inference problem. (This is one specific application, but maybe it shows how I intend the concepts to be manipulated).

I can give an example of embedded observation maybe, but it will have to come after a more formal definition of observation (This is observation of a variable, rather than the observation of an event above):

Definition: Given a FFS F = (S,B), and A, W, X, which are partitions of S, where X = {x1, …, xn}, we say ... (read more)

Finite Factored Sets

I'll try. My way of thinking doesn't use the examples, so I have to generate them for communication.

I can give an example of embedded observation maybe, but it will have to come after a more formal definition of observation (This is observation of a variable, rather than the observation of an event above):

Definition: Given a FFS F = (S,B), and A, W, X, which are partitions of S, where X = {x1, …, xn}, we say A observes X relative to W if:

1) A ⊥ X,

2) A can be expressed in the form A = A0 ∨S ⋯ ∨S An, an... (read more)

Finite Factored Sets

Hmm, first I want to point out that the talk here sort of has natural boundaries around inference, but I also want to work in a larger frame that uses FFS for stuff other than inference.

If I focus on the inference question, one of the natural questions is the one I answer where I talk about grue/bleen in the talk.

I think for inference, it makes the most sense to think about FFS relative to Pearl. We have this problem when looking at smoking/tar/cancer: what if we carved the world into variables the wrong way? What if instead of tar/cancer, we had a varia... (read more)

Here is a more concrete example of me using FFS the way I intend them to be used outside of the inference problem. (This is one specific application, but maybe it shows how I intend the concepts to be manipulated.)
I can give an example of embedded observation maybe, but it will have to come after a more formal definition of observation (This is observation of a variable, rather than the observation of an event above):
Definition: Given a FFS F = (S,B), and A, W, X, which are partitions of S, where X = {x1, …, xn}, we say A observes X relative to W if:
1) A ⊥ X,
2) A can be expressed in the form A = A0 ∨S ⋯ ∨S An, and
3) Ai ⊥ W | (S ∖ xi).
(This should all be interpreted combinatorially, not probabilistically.)
The intuition of what is going on here is that to observe an event, you are being promised that you 1) do not change whether the event holds, and 3) do not change anything that matters in the case where that event does not hold. Then, to observe a variable, you can basically 2) split yourself up into different fragments of your policy, where each policy fragment observes a different value of that variable. (This whole thing is very updateless.)
Example 1: (non-observation)
An agent A = {L,R} does not observe a coin flip X = {H,T}, and chooses to raise either his left or right hand. Our FFS F = (S,B) is given by S = A×X, and B = {A,X}. (I am abusing notation here slightly by conflating A with the partition you get on A×X by projecting onto the A coordinate.) Then W is the discrete partition on A×X.
In this example, we do not have observation. Proof: A only has two parts, so if we express A as a common refinement of 2 partitions, at least one of these two partitions must be equal to A. However, A is not orthogonal to W given H, and A is not orthogonal to W given T (hF(A|H) = hF(W|H) = hF(A|T) = hF(W|T) = {A}). Thus we must violate condition 3.
Example 2: (observation)
An agent A = {LL,LR,RL,RR} does observe a coin flip X = {H,T}, and chooses to raise either his left or right hand. We can think of A as actually choosing a polic

Finite Factored Sets

I am not sure what you are asking (indeed, I am not sure whether you are responding to me or to cousin_it).

One thing that I think is going on is that I use "factorization" in two places. Once when I say Pearl is using factorization data, and once where I say we are inferring a FFS. I think this is a coincidence. "Factorization" is just a really general and useful concept.

So the carving into A and B and C is a factorization of the world into variables, but it is not the kind of factorization that shows up in the FFS, because disjoint factors should be independent ... (read more)

Finite Factored Sets

I think that the answers to both the concern about 7 elements and the desire to have questions depend on previous questions come out of thinking about FFS models, rather than FFS.

If you want to have 7 elements in W, that just means you will probably have more than 7 elements in S.

If I want to model a situation where some questions I ask depend on other questions, I can just make a big FFS that asks all the questions, and have the model hide some of the answers.

For example, let's say I flip a biased coin, and then if heads I roll a biased 6... (read more)

Finite Factored Sets

Yep, this all seems like a good way of thinking about it.

Finite Factored Sets

Hmm, I doubt the last paragraph about sets of partitions is going to be valuable, but the eigenspace thinking might be useful.

Note that I gave my thoughts about how to deal with the uniform distribution over 4 elements in the thread responding to cousin_it.

Finite Factored Sets

Ok, makes sense. I think you are just pointing out that when I am saying "general position," that is relative to a given structure, like FFS or DAG or symmetric FFS.

If you have a probability distribution, it might be well modeled by a DAG, or a weaker condition is that it is well modeled by a FFS, or an even weaker condition is that it is well modeled by a SFFS (symmetric finite factored set).

We have a version of the fundamental theorem for DAGs and d-separation, we have a version of the fundamental theorem for FFS and conditional orthogonality, and ... (read more)

Can you give some more examples to motivate your method? Like the
smoking/tar/cancer example for Pearl's causality, or Newcomb's problem and
counterfactual mugging for UDT.

Finite Factored Sets

It looks like X and V are independent binary variables with different probabilities in general position, and Y is defined to be X XOR V (and thus V = X XOR Y).
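One can sketch this numerically (my own check, with made-up biases 0.3 and 0.4): X comes out exactly independent of X XOR Y, which recovers V, while Y does not.

```python
from itertools import product

p_x, p_v = 0.3, 0.4  # arbitrary biases in general position

# Joint distribution over (x, y) where y = x XOR v and x, v are independent.
joint = {}
for x, v in product([0, 1], repeat=2):
    p = (p_x if x else 1 - p_x) * (p_v if v else 1 - p_v)
    joint[(x, x ^ v)] = p

def prob(pred):
    return sum(p for (x, y), p in joint.items() if pred(x, y))

# X is independent of X XOR Y (which equals V)...
for a in (0, 1):
    for s in (0, 1):
        assert abs(prob(lambda x, y: x == a and x ^ y == s)
                   - prob(lambda x, y: x == a) * prob(lambda x, y: x ^ y == s)) < 1e-12

# ...but Y is not independent of X XOR Y.
assert abs(prob(lambda x, y: y == 1 and x ^ y == 1)
           - prob(lambda x, y: y == 1) * prob(lambda x, y: x ^ y == 1)) > 1e-3
```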

Finite Factored Sets

Yeah, you are right. I will change it. Thanks.

Finite Factored Sets

I don't understand what conspiracy is required here.

X being orthogonal to X XOR Y implies X is before Y, we don't get the converse.

Well, imagine we have three boolean random variables. In "general position" there are no independence relations between them, so we can't say much. Constrain them so two of the variables are independent, that's a bit less "general", and we still can't say much. Constrain some more so the xor of all three variables is always 1, that's even less "general", now we can use your method to figure out that the third variable is downstream of the first two. Constrain some more so that some of the probabilities are 1/2, and the method stops working. What I'd like to understand is the intuition, which real world cases have the particular "general position" where the method works.
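The three-boolean construction above is easy to check directly. In this sketch (with made-up biases), C is forced so that A XOR B XOR C = 1; in general position, A is independent of A XOR C but C is not, so the criterion orients A before C, and setting P(A=1) = 1/2 makes both independences hold, destroying the orientation.

```python
from itertools import product

def joint(p_a, p_b):
    """Joint distribution over (a, b, c), with c forced so a ^ b ^ c == 1."""
    d = {}
    for a, b in product([0, 1], repeat=2):
        p = (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b)
        d[(a, b, 1 ^ a ^ b)] = p
    return d

def independent(d, f, g):
    """Check whether the bits f(outcome) and g(outcome) are independent under d."""
    def prob(pred):
        return sum(p for o, p in d.items() if pred(o))
    return all(
        abs(prob(lambda o: f(o) == u and g(o) == v)
            - prob(lambda o: f(o) == u) * prob(lambda o: g(o) == v)) < 1e-12
        for u in (0, 1) for v in (0, 1))

A = lambda o: o[0]
C = lambda o: o[2]
A_xor_C = lambda o: o[0] ^ o[2]

d = joint(0.3, 0.4)  # general position: orients A before C, not the reverse
assert independent(d, A, A_xor_C) and not independent(d, C, A_xor_C)

d_half = joint(0.5, 0.4)  # probability 1/2: both checks pass, no orientation
assert independent(d_half, A, A_xor_C) and independent(d_half, C, A_xor_C)
```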

What would such a distribution look like? The version where X XOR Y is
independent of both X and Y makes sense but I’m struggling to envisage a case
where it’s independent of only 1 variable.

Finite Factored Sets

The swapping within a factor allows rational probabilities to be considered in general position, and the swapping of factors allows IID samples to be considered in general position. I think this is an awesome research direction to go in, but it will make the story more complicated, since we will not be able to depend on the fundamental theorem, since we are allowing for a new source of independence that is not orthogonality. (I want to keep calling the independence that comes from disjoint factors orthogonality, and not use "orthogonality" to describe the new independences that come from the principle of indifference.)

Yeah, that's what I thought, the method works as long as certain "conspiracies"
among probabilities don't happen. (1/2 is not the only problem case, it's easy
to find others, but you're right that they have measure zero.)
But there's still something I don't understand. In the general position, if X is
before Y, it's not always true that X is independent of X XOR Y. For example, if
X = "person has a car on Monday" and Y = "person has a car on Tuesday", and it's
more likely that a car-less person gets a car than the other way round, the
independence doesn't hold. It requires a conspiracy too. What's the intuitive
difference between "ok" and "not ok" conspiracies?

Finite Factored Sets

So you should probably not work with probabilities equal to 1/2 in this framework, unless you are doing so for a specific reason. Just like in Pearlian causality, we are mostly talking about probabilities in general position. I have some ideas about how to deal with probability 1/2 (Have a FFS, together with a group of symmetry constraints, which could swap factors, or swap parts within a factor), but that is outside of the scope of what I am doing here.

To give more detail, the uniform distribution on four elements does not satisfy the compositional semigr... (read more)

Finite Factored Sets

If you look at the draft edits for this sequence that are still awaiting approval on OEIS, you'll find some formulas.

Eight Definitions of Observability

I am confused, why is it not identical to your other comment?

Because S1 and S2 are not a partition of the world here.
EDIT: but what we actually need in the proof is AssumeS1(C) & AssumeS2(C) & ⋯ ≃ AssumeS1∪S2(C) & … where the … do result in a partition, so I think this will work out the same as the other comment. I'm still not convinced about biextensional equivalence between the frames without the rest of the product.

Eight Definitions of Observability

I think I fixed it. Thanks.

Eight Definitions of Observability

This was annoying to fix, so I just made nonempty in the intro to the post.

Eight Definitions of Observability

Fixed, thanks.

Eight Definitions of Observability

Fixed, thanks.

Eight Definitions of Observability

Yep, changed it to .

I haven't yet figured out why it's true under ≃; I'll keep trying, but let me know if there's a quick argument for why this holds. (Default next step for me would be to see if I can restrict attention to the world S1∪S2 and then do something similar to my other comment
[https://www.lesswrong.com/posts/5R9dRqTREZriN9iL7/eight-definitions-of-observability?commentId=mQbB7jdnbvkDLMcgg]
.)

Committing, Assuming, Externalizing, and Internalizing

Fixed, thanks.

Note that the title is misleading. This is really about countable-dimension factored spaces, which is much better, since it allows for the possibility of something kind of like continuous time, where between any two points in time you can specify a time strictly between them.