Epistemic status: I expect some people to say "this is obvious and trivial", and others to say "this makes no sense at all".

One fundamental difference between E.T. Jaynes’ probability textbook and most others is the emphasis on including our model in the prior information. When we write an expression like P[X=2|Y=3], that’s really shorthand for P[X=2|Y=3,M], where M is some probabilistic model in which X and Y appear. In practice, I’ve found this useful for two main use-cases: model comparison, and interventions/counterfactuals in causal models. This post is mainly about the latter.

The general idea is that an intervention in a causal model (e.g. do(Z=1)) takes in one model and returns a new model - it should really be written as M′=do(Z=1,M). When we write something like P[X=2|Y=3,do(Z=1)], that’s really shorthand for P[X=2|Y=3,do(Z=1,M)].

In order to make this all less hand-wavy, we need to make the model M a bit more explicit.

## What’s in a Model?

The simplest way to represent a probabilistic model is as a table of possibilities - more explicitly, a list of exhaustive and mutually-exclusive logic statements. If I roll a standard die and call the outcome X, then I’d explicitly represent my model as M=(P[X=1|M]=1/6)&…&(P[X=6|M]=1/6).
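As a concrete sketch (the representation here is assumed for illustration, not prescribed by the post), the die model M is just a table of mutually-exclusive, exhaustive outcomes with their probabilities:

```python
from fractions import Fraction

# M as an explicit table of statements "P[X=x | M] = 1/6", one per outcome.
M = {x: Fraction(1, 6) for x in range(1, 7)}

def prob(model, x):
    """Look up P[X=x | M] directly from the statements that make up M."""
    return model[x]

assert sum(M.values()) == 1          # outcomes are exhaustive
assert prob(M, 1) == Fraction(1, 6)  # P[X=1 | M] = 1/6
```

Note that "computing" a probability here is literally just looking up an axiom of M.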

In Probability as Minimal Map, we saw that P[X|P[X|Y]=p]=p. Interpretation: I obtain some data Y, calculate the probability P[X|Y]=p, then my computer crashes and I lose the data. But as long as I still know p, I should still assign the same probability to X. Thus: the probability of X, given P[X|Y]=p (but not given Y itself!) is just p.

(Note that I left the model M implicit in the previous paragraph - really we should write P[X|P[X|Y,M]=p,M]=p.)

Now let’s apply that idea to the expression P[X=1|M], with our die-model M=(P[X=1|M]=1/6)&…&(P[X=6|M]=1/6). Our given information includes P[X=1|M]=1/6, so

P[X=1|M]=P[X=1|P[X=1|M]=1/6,M]=1/6.

Representing models this way gives a much stronger logic-flavor to the calculations; our probability calculations are a derivation in an explicit logic. The axioms of that logic are the contents of M, along with the universal laws of probability (i.e. Bayes’ rule, sum rule, etc) and arithmetic.

## Causality & Interventions

In the case of a causal model, M would look something like

M=(G=(...graph...))&(P[X1|Xpa(1,G),M]=f1(X1,Xpa(1,G)))&…&(P[Xn|Xpa(n,G),M]=fn(Xn,Xpa(n,G)))

i.e. M gives a graph G and an expression for the probability of each Xi in terms of i’s parents in G. (This would be for a Bayes net; structural equations are left as an exercise to the reader.)

A do() operation then works exactly like you’d expect: do(Xi=1,M) returns a new model M′ in which:

- The arrows into node i in G have been removed
- fi has been replaced with the indicator function I[Xi=1] (or, for continuous Xi, δ(Xi−1)dXi)
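The two steps above can be sketched in code. The dict-based model representation below is an assumption for illustration (a parent list and a conditional-probability function per node); the post itself does not fix a data structure:

```python
# do(X_i = v, M): return a new model with i's incoming arrows removed and
# f_i replaced by the indicator function I[X_i = v].
def do(model, i, v):
    parents = dict(model["parents"])
    f = dict(model["f"])
    parents[i] = ()                              # cut the arrows into node i
    f[i] = lambda x, pa: 1.0 if x == v else 0.0  # indicator I[X_i = v]
    return {"parents": parents, "f": f}

# Example model: Z -> X, both binary; X usually copies Z.
M = {
    "parents": {"Z": (), "X": ("Z",)},
    "f": {
        "Z": lambda z, pa: 0.5,
        "X": lambda x, pa: 0.9 if x == pa["Z"] else 0.1,
    },
}

M2 = do(M, "X", 1)
# In M2, X has no parents and P[X=1] = 1 regardless of Z.
```

Note that do() leaves M untouched and returns a fresh M′, matching the point that an intervention is a map from models to models.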

Counterfactuals work the same way, except they’re limited to structural models - i.e. every nondeterministic node must be a root. As long as the model satisfies that constraint, a counterfactual is exactly the same as an intervention: if we have some data (X1,...,Xn)=(2,1,...,−0.3,6), then to run the counterfactual X3=1, we calculate P[X|(X1,X2,X4,...,Xn)=(2,1,...,−0.3,6),do(X3=1,M)]. If we do this with a non-structural model - i.e. if some nondeterministic node has parents - then we’ll find that the result is sometimes undefined: our axioms do not fully determine the probability in question.
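A toy example of a counterfactual in a structural model, where the only nondeterministic node U is a root (the specific equations here are made up for illustration):

```python
def forward(u, x_override=None):
    # Structural equations: U is the only nondeterministic (root) node;
    # X and Y are deterministic functions of their parents.
    x = u + 1 if x_override is None else x_override  # do(X = x_override)
    y = 2 * x
    return {"U": u, "X": x, "Y": y}

observed = forward(u=1)                       # data: U=1, X=2, Y=4
counterfactual = forward(u=1, x_override=1)   # keep U from the data, do(X=1)
# "Had X been 1, Y would have been 2."
```

Because every nondeterministic node is a root, the observed data pins down U, and the counterfactual is just an intervention run with that same U. If X's value were instead generated nondeterministically given a parent, the data would not determine what "would have happened", and the query would be underdefined.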

## Why Does This Matter?

Hopefully this all seems pretty trivial. Why belabor it?

There are a handful of practical applications where explicitly including the model is useful.

The most important of these is model comparison, especially the Bayesian approach to learning causal structure. Another application is scenarios involving a mix of different experimental interventions and observational studies.

But the main reason I’m bringing it up is that agenty things have the type signature (A -> B) -> A. In English: agenty things have some model (A -> B) which predicts the results (B) of their own actions (A). They use that model to decide what actions to perform: (A -> B) -> A.

In the context of causal models, the model (A -> B) is our causal model M. (A -> B) -> A means performing some computation on M in order to find A - which is a lot simpler with an explicit representation of M.
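The type signature can be sketched as a toy function; the names `agent` and `utility` and the quadratic example model are hypothetical, not from the post:

```python
# (A -> B) -> A: an "agent" takes a model mapping actions to predicted
# results and returns the action whose predicted result it rates highest.
def agent(model, actions, utility):
    # model   : A -> B  (predicts the result of each action)
    # returns : A       (the chosen action)
    return max(actions, key=lambda a: utility(model(a)))

# Example: the model predicts a squared error; the agent minimizes it.
best = agent(model=lambda a: (a - 3) ** 2, actions=range(10),
             utility=lambda b: -b)
# best == 3
```

In the causal-model setting, `model` would be (a query against) M itself, which is why an explicit, manipulable representation of M helps.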

Of course, we could just use the usual structural equation representation without explicitly making everything a statement in some logic - but then we’d have many more distinct kinds of objects floating around. By making everything a logic statement, we unify the formulation. Statements like “counterfactuals are underdefined for Bayes nets” become statements about provability within our logic, and can themselves be proven. Also, by formulating the model in terms of logic statements, we get a single unified language for probability queries P[X|Y] - the models M, M′, etc. can be represented and manipulated in the same format as any other information.
