If you haven't read part 1, read it here.

In this post we extend the notion of propositional graphs to cover any first order theory.

Consider some arbitrary first order theory; for example, a theory of integers.

We can consider the program to deal with two basic types: B={⊤,⊥} and N={0,1,2,...}. All the functions and transformation rules in the previous post apply to B. For example, ⊤ is of type B, and ∧ is a function from B²→B.

3 and 7 are of type N, and < is a function from N²→B. So 3<7 is a valid Boolean that can be treated just like p with respect to the propositional transformations, i.e. (3<7)∧p≡p∧(3<7).

We can also add S:N→N the successor function. Here are some more equivalence rules you might want.

Sα<Sβ≡α<β

α<0≡⊥

0<Sα≡⊤

¬(α<β)≡β<Sα
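To make these rules concrete, here is a minimal sketch (the names `less`, `num`, and the tuple encoding are illustrative, not from the post) that decides α<β for numerals built from 0 and S by applying the first three equivalences as left-to-right rewrites; the fourth, ¬(α<β)≡β<Sα, can then be checked against it.

```python
ZERO = ("0",)

def S(t):
    """Successor: wrap a numeral in one more S."""
    return ("S", t)

def num(n):
    """The numeral S(S(...S(0)...)) with n applications of S."""
    t = ZERO
    for _ in range(n):
        t = S(t)
    return t

def less(a, b):
    """Decide a<b for numerals, using only the equivalence rules above."""
    if a[0] == "S" and b[0] == "S":   # Sα<Sβ ≡ α<β
        return less(a[1], b[1])
    if b == ZERO:                     # α<0 ≡ ⊥
        return False
    return True                       # 0<Sα ≡ ⊤  (here a is 0 and b is a successor)
```

Each recursive call strips one S from both sides, so the rewrite terminates on any pair of numerals.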

In general, there is a finite list of types. Call them X1, X2, ..., Xn.

All functions have a fixed number of inputs, each with a fixed type, and output something of a fixed type. In our parse tree from before, this means that our edges have one of finitely many different colours, and that each symbol node has a fixed pattern of colours for its inputs and outputs. + is a node of type N and both its children are of type N. < is a node of type B, and both its children are of type N. ¬ is of type B and has one child of type B. Where the output type is not clear, it may be indicated by a subscript: <B(3N,5N).

Before we can get to predicate logic, we will introduce a set of implicit variable symbols VX for each type X in use. V = VX1 ∪ VX2 ∪ ... ∪ VXn is the set of all implicit variables. An implicit variable is an abstract symbol, just as ⊥ and 3 are. Specific implicit variables will be represented by capital roman letters (A, B, C). Substitution rules can also contain references to arbitrary members of some VX. E.g.

Sub(ΓN,δN,+N(αN,βN))≡+N(Sub(ΓN,δN,αN),Sub(ΓN,δN,βN))

Sub(ΓN,δN,ΓN)≡δN

Here Sub is the substitution function; the αN, βN, δN represent any expression of the appropriate type. The ΓN represents any single symbol from the set VN.

Applying these equivalences tells us that

Sub(X,5,+(3,X))≡+(Sub(X,5,3),Sub(X,5,X))≡+(Sub(X,5,3),5)

If 3 is a shorthand for S(S(S(0))) then the rules Sub(ΓN,δN,S(αN))≡S(Sub(ΓN,δN,αN)) and Sub(ΓN,δN,0N)≡0N allow deduction all the way to +(Sub(X,5,3),5)≡8
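As a sanity check, the Sub rules can be read as a recursive program on parse trees. This is an illustrative sketch (the tuple encoding and the name `sub` are my own, not the post's): the variable case applies Sub(Γ,δ,Γ)≡δ, and every other symbol simply distributes Sub over its children, as in the + and S rules above.

```python
def sub(gamma, delta, phi):
    """Sub(Γ,δ,φ): replace the implicit variable Γ by δ throughout φ.
    Trees are tuples ("symbol", child1, ..., childK); K may be 0."""
    # Sub(Γ,δ,Γ) ≡ δ : the variable itself is replaced
    if phi == gamma:
        return delta
    # Sub(Γ,δ,f(α,...)) ≡ f(Sub(Γ,δ,α),...) : distribute over any symbol;
    # leaves like 0 have no children, giving Sub(Γ,δ,0) ≡ 0 for free
    return (phi[0],) + tuple(sub(gamma, delta, c) for c in phi[1:])

X = ("var", "X")                  # an implicit variable of type N
expr = ("+", ("3",), X)           # the tree for +(3, X)
result = sub(X, ("5",), expr)     # the tree for +(3, 5)
```

Running `sub` on +(3,X) reproduces the chain of equivalences above: the + node distributes, the X child becomes 5, and the 3 child is left alone.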

A general first order theory makes use of = and ∀ as well as [/] for substitution. These do not need any special properties, and can be defined by the normal substitution rules. Technically there is a collection of Sub's, one for each pair of types.

We have 4 axioms, which can be turned into rules by making them all equivalent to ⊤.

∀B(ΓN,=B(ΓN,ΓN))≡⊤

∀B(ΓN,∀(ΔN,⟹B(=B(αN,βN),=B(ϕB,Sub(ΓN,ΔN,ϕB)))))≡⊤

⟹B(∀B(ΓN,ϕB),Sub(ΓN,αN,ϕB))≡⊤

⟹B(∀N(ΓN,⟹B(ϕB,ψB)),⟹B(∀N(ΓN,ϕB),∀N(ΓN,ψB)))≡⊤

And a rule that allows generalization.

∀N(ΓN,⊤)≡⊤

I believe that this set of substitution rules, or something like it, can produce predicate logic. I also believe that more substitution rules and symbols can be added to make any first order theory. The intuition is that you can take any proof in any first order logical theory and convert it into a series of substitution rules by and-ing in any axioms and applying modus ponens. Generalization can be done by turning ⊤ into ∀(X,⊤) and then using substitutions not dependent on X to turn it into ∀(X,p).

## Theorem

For any set of symbols with appropriate type signatures, and any recursively enumerable equivalence relation H (with the property that a≡b⟹f(a)≡f(b)), there exists a finite set of extra symbols and a finite set of substitution rules such that these equivalences hold.

## Proof Outline

Introduce the symbols 0, 1, end, so that binary strings can be stored as, e.g., 0(1(1(0(end)))).

Give them some new type Y, and add a load of type converters: symbols that take in an object of type Y and output an object of some other type.

Introduce + for concatenation.

+(0(αY),βY)≡0(+(αY,βY))

And the same for 1

+(end,αY)≡αY

By adding the rules

P(α,0(β))≡0(P(0(α),β))

P(α,1(β))≡0(P(1(α),β))

P(α,end)≡1(α)

Then if |α|=n, P(end,α)≡0ⁿ1 reverse(α).

This means that J(α,β)≡+(P(end,α),P(end,β)) can unambiguously express pairs of bitstrings as a single bitstring.
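A quick executable sketch of this pairing, working on ordinary Python strings rather than nested constructors (the decoder `unJ` is my own illustration, not part of the post): P prepends a 0 for every bit it consumes and a 1 at the end, so the leading run of 0s tells the decoder exactly where the first component stops.

```python
def P(acc, s):
    """P(α,β) from the rules above, with strings for bitstrings and "" for end."""
    # P(α, end) ≡ 1(α)
    if s == "":
        return "1" + acc
    # P(α, 0(β)) ≡ 0(P(0(α), β)), and the same for 1
    return "0" + P(s[0] + acc, s[1:])

def J(a, b):
    """J(α,β) ≡ +(P(end,α), P(end,β)): pair two bitstrings into one."""
    return P("", a) + P("", b)

def unJ(s):
    """Recover (α,β) from J(α,β); possible because the 0-run lengths give |α|, |β|."""
    n = s.index("1")                    # n zeros, then 1, then reverse(α)
    a = s[n + 1 : n + 1 + n][::-1]
    rest = s[n + 1 + n :]
    m = rest.index("1")
    b = rest[m + 1 : m + 1 + m][::-1]
    return a, b
```

For example P("", "011") rewrites to "0001110", i.e. 0³1 followed by reverse("011").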

Arbitrary syntax trees can be encoded like ∗(ConvN(αY),ConvN(βY))≡ConvN(J(010end,J(αY,βY)))

Where 010 is the unique bitstring representing "*", an arbitrary symbol.

Then add a rule that

ConvN(αY)≡rightN(TM1(left(J(αY,βY))))

As βY can be anything, it could be an encoding of whatever you want. Then we let left(0(α))≡0(left(α)), and the same for 1, to run left to the far end of the bitstring. Then left(end)≡n(left(end)), where n is a null symbol for tape that doesn't store data, so we can extend the tape as needed. Using rules like TM1(n(α))≡1(TM3(α)) we can run any Turing machine, with one symbol per state. Then finally we have a rule like TM4(0(α))≡success(TM4(0(α))). Let success commute with 0 and 1 so it can go up to the top, and let right(success(α))≡right(success(n(α))).
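Here is a rough illustration of how rules of the shape TMi(x(α))≡y(TMj(α)) drive a machine along the tape. The machine itself, and the names `RULES` and `run`, are invented for the example (with a simplified success rule): it overwrites blank symbols n with 1s until it reads a 0, then marks success.

```python
# Each rule TMi(x(α)) ≡ y(TMj(α)) reads symbol x in state TMi, writes y,
# and hands the rest of the tape to state TMj. A None next-state marks the
# (simplified) success rule.
RULES = {
    ("TM1", "n"): ("1", "TM1"),   # TM1(n(α)) ≡ 1(TM1(α))
    ("TM1", "1"): ("1", "TM1"),   # TM1(1(α)) ≡ 1(TM1(α))
    ("TM1", "0"): (None, None),   # on reading 0, insert the success marker
}

def run(state, tape):
    """Apply the rewrite rules left to right along the tape (a list of symbols)."""
    out = []
    for i, sym in enumerate(tape):
        write, nxt = RULES[(state, sym)]
        if nxt is None:
            return out + ["success"] + tape[i:]
        out.append(write)
        state = nxt
    return out
```

The nesting depth of the constructors plays the role of head position: each rule application peels one symbol off and re-emits one, which is exactly a head moving one cell.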

Then you can unwind all the computations, and rightN(success(TM1(left(J(αY,βY)))))≡ConvN(βY) means that you can take the bitstring that you computed to be equal, and decode it back into its symbols.

## Lemma

Any possible computably checkable formal proof system can be converted into an equivalence graph.

## Post Ending.

Next time we will see when different proof systems are provably the same. I know the formatting is rough and the explanations aren't always as clear as they could be. Tell me about any unusually opaque parts. If these ideas turn out to be important, someone will need to rework this. I know the axioms used are not the simplest possible, and the proofs are not fully formalized.