Parikh’s Theorem: A simple and direct automaton construction
Abstract
Parikh’s theorem states that the Parikh image of a context-free language is semilinear or, equivalently, that every context-free language has the same Parikh image as some regular language. We present a very simple construction that, given a context-free grammar, produces a finite automaton recognizing such a regular language.
The Parikh image of a word over an alphabet is the vector such that is the number of occurrences of in . For example, the Parikh image of over the alphabet is . The Parikh image of a language is the set of Parikh images of its words. Parikh images are named after Rohit Parikh, who in 1966 proved a classical theorem of formal language theory which also carries his name. Parikh’s theorem [Parikh66] states that the Parikh image of any context-free language is semilinear. Since semilinear sets coincide with the Parikh images of regular languages, the theorem is equivalent to the statement that every context-free language has the same Parikh image as some regular language. For instance, the language has the same Parikh image as . This statement is also often referred to as Parikh’s theorem, see e.g. [HK99], and in fact it has been considered a more natural formulation [Pil73].
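For concreteness, the Parikh image of a word can be computed by simply counting letter occurrences. A minimal sketch in Python (the function name and the convention of ordering entries by a fixed alphabet tuple are our own choices, not from the paper):

```python
from collections import Counter

def parikh_image(word, alphabet):
    """Return the Parikh image of `word`: a tuple whose i-th entry is the
    number of occurrences in `word` of the i-th letter of `alphabet`."""
    counts = Counter(word)
    return tuple(counts[a] for a in alphabet)

# Over the alphabet (a, b), the words "aabb" and "abab" have the same
# Parikh image (2, 2), even though they are different words.
print(parikh_image("aabb", ("a", "b")))  # (2, 2)
print(parikh_image("abab", ("a", "b")))  # (2, 2)
```

This already illustrates the theorem in miniature: Parikh images forget the order of letters, so two very different languages can coincide once words are read "commutatively".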
Parikh’s proof of the theorem, as many other subsequent proofs [Gre72, Pil73, Lee74, Gol77, HK99, AEI02], is constructive: given a context-free grammar , the proof produces (at least implicitly) an automaton or regular expression whose language has the same Parikh image as . However, these constructions are relatively complicated, not given explicitly, or yield crude upper bounds: automata of size for grammars in Chomsky normal form with variables (see Section 4 for a detailed discussion). In this note we present an explicit and very simple construction yielding an automaton with states, for a lower bound of . An application of the automaton is briefly discussed in Section 3: the automaton can be used to algorithmically derive the semilinear set, and, using recent results on Parikh images of NFAs [ToThesis, KT10], it leads to the best known upper bounds on the size of the semilinear set for a given context-free grammar.
1 The Construction
We follow the notation of [HMU06, Chapter 5]. Let be a context-free grammar with a set of variables or nonterminals, a set of terminals, a set of productions, and an axiom . We construct a nondeterministic finite automaton (NFA) whose language has the same Parikh image as . The transitions of this automaton will be labeled with words of , but note that by adding intermediate states (when the words have length greater than one) and removing transitions (i.e., when the words have length zero), such an NFA can easily be brought into the more common form where transition labels are elements of .
We need to introduce a few notions. For we denote by (resp. ) the Parikh image of where the components not in (resp. ) have been projected away. Moreover, let (resp. ) denote the projection of onto (resp. ). For instance, if , , and , then , and . A pair is a step, denoted by
Let be a context-free grammar and let . The Parikh automaton of is the NFA defined as follows:

;


;

.
It is easily seen that has exactly states.
Figure 1 shows the Parikh automaton of the context-free grammar with productions . The states are all pairs such that . Transition and axiom
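The construction can be rendered in code. The sketch below explores an NFA whose states are multisets of variables of bounded size: a transition consumes one variable of the current multiset, adds the variables of a production body, and is labeled with the body’s terminals; the empty multiset accepts. All names, the multiset encoding, and the size bound `cap` are our own choices made for the illustration; this is a sketch of the idea, not the paper’s exact definition.

```python
def parikh_automaton(variables, productions, axiom, cap):
    """Explore the reachable part of a Parikh-automaton-like NFA.

    States are multisets of variables, encoded as frozensets of
    (variable, count) pairs; `productions` is a list of (head, body)
    pairs with `body` a string over variables and terminals. A
    transition replaces one occurrence of `head` by the variables of
    `body` and is labeled with the terminals of `body`. Multisets
    larger than `cap` are pruned; the empty multiset is accepting."""
    def key(ms):  # canonical, hashable view of a multiset (drop zeros)
        return frozenset((v, c) for v, c in ms.items() if c > 0)

    initial = key({axiom: 1})
    states, transitions = {initial}, set()
    frontier = [{axiom: 1}]
    while frontier:
        ms = frontier.pop()
        for head, body in productions:
            if ms.get(head, 0) == 0:
                continue  # this production cannot fire here
            succ = dict(ms)
            succ[head] -= 1
            emitted = []
            for sym in body:
                if sym in variables:
                    succ[sym] = succ.get(sym, 0) + 1
                else:
                    emitted.append(sym)
            if sum(succ.values()) > cap:
                continue  # enforce the bound on state size
            k = key(succ)
            transitions.add((key(ms), "".join(emitted), k))
            if k not in states:
                states.add(k)
                frontier.append(succ)
    return states, transitions, initial, frozenset()  # final: empty multiset
```

For the grammar with productions S → aSb and S → ε this yields two states: a loop labeled ab on the initial state and an ε-move to the accepting empty multiset, i.e., a Parikh-equivalent regular language of the form (ab)*.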
We define the degree of by ; i.e., is the maximal number of variables on the right-hand sides. For instance, the degree of the grammar in Fig. 1 is . Notice that if is in Chomsky normal form then , and iff is regular.
In the rest of the note we prove:
Theorem 1.1
If is a context-free grammar with variables and degree , then and have the same Parikh image.
For the grammar of Figure 1 we have and , and Theorem 1.1 yields . So the language of the automaton of the figure has the same Parikh image as the language of the grammar.
It is easily seen that has exactly states. Using standard properties of binomial coefficients, for and we get an upper bound of states. For (e.g. for grammars in Chomsky normal form), the automaton has states. On the other hand, for every the grammar in Chomsky normal form with productions and axiom satisfies , and therefore the smallest Parikh-equivalent NFA has states. This shows that our construction is close to optimal.
2 The Proof
Given , we write (resp. ), to denote that the Parikh image of is equal to (resp. included in) the Parikh image of . Also, given , we abbreviate to .
We fix a context-free grammar with variables and degree . In terms of the notation we have just introduced, we have to prove . One inclusion is easy:
Proposition 2.1
For every we have .
Let arbitrary, and let
Now, let be an arbitrary word with . Then there is a run
The proof of the second inclusion is more involved. To explain its structure we need a definition.
A derivation of has index if for every , the word has length at most . The set of words derivable through derivations of index is denoted by . For example, the derivation has index two. Clearly, we have and .
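The index of a derivation can be checked mechanically from its sequence of sentential forms. A small sketch (the representation of a derivation as a list of strings and the function name are ours):

```python
def derivation_index(sentential_forms, variables):
    """Index of a derivation, given as the list of its sentential forms:
    the maximum number of variable occurrences in any single form."""
    return max(sum(1 for sym in form if sym in variables)
               for form in sentential_forms)

# The derivation S => aSb => aaSbb => aabb never has more than one
# variable occurrence in any sentential form, so it has index 1.
print(derivation_index(["S", "aSb", "aaSbb", "aabb"], {"S"}))  # 1
```

By contrast, a derivation that passes through a form with two variables, such as S ⇒ AB ⇒ aB ⇒ ab, has index 2.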
The proof of is divided into two parts. We first prove the Collapse Lemma, Lemma 2.3, stating that , and then we prove, in Lemma 2.4, that holds for every . A similar result has been proved in [EKL09:newtProgAn] with different notation and in a different context. We reformulate its proof here for the reader interested in a self-contained proof.
The Collapse Lemma
We need a few preliminaries. We assume the reader is familiar with the fact that every derivation can be parsed into a parse tree [HMU06, Chapter 5], whose yield is the word produced by the derivation. We denote the yield of a parse tree by , and the set of yields of a set of trees by . Figure 2 shows the parse tree of the derivation . We introduce the notion of dimension of a parse tree.
Let be a parse tree. A child of is a subtree of whose root is a child of the root of . A child of is called proper if its root is not a leaf, i.e., if it is labeled with a variable. The dimension of a parse tree is inductively defined as follows. If has no proper children, then . Otherwise, let be the proper children of sorted such that . Then
The set of parse trees of of dimension is denoted by , and the set of all parse trees of by . The parse tree of Fig. 2 has two children, both of them proper. It has dimension 1 and height 3. Observe also the following fact, which can be easily proved by induction.
Fact 2.0
Denote by the height of a tree . Then .
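The inductive definition of dimension (essentially a Horton–Strahler number computed over proper children) can be sketched directly. The tuple representation of parse trees below is our own:

```python
def dimension(tree):
    """Dimension of a parse tree given as (label, children), where
    children is a list of subtrees; a child is proper if its root is
    not a leaf, i.e., if it has children of its own."""
    _, children = tree
    dims = sorted((dimension(c) for c in children if c[1]), reverse=True)
    if not dims:
        return 0                 # no proper children
    if len(dims) == 1 or dims[0] > dims[1]:
        return dims[0]           # a unique proper child of maximal dimension
    return dims[0] + 1           # two proper children tie for the maximum

leaf_a = ("a", [])
chain = ("S", [("A", [leaf_a])])                    # one proper child
branch = ("S", [("A", [leaf_a]), ("B", [leaf_a])])  # two proper children
print(dimension(chain))   # 0
print(dimension(branch))  # 1
```

Note how dimension grows only when two proper children of equal dimension meet, which is why it can stay far below the height of the tree.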
For the proof of the collapse lemma, , observe first that, since every word in is the yield of some parse tree, we have , and so it suffices to show . The proof is divided into two parts. We first show in Lemma 2.1, and then we show in Lemma 2.2. Actually, the latter proves the stronger result that parse trees of dimension have derivations of index , i.e., for all .
Lemma 2.1
.
In this proof we write to denote that is a parse tree except that exactly one leaf is labelled by a variable, say , instead of a terminal; the tree is a parse tree with root ; and the tree is obtained from and by replacing the leaf of by the tree . Figure 3 shows an example.
In the rest of the proof we abbreviate “parse tree” to “tree”. We need to prove that for every tree there exists a tree such that and . We shall prove the stronger result that moreover and have the same number of nodes, and the sets of variables appearing in and coincide.
Say that two trees are equivalent if they have the same number of nodes, the sets of variables appearing in and coincide, and holds. Say further that a tree is compact if , where denotes the number of variables that appear in . Since for every , it suffices to show that every tree is equivalent to a compact tree. We describe a recursive “compactification procedure” that transforms a tree into an equivalent compact tree, and prove that it is well-defined and correct. By well-defined we mean that some assumptions made by the procedure about the existence of some objects indeed hold.
consists of the following steps:

If is compact then return and terminate.

If is not compact then

Let be the proper children of , .

For every : .
(I.e., replace in the subtree by the result of compactifying ).
Let be the smallest index such that . 
Choose an index such that .

Choose subtrees of and subtrees of such that

and ; and

the roots of and are labelled by the same variable.


(Loosely speaking, remove from and insert it into .) . 
Goto (1).

We first prove that the assumptions at lines (2.1), (2.3), and (2.4) about
the existence of certain subtrees hold.
(2.1) If is not compact, then has at least one proper child.
Assume that has no proper child. Then, by the definitions of
dimension and , we have , and so
is compact.
(2.3) Assume that is not compact, has at least one proper child, and all its proper children
are compact. Let be the smallest index such that . Then there exists an index such that .
Let (where for the moment possibly ) be an index
such that . We have
(by definition of dimension and of )  (1)  
(as is compact)  
(by definition of )  
(as is a child of )  
(as is not compact), 
so all inequalities in (1) are in fact equalities.
In particular, we have and so, by the definitions of dimension and
of , there exists such that . Hence or ,
and w.l.o.g. we can choose such that .
(2.4) Assume that is not compact, all its proper children are compact, and it has
two distinct proper children such that and .
There exist subtrees of and subtrees of
satisfying conditions (i) and (ii).
By the equalities in (1) we have .
By Fact 1 we have . So , and therefore some
path of from the root to a leaf visits at least two nodes labelled
with the same variable, say . So can be factored into
such that the roots of and are labelled by . Since by the equalities in
(1) we also have , every variable that appears in appears also in , and so contains a node labelled by . So can be factored
into with the root of labelled by .
This concludes the proof that the procedure is welldefined. It remains to show that
it terminates and returns an equivalent compact tree. We start by proving the following
lemma:
If terminates and returns a tree , then
and are equivalent.
We proceed by induction on the number of calls to
during the execution of . If is called only once, then
only line (1) is executed, is compact, no step modifies , and we are done.
Assume now that is called more than once.
The only lines that modify are (2.2) and (2.5). Consider first line (2.2).
By induction hypothesis, each call to during the execution of
returns a compact tree that is equivalent to . Let and be the
values of before and after the execution of . Then
is the result of replacing by in . By the definition of
equivalence, and since is equivalent to , we get that
is equivalent to . Consider now line (2.5), and let and
be the values of before and after the execution of
followed by the execution of .
Since the subtree that is added to is subsequently removed
from , the Parikh image of , the number of nodes of ,
and the set of variables appearing in do not change. This completes the proof of the lemma.
The lemma shows in particular that if the procedure terminates, then it returns an equivalent tree. So it only remains to prove that the procedure always terminates. Assume there is a tree such that does not terminate. W.l.o.g. we further assume that has a minimal number of nodes. In this case all the calls to line (2.2) terminate, and so the execution contains infinitely many steps that do not belong to any deeper call in the call tree, and in particular infinitely many executions of the block (2.3)(2.5). We claim that in all executions of this block the index has the same value. For this, observe first that, by the lemma, the execution of line (2.2) does not change the number of nodes or the set of variables occurring in each of . In particular, it preserves the value of . Observe further that each time line (2.5) is executed, the procedure adds nodes to , and either does not change or removes nodes from any other proper children of . In particular, the value of does not decrease, and for every the value of does not increase. So at the next execution of the block the index of the former execution is still the smallest index satisfying . Now, since has the same value at every execution of the block, each execution strictly decreases the number of nodes of some proper child different from , and only increases the number of nodes of . This contradicts the fact that all proper children of have a finite number of nodes. ∎
Lemma 2.2
For every .
In this proof we will use the following notation. If is a derivation and , then we define to be the step sequence .
Let be a parse tree such that . We show that there is a derivation for of index . We proceed by induction on the number of nonleaf nodes in . In the base case, has no proper child. Then we have and represents a derivation of index . For the induction step, assume that has proper children where the root of is assumed to be labeled by ; i.e., we assume that the topmost level of is induced by a rule for . Note that . By definition of dimension, at most one child has dimension , while the other children have dimension at most . W.l.o.g. assume and . By induction hypothesis, for all there is a derivation for such that has index , and have index . Define, for each , the step sequence
If the notion of index is extended to step sequences in the obvious way, then has index , and for , the step sequence has index . By concatenating the step sequences and in that order, we obtain a derivation for of index . ∎
Lemma 2.3
[Collapse Lemma] .
∎
Lemma 2.4
For every : .
We show that if is a prefix of a derivation of index then has a run such that and . The proof is by induction on the length of the prefix.
. In this case , and since and we are done.
. Since there exist and a production such that and . By induction hypothesis, there exists a run of such that and . Then the definition of and the fact that is of index show that there exists a transition , hence we find that . Next we conclude from and that and we are done.
Finally, if so that is a derivation, then where is an accepting state and .∎
We now have all we need to prove the other inclusion.
Proposition 2.2
.
∎
3 An Application: Bounding the Size of Semilinear Sets
Recall that a set , , is linear if there is an offset and periods such that . A set is semilinear if it is the union of a finite number of linear sets. It is easily seen that the Parikh image of a regular language is semilinear. Procedures for computing the semilinear representation of the language starting from a regular expression or an automaton are well-known (see e.g. [Pil73]). Combined with Theorem 1.1 they provide an algorithm for computing the Parikh image of a context-free language.
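As a concrete illustration of the definitions, a bounded fragment of a linear set can be enumerated directly from its offset and periods. The function name and the coefficient bound are our own:

```python
from itertools import product

def linear_set_fragment(offset, periods, bound):
    """Enumerate the vectors offset + l_1*p_1 + ... + l_k*p_k of a
    linear set, with each coefficient l_i ranging over 0..bound."""
    members = set()
    for coeffs in product(range(bound + 1), repeat=len(periods)):
        vec = list(offset)
        for l, p in zip(coeffs, periods):
            for i in range(len(vec)):
                vec[i] += l * p[i]
        members.add(tuple(vec))
    return members

# Offset (1, 0) with the single period (1, 1) generates the vectors
# (1,0), (2,1), (3,2), ... -- here truncated at coefficient 2.
print(sorted(linear_set_fragment((1, 0), [(1, 1)], 2)))
# [(1, 0), (2, 1), (3, 2)]
```

A semilinear set is then simply a finite union of such sets, one offset/period family per linear component.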
Recently, To has obtained an upper bound on the size of the semilinear representation of the Parikh image of a regular language (see Theorem 7.3.1 of [ToThesis]):
Theorem 3.1
Let be an NFA with states over an alphabet of letters. Then is a union of linear sets with at most periods; the maximum entry of any offset is , and the maximum entry of any period is at most .
Plugging Theorem 1.1 into Theorem 3.1, we get the (to our knowledge) best existing upper bound on the size of the semilinear representation of the Parikh image of a context-free language. Let be a context-free grammar of degree with and . Let be the total number of occurrences of terminals in the productions of , i.e., . The number of states of is . Recall that the transitions of are labelled with words of the form , where is the right-hand side of some production. Splitting transitions, adding intermediate states, and then removing transitions yields an NFA with states. So we finally obtain for the parameters and in Theorem 3.1 the values , and . This result (in fact a slightly stronger one) has been used in [EG11] to provide a polynomial algorithm for a language-theoretic problem relevant for the automatic verification of concurrent programs.
4 Conclusions and Related Work
For the sake of comparison we will assume throughout this section that all grammars have degree . Given a context-free grammar with variables, we have shown how to construct an NFA with states such that and have the same Parikh image. We compare this result with previous proofs of Parikh’s theorem.
Parikh’s proof [Parikh66] (essentially the same proof is given in [Sal73]) shows how to obtain a Parikh-equivalent regular expression from a finite set of parse trees of . The complexity of the resulting construction is not studied. By its definition, the regular expression basically consists of the sum of words obtained from the parse trees of height at most . This leads to the admittedly rough bound that the regular expression consists of at most words, each of length at most .
Greibach [Gre72] shows that a particular substitution operator on language classes preserves semilinearity of the languages. This result implies Parikh’s theorem if the substitution operator is applied to the class of regular languages. It is hard to extract a construction from this proof, as it relies on previously proved closure properties of language classes.
Pilling’s proof [Pil73] (also given in [Con71]) of Parikh’s theorem uses algebraic properties of commutative regular languages. From a constructive point of view, his proof leads to a procedure that iteratively replaces a variable of the grammar by a regular expression over the terminals and the other variables. This procedure finally generates a regular expression which is Parikh-equivalent to . Van Leeuwen [Lee74] extends Parikh’s theorem to other language classes, but, while using very different concepts and terminology, his proof leads to the same construction as Pilling’s. Neither [Pil73] nor [Lee74] studies the size of the resulting regular expression.
Goldstine [Gol77] simplifies Parikh’s original proof. An explicit construction can be derived from the proof, but it is involved: for instance, it requires, for each subset of variables, the computation of all derivations with these variables up to a certain size depending on a pumping constant.
Hopkins and Kozen [HK99] generalize Parikh’s theorem to commutative Kleene algebra. Like Pilling [Pil73], their procedure to compute a Parikh-equivalent regular expression is iterative; but rather than eliminating one variable in each step, they treat all variables in a symmetric way. Their construction can be adapted to compute a Parikh-equivalent finite automaton. Hopkins and Kozen show (by algebraic means) that their iterative procedure terminates after iterations for a grammar with variables. In [EKL09:newtProgAn] we reduce this bound (by combinatorial means) to iterations. The construction yields an automaton, but it is much harder to explain than ours. The automaton has size .
In [AEI02] Parikh’s theorem is derived from a small set of purely equational axioms involving fixed points. It is hard to derive a construction from this proof.
In [esp97] Parikh’s theorem is proved by showing that the Parikh image of a context-free language is the union of the sets of solutions of a finite number of systems of linear equations. In [seidl05] the theorem is also implicitly proved, this time by showing that the Parikh image is the set of models of an existential formula of Presburger arithmetic. While the constructions yielding the systems of equations and the Presburger formulas are very useful, they are also more complicated than our construction of the Parikh automaton. Also, neither [esp97] nor [seidl05] gives bounds on the size of the semilinear set.
Acknowledgments
We thank two anonymous referees for very useful suggestions.
References
 (1) R. J. Parikh, On context-free languages, Journal of the ACM 13 (4) (1966) 570–581.
 (2) L. Aceto, Z. Ésik, A. Ingólfsdóttir, A fully equational proof of Parikh’s Theorem, ITA 36 (2) (2002) 129–153.
 (3) J. E. Hopcroft, R. Motwani, J. D. Ullman, Introduction to Automata Theory, Languages, and Computation, 3rd Edition, Addison-Wesley (2006).
 (4) J. H. Conway, Regular algebra and finite machines, Chapman and Hall, 1971.
 (5) J. Esparza, Petri Nets, Commutative Context-Free Grammars, and Basic Parallel Processes, Fundamenta Informaticae (1997).
 (6) J. Esparza, P. Ganty, Complexity of Pattern-Based Verification for Multithreaded Programs, POPL, Proceedings, ACM (2011), 499–510.
 (7) J. Esparza, S. Kiefer, M. Luttenberger, Newtonian program analysis, Journal of the ACM 57 (6) (2010) 33:1–33:47.
 (8) S. A. Greibach, A generalization of Parikh’s semilinear theorem, Discrete Mathematics 2 (4) (1972) 347–355.
 (9) J. Goldstine, A simplified proof of Parikh’s Theorem, Discrete Mathematics 19 (3) (1977) 235–239.
 (10) M. W. Hopkins, D. C. Kozen, Parikh’s Theorem in commutative Kleene algebra, LICS (1999), 394–401.
 (11) E. Kopczynski, A. W. To, Parikh Images of Grammars: Complexity and Applications, LICS, Proceedings, IEEE Computer Society (2010), 80–89.
 (12) M. Lange, H. Leiß, To CNF or not to CNF? An Efficient Yet Presentable Version of the CYK Algorithm, Informatica Didactica (8) (2008–2010).
 (13) J. van Leeuwen, A generalisation of Parikh’s Theorem in formal language theory, ICALP, LNCS 14 (1974) 17–26.
 (14) D. L. Pilling, Commutative regular equations and Parikh’s Theorem, J. London Math. Soc. 2 (6) (1973) 663–666.
 (15) A. Salomaa, Formal Languages, Academic Press, 1973.
 (16) A. W. To, Model-Checking Infinite-State Systems: Generic and Specific Approaches, PhD Thesis, University of Edinburgh (2010).
 (17) K. N. Verma, H. Seidl, T. Schwentick, On the Complexity of Equational Horn Clauses, CADE, LNCS 1831 (2005) 337–352.