
### 3.3 Context-Free Languages

The languages that pushdown automata accept can be characterized by Type 2 grammars or, equivalently, by context-free grammars.

Specifically, a Type 0 grammar G = <N, Σ, P, S> is said to be context-free if each of its production rules has exactly one nonterminal symbol on its left-hand side, that is, if each of its production rules is of the form A → α.

The grammar is called context-free because it provides no mechanism to restrict the usage of a production rule A → α within some specific context. However, in a Type 0 grammar such a restriction can be achieved by using a production rule of the form βAγ → βαγ to specify that A → α is to be used only within the context of β and γ.

The languages that context-free grammars generate are called context-free languages.

Example 3.3.1    The language { a^i1 b^i1 a^i2 b^i2 ··· a^in b^in | n, i1, . . . , in ≥ 0 } is generated by the context-free grammar <N, Σ, P, S>, whose production rules are given below.

S → SS
S → A
S → ε
A → aAb
A → ab

Recall that a Type 2 grammar is a context-free grammar G = <N, Σ, P, S> in which A → ε in P implies that A = S and that no right-hand side of a production rule contains S. By the following theorem, context-free grammars and Type 2 grammars act as "maximal" and "minimal" grammars for the same class of languages.

Theorem 3.3.1    Each context-free language is also a Type 2 language.

Proof     Consider any context-free grammar G1 = <N, Σ, P1, S1>. A Type 2 grammar G2 = <N ∪ {S2}, Σ, P2, S2> satisfies L(G2) = L(G1), if S2 is a new symbol and P2 is obtained from P1 in the following way.

Initialize P2 to equal P1 ∪ {S2 → S1}. Then, as long as P2 contains a production rule of the form A → ε for some A ≠ S2, modify P2 as follows.

1.  Delete the production rule A → ε from P2.
2.  Add a production rule of the form B → α' to P2 as long as such new production rules can be formed. α' is assumed to be a string α with one appearance of A omitted in it, where α is assumed to be the right-hand side of a production rule of the form B → α that is already in P2. If α' = ε and the production rule B → ε has been removed earlier from P2, then the production rule is not reinserted into P2.
No addition of a production rule of the form B → α' to P2 changes the generated language, because any usage of such a production rule can be simulated by the pair B → α and A → ε of production rules.

Similarly, no deletion of a production rule A → ε from P2 affects the generated language, because each subderivation C ⇒ α1Aα2 ⇒* β1Aβ2 ⇒ β1β2 that uses A → ε can be replaced with an equivalent subderivation of the form C ⇒ α1α2 ⇒* β1β2.
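The procedure above can be sketched in Python. The sketch below is illustrative only: the dictionary-of-sets grammar representation, the function name to_type2, the use of single-letter nonterminals, and the primed fresh start symbol are all assumptions made here, not part of the text.

```python
# A sketch of the construction in the proof of Theorem 3.3.1: eliminating
# rules of the form A -> epsilon.  A grammar is a dict mapping each
# nonterminal (a single uppercase letter) to a set of right-hand-side
# strings, with "" standing for epsilon.

def to_type2(p1, s1="S"):
    """Return (P2, S2) for an equivalent grammar whose only possible
    epsilon rule is S2 -> epsilon, with S2 a fresh start symbol."""
    s2 = s1 + "'"                           # the new start symbol S2 (assumed fresh)
    p2 = {a: set(rhss) for a, rhss in p1.items()}
    p2[s2] = {s1}                           # initialize P2 = P1 U {S2 -> S1}
    removed = set()                         # nonterminals whose A -> epsilon was deleted
    while True:
        a = next((b for b, rhss in p2.items() if "" in rhss and b != s2), None)
        if a is None:
            break
        p2[a].discard("")                   # step 1: delete A -> epsilon from P2
        removed.add(a)
        changed = True                      # step 2: add B -> alpha' rules as long
        while changed:                      # as new ones can be formed
            changed = False
            for b, rhss in p2.items():
                for alpha in list(rhss):
                    for i, symbol in enumerate(alpha):
                        if symbol != a:
                            continue
                        shorter = alpha[:i] + alpha[i + 1:]   # omit one appearance of A
                        # do not reinsert a B -> epsilon rule removed earlier
                        if shorter == "" and b in removed:
                            continue
                        if shorter not in rhss:
                            rhss.add(shorter)
                            changed = True
    return p2, s2
```

On the grammar of Example 3.3.1 the procedure deletes S → ε, adds S → S from S → SS, and adds S' → ε for the new start symbol, so only the fresh start symbol retains an ε rule.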

Example 3.3.2    Let G1 be the context-free grammar whose production rules are listed below.

The construction in the proof of Theorem 3.3.1 implies the following equivalent grammars, where G2 is a Type 2 grammar.

Pushdown automata and recursive finite-domain programs process their inputs from left to right. To enable such entities to trace derivations of context-free grammars, the following lemma considers a similar property in the derivations of context-free grammars.

Lemma 3.3.1    If a nonterminal symbol A derives a string x of terminal symbols in a context-free grammar G, then x has a leftmost derivation from A in G.

Proof     The proof is by contradiction. Recall that in context-free grammars a leftmost derivation α1 ⇒ α2 ⇒ ··· ⇒ αn replaces the leftmost nonterminal symbol in each sentential form αi, i = 1, 2, . . . , n − 1.

The proof relies on the observation that the ordering in which the nonterminal symbols are replaced in the sentential forms is of no importance for the derivations in context-free grammars. Each nonterminal symbol in each sentential form is expanded without any correlation to its context in the sentential form.

Consider any context-free grammar G. For the purpose of the proof assume that a string x of terminal symbols has a derivation of length n from a nonterminal symbol A. In addition, assume that x has no leftmost derivation from A.

Let A ⇒ α1 ⇒ ··· ⇒ αm ⇒ ··· ⇒ αn = x be a derivation of length n in which A ⇒ α1 ⇒ ··· ⇒ αm is a leftmost subderivation. In addition, assume that m is maximized over the derivations A ⇒* x of length n. By the assumption that x has no leftmost derivation from A, it follows that m < n − 1.

The derivation in question satisfies αm = wBβm, αm+1 = wBβm+1, . . . , αk = wBβk, and αk+1 = wγβk for some string w of terminal symbols, some production rule B → γ, some k with m < k < n, and some strings βm, . . . , βk. Thus A ⇒ α1 ⇒ ··· ⇒ αm−1 ⇒ αm = wBβm ⇒ wγβm ⇒ wγβm+1 ⇒ ··· ⇒ wγβk = αk+1 ⇒ ··· ⇒ αn = x is also a derivation of x from A of length n.

However, in this new derivation A ⇒ α1 ⇒ ··· ⇒ αm ⇒ wγβm is a leftmost subderivation of length m + 1. This contradicts the choice of m as maximal, which followed from the assumption that x has only nonleftmost derivations from A.

As a result, the assumption that x has no leftmost derivation from A is contradicted, and the lemma follows.

The proof of the following theorem shows how pushdown automata can trace the derivations of context-free grammars.

Theorem 3.3.2    Each context-free language is accepted by a pushdown automaton.

Proof     Consider any context-free grammar G = <N, Σ, P, S>. With no loss of generality assume that Z0 is not in N ∪ Σ. L(G) is accepted by the pushdown automaton M = <Q, Σ, N ∪ Σ ∪ {Z0}, δ, q0, Z0, {qf}> whose transition table δ consists of the following transition rules.

1.  A transition rule of the form (q0, ε, ε, q1, S).
2.  A sequence of transition rules for each production rule A → α in P. Each such sequence starts and ends at state q1, and replaces a nonterminal symbol A on top of the pushdown store with the string α in reverse order.
3.  A transition rule of the form (q1, a, a, q1, ε), for each terminal symbol a in the alphabet Σ.
4.  A transition rule of the form (q1, ε, Z0, qf, Z0).
Intuitively, on a given input x the pushdown automaton M nondeterministically traces a leftmost derivation in G that starts at S and ends at x. At each stage of the tracing, the portion of the input that has already been read, together with the content of the pushdown store in reverse order, records the sentential form at the corresponding stage of the derivation.

The transition rule in (1) is used for pushing the first sentential form S onto the pushdown store. The transition rules in (2) are used for replacing the leftmost nonterminal symbol in a given sentential form with the right-hand side of an appropriate production rule. The transition rules in (3) are used for matching the leading terminal symbols in the sentential forms against the corresponding symbols in the given input x. The purpose of the transition rule in (4) is to move the pushdown automaton into an accepting state upon reaching the end of a derivation.

By induction on n it can be verified that x has a leftmost derivation in G if and only if M has an accepting computation on x, where the derivation and the computation have the following forms with u_i v_i = x for 1 ≤ i < n.
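The behavior of the constructed automaton can be illustrated with a small Python simulation. The sketch below is hypothetical: it represents the pushdown store as a string with the top symbol at the right end, and it bounds the store size so that the exhaustive search over configurations terminates. The bound is an assumption adequate for the small examples here, not part of the construction.

```python
# A sketch of the pushdown automaton of Theorem 3.3.2.  The store holds the
# unread suffix of the traced sentential form, first symbol on top.

def cfg_pda_accepts(rules, terminals, x, start="S"):
    """rules maps each nonterminal to its set of right-hand sides."""
    cap = len(x) + 4               # store-size bound (assumption for termination)
    seen = set()
    frontier = [(0, start)]        # rule (1): push the start symbol S
    while frontier:
        pos, store = frontier.pop()
        if (pos, store) in seen or len(store) > cap:
            continue
        seen.add((pos, store))
        if store == "":
            if pos == len(x):
                return True        # rule (4): derivation complete -> accept
            continue
        top = store[-1]            # top of the pushdown store
        if top in terminals:
            # rule (3): match the terminal on top against the next input symbol
            if pos < len(x) and x[pos] == top:
                frontier.append((pos + 1, store[:-1]))
        else:
            # rule (2): replace nonterminal A on top with a right-hand side,
            # pushed in reverse order
            for rhs in rules[top]:
                frontier.append((pos, store[:-1] + rhs[::-1]))
    return False
```

With the grammar of Example 3.3.1 (production rules S → SS, S → A, S → ε, A → aAb, A → ab), the simulation accepts aabbab and rejects, say, aab.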

Example 3.3.3    If G is the context-free grammar of Example 3.3.1, then the language L(G) is accepted by the pushdown automaton M, whose transition diagram is given in Figure 3.3.1(a).

 Figure 3.3.1 (a) A pushdown automaton that accepts the language generated by the grammar of Example 3.3.3. (b) A leftmost derivation in the grammar and the corresponding computation by the pushdown automaton.

The string aabbab has the leftmost derivation S ⇒ SS ⇒ AS ⇒ aAbS ⇒ aabbS ⇒ aabbA ⇒ aabbab in G. Figure 3.3.1(b) shows the corresponding configurations of M in its computation on such an input.

By Theorems 3.2.1 and 3.3.2 each context-free language is accepted by a recursive finite-domain program. For a given context-free grammar G = <N, , P, S>, the recursive finite-domain program T that accepts L(G) can be of the following form.

T on a given input x nondeterministically traces a leftmost derivation that starts at S. If the leftmost derivation provides the string x, then T accepts its input. Otherwise, T rejects the input.

T has one procedure for each nonterminal symbol in N, and one procedure for each terminal symbol in Σ. A procedure that corresponds to a nonterminal symbol A is responsible for initiating the tracing of a leftmost subderivation that starts at A. The procedure does so by nondeterministically choosing a production rule of the form A → X1 ··· Xm, and then calling the procedures that correspond to X1, . . . , Xm in the given order. On the other hand, each procedure that corresponds to a terminal symbol is responsible for reading an input symbol and verifying that the symbol is equal to its corresponding terminal symbol.

Each of the procedures above returns the control to the point of invocation, upon successfully completing the given responsibilities. However, each of the procedures terminates the computation at a nonaccepting configuration upon determining that the given responsibility cannot be carried out.

The main program starts a computation by invoking the procedure that corresponds to the start symbol S. Upon the return of control the main program terminates the computation, where the termination is in an accepting configuration if and only if the remainder of the input is empty.

The recursive finite-domain program T can be as depicted in Figure 3.3.2.

```
call S()
if eof then accept
reject

procedure A()        /* For each nonterminal symbol A. */
  do
    ...
  or
    /* For each production rule of the form A → X1 ··· Xm. */
    call X1()
    ...
    call Xm()
    return
  or
    ...
  until true
end

procedure a()        /* For each terminal symbol a. */
  read symbol
  if symbol = a then return
  reject
end
```

 Figure 3.3.2 A scheme of recursive finite-domain programs that simulate context-free grammars.

Example 3.3.4    If G is the context-free grammar of Example 3.3.1, then L(G) is accepted by the recursive finite-domain program in Figure 3.3.3.

```
call S()
if eof then accept
reject

procedure S()
  do
    /* S → SS */
    call S()
    call S()
    return
  or
    /* S → A */
    call A()
    return
  or
    /* S → ε */
    return
  until true
end

procedure A()
  do
    /* A → aAb */
    call a()
    call A()
    call b()
    return
  or
    /* A → ab */
    call a()
    call b()
    return
  until true
end

procedure a()
  read symbol
  if symbol = a then return
  reject
end

procedure b()
  read symbol
  if symbol = b then return
  reject
end
```

 Figure 3.3.3 A recursive finite-domain program for the grammar of Example 3.3.1.

On input aabbab the recursive finite-domain program traces the derivation S ⇒ SS ⇒ AS ⇒ aAbS ⇒ aabbS ⇒ aabbA ⇒ aabbab by calling its procedures in the order indicated in Figure 3.3.4.

 Figure 3.3.4 The calls to procedures that the program of Figure 3.3.3 makes on input aabbab.
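The tracing scheme of Figure 3.3.3 can also be sketched in Python, with generators playing the role of nondeterministic choice: each procedure yields every input position reachable after a successful parse of its symbol. One adjustment is assumed here: the left-recursive rule S → SS is replaced by the equivalent rules S → AS and S → ε, since a naive recursive descent on S → SS would never terminate.

```python
# A Python analogue (an illustrative sketch, not the text's program) of the
# recursive finite-domain program of Figure 3.3.3.

def accepts(x):
    """True iff x is in { a^i1 b^i1 ... a^in b^in | n, i1, ..., in >= 0 }."""

    def S(i):
        for j in A(i):          # S -> AS (left recursion of S -> SS removed)
            yield from S(j)
        yield i                 # S -> epsilon

    def A(i):
        for j in term(i, "a"):  # A -> aAb
            for k in A(j):
                yield from term(k, "b")
        for j in term(i, "a"):  # A -> ab
            yield from term(j, "b")

    def term(i, c):
        # the procedure for a terminal symbol: read one symbol and compare
        if i < len(x) and x[i] == c:
            yield i + 1

    return any(j == len(x) for j in S(0))
```

Exhausting a generator without a yield plays the role of terminating a choice in a nonaccepting configuration, and backtracking restores the input position that a failed alternative consumed.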

A close look at the proof of Theorem 2.3.2 indicates how a given finite-memory program P can be simulated by a Type 3 grammar G = <N, Σ, P, S>.

The grammar uses its nonterminal symbols to record the states of P. Each production rule of the form A → aB in the grammar is used to simulate a subcomputation of P that starts at the state recorded by A, ends at the state recorded by B, and reads an input symbol a. Similarly, each production rule of the form A → a is used to simulate a subcomputation of P that starts at the state recorded by A, ends at an accepting state, and reads an input symbol a. The start symbol S of G records the initial state of P. The production rule S → ε is used to simulate an accepting computation of P in which no input value is read.
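The recalled construction can be sketched in Python. The dictionary-based transition function, the generated rule names, and the function name are assumptions made here for illustration, not part of the text.

```python
# A sketch of the Theorem 2.3.2-style construction recalled above: a Type 3
# (right-linear) grammar from a deterministic finite-state acceptor.

def fsa_to_grammar(delta, start, accepting):
    """delta maps (state, input symbol) -> next state.
    Returns (production rules, start symbol)."""
    rules = set()
    for (q, a), r in delta.items():
        rules.add((f"A{q}", f"{a}A{r}"))   # A_q -> a A_r
        if r in accepting:
            rules.add((f"A{q}", a))        # A_q -> a: subcomputation ends accepting
    if start in accepting:
        rules.add((f"A{start}", ""))       # S -> epsilon: accept without reading
    return rules, f"A{start}"
```

For the two-state acceptor of (ab)*, with initial and accepting state 0 and transitions 0 —a→ 1 and 1 —b→ 0, the construction yields the rules A0 → aA1, A1 → bA0, A1 → b, and A0 → ε.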

The proof of the following theorem relies on a similar approach.

Theorem 3.3.3    Every language that is accepted by a recursive finite-domain program is a context-free language.

Proof     Consider any recursive finite-domain program P. With no loss of generality it can be assumed that the program has no write instructions. The language that is accepted by P can be generated by a context-free grammar G that simulates the computations of P. The nonterminal symbols of G are used to indicate the start and end states of subcomputations of P that have to be simulated, and the production rules of G are used for simulating transitions between states of P.

Specifically, the nonterminal symbols of G consist of

1.  A nonterminal symbol Aq, for each state q of P. Each such nonterminal symbol Aq is used for indicating that a subcomputation of P, which starts at state q and ends at an accepting state, has to be simulated. Moreover, each execution of a return instruction in the subcomputation must be for a call that is made previously during the subcomputation.

The start symbol of G is the nonterminal symbol Aq0 that corresponds to the initial state q0 of P.

2.  A nonterminal symbol Aq,p, for each pair of states q and p corresponding to instruction segments that are in the same procedure of P. Each such nonterminal symbol Aq,p is introduced for indicating that a subcomputation, which starts at state q and ends at state p, has to be simulated. In the subcomputation the number of executions of return instructions has to equal the number of executions of call instructions. Moreover, each execution of a return instruction in the subcomputation must be for a call that is made previously during the subcomputation.

The production rules of G consist of

1.  A production rule of the form Aq → αAr, and a production rule of the form Aq,p → αAr,p, for each q, r, p, and α that satisfy the following condition: the instruction segment that corresponds to state q is neither a call instruction nor a return instruction, and its execution can take the program from state q to state r while reading α.

A production rule of the form Aq → αAr replaces the objective of reaching an accepting state from state q with the objective of reaching an accepting state from state r.

A production rule of the form Aq,p → αAr,p replaces the objective of reaching state p from state q with the objective of reaching state p from state r.

2.  A production rule of the form Aq → ε, for each state q that corresponds to an if eof then accept instruction.
3.  A production rule of the form Aq → Ar, for each state q that corresponds to a call instruction, where r is the state reached from q by executing the call. Each such production rule simulates an execution of a call that is not matched by an execution of a return.
4.  A production rule of the form Aq → Ar,sAt, and a production rule of the form Aq,p → Ar,sAt,p, for each q, r, s, t, and p such that the following conditions hold.
1.  State q corresponds to a call instruction whose execution at such a state causes the program to enter state r.
2.  State s corresponds to a return instruction in the called procedure, and the execution of the return instruction at such a state takes the program to a state t that is compatible with r.
That is, the subcomputation that starts at state q is decomposed into two subcomputations. One is to be performed by the invoked procedure, starting at state r and ending at state s; the other takes over from the instant that control returns from the invoked procedure, starting at state t.
5.  A production rule of the form Aq,q → ε, for each state q that corresponds to a return instruction.

Each such production rule is used for terminating a successful simulation of a subcomputation performed by an invoked procedure.

A proof by induction can be used to show that the construction above implies L(G) = L(P).

To show that L(G) is contained in L(P), it is sufficient to show that the following two conditions hold for each string α of terminal symbols.

1.  If Aq ⇒* α in G, then P can reach an accepting state from state q while reading α, and in any prefix of the subexecution sequence there must be at least as many executions of call instructions as executions of return instructions.
2.  If Aq,p ⇒* α in G, then P can reach state p from state q while reading α. In the subexecution sequence the number of executions of return instructions must equal the number of executions of call instructions, and in any prefix of the subexecution sequence there must be at least as many executions of call instructions as executions of return instructions.

The proof can be by induction on the number of steps i in the derivations. For i = 1, the only feasible derivations are those of the form Aq ⇒ ε or of the form Ap,p ⇒ ε. In the first case q corresponds to an if eof then accept instruction, and in the second case p corresponds to a return instruction. In both cases the corresponding subexecution sequences of the program are empty.

For i > 1 the derivations must have either of the following forms.

1.  Aq ⇒ α1Ar ⇒* α1α2 = α, or Aq,p ⇒ α1Ar,p ⇒* α1α2 = α. In either case, by definition Aq ⇒ α1Ar and Aq,p ⇒ α1Ar,p correspond to subexecution sequences that start at state q, end at state r, consume the input α1, and execute neither a call instruction nor a return instruction. However, by the induction hypothesis Ar ⇒* α2 and Ar,p ⇒* α2 correspond to subexecution sequences that have the desired properties. Consequently, Aq ⇒* α and Aq,p ⇒* α also correspond to subexecution sequences that have the desired properties.
2.  Aq ⇒ Ar,sAt ⇒* α1α2, or Aq,p ⇒ Ar,sAt,p ⇒* α1α2, where Ar,s ⇒* α1. In either case, by definition q corresponds to a call instruction, r is the state that P reaches from state q, s corresponds to a return instruction, and t is the state that P reaches from state s. However, by the induction hypothesis Ar,s ⇒* α1, At ⇒* α2, and At,p ⇒* α2 correspond to subexecution sequences that have the desired properties. Consequently, Aq ⇒* α1α2 and Aq,p ⇒* α1α2 also correspond to subexecution sequences that have the desired properties.

To show that L(P) is contained in L(G) it is sufficient to show that either of the following conditions holds for each subexecution sequence that reads α, starts at state q, ends at state p, and has at least as many executions of call instructions as of return instructions in each of its prefixes.

1.  If p corresponds to an accepting state, then G has a derivation of the form Aq ⇒* α.
2.  If p corresponds to a return instruction and the subexecution sequence has as many executions of call instructions as of return instructions, then G has a derivation of the form Aq,p ⇒* α.
The proof is by induction on the number of moves i in the subexecution sequences. For i = 0 the subexecution sequences consume no input, and for them G has the corresponding derivations Ap ⇒ ε and Ap,p ⇒ ε, respectively.

For i > 0 either of the following cases must hold.

1.  q does not correspond to a call instruction, or q corresponds to a call instruction that is not matched in the subexecution sequence by a return instruction. In such a case, by executing a single instruction segment the subexecution sequences in question enter some state r from state q while consuming some input α1.

Consequently, by definition, the grammar G has a production rule of the form Aq → α1Ar if p is an accepting state, and a production rule of the form Aq,p → α1Ar,p if p corresponds to a return instruction.

However, by the induction hypothesis the i − 1 moves that start at state r have in G a corresponding derivation of the form Ar ⇒* α2 if p is an accepting state, and of the form Ar,p ⇒* α2 if p corresponds to a return instruction. α2 is assumed to satisfy α1α2 = α.

2.  q corresponds to a call instruction that is matched in the subexecution sequence by a return instruction. In such a case the subexecution sequence from state q enters some state r by executing the call instruction that corresponds to state q. Moreover, the subexecution sequence has a corresponding execution of a return instruction that takes the subexecution sequence from some state s to some state t.

Consequently, by definition, the grammar G has a production rule of the form Aq → Ar,sAt if p is an accepting state, and a production rule of the form Aq,p → Ar,sAt,p if p corresponds to a return instruction.

However, by the induction hypothesis, the grammar G has a derivation of the form Ar,s ⇒* α1 for the input α1 that the subexecution sequence consumes between states r and s. In addition, G has either a derivation of the form At ⇒* α2 or a derivation of the form At,p ⇒* α2, respectively, for the input α2 that the subexecution sequence consumes between states t and p, depending on whether p is an accepting state or corresponds to a return instruction.

Example 3.3.5    Let P be the recursive finite-domain program in Figure 3.3.5(a), with {a, b} as a domain of the variables and a as initial value.

```
call f(x)             /* I1 */
if eof then accept    /* I2 */
reject                /* I3 */

procedure f(x)
  do                  /* I4 */
    return            /* I5 */
  or
    read x            /* I6 */
    call f(x)         /* I7 */
  until x = a         /* I8 */
end
```
(a)
(b)
 Figure 3.3.5 The grammar in (b) generates the language accepted by the program in (a).

L(P) is generated by the grammar G, which has the production rules in Figure 3.3.5(b). [i, x] denotes a state of P that corresponds to the instruction segment Ii, with value x in the variable x.

The derivation tree for the string abb in the grammar G, and the corresponding transitions between the states of the program P on input "a, b, b", are shown in Figure 3.3.6. The nonterminal symbol A[1,a] indicates that the computation of P has to start at state [1, a] and end at an accepting state. The production rule A[1,a] → A[4,a],[5,b]A[2,b] corresponds to a call to f that returns with the value b.

 Figure 3.3.6 A correspondence between a derivation tree and a computation of a recursive finite-domain program.

Context-free grammars do not resemble pushdown automata in the way that Type 3 grammars resemble finite-state automata. The difference arises because derivations in context-free grammars are recursive in nature, whereas computations of pushdown automata are iterative.

Consequently, some context-free languages can be more easily characterized by context-free grammars, and other context-free languages can be more easily characterized by pushdown automata.
