
### 4.6 Turing Machines and Type 0 Languages

The classes of languages that are accepted by finite-state automata on the one hand and pushdown automata on the other hand were shown earlier to be the classes of Type 3 and Type 2 languages, respectively. The following two theorems show that the class of languages accepted by Turing machines is the class of Type 0 languages.

Theorem 4.6.1    Each Type 0 language is a recursively enumerable language.

Proof     Consider any Type 0 grammar G = <N, Σ, P, S>. From G construct a two auxiliary-work-tape Turing machine MG that, on a given input x, nondeterministically generates some string w in L(G), and then accepts x if and only if x = w.

The Turing machine MG generates the string w by tracing a derivation in G of w from S. MG starts by placing the sentential form S in the first auxiliary work tape. Then MG repeatedly replaces the sentential form stored on the first auxiliary work tape with the one that succeeds it in the derivation. The second auxiliary work tape is used as an intermediate memory, while deriving the successor of each of the sentential forms.

The successor of each sentential form is obtained by nondeterministically searching for a substring α such that α → β is a production rule in G, and then replacing α with β in the sentential form.

MG uses a subcomponent M1 to copy the prefix of the sentential form that precedes α onto the second auxiliary work tape.

MG uses a subcomponent M2 to read α from the first auxiliary work tape and replace it with β on the second.

MG uses a subcomponent M3 to copy the suffix of the sentential form that succeeds α onto the second auxiliary work tape.

MG uses a subcomponent M4 to copy the sentential form created on the second auxiliary work tape back onto the first. In addition, MG uses M4 to determine whether the new sentential form is a string w in L(G). If it is, then the control is passed to a subcomponent M5. Otherwise, the control is passed to M1.

MG uses the subcomponent M5 to determine whether the input string x is equal to the string w stored on the first auxiliary work tape.
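MG's generate-and-compare strategy can be sketched in code. The sketch below replaces MG's nondeterministic choices (which production to apply, and where) with a deterministic breadth-first search over sentential forms; the function name, the step bound, and the length prune are pragmatic additions, not part of the construction, since like MG itself an unbounded search need not halt on strings outside L(G).

```python
from collections import deque

def accepts(productions, start, x, max_steps=100000):
    """Generate-and-compare: search the derivations of G for the string x.

    Breadth-first enumeration stands in for M_G's nondeterministic choice
    of derivation.  Forms much longer than x are pruned as a heuristic.
    """
    seen = {start}
    queue = deque([start])
    for _ in range(max_steps):
        if not queue:
            return False          # search space exhausted under the prune
        w = queue.popleft()
        if w == x:
            return True           # x = w for some derivable w
        for lhs, rhs in productions:
            i = w.find(lhs)
            while i != -1:        # try every occurrence of the left-hand side
                u = w[:i] + rhs + w[i + len(lhs):]
                if len(u) <= 2 * len(x) + 4 and u not in seen:
                    seen.add(u)
                    queue.append(u)
                i = w.find(lhs, i + 1)
    return False
```

For instance, with the Type 2 rules S → aSb and S → ε, `accepts([("S", "aSb"), ("S", "")], "S", "aabb")` yields True, while the same call on "aab" exhausts the pruned search space and yields False.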

Example 4.6.1    Consider the grammar G which has the two production rules S → aSbS and Sb → ε.

The language L(G) is accepted by the Turing machine MG, whose transition diagram is given in Figure 4.6.1.

 Figure 4.6.1 A Turing machine MG for simulating the grammar G that has the production rules S → aSbS and Sb → ε.

The components M1, M2, and M3 scan from left to right the sentential form stored on the first auxiliary work tape. As the components scan the tape they erase its content.

The component M2 of MG uses two different sequences of transition rules for the first and second production rules: S → aSbS and Sb → ε. The sequence of transition rules that corresponds to S → aSbS removes S from the first auxiliary work tape and stores aSbS on the second. The sequence of transition rules that corresponds to Sb → ε removes Sb from the first auxiliary work tape and stores nothing on the second.

The component M4 scans from right to left the sentential form in the second auxiliary work tape, erasing the content of the tape during the scanning. M4 starts scanning the sentential form in its first state, determining that the sentential form is a string of terminal symbols if it reaches the blank symbol B while in the first state. In such a case, M4 transfers the control to M5. M4 determines that the sentential form is not a string of terminal symbols if it reaches a nonterminal symbol. In this case, M4 switches from its first to its second state.
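M4's right-to-left check can be pictured as follows. This is only a sketch: the function name is illustrative, and the nonterminal set {S} is the one from this example's grammar.

```python
def m4_accepts_form(sentential_form, nonterminals=frozenset("S")):
    # M4 scans the sentential form from right to left in its first state;
    # seeing a nonterminal switches it to its second state, meaning the
    # form is not yet a string of terminal symbols.
    for symbol in reversed(sentential_form):
        if symbol in nonterminals:
            return False  # second state: control eventually returns to M1
    return True           # blank B reached in the first state: go to M5
```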

Theorem 4.6.2    Each recursively enumerable language is a Type 0 language.

Proof     The proof consists of constructing from a given Turing machine M a grammar that can simulate the computations of M. The constructed grammar G consists of three groups of production rules.

The purpose of the first group is to determine the following three items.

1.  An initial configuration of M on some input.
2.  Some segment for each auxiliary work tape of M. Each segment must include the location scanned by the head of the corresponding tape.
3.  Some sequence of transition rules of M. The sequence of transition rules must start at the initial state, end at an accepting state, and be compatible in the transitions that it allows between the states.
The group of production rules can specify any initial configuration of M, any segment of an auxiliary work tape that satisfies the above conditions, and any sequence of transition rules that satisfies the above conditions.

The purpose of the second group of production rules is to simulate a computation of M. The simulation must start at the configuration determined by the first group. In addition, the simulation must be in accordance with the sequence of transition rules, and within the segments of the auxiliary work tapes determined by the first group.

The purpose of the third group of production rules is to extract the input whenever an accepting computation has been simulated, and to leave nonterminal symbols in the sentential form in the other cases. Consequently, the grammar can generate a given string if and only if the Turing machine M has an accepting computation on the string.

Consider any Turing machine M = <Q, Σ, Γ, δ, q0, B, F>. With no loss of generality it can be assumed that M is a two auxiliary-work-tape Turing machine (see Theorem 4.3.1 and Proposition 4.3.1), that no transition rule originates at an accepting state, and that N = { τ | τ is a transition rule in δ } ∪ { [q] | q is in Q } ∪ {¢, $, ↑, ↑1, ↑2, #, S, A, C, D, E, F, K} is a multiset whose symbols are all distinct. The symbols ↑, ↑1, and ↑2 will mark the positions of the input head and of the heads of the first and second auxiliary work tapes, respectively.

From M construct a grammar G = <N, , P, S> that generates L(M), by tracing in its derivations the configurations that M goes through in its accepting computations. The production rules in P are of the following form.

1.  Production rules for generating any sentential form that has the following pattern.

¢↑a1...an$B...B↑1B...B#B...B↑2B...B#τi1...τit

Each such sentential form corresponds to an initial configuration (¢q0a1...an$, q0, q0) of M, and a sequence of transition rules τi1...τit. The transition rules define a sequence of compatible states that starts at the initial state and ends at an accepting state. ↑ represents the input head, ↑1 represents the head of the first auxiliary work tape, and ↑2 represents the head of the second auxiliary work tape. The string B...B↑1B...B corresponds to a segment of the first auxiliary work tape, and the string B...B↑2B...B to a segment of the second.

A string in the language is derivable from the sentential form if and only if the following three conditions hold.

1.  The string is equal to a1...an.
2.  M accepts a1...an in a computation that uses the sequence of transition rules τi1...τit.
3.  B...B↑iB...B corresponds to a segment of the ith auxiliary work tape that is sufficiently large for the considered computation of M, 1 ≤ i ≤ 2. The position of ↑i in the segment indicates the initial location of the corresponding auxiliary work-tape head in the segment.
The production rules are of the following form, where a stands for an arbitrary symbol in Σ.

S → ¢↑A        A → aA        A → $C
C → BC        C → ↑1D        D → BD        D → #E
E → BE        E → ↑2F        F → BF        F → #[q0]
[q] → τ[q′] for each transition rule τ that takes M from state q to state q′, and [q] → ε for each accepting state q

The production rules for the nonterminal symbols S and A can generate a string of the form ¢↑a1...an$C for each possible input a1...an of M. The production rules for the nonterminal symbols C and D can generate a string of the form B...B↑1B...B#E for each possible segment B...B↑1B...B of the first auxiliary work tape that contains the corresponding head location. The production rules for E and F can generate a string of the form B...B↑2B...B#[q0] for each possible segment B...B↑2B...B of the second tape that contains the corresponding head location. The production rules for the nonterminal symbols that correspond to the states of M can generate any sequence τi1...τit of transition rules of M that starts at the initial state, ends at an accepting state, and is compatible in the transitions between the states.
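The shape of the sentential forms that group (1) derives can be pictured with a small helper. This is hypothetical illustration code: it writes the head markers as the two-character strings "↑1" and "↑2" and the transition-rule symbols as "t1", "t3", ..., whereas the grammar uses single distinct symbols for each.

```python
def group1_form(x, left1, right1, left2, right2, rules):
    # Input section with endmarkers and input-head marker, then the two
    # blank work-tape segments with their head markers, then the guessed
    # sequence of transition-rule symbols.
    tape1 = "B" * left1 + "↑1" + "B" * right1   # segment of work tape 1
    tape2 = "B" * left2 + "↑2" + "B" * right2   # segment of work tape 2
    return "¢↑" + x + "$" + tape1 + "#" + tape2 + "#" + "".join(rules)

print(group1_form("ab", 1, 2, 0, 1, ["t1", "t3", "t5"]))
# ¢↑ab$B↑1BB#↑2B#t1t3t5
```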

2.  Production rules for deriving from a sentential form which corresponds to a configuration (uqv$, u1qv1, u2qv2) of M, a sentential form which corresponds to a configuration (ûq̂v̂$, û1q̂v̂1, û2q̂v̂2). The two configurations are assumed to be configurations of M such that the second is reachable from the first by a move that uses the transition rule τij.

For each transition rule τ the set of production rules has

1.  A production rule of the form Xτ → τX for each X in Σ ∪ Γ ∪ {¢, $, #}.
2.  A production rule of the form ↑aτ → τ↑a, for each symbol a in Σ ∪ {¢, $} that satisfies the following condition: τ is a transition rule that scans the symbol a on the input tape without moving the input head.
3.  A production rule of the form ↑aτ → τa↑, for each symbol a in Σ ∪ {¢, $} that satisfies the following condition: τ is a transition rule that scans the symbol a on the input tape while moving the input head one position to the right.
4.  A production rule of the form a↑bτ → τ↑ab, for each pair of symbols a and b in Σ ∪ {¢, $} that satisfy the following condition: τ is a transition rule that scans the symbol b on the input tape while moving the input head one position to the left.
5.  A production rule of the form ↑iXτ → τ↑iY for each 1 ≤ i ≤ 2, and for each pair of symbols X and Y in Γ that satisfy the following condition: τ is a transition rule that replaces X with Y in the ith auxiliary work tape without changing the head position.
6.  A production rule of the form ↑iXτ → τY↑i for each 1 ≤ i ≤ 2, and for each pair of symbols X and Y in Γ that satisfy the following condition: τ is a transition rule that replaces X with Y in the ith auxiliary work tape while moving the corresponding head one position to the right.
7.  A production rule of the form X↑iYτ → τ↑iXZ for each 1 ≤ i ≤ 2, and for each triplet of symbols X, Y, and Z in Γ that satisfy the following condition: τ is a transition rule that replaces the symbol Y with Z in the ith auxiliary work tape while moving the corresponding head one position to the left.
The purpose of the production rules in (1) is to transport τ from right to left over the nonhead symbols in Σ ∪ Γ ∪ {¢, $, #}, across a representation

¢...↑...$...↑1...#...↑2...#

of a configuration of M. τ gets across the head symbols ↑, ↑1, and ↑2 by using the production rules in (2) through (7). As τ gets across the head symbols, the production rules in (2) through (7) "simulate" the changes that the transition rule τ makes in the tapes of M and in the corresponding head positions.
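The transport mechanism of (1) can be watched in isolation. In this toy sketch the single character "t" stands for one transition-rule symbol, and the rule set covers only the nonhead symbols that actually occur in the sample string; both are illustrative assumptions.

```python
def apply_first(w, rules):
    # Apply the first production that matches, at its leftmost occurrence.
    for lhs, rhs in rules:
        i = w.find(lhs)
        if i != -1:
            return w[:i] + rhs + w[i + len(lhs):]
    return None

# Productions of the form Xt -> tX, one per nonhead symbol X.
transport = [(X + "t", "t" + X) for X in "ab¢$#B"]

w = "¢ab$t"
while (step := apply_first(w, transport)) is not None:
    w = step
print(w)  # t¢ab$ : 't' has crossed every nonhead symbol, right to left
```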

3.  Production rules for extracting, from a sentential form which corresponds to an accepting configuration of M, the input that M accepts. These production rules erase everything but the input string a1...an: the endmarkers ¢ and $, the separators #, the head symbols, the blank symbols B of the work-tape segments, and the symbols of the transition rules used in the simulated computation.

Example 4.6.2    Let M be the Turing machine whose transition diagram is given in Figure 4.5.6(a). L(M) is generated by the grammar G that consists of the following production rules.

1.  Production rules that generate a sentential form which corresponds to the initial configuration of M, according to (1) in the proof of Theorem 4.6.2.

2.  "Transporting" production rules that correspond to (2.1) in the proof of Theorem 4.6.2, 1 ≤ i ≤ 5.

3.  "Simulating" production rules that correspond to (2.2)-(2.4) in the proof of Theorem 4.6.2.

4.  "Simulating" production rules that correspond to (2.5)-(2.7) in the proof of Theorem 4.6.2.

5.  "Extracting" production rules that correspond to (3) in the proof of Theorem 4.6.2.

The string abc has a leftmost derivation in G that traces an accepting computation of M on abc.

Theorem 4.6.2, together with Theorem 4.5.3, implies the following result.

Corollary 4.6.1    The membership problem is undecidable for Type 0 grammars or, equivalently, for { (G, x) | G is a Type 0 grammar, and x is in L(G) }.

A context-sensitive grammar is a Type 1 grammar in which each production rule has the form α1Aα2 → α1βα2 for some nonterminal symbol A and some nonempty string β. Intuitively, a production rule of the form α1Aα2 → α1βα2 indicates that A can be replaced by β only when it appears within the left context α1 and the right context α2. A language is said to be a context-sensitive language if it can be generated by a context-sensitive grammar.

A language is context-sensitive if and only if it is a Type 1 language (Exercise 4.6.4), and if and only if it is accepted by a linear bounded automaton (Exercise 4.6.5). By definition and Theorem 3.3.1, each context-free language is also context-sensitive, but the converse is false because the non-context-free language { aibici | i ≥ 0 } is context-sensitive. It can also be shown that each context-sensitive language is recursive (Exercise 1.4.4), and that the recursive language LLBA_reject = { x | x = xi and Mi does not have accepting computations on input xi in which at most |xi| locations are visited in each auxiliary work tape } is not context-sensitive (Exercise 4.5.6).
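A grammar with the rules below generates aibici for i ≥ 1; it is noncontracting (Type 1), so no rule ever shrinks a sentential form, and membership can be decided by an exhaustive search bounded by |x|. This is in line with context-sensitive languages being recursive. The grammar and search function are illustrative sketches, not taken from the text.

```python
def generates(productions, start, x):
    # Noncontracting rules never shrink a sentential form, so pruning at
    # length |x| keeps the search finite while remaining exhaustive:
    # membership in the generated language is decidable.
    seen, stack = {start}, [start]
    while stack:
        w = stack.pop()
        if w == x:
            return True
        for lhs, rhs in productions:
            i = w.find(lhs)
            while i != -1:        # try every occurrence of the left-hand side
                u = w[:i] + rhs + w[i + len(lhs):]
                if len(u) <= len(x) and u not in seen:
                    seen.add(u)
                    stack.append(u)
                i = w.find(lhs, i + 1)
    return False

# A noncontracting grammar for { a^i b^i c^i | i >= 1 }.
csg = [("S", "aSBC"), ("S", "aBC"), ("CB", "BC"),
       ("aB", "ab"), ("bB", "bb"), ("bC", "bc"), ("cC", "cc")]
```

Here `generates(csg, "S", "aabbcc")` returns True, for example via S ⇒ aSBC ⇒ aaBCBC ⇒ aaBBCC ⇒ aabBCC ⇒ aabbCC ⇒ aabbcC ⇒ aabbcc, while `generates(csg, "S", "aabbc")` terminates and returns False.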

Figure 4.6.2 gives the hierarchy of some classes of languages. All the inclusions in the hierarchy are proper.

 Figure 4.6.2 Hierarchy of some classes of languages. Each of the indicated languages belongs to the corresponding class but not to the class just below it in the hierarchy.
