The classes of languages that are accepted by finite-state automata on the one hand and pushdown automata on the other hand were shown earlier to be the classes of Type 3 and Type 2 languages, respectively. The following two theorems show that the class of languages accepted by Turing machines is the class of Type 0 languages.
Theorem 4.6.1 Each Type 0 language is a recursively enumerable language.
Proof Consider any Type 0 grammar G = <N, Σ, P, S>. From G construct a two-auxiliary-work-tape Turing machine M_{G} that on a given input x nondeterministically generates some string w in L(G), and then accepts x if and only if x = w.
The Turing machine M_{G} generates the string w by tracing a derivation in G of w from S. M_{G} starts by placing the sentential form S in the first auxiliary work tape. Then M_{G} repeatedly replaces the sentential form stored on the first auxiliary work tape with the one that succeeds it in the derivation. The second auxiliary work tape is used as an intermediate memory, while deriving the successor of each of the sentential forms.
The successor of each sentential form γ is obtained by nondeterministically searching γ for a substring α, such that α → β is a production rule in G, and then replacing α by β in γ.
M_{G} uses a subcomponent M_{1} to copy the prefix of γ that precedes α onto the second auxiliary work tape.
M_{G} uses a subcomponent M_{2} to read α from the first auxiliary work tape and replace it by β on the second.
M_{G} uses a subcomponent M_{3} to copy the suffix of γ that succeeds α onto the second auxiliary work tape.
M_{G} uses a subcomponent M_{4} to copy the sentential form created on the second auxiliary work tape onto the first. In addition, M_{G} uses M_{4} to determine whether the new sentential form is a string w of terminal symbols, and hence a string in L(G). If so, the control is passed to a subcomponent M_{5}. Otherwise, the control is passed to M_{1}.
M_{G} uses the subcomponent M_{5} to determine whether the input string x is equal to the string w stored on the first auxiliary work tape.
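The generate-and-compare strategy of M_{G} can be approximated by a deterministic search that tries every applicable rewrite and accepts x exactly when some derivation from the start symbol yields x. The sketch below is illustrative: the grammar shown (for { a^{n}b^{n} | n ≥ 1 }) is not the book's example, and the explicit length bound (needed to make the exhaustive search terminate, and exact only when no rule shortens a sentential form) has no counterpart in the nondeterministic M_{G}.

```python
from collections import deque

def derives(productions, start, x, max_len):
    """Search every derivation from `start`, replacing a substring
    alpha by beta for each production rule (alpha, beta), and report
    whether the string x is derivable, i.e. whether M_G can accept x."""
    seen = {start}
    queue = deque([start])
    while queue:
        form = queue.popleft()
        if form == x:
            return True
        for alpha, beta in productions:
            i = form.find(alpha)
            while i != -1:            # every occurrence of alpha in form
                successor = form[:i] + beta + form[i + len(alpha):]
                if len(successor) <= max_len and successor not in seen:
                    seen.add(successor)
                    queue.append(successor)
                i = form.find(alpha, i + 1)
    return False

# Hypothetical grammar for { a^n b^n | n >= 1 }:
G = [("S", "aSb"), ("S", "ab")]
print(derives(G, "S", "aabb", max_len=4))  # True
print(derives(G, "S", "aab", max_len=3))   # False
```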
Example 4.6.1 Consider the grammar G which has the following production rules.

S → aSbS
Sb → ε
The language L(G) is accepted by the Turing machine M_{G}, whose transition diagram is given in Figure 4.6.1.

The components M_{1}, M_{2}, and M_{3} scan from left to right the sentential form stored on the first auxiliary work tape. As the components scan the tape they erase its content.
The component M_{2} of M_{G} uses two different sequences of transition rules for the first and second production rules: S → aSbS and Sb → ε. The sequence of transition rules that corresponds to S → aSbS removes S from the first auxiliary work tape and stores aSbS on the second. The sequence of transition rules that corresponds to Sb → ε removes Sb from the first auxiliary work tape and stores nothing on the second.
The component M_{4} scans from right to left the sentential form in the second auxiliary work tape, erasing the content of the tape during the scanning. M_{4} starts scanning the sentential form in its first state, determining that the sentential form is a string of terminal symbols if it reaches the blank symbol B while in the first state. In such a case, M_{4} transfers the control to M_{5}. M_{4} determines that the sentential form is not a string of terminal symbols if it reaches a nonterminal symbol. In this case, M_{4} switches from its first to its second state.
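M_{4}'s two-state check can be mirrored in a few lines. The sketch below assumes, as in this example, that S is the only nonterminal symbol; the function name and the encoding of the tape content as a Python string are ours.

```python
def m4_scan(tape, nonterminals={"S"}):
    """Mimic M_4: scan the sentential form right to left, remaining in
    the first state while only terminal symbols are seen and switching
    to the second state on reaching a nonterminal.  Reaching the end of
    the form (the blank symbol B) in the first state means the form is
    a string of terminal symbols."""
    state = 1
    for symbol in reversed(tape):   # right-to-left scan
        if symbol in nonterminals:
            state = 2               # a nonterminal was found
    return state == 1               # True: pass control to M_5

print(m4_scan("abab"))  # True  -- all terminal symbols
print(m4_scan("aSb"))   # False -- S remains in the form
```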
Theorem 4.6.2 Each recursively enumerable language is a Type 0 language.
Proof The proof consists of constructing from a given Turing machine M a grammar that can simulate the computations of M. The constructed grammar G consists of three groups of production rules.
The purpose of the first group is to determine the following three items: an input string for M, the segments of the auxiliary work tapes to be used in the simulation, and a sequence of transition rules that starts at the initial state of M and ends at an accepting state.
The purpose of the second group of production rules is to simulate a computation of M. The simulation must start at the configuration determined by the first group. In addition, the simulation must be in accordance with the sequence of transition rules, and within the segments of the auxiliary work tapes determined by the first group.
The purpose of the third group of production rules is to extract the input whenever an accepting computation has been simulated, and to leave nonterminal symbols in the sentential form in the other cases. Consequently, the grammar can generate a given string if and only if the Turing machine M has an accepting computation on the string.
Consider any Turing machine M = <Q, Σ, Γ, δ, q_{0}, B, F>. With no loss of generality it can be assumed that M is a two-auxiliary-work-tape Turing machine (see Theorem 4.3.1 and Proposition 4.3.1), that no transition rule originates at an accepting state, and that N = { τ | τ is in δ } ∪ { [q] | q is in Q } ∪ {¢, $, X, Y, Z, #, S, A, C, D, E, F, K} is a multiset whose symbols are all distinct.
From M construct a grammar G = <N, Σ, P, S> that generates L(M), by tracing in its derivations the configurations that M goes through in its accepting computations. The production rules in P are of the following form.
Each such sentential form corresponds to an initial configuration (¢q_{0}a_{1} ... a_{n}$, q_{0}, q_{0}) of M, and a sequence of transition rules τ_{i1} ... τ_{it}. The transition rules define a sequence of compatible states that starts at the initial state and ends at an accepting state. The symbol X represents the input head, Y represents the head of the first auxiliary work tape, and Z represents the head of the second auxiliary work tape. The string B ... BYB ... B corresponds to a segment of the first auxiliary work tape, and the string B ... BZB ... B to a segment of the second.
A string in the language is derivable from the sentential form if and only if the guesses recorded in the sentential form are consistent, that is, if and only if the sequence of transition rules defines an accepting computation of M on the given input that is conducted within the given segments of the auxiliary work tapes.
The production rules for the nonterminal symbols S and A can generate a string of the form ¢Xa_{1} ... a_{n}$C for each possible input a_{1} ... a_{n} of M. The production rules for the nonterminal symbols C and D can generate a string of the form B ... BYB ... B#E for each possible segment B ... BYB ... B of the first auxiliary work tape that contains the corresponding head location Y. The production rules for E and F can generate a string of the form B ... BZB ... B#[q_{0}] for each possible segment B ... BZB ... B of the second tape that contains the corresponding head location Z. The production rules for the nonterminal symbols that correspond to the states of M can generate any sequence τ_{i1} ... τ_{it} of transition rules of M that starts at the initial state, ends at an accepting state, and is compatible in the transitions between the states.
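The compatibility requirement on the guessed sequence τ_{i1} ... τ_{it} (each rule must leave M in the state at which the next rule originates) can be illustrated with a short sketch. The rule names, the two-state machine, and the length bound below are all hypothetical, not part of the construction.

```python
def rule_sequences(rules, q0, accepting, max_len):
    """Enumerate every sequence of transition rules that starts at the
    initial state q0, ends at an accepting state, and is compatible in
    the transitions between the states.  `rules` maps a rule name to a
    (source state, target state) pair; accepting states are assumed to
    have no outgoing rules, as in the theorem."""
    results = []

    def extend(sequence, state):
        if len(sequence) > max_len:
            return                      # bound the search
        if sequence and state in accepting:
            results.append(tuple(sequence))
        for name, (src, dst) in rules.items():
            if src == state:            # compatibility check
                extend(sequence + [name], dst)

    extend([], q0)
    return results

# Hypothetical machine: t1 loops on q0, t2 moves q0 -> qf (accepting).
rules = {"t1": ("q0", "q0"), "t2": ("q0", "qf")}
print(sorted(rule_sequences(rules, "q0", {"qf"}, max_len=3)))
# [('t1', 't1', 't2'), ('t1', 't2'), ('t2',)]
```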
The second group allows the derivation, from a sentential form which corresponds to a configuration C = (¢uqv$, u_{1}qv_{1}, u_{2}qv_{2}), of a sentential form which corresponds to a configuration Ĉ = (¢ûq̂v̂$, û_{1}q̂v̂_{1}, û_{2}q̂v̂_{2}). C and Ĉ are assumed to be two configurations of M such that Ĉ is reachable from C by a move that uses the transition rule τ_{ij}.
For each transition rule τ_{ij}, the set of production rules contains rules, (1) through (7), for moving τ_{ij} from right to left across the representation of a configuration of M. τ_{ij} gets across the head symbols X, Y, and Z by using the production rules in (2) through (7). As τ_{ij} gets across the head symbols, the production rules in (2) through (7) "simulate" the changes that the transition rule τ_{ij} causes in the tapes of M and in the positions of the corresponding heads.
The third group allows the extraction, from a sentential form which corresponds to an accepting configuration of M, of the input that M accepts. The production rules are as follows.
Example 4.6.2 Let M be the Turing machine whose transition diagram is given in Figure 4.5.6(a). L(M) is generated by the grammar G that consists of the following production rules.
The string abc has a leftmost derivation of the following form in G.
Theorem 4.6.2, together with Theorem 4.5.3, implies the following result.
Corollary 4.6.1 The membership problem is undecidable for Type 0 grammars or, equivalently, for { (G, x) | G is a Type 0 grammar, and x is in L(G) }.
A context-sensitive grammar is a Type 1 grammar in which each production rule has the form α_{1}Aα_{2} → α_{1}βα_{2} for some nonterminal symbol A. Intuitively, a production rule of the form α_{1}Aα_{2} → α_{1}βα_{2} indicates that A can be replaced by β only when it is within the left context α_{1} and the right context α_{2}. A language is said to be a context-sensitive language if it can be generated by a context-sensitive grammar.
A language is context-sensitive if and only if it is a Type 1 language (Exercise 4.6.4), and if and only if it is accepted by a linear bounded automaton (Exercise 4.6.5). By definition and Theorem 3.3.1, each context-free language is also context-sensitive, but the converse is false because the non-context-free language { a^{i}b^{i}c^{i} | i ≥ 0 } is context-sensitive. It can also be shown that each context-sensitive language is recursive (Exercise 1.4.4), and that the recursive language L_{LBA_reject} = { x | x = x_{i} and M_{i} does not have accepting computations on input x_{i} in which at most |x_{i}| locations are visited in each auxiliary work tape } is not context-sensitive (Exercise 4.5.6).
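The claim that { a^{i}b^{i}c^{i} | i ≥ 0 } is context-sensitive can be made concrete with the classic noncontracting grammar for this language. The rules below are Type 1 (no rule shortens a sentential form) rather than literally of the form α_{1}Aα_{2} → α_{1}βα_{2}, and they generate the strings with i ≥ 1 (the empty string requires the usual special provision); the Python membership check is ours, and its length-based pruning is exact precisely because the rules are noncontracting.

```python
from collections import deque

# Classic noncontracting grammar for { a^i b^i c^i | i >= 1 }:
#   S -> abc | aSBc,   cB -> Bc,   bB -> bb
G = [("S", "abc"), ("S", "aSBc"), ("cB", "Bc"), ("bB", "bb")]

def derivable(x, productions=G, start="S"):
    """Breadth-first search over sentential forms.  Because no rule
    shortens a form, any form longer than x can be pruned, which makes
    the search both terminating and exact."""
    seen, queue = {start}, deque([start])
    while queue:
        form = queue.popleft()
        if form == x:
            return True
        for alpha, beta in productions:
            i = form.find(alpha)
            while i != -1:
                successor = form[:i] + beta + form[i + len(alpha):]
                if len(successor) <= len(x) and successor not in seen:
                    seen.add(successor)
                    queue.append(successor)
                i = form.find(alpha, i + 1)
    return False

print(derivable("aabbcc"))  # True
print(derivable("aabbc"))   # False
```

For example, aabbcc is reached by S ⇒ aSBc ⇒ aabcBc ⇒ aabBcc ⇒ aabbcc, using the rules cB → Bc and bB → bb to move B leftward past c and then absorb it.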
Figure 4.6.2
