JOURNAL OF COMPUTER AND SYSTEM SCIENCES 22, 312-327 (1981)

On Time versus Space II*

W. PAUL† AND R. REISCHUK‡

Fakultät für Mathematik, Universität Bielefeld, 4800 Bielefeld 1, Germany

Received January 5, 1981
Every t(n)-time bounded RAM (assuming the logarithmic cost measure) can be simulated by a t(n)/log t(n)-space bounded Turing machine and every t(n)-time bounded Turing machine with d-dimensional tapes by a 5^{d log* t(n)} t(n)/log t(n)-space bounded machine, where n is the length of the input. A class 𝒞 of storage structures which generalizes multidimensional tapes is defined. Every t(n)-time bounded Turing machine whose storage structures are in 𝒞 can be simulated by a t(n) loglog t(n)/log t(n)-space bounded Turing machine.
1. INTRODUCTION AND RESULTS
A basic question of computational complexity is whether storage space is a more powerful resource than computation time. For 1-tape Turing machines the question was answered in the affirmative by Hopcroft and Ullman in [6] and improved by Paterson in [9]. Recently Loui [8] generalized this result to multidimensional 1-tape machines. In 1975 Hopcroft et al. [5] proved that every t(n)-time bounded multitape Turing machine can be simulated by a t(n)/log t(n)-space bounded Turing machine. Here we extend this result to several other machine models, in particular to random access machines (RAMs) [1] and multidimensional [4] Turing machines.
In the instruction set of a RAM additions/subtractions are allowed, but multiplications/divisions are not. In this paper RAM time shall always mean time with logarithmic cost measure unless otherwise noted, and all logarithms are taken to the base 2. log* n is defined as

    min{ k ∈ ℕ | 2^{2^{.^{.^{2}}}} ≥ n },   where the tower on the left consists of k twos.
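For concreteness, the following small Python sketch (an illustration of ours, not used anywhere in the proofs) computes this quantity directly:

    def log_star(n):
        # Smallest k such that a tower of k twos is at least n (a tower of height 0 is 1).
        k = 0
        tower = 1
        while tower < n:
            tower = 2 ** tower
            k += 1
        return k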
For all kinds of Turing machine models we consider in this paper, it is assumed that the machines have an extra linear two-way read-only input tape and a linear write-only output tape.
* Paper presented at the 20th IEEE-FOCS.
† Part of this research was done while the first author was visiting the "Laboratoire de recherche en informatique de l'Université de Paris-Sud," supported by DAAD Grant 311-f-HSLA-soe.
‡ Research supported by DFG Grant Pa 248/1.
We don't obtain the result for RAMs and multidimensional Turing machines by a direct simulation of such machines; instead we consider tree-machines [4, 10].
These are Turing machines whose tapes are infinite rooted complete binary trees. The nodes of the trees correspond to tape cells; they can store symbols of a finite alphabet. There is one head for each tape which starts at the root, and the head motions are r, l (descend one step into the right, left subtree) and b (backtrack one step towards the root).
The reason to look at such machines is that the geometry of their storage is relatively simple; we don't have indirect addressing as for RAMs, nor possible loops in the path covered by a head as for multidimensional machines.
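As an operational illustration of such a tape (a minimal Python sketch of ours; the class name and the blank symbol are arbitrary), one tree tape with lazily created cells can be modelled as follows:

    class TreeTape:
        # One storage tree: an infinite rooted complete binary tree of cells,
        # created lazily and addressed by their 0/1 path from the root.
        def __init__(self, blank='B'):
            self.blank = blank
            self.cells = {}      # address (0/1 string) -> symbol
            self.head = ''       # address of the scanned cell; '' is the root

        def read(self):
            return self.cells.get(self.head, self.blank)

        def write(self, symbol):
            self.cells[self.head] = symbol

        def move(self, direction):
            if direction == 'l':                      # descend into the left subtree
                self.head += '0'
            elif direction == 'r':                    # descend into the right subtree
                self.head += '1'
            elif direction == 'b' and self.head:      # backtrack one step towards the root
                self.head = self.head[:-1]

The address convention (0 for the left son, 1 for the right son) is the one used in Section 2.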
We describe a space-efficient simulation of time bounded tree-machines which gives

THEOREM 1. Every t(n)-time bounded tree-machine can be simulated by a t(n)/log t(n)-space bounded Turing machine.

A simple construction shows how to simulate time bounded RAMs by tree machines without loss of time. These results imply

THEOREM 2. Every t(n)-time bounded RAM can be simulated by a t(n)/log t(n)-space bounded Turing machine.
In [11] it is proved that every t(n)-time bounded d-dimensional Turing machine (d ∈ ℕ) can be simulated by a 5^{d log* t(n)} t(n)-time bounded tree-machine. Therefore we also get

THEOREM 3. Every t(n)-time bounded d-dimensional Turing machine can be simulated by a 5^{d log* t(n)} t(n)/log t(n)-space bounded machine.

Next a class 𝒞 of storage structures for Turing machines is defined which generalizes multidimensional tapes. We then prove

THEOREM 4. Every t(n)-time bounded Turing machine with storage structures in 𝒞 can be simulated by a t(n) loglog t(n)/log t(n)-space bounded Turing machine.
Theorems 2 and 3 imply

COROLLARY 1. If t(n) is tape constructible then t(n)-space bounded Turing machines can accept more languages than o(t(n) log t(n))-time bounded RAMs and o(t(n) log t(n)/5^{d log* t(n)})-time bounded d-dimensional multitape Turing machines.

COROLLARY 2. The context sensitive languages cannot be recognized by o(n log n)-time bounded RAMs or o(n log n/5^{d log* n})-time bounded d-dimensional multitape Turing machines.

(The proofs follow from the hierarchy theorems. See, for example, Theorems 10.9 and 8.2 in [7].)
2. SPACE-EFFICIENT SIMULATION OF TREE-MACHINES
Let R be a t(n)-time bounded tree-machine. For each node in a storage tree of R we assume the edge to the left son to be labelled by 0 and the edge to the right son to be labelled by 1. The address of a node v in a tree T is defined to be the label of the path from the root of T to v. The address of a subtree T' of T is the address of the root of T'.
When trying a simulation of t steps of R in the spirit of [5] even the obvious description of a single head position (by the address of the node visited) needs Ω(t) bits if a head moves far from the root. We therefore define a tree machine to be d(n)-depth bounded if for all inputs of length n it only visits nodes of depth at most d(n) on any of its storage trees.
The simulation described below uses less space than the simulated machine R uses
time, only if R is o(t(n))-depth bounded. A tree machine may not work within that
depth bound, but it is possible to “compress” its computation to logarithmic depth;
that means:
LEMMA 1. Every t(n)-time bounded tree machine can be simulated by a
simultaneously O(t(n))-time bounded and O(log t(n))-depth bounded tree machine.
A proof of this lemma will be given in a later section. Theorem 1 now follows from
the following
LEMMA 2. Every simultaneously
t(n)-time
bounded and O(log t(n))-depth
bounded tree machine can be simulated by a t(n)/log t(n)-space bounded Turing
machine.
The proof uses simulation by recomputation and the argument of the pebble lemma
from [5]; the latter will have such a special form that we will analyze the simulation
algorithm directly rather than introduce a generalized pebble game.
Let R be a tree machine as in the hypothesis of the lemma with storage trees T_1,..., T_k. Let w be an input of length n for R and let us first assume t = t(n) to be known. The computation of R started with w can be described as a sequence of configurations C_0, C_1,..., C_t which specify the contents of the tape cells, the head positions and the state of the machine. The transition from C_{i-1} to C_i (0 < i ≤ t) is called the ith step of that computation. We say a node of a storage tree T of R is visited in configuration C_i if the head working in T scans that node in C_i. In what follows we will often use the following facts:

    If S is a set of nodes of T visited in consecutive configurations C_a,
    C_{a+1},..., C_b (0 ≤ a ≤ b ≤ t) then the subgraph T_S of T that is induced
    by S is a subtree of T with at most |S| nodes. Since R is depth
    bounded the address of T_S has length at most O(log t). The shape of T_S
    and the contents of S in a given configuration can be encoded on space
    O(|S|).                                                                (2.1)
    If S' is another set of nodes of T visited in C_r, C_{r+1},..., C_s
    (0 ≤ r ≤ s ≤ t) then T_S ∩ T_{S'} is a tree whose shape, address and
    contents can be encoded on space O(|S ∩ S'| + log t).                  (2.2)
We remark that for multidimensional Turing machines a statement similar to (2.1)
holds, but since multidimensional tapes regarded as graphs contain loops, (2.2) does
not hold.
It will be convenient to use the following notations: For 0 ≤ a ≤ t, shc(a) denotes the state and head positions of R in configuration C_a and the contents at C_a of those cells which are visited in C_a. Because of the depth bound any shc(a) can be encoded on space O(log t). Let x be a node of a storage tree and 1 ≤ a ≤ b ≤ t; then con(x; a, b) is defined to be the contents of x in configuration C_b if x is visited during C_{a-1}, C_a,..., C_{b-1}, otherwise it is the empty string.
The computation graph G of R given w is the graph with nodes {1,..., t}. For u < v there is an edge between nodes u and v if v = u + 1 or if R given w visits some storage location in configurations C_{u-1} and C_{v-1}, but not in between. For 1 ≤ a ≤ i < b ≤ t we denote by e(a, i) the number of edges in G between nodes from {a,..., i} and by ovl(a, i, b) the number of edges from {a,..., i} to {i + 1,..., b}. Clearly e(a, b) = e(a, i) + e(i + 1, b) + ovl(a, i, b).
ovl(a, i, b) measures some sort of overlap: the number of times information which was produced in steps a, a + 1,..., i is looked up in steps i + 1,..., b. Since the indegree of G is bounded by k + 1, we have for all 1 ≤ a ≤ b ≤ t: (b - a) ≤ e(a, b) ≤ (k + 1)(b - a); in particular:

    t - 1 ≤ e(1, t) ≤ (k + 1) t.                                           (2.3)
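Since e and ovl are plain edge counts, the identity e(a, b) = e(a, i) + e(i + 1, b) + ovl(a, i, b) is easy to check mechanically; the following sketch (our notation, with G given as a set of pairs (u, v), u < v) does just that:

    def e(edges, a, b):
        # number of edges of G with both endpoints in {a, ..., b}
        return sum(1 for (u, v) in edges if a <= u and v <= b)

    def ovl(edges, a, i, b):
        # number of edges of G leading from {a, ..., i} into {i + 1, ..., b}
        return sum(1 for (u, v) in edges if a <= u <= i and i + 1 <= v <= b)

    # For every a <= i < b:
    #     e(edges, a, b) == e(edges, a, i) + e(edges, i + 1, b) + ovl(edges, a, i, b)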
The simulation is done by a recursive procedure SIM(a, b, shc(a - 1), x), where a and b are integers, 1 ≤ a ≤ b ≤ t, and x is a storage location of R. It simulates steps a up to b and produces as output (shc(b), con(x; a, b)).
When started with parameters (a, b, shc(a - 1), x) the procedure is allowed to look up at no cost to the call the contents of any storage location of R in configuration C_{a-1}, say by using an oracle tape. When the procedure makes use of this option, we say that it asks an (a - 1)-question.
There will be a global space bound. Whenever any invocation of the procedure tries to exceed this space bound, the invocation fails. We now describe the procedure and prove then that SIM(1, t, shc(0), x) does not fail if the global space bound is O(t/log t).

Procedure SIM(a, b, shc(a - 1), x)

Run successively the three strategies below until one does not fail. For the last two strategies another parameter i is needed. Try successively i = a, a + 1,..., b - 1. If no strategy succeeds, fail.
Strategy 1 (chronological step by step simulation). The first strategy simulates the steps a up to b of R in chronological order (compare, for example, with the proof of Theorem 8 in [3]). The simulation starts by storing the head positions, the state of the machine and the symbols scanned just before step a (which is given by shc(a - 1)) on the workspace of SIM(a, b, shc(a - 1), x).
For u = a, a + 1,..., b the head motions and the symbols printed in step u are also stored one after another. If in step u a head moves to a cell y, the contents of y in C_u is needed for shc(b) if u = b, else for the simulation of step u + 1. First observe whether y has been visited between C_{a-1} and C_{u-1}. This and the contents of y (if need be) can be determined easily from the information stored up to now. If y has not been visited between C_{a-1} and C_{u-1} the contents of y is asked by an (a - 1)-question. After simulation of step b the output (shc(b), con(x; a, b)) can easily be computed from the stored head motions and symbols.
When this direct simulation tries to exceed the global space bound, then strategy 1 fails.
Strategy 2 (recompute overlap). Invoke SIM(a, i, shc(a - 1), x) to obtain (shc(i), con(x; a, i)). Invoke SIM(i + 1, b, shc(i), x) to obtain (shc(b), con(x; i + 1, b)). Whenever this call determines the contents of a cell y in configuration C_i by an i-question, invoke instead SIM(a, i, shc(a - 1), y). If y has not been visited in C_{a-1},..., C_{i-1} its contents in C_i equals its contents in C_{a-1}, which can be asked by an (a - 1)-question. If any of these calls fails, then strategy 2 fails. Otherwise the output (shc(b), con(x; a, b)) can easily be computed.
Strategy 3 (precompute overlap). Invoke SIM(a, i, shc(a - 1), x) to obtain (shc(i), con(x; a, i)). Ovl := ∅.
For each head position y in C_i call SIM(a, i, shc(a - 1), y) to determine whether y was visited between C_{a-1} and C_{i-1}. If so, store y and its contents in C_i in Ovl.
The following statement (2.4) holds at this point for u = i and stays valid after each execution of the loop (2.5).

    Ovl encodes the shape and contents in C_i of all those cells which were
    visited during C_{a-1},..., C_{i-1} and revisited during C_i,..., C_u.     (2.4)

    for u := i + 1 step 1 until b do begin
        From shc(u - 1) simulate step u of R.
        To obtain shc(u) it remains to compute for each head position y in
        C_u, but not in C_{u-1}, the contents of y at C_u. Since this equals the
        contents of y in C_{u-1}, invoke for u > i + 1 SIM(i + 1, u - 1,
        shc(i), y) to obtain con(y; i + 1, u - 1).
        If y was not visited between C_i and C_{u-1} then invoke SIM(a, i,
        shc(a - 1), y).
        If y was visited between C_{a-1} and C_{i-1}, then extend Ovl by y and
        con(y; a, i) (statement (2.4) holds again), else the contents of y in
        C_u equals its contents in C_{a-1} and can be determined by an
        (a - 1)-question.
    end.                                                                      (2.5)
Finally invoke SIM(i + 1, b, shc(i), x), replacing all i-questions by look-ups in Ovl or, if that does not work, by (a - 1)-questions, to get (shc(b), con(x; i + 1, b)). Compute con(x; a, b) from con(x; a, i) and con(x; i + 1, b).
If any of the recursive calls fails, strategy 3 fails.
end of procedure.
Obviously the whole computation of R can be simulated by SIM(1, t, shc(0), x), where x may be any storage location. Note that for a = 1, (a - 1)-questions have a trivial answer (all storage locations are empty at that time).
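The control flow of the procedure can be summarized by the following Python-style sketch (our notation; strategy1, strategy2 and strategy3 stand for the bodies described above and are only stubbed here, and the order in which the pairs of strategy and cut point i are tried is one possible reading of the description):

    class SpaceBoundExceeded(Exception):
        # Raised whenever an invocation tries to exceed the global space bound.
        pass

    def strategy1(a, b, shc_before, x, bound):
        # Chronological step-by-step simulation (strategy 1); body omitted in this sketch.
        raise SpaceBoundExceeded

    def strategy2(a, b, shc_before, x, i, bound):
        # Recompute the overlap by recursive calls of SIM (strategy 2); body omitted.
        raise SpaceBoundExceeded

    def strategy3(a, b, shc_before, x, i, bound):
        # Precompute the overlap into Ovl (strategy 3); body omitted.
        raise SpaceBoundExceeded

    def SIM(a, b, shc_before, x, bound):
        # Returns (shc(b), con(x; a, b)); raises SpaceBoundExceeded if every attempt fails.
        try:
            return strategy1(a, b, shc_before, x, bound)
        except SpaceBoundExceeded:
            pass
        for strategy in (strategy2, strategy3):
            for i in range(a, b):                    # i = a, a + 1, ..., b - 1
                try:
                    return strategy(a, b, shc_before, x, i, bound)
                except SpaceBoundExceeded:
                    continue
        raise SpaceBoundExceeded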
We now analyze the space requirements of this procedure. Let P(a, b) denote the maximum space used by SIM(a, b, shc(a - 1), x) for any tape cell x. For 1 ≤ a ≤ i < b ≤ t, appropriate constants A, B, C ≥ 1 and f(t) = A log t, we get by (2.2)

    P(a, b) ≤ f(t) + min_{a ≤ i < b} { B · (b - a),                                        [strategy 1]
                                       P(a, i) + P(i + 1, b),                              [strategy 2]
                                       max_{i+1 ≤ u ≤ b} max(P(a, i), P(i + 1, u)) + C · ovl(a, i, b) }.   [strategy 3]
If e(a, b) is a large enough multiple of f(t), then focus on the maximum i such that e(a, i) is less than e(a, b)/2 - r(a, b), where r(a, b) = e(a, b)/log(e(a, b)/f(t)). Since increasing i by 1 increases e(a, i) by at most k + 1 we have

    e(a, b)/2 - r(a, b) - (k + 1) ≤ e(a, i) < e(a, b)/2 - r(a, b).                        (2.6)

This implies for e(i + 1, b) = e(a, b) - e(a, i) - ovl(a, i, b):

    e(i + 1, b) ≤ e(a, b)/2 + r(a, b) + (k + 1) - ovl(a, i, b).                           (2.7)

If the overlap ovl(a, i, b) is relatively large, that means ovl(a, i, b) ≥ 2r(a, b) + k + 1, we focus on strategy 2 that recomputes the overlap, otherwise on strategy 3 to precompute it. Let us define

    Q(m) = max_{1 ≤ a ≤ b ≤ t, e(a, b) ≤ m} P(a, b).
Then we get, using (2.6) and (2.7),

    P(a, b) ≤ f(t) + P(a, i) + P(i + 1, b)                    if ovl(a, i, b) ≥ 2r(a, b) + k + 1,

    P(a, b) ≤ f(t) + Q(e(a, b)/2 + r(a, b) + k + 1) + C(2r(a, b) + k)    if ovl(a, i, b) ≤ 2r(a, b) + k.

Since the bounds are monotonic in e(a, b) we get

    Q(m) ≤ f(t) + max{ 2Q(m/2 - m/log(m/f(t))),
                       Q(m/2 + m/log(m/f(t)) + k + 1) + C(2m/log(m/f(t)) + k) }           (2.8)

if m is a large enough multiple of f(t).
From this and the general bound Q(m) ≤ f(t) + B · m, it follows by a messy induction (Section 5) that Q(m) = O(m/log(m/f(t))). In particular P(1, t) = Q(O(t)) = O(t/log t).
If the time bound t = t(n) of the simulated machine R is not known at the beginning of the simulation we try t = n, 2n, 4n,... until SIM succeeds. ∎
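The doubling of the time guess can be pictured as follows (a sketch of ours; run_SIM stands for an execution of SIM(1, t, shc(0), x) under the global space bound derived from the guess, returning None when it fails):

    def simulate_unknown_time_bound(n, run_SIM):
        t = n
        while True:
            result = run_SIM(t)          # one attempt with guessed time bound t
            if result is not None:
                return result
            t *= 2                       # guess was too small; double it and retry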
Theorem 2 follows from

LEMMA 3. Every t(n)-time bounded RAM can be simulated by an O(t(n))-time bounded tree machine.
Proof. We describe how to represent the contents of the registers of a RAM in a tree. For i ∈ ℕ let bin(i) be the binary representation of i without leading zeros. For x = x_1 ... x_m, x_i ∈ {0, 1}, let h(x) := x_1 0 x_2 0 ... 0 x_m. Now for each address i of a register of the RAM store the contents of register i on the rightmost branch of the subtree with address h(bin(i)).
It is now easy to verify that each instruction of the RAM with logarithmic cost c can be simulated in O(c) steps with the help of one additional linear tape. ∎
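The address computation used in this proof is easily made explicit (a small Python sketch of ours):

    def h(x):
        # x is a 0/1 string x_1 ... x_m; h(x) = x_1 0 x_2 0 ... 0 x_m
        return '0'.join(x)

    def register_subtree_address(i):
        # Address of the subtree on whose rightmost branch register i is stored.
        return h(bin(i)[2:])             # bin(i) without leading zeros

The separator zeros guarantee that the rightmost branch of one register's subtree never enters the subtree assigned to another register, so the stored contents never collide.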
3. DEFINITION AND SIMULATION OF GENERALIZED TURING MACHINES
A storage structure is a directed (usually infinite) graph in which each node has the same outdegree d and the d edges leaving a node are labelled with the directions {1,..., d}. There is one head and one of the nodes is labelled as the starting point of that head. The nodes correspond to tape cells and can store symbols of a finite alphabet. Head motions are {0,..., d}, where 0 means no move and i ∈ {1,..., d} means to use the edge labelled with i to get to an adjacent cell.
The depth d(u) of a node u is the length of the shortest path from the starting point to u.
The class 𝒞 consists of those storage structures where the nodes can be addressed by strings over a finite alphabet such that (3.1) and (3.2) hold.

    |address of u| = O(d(u)^α) for some 0 < α < 1 independent of u.           (3.1)

    For each node u and direction i, the address of the neighbour of u in
    direction i can be computed from i and the address of u within space
    O(d(u)^α).                                                                (3.2)

Note, for example, that multidimensional tapes are in 𝒞, tree tapes are not.
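To make (3.1) and (3.2) concrete for the two-dimensional tape, here is a sketch with an encoding of our own choosing: a cell is a pair (x, y) ∈ ℤ², its depth is |x| + |y|, and its address is the pair in sign/magnitude binary, of length O(log d(u)) and hence O(d(u)^α) for every α > 0.

    def address(x, y):
        # Sign/magnitude binary address of cell (x, y); length O(log(|x| + |y| + 2)).
        def enc(z):
            return ('-' if z < 0 else '+') + bin(abs(z))[2:]
        return enc(x) + ',' + enc(y)

    def neighbour(x, y, direction):
        # The four directions move one step along one of the axes.  On a Turing
        # machine the new address is obtained from the old one by incrementing or
        # decrementing one binary counter, i.e. within space linear in the address
        # length, as (3.2) requires; here we simply recompute it from the coordinates.
        dx, dy = {1: (1, 0), 2: (-1, 0), 3: (0, 1), 4: (0, -1)}[direction]
        return address(x + dx, y + dy)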
We now come to the proof of Theorem 4. Let R' be a t(n)-time bounded Turing machine whose k tapes are storage structures contained in 𝒞, and let 0 < α < 1 be chosen according to (3.1). To simulate a computation of R' started with an input of length n we again use procedure SIM from the proof of Theorem 1. We assume t = t(n) to be known. For storage structures in 𝒞, (2.1) and (2.2) do not necessarily hold, but we have by (3.1):

    If S' is a set of nodes of a storage structure T' of R' visited in
    configurations C_i, C_{i+1},..., C_j of the computation of R' then the
    subgraph T'_{S'} induced by S' and the contents of S' at a given
    configuration can be encoded within space O(|S'| + t^α).                  (3.3)
It is easy to see how strategy 1 has to be modified to simulate steps in T' in chronological order. By (3.1)-(3.3) the space used to simulate steps a, a + 1,..., b is O(b - a + t^α). Strategy 2 remains unchanged, but strategy 3 will be slightly different. As the geometry of T' may be quite complicated it may cost too much space to encode overlap locations. Therefore this time we encode overlap steps. Thus (2.4) becomes

    Ovl encodes the set J = {(j, s, l) | i < j ≤ u, 1 ≤ l ≤ k, there is a cell of
    the lth tape of R' visited between C_{a-1},..., C_{i-1} and first revisited in
    C_j, s is the contents of that cell in C_i}.                              (3.4)
LEMMA 4. If log t ≤ |J| ≤ (b - a)/2 then J can be encoded within space C' · |J| · log((b - a)/|J|) for an appropriate constant C'.

Proof. Let J = {(j_i, s_i, l_i) | 1 ≤ i ≤ p}, where j_1 < ··· < j_p and p = |J|. Let d_i = j_i - j_{i-1} for i ≥ 2. Encode the j_i's by j_1 and the ordered sequence of the d_i's. The length of the resulting encoding is of the order of

    log j_1 + Σ_{i=2}^{p} log d_i + O(p)  ≤  log t + (p - 1) log((j_p - j_1)/(p - 1)) + O(p)

by concavity of the logarithm function. With Σ_{i=2}^{p} d_i = j_p - j_1 ≤ b - a and log t ≤ p ≤ (b - a)/2 the lemma follows. ∎
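The encoding of the step numbers can be made explicit as follows (a sketch of ours; the components s_i and l_i are fixed-length fields and are omitted here, and each gap is written with a self-delimiting length prefix):

    def encode_steps(js):
        # js: the sorted step numbers j_1 < ... < j_p of J.
        out = [bin(js[0])[2:]]                       # j_1 in binary, O(log t) bits
        for prev, cur in zip(js, js[1:]):
            b = bin(cur - prev)[2:]                  # the gap d_i in binary
            out.append('1' * len(b) + '0' + b)       # about 2 log d_i + 2 bits per gap
        return ''.join(out)

Summed over all gaps this gives O(log t + p + Σ log d_i) bits, and by the concavity argument above the gap contribution is largest when all gaps are equal, which is exactly the bound of the lemma.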
The analysis for the space requirement of the modified procedure SIM follows the
earlier one, with two exceptions: Instead of A log t, the common overhead term f(t) is A · t^α (with no effect on the solution); and instead of C ovl(a, i, b), the special overhead term in strategy 3 is the space to encode J, where |J| ≤ ovl(a, i, b). The analysis used the latter term only in the case that ovl(a, i, b) was at most 2r(a, b) + k and e(a, b) was a large enough multiple of f(t); by Lemma 4, in that case, the term is at most proportional to

    2r(a, b) log(e(a, b)/2r(a, b))      (since the function z log(e(a, b)/z) is
                                         increasing up to z = 2r(a, b) when e(a, b)
                                         is a large enough multiple of f(t))
    ≤ 2r(a, b) log(e(a, b)/r(a, b))
    = 2 e(a, b) loglog(e(a, b)/f(t)) / log(e(a, b)/f(t)).
The resulting recurrence of Q now is

    Q(m) ≤ f(t) + max{ 2Q(m/2 - m/log(m/f(t))),
                       Q(m/2 + m/log(m/f(t)) + k + 1) + 2C' m loglog(m/f(t))/log(m/f(t)) }     (3.5)

if m is a large enough multiple of f(t),

    Q(m) ≤ f(t) + B · m    in any case;

and the solution is

    Q(m) = O(m loglog(m/f(t))/log(m/f(t))).

In particular, P(1, t) = Q(O(t)) = O(t loglog t/log t). ∎
4. DEPTH REDUCTION OF TREE MACHINES
In this section we will prove Lemma 1 and exhibit some consequences of it.
Proof of Lemma 1. Let R be a t = t(n)-time bounded tree machine. We'll describe an O(log t)-depth bounded tree machine R' that simulates R. R' has the same number of tree tapes as R and O(log t) auxiliary storage. For each tree tape T of R the inscription of T is stored in a corresponding tree tape T' of R'. We may assume that t is known and is a power of two, otherwise try t = 2, 4, 8, 16,... [2].
FIGURE 1
A Δ-tree is a complete depth-Δ subtree of a storage tree. Leaf i of a Δ-tree U is the leaf of U with address bin(i) (with respect to the root of U); here we assume bin(i) to be padded with an appropriate number of leading zeros.
Let T_0 be the 2 log t-tree of T whose root is the root of T. T'_{-1} denotes the log t-tree of T' whose root is the root of T'. For 0 ≤ i < t let T'_i be the 2 log t-tree of T' whose root is leaf i of T'_{-1} (Fig. 1).
The idea of the simulation is the following.
As long as R uses storage locations in T_0, R' uses the corresponding locations in T'_0. If R tries to leave T_0, that means R' has to leave T'_0, the log t-tree U of T'_0 containing the last head position is copied into the upper part of the 2 log t-tree T'_1, and the simulation is continued in T'_1 (Fig. 2). If the machine has to leave T'_1 passing one of its leaves we do the same as above with T'_1 replaced by T'_i; if T'_i is left at the root we go back to T'_0.
We now give a more detailed description. Three tracks are used on T'. The first contains tape inscriptions, the second is used for forward and the third for backward addresses of subtrees.

    Begin to simulate R in T'_0 on track 1. Leaf 0 of T'_{-1} is now occupied.
    Let f = min{i | leaf i of T'_{-1} is not occupied}; f is now one.            (4.1)

FIGURE 2
    When R is simulated in T'_i and a son of leaf k of T'_i has to be accessed,
    interrupt the simulation; store bin(k). Backtracking log t steps from leaf
    k of T'_i one reaches a node u which is the root of a log t-tree U. (See
    Fig. 3.) Two cases are possible:                                             (4.2)
    Track 2 of the left branch of U is empty. (U has no forward
    address.) Then copy track 1 of U into track 1 of T'_f making the
    root of T'_f the root of the new copy of U. Write bin(f) on track 2
    of the left branch of U; this, the address of the new copy of U, is
    the forward address of U. Write bin(i) followed by the first half of
    bin(k) on track 3 of the left branch of T'_f; this is the backward
    address of the new copy of U. Follow in T'_f the path whose label is
    the second half of bin(k); this brings the head in the new copy of
    U to the leaf where the simulation was interrupted in the old copy
    of U. Erase bin(k); increase f by one; continue to simulate R.               (4.2.1)
    Track 2 of the left branch of U contains some bin(j) (its forward
    address). Then copy track 1 of U into track 1 of T'_j making the
    root of T'_j the root of the copy of U. Follow in T'_j the path whose
    label is the second half of bin(k); erase bin(k); continue to
    simulate R.                                                                  (4.2.2)
    When R is simulated in T'_i (i > 0) and the father of the root of T'_i has
    to be accessed, interrupt the simulation. Track 3 of the left branch of T'_i
    contains the address of some node u at distance 2 log t from the root of
    T' (the backward address). Copy track 1 of the log t-tree whose root is the
    root of T'_i on track 1 of the log t-tree whose root is u. Move the head
    to u; continue to simulate R.                                                (4.3)
We call (4.2) and (4.3) copy operations. Apart from the work for the actual copying of log t-trees the overhead for a copy operation is bounded by O(log t) steps. As between two copy operations at least log t steps must be simulated (after a copy operation the head stands in the middle of a 2 log t-tree, at distance log t from its root and from its leaves), the total overhead is bounded by O(t) plus the total time for copying the log t-trees. It remains to show how to reduce the latter to O(t).
FIGURE 3

This is done by copying only those parts of a log t-tree which were changed after that tree was copied for the last time. Thus on an additional fourth track mark each cell which is visited in a simulation step which does not belong to a copy operation. Whenever a copy operation begins, the marked portion of the log t-tree U which is copied forms a subtree U' of U whose root is the root of U. Instead of copying all of U copy only U'. While copying erase the markers on track 4. Copying a subtree U' of s nodes takes O(s) steps. A marked subtree of s nodes of U can only occur if at least s steps of R have been simulated in U since any subtree of U was copied the last time. This shows that the total time for copying subtrees is bounded by O(t). ∎
This result about depth reduction of tree machines turns out to be useful also in some other respects.

COROLLARY 3. Every t(n)-time bounded k-tree machine can be simulated by an O(t(n) log t(n))-time bounded 1-tree machine.

Proof. By Lemma 1 and an obvious modification of the proof of Theorem 10.4 in [7]. ∎
THEOREM 5. Every t(n)-time bounded tree machine can be simulated by an O(t(n)/loglog t(n)) unit cost time bounded RAM.
This is proven by a 4-Russian algorithm in the spirit of the proof of Theorem 4 in [5]. Choose Δ = loglog t(n) - c_1, where c_1 is determined later as a function of the alphabet size a, the number of states s and the number of tapes k of the simulated machine M. Cover the storage trees of M with overlapping Δ-trees such that for every m ∈ ℕ every node of depth mΔ/2 in a storage tree is a root of one of these Δ-trees. On the simulating machine S implement for each storage tree T of M a tree, where each node u corresponds to one of the Δ-trees in T and u_1 is a son of u_2 iff the root of the Δ-tree corresponding to u_1 is a node of level Δ/2 in the Δ-tree corresponding to u_2. In each node u store during the simulation an encoding of the inscription of the Δ-tree corresponding to u.
By Lemma 1 we can assume that M is O(log t(n))-depth bounded. Thus the trees implemented on the simulating machine have depth O(log t(n)/Δ).
Before the actual simulation S precomputes three tables which will then enable it to simulate Δ/2 steps of M by O(k) table look-ups.
(i) A local configuration of M is a (2k + 1)-tuple (z, i_1,..., i_k, p_1,..., p_k), where z is a state of M, the i_j are inscriptions of Δ-trees and the p_j specify head positions in the Δ-trees. There are L ≤ s · a^{k·2^{Δ+1}} · 2^{k(Δ+1)} local configurations. Clearly L ≤ t(n)^{1/4} if c_1 is large enough. For each local configuration c, simulate M started in configuration c until one of the heads tries to leave the Δ-tree where it started or until a cycle in the behaviour of M is detected. For each c this takes at most O(L) steps. Store the resulting local configuration in a table.
(ii) For every 4-tuple (i_1, i_2, p, q) where i_1, i_2 are inscriptions of Δ-trees, p is an address of a node of depth Δ/2 in a Δ-tree and q ∈ {0, 1} compute and store:
If q = 0 then the result of substituting in i_1 the Δ/2-tree with address p by the Δ/2-tree of i_2 whose root is the root of i_2.
If q = 1 then the result of substituting in i_1 the Δ/2-tree whose root is the root of i_1 by the Δ/2-tree of i_2 with address p.
(iii) For every triple (p_1, p_2, q) where p_1 is an address of a node in a tree implemented on the simulating machine, p_2 is an address of a leaf of a Δ-tree and q ∈ {0, 1} compute and store:
If q = 0 then the result of cancelling the last Δ/2 bits of p_1.
If q = 1 then p_1 concatenated with the first Δ/2 bits of p_2.
This table has only 2^{O(log t(n)/Δ) + Δ + 1} entries and can be computed in time t(n)^ε for every ε > 0.
From these remarks it should be clear what a proof of Theorem 5 might look like.
Theorem 5, Lemma 3 and standard diagonalization imply

COROLLARY 4. If t(n) is time constructible, then t(n) unit cost time bounded RAMs can accept more languages than o(t(n) loglog t(n)) logarithmic cost time bounded RAMs.
5. SOLUTION OF THE DIFFERENCE EQUATIONS

We first prove the solution of (2.8). Define

    c_1 = max{2^8, 2(k + 1)},        c_2 = max{(4 + B) log c_1, 24C}.
Suppose that t is big enough such that f(t) ≥ log c_1. Then the following hold for all c ≥ c_1:

    log(c/2 - c/log c) ≥ log c - 1.5,                                         (5.1)

    (log c)(log c - 1.5) ≤ c/2,                                               (5.2)

    log c ≥ 8   and   c/log c ≥ c_1/log c_1.                                  (5.3)

We show that Q(m) ≤ c_2 · m/log(m/f(t)) for all m ≥ 2f(t). For 2f(t) ≤ m ≤ c_1 f(t) the general bound

    Q(m) ≤ f(t) + B · m
gives

    Q(m) ≤ f(t) + B · m ≤ (1/2 + B) m ≤ c_2 · m/log c_1 ≤ c_2 · m/log(m/f(t)).
Now let m > c_1 f(t) and let us suppose by induction that Q(m') ≤ c_2 · m'/log(m'/f(t)) holds for all 2f(t) ≤ m' < m.
Since m > c_1 f(t) we have by definition of c_1 and (5.3)

    m/2 + m/log(m/f(t)) + k + 1 ≤ m/2 + m/8 + m/16 < m

and

    2f(t) ≤ m/2 - m/log(m/f(t)) < m;

hence we can use the induction hypothesis in (2.8). If the maximum is achieved by the first term, this gives
    Q(m) ≤ 2 c_2 (m/2 - m/log(m/f(t))) / log((m/2 - m/log(m/f(t)))/f(t)) + f(t)
         ≤ c_2 (m - 2m/log(m/f(t))) / (log(m/f(t)) - 1.5) + f(t)                    (because of (5.1))
         = c_2 (m/log(m/f(t))) · (log(m/f(t)) - 2) / (log(m/f(t)) - 1.5) + f(t)
         = c_2 (m/log(m/f(t))) · [ (log(m/f(t)) - 2)/(log(m/f(t)) - 1.5)
                                   + (1/c_2) · (f(t)/m) · log(m/f(t)) ]
         ≤ c_2 (m/log(m/f(t))) · [ (log(m/f(t)) - 2)/(log(m/f(t)) - 1.5)
                                   + 0.5/(log(m/f(t)) - 1.5) ]                       (because of (5.2) and c_2 ≥ 1)
         = c_2 · m/log(m/f(t)).
In the other case we get

    Q(m) ≤ c_2 (m/2 + m/log(m/f(t)) + k + 1) / log((m/2 + m/log(m/f(t)) + k + 1)/f(t))
           + C(2m/log(m/f(t)) + k) + f(t)
         ≤ c_2 (m/2 + 1.5 m/log(m/f(t))) / log(m/(2f(t))) + 2.5 C m/log(m/f(t)) + f(t)
                                        (because of (5.3) and m > c_1 f(t), k + 1 ≤ m/(2 log(m/f(t))))
         ≤ c_2 (m/log(m/f(t))) · [ (0.5 log(m/f(t)) + 1.5)/(log(m/f(t)) - 1) + 3C/c_2 ]
                                        (since C ≥ 1 and f(t) ≤ m/(2 log(m/f(t))))
         ≤ c_2 (m/log(m/f(t))) · [ (0.5 log(m/f(t)) + 1.5)/(log(m/f(t)) - 1) + 1/8 ]
                                        (by definition of c_2)
         ≤ c_2 · m/log(m/f(t))

since log(m/f(t)) > log c_1 ≥ 8.
Assertion (3.5) can be proved in almost the same way with C replaced by C'. ∎
ACKNOWLEDGMENTS

To simplify the procedure SIM, A. Meyer and M. Loui proposed running the three strategies "in parallel" instead of computing first which strategy will use the least space. We also thank the referees for their helpful comments.
REFERENCES

1. A. V. AHO, J. E. HOPCROFT, AND J. D. ULLMAN, "The Design and Analysis of Computer Algorithms," Addison-Wesley, Reading, Mass., 1974.
2. Z. GALIL, Two fast simulations which imply some fast string matching and palindrome-recognition algorithms, Inform. Process. Lett. (1976), 85-87.
3. J. HARTMANIS AND R. STEARNS, On the computational complexity of algorithms, Trans. Amer. Math. Soc. 117 (1965), 288-306.
4. F. C. HENNIE, On-line Turing machine computations, IEEE Trans. Electronic Computers EC-15, 1 (1966), 35-44.
5. J. E. HOPCROFT, W. PAUL, AND L. G. VALIANT, On time versus space, in "16th IEEE-FOCS, 1975"; also in J. Assoc. Comput. Mach. 24 (1977), 332-337.
6. J. E. HOPCROFT AND J. D. ULLMAN, Relations between time and tape complexities, J. Assoc. Comput. Mach. 15 (1968), 414-427.
7. J. E. HOPCROFT AND J. D. ULLMAN, "Formal Languages and Their Relation to Automata," Addison-Wesley, Reading, Mass., 1969.
8. M. LOUI, "A Space Bound for One-Tape Multidimensional Turing Machines," M.I.T. Technical Report, 1979.
9. M. S. PATERSON, Tape bounds for time-bounded Turing machines, J. Comput. System Sci. 6 (1972), 116-124.
10. N. PIPPENGER AND M. J. FISCHER, Relations among complexity measures, J. Assoc. Comput. Mach. 26 (1979), 361-381.
11. R. REISCHUK, A "fast implementation" of a multidimensional storage into a tree storage, in "7th ICALP," Lecture Notes in Computer Science No. 85, pp. 531-542, Springer-Verlag, Berlin/New York, 1980.