BACKGROUND
On fait la science avec des faits, comme on fait une maison avec des pierres: mais une accumulation de faits n'est pas plus une science qu'un tas de pierres n'est une maison.
Science is built up with facts, as a house is with stones. But a collection of facts is no more a science than a heap of stones is a house.
H. Poincaré, La Science et l'Hypothèse.
The purpose of this chapter is to review some fundamental definitions, problems and
algorithms that are necessary for the understanding of the material in the rest of this
book. Sections 2.1, 2.2, 2.3 and 2.4 are meant to be a brief refresher for the reader;
those who need more explanations should consult the related textbooks mentioned in
the reference section.
2.1 NOTATION
A set is a collection of elements. We denote sets by uppercase letters and elements by lowercase ones. We use braces [i.e., { }] to denote unordered sets and parentheses [i.e., ( )] to denote ordered sets. For example, an unordered set of three elements is denoted as S = {a, b, c}. The cardinality of a set is the number of its elements, denoted by | |. Given a set S, a cover of S is a set of subsets of S whose union is S. A partition of S is a cover by disjoint subsets, called blocks of the partition. Set membership of an element is denoted by ∈, set inclusion by ⊆ or ⊂, set union by ∪ and intersection by ∩. The symbol ∀ is the universal quantifier; the symbol ∃ the existential quantifier. Implication is denoted by ⇒ and co-implication by ⇔. The symbol ":" means "such that."
The set of real numbers is denoted by R, the set of integers is denoted by Z and the set of positive integers by Z⁺. The set of binary values {0, 1} is denoted by B. Vectors and matrices are ordered sets. They are denoted by lower- and uppercase bold characters, respectively. For example, x denotes a vector and A a matrix. We denote a vector with all 0 entries by 0 and one with all 1 entries by 1.
The Cartesian product of two sets X and Y, denoted by X × Y, is the set of all ordered pairs (x, y), such that x ∈ X and y ∈ Y. A relation R between two sets X and Y is a subset of X × Y. We write x R y when x ∈ X, y ∈ Y and (x, y) ∈ R. An equivalence relation is reflexive [i.e., (x, x) ∈ R], symmetric [i.e., (x, y) ∈ R ⇒ (y, x) ∈ R] and transitive [i.e., (x, y) ∈ R and (y, z) ∈ R ⇒ (x, z) ∈ R]. A partial order is a relation that is reflexive, anti-symmetric [i.e., (x, y) ∈ R and (y, x) ∈ R ⇒ x = y] and transitive. A partially ordered set (or poset) is the combination of a set and a partial order relation on the set. A totally (or linearly) ordered set is a poset with the property that any pair of elements can be ordered such that they are members of the relation. A consistent enumeration of a partially ordered set S is an integer labeling i(S) such that sa R sb implies i(sa) < i(sb), for every pair (sa, sb) ∈ S.
A function (or map) between two sets X and Y is a relation having the property that each element of X appears as the first element in one and only one pair of the relation. A function between two sets X and Y is denoted by f : X → Y. The sets X and Y are called the domain and co-domain of the function, respectively. The function f assigns to every element x ∈ X a unique element f(x) ∈ Y. The set f(X) = {f(x) : x ∈ X} is called the range of the function. A function is onto or surjective if the range is equal to the co-domain. A function is one-to-one or injective if each element of its range has a unique element of the domain that maps to it, i.e., f(x1) = f(x2) implies x1 = x2. In this case, the function has an inverse, f⁻¹ : f(X) → X. A function is bijective if it is both surjective and injective. Given a function f : X → Y and a subset of its domain A ⊆ X, the image of A under f is f(A) = {f(x) : x ∈ A}. Conversely, given a function f : X → Y and a subset of its co-domain A ⊆ Y, the inverse image of A under f is f⁻¹(A) = {x ∈ X : f(x) ∈ A}.
2.2 GRAPHS
A graph G(V, E) is a pair (V, E), where V is a set and E is a binary relation on V [4, 13, 16]. We consider only finite graphs, i.e., those where set V is bounded. The elements of the set V are called vertices and those of the set E are called edges of the graph [Figure 2.1 (a)]. In a directed graph (or digraph) the edges are ordered pairs of vertices; in an undirected graph the edges are unordered pairs [Figure 2.1 (b)]. A directed edge from vertex vi ∈ V to vj ∈ V is denoted by (vi, vj) and an undirected edge with the same end-points by {vi, vj}. We also say that an edge (directed or undirected) is incident to a vertex when the vertex is one of its end-points. The degree of a vertex is the number of edges incident to it. A hypergraph is an extension of a graph where edges may be incident to any number of vertices [Figure 2.1 (c)].
FIGURE 2.1
(a) Undirected graph. (b) Directed graph. (c) Hypergraph.
Every graph has one (or more) corresponding diagrams. A graph is said to be planar if it has a diagram on a plane surface such that no two edges cross. Two graphs are said to be isomorphic if there is a one-to-one correspondence between their vertex sets that preserves adjacency (Figures 2.2 and 2.3).
FIGURE 2.2
(a) A diagram of the complete graph of four vertices K4. (b) Another diagram for K4: the graph is planar.
FIGURE 2.3
(a) A diagram of the three-dimensional cube graph. (b) Another diagram of the cube graph: the graph is
planar and bipartite.
A vertex vj is called the successor (or descendant) of a vertex vi when vi is the tail of a path whose head is vj. Similarly, a vertex vi is called the predecessor (or ancestor) of a vertex vj if vi is the tail of a path whose head is vj. In addition, vertex vj is a direct successor of, or adjacent to, vertex vi if vj is the head of an edge whose tail is vi. Direct predecessors are similarly defined.
A polar dag is a graph having two distinguished vertices, a source and a sink, and where all vertices are reachable from the source and where the sink is reachable from all vertices [Figure 2.4 (a)].
Undirected and directed graphs can also be represented by matrices. The incidence matrix of a simple undirected graph is a matrix with as many rows as vertices and as many columns as edges. An entry in position (i, j) is 1 if the jth edge is incident to vertex vi; else it is 0. For directed graphs, an entry in position (i, j) is 1 if vertex vi is the head of the jth edge, -1 if it is its tail and otherwise 0 [Figure 2.4 (b)]. For both undirected and directed graphs, an adjacency matrix is a square matrix with dimensions equal to the vertex set cardinality. An entry in position (i, j) is 1 if vertex vi is adjacent to vertex vj; else it is 0 [Figure 2.4 (c)]. The adjacency matrix is symmetric only for undirected graphs, because the corresponding definition of the adjacency relation is symmetric.
Directed and undirected graphs can be weighted. Weights can be associated with vertices and/or with edges, i.e., the graph can be vertex weighted and/or edge weighted.
FIGURE 2.4
(a) A polar graph. (b) Its incidence matrix. (c) Its adjacency matrix.
A stable set, or independent set, is a subset of vertices with the property that no two vertices in the stable set are adjacent. The stability number α(G) of a graph is the cardinality of its largest stable set.
A coloring of a graph is a partition of the vertices into subsets, such that each is a stable set. The chromatic number χ(G) is the smallest number that can be the cardinality of such a partition. Visually, it is the minimum number of colors needed to color the vertices, such that no edge has end-points with the same color.
The size of the maximum clique ω(G), i.e., the largest fully connected subset of vertices, is a lower bound for the chromatic number, because all vertices in that clique must be colored differently. Namely:

    ω(G) ≤ χ(G)

Similarly, the stability number is a lower bound for the clique cover number κ(G), i.e., the minimum number of cliques in a partition of the vertices into cliques, since each vertex of the stable set must belong to a different clique of a clique cover. Thus:

    α(G) ≤ κ(G)
Example 2.2.1. Consider the graph of Figure 2.1 (a), reported again for convenience in Figure 2.5 (a). The size of the maximum clique {v1, v2, v3} is ω(G) = 3. The graph can be partitioned into cliques {v1, v2, v3} and {v4}. Alternatively, it can be covered by cliques {v1, v2, v3} and {v1, v3, v4}. The clique cover number is κ(G) = 2. The largest stable set is {v2, v4}. Thus, the stability number is α(G) = 2. A minimum coloring would require three colors for {v1, v2, v3}. Vertex v4 can have the same color as v2. Hence the chromatic number is χ(G) = 3. Therefore the graph is perfect (the bounds above hold with equality).
Some special perfect graphs are worth considering. A graph is said to be chordal, or triangulated, if every cycle with more than three edges possesses a chord, i.e., an edge joining two non-consecutive vertices in the cycle.
A subclass of chordal graphs is that of interval graphs. An interval graph is a graph whose vertices can be put in one-to-one correspondence with a set of intervals, so that two vertices are adjacent if and only if the corresponding intervals intersect.
A graph G(V, E) is a comparability graph if it satisfies the transitive orientation property, i.e., if it has an orientation such that in the resulting directed graph G(V, F), [(vi, vj) ∈ F and (vj, vk) ∈ F] ⇒ (vi, vk) ∈ F.
FIGURE 2.5
(a) Undirected graph. (b) An orientation. (c) A transitive orientation.
An important theorem, by Gilmore and Hoffman, relates comparability to interval graphs [15].
Theorem 2.2.1. An undirected graph is an interval graph if and only if it is chordal and
its complement is a comparability graph.
Example 2.2.2. The graph of Figure 2.5 (a) is chordal. Figure 2.5 (b) shows an orientation that does not satisfy the transitive property (because there is no edge between vertices v2 and v4). Nevertheless, the orientation shown in Figure 2.5 (c) (obtained by reversing the direction of the edge between v3 and v4) has the transitive property. Hence the graph of Figure 2.5 (a) is a comparability graph. In addition, the complement of the graph of Figure 2.5 (a) is also a comparability graph, and by Gilmore's theorem the graph of Figure 2.1 (a) is an interval graph. Indeed, the graph can represent the intersection among a set of intervals, for example, {[1, 8], [1, 3], [2, 6], [5, 7]} associated with v1, v2, v3 and v4, respectively.
2.3.2 Algorithms
An algorithm is a computational procedure that has a set of inputs and outputs, has a finite number of unambiguously defined steps and terminates in a finite number of steps. Well-defined problems that can be solved by algorithms are called decidable. Some undecidable problems do exist, like the halting problem [28].
Algorithms can be described in natural languages or in software programming languages. In this book, we use a pseudo-code notation reminiscent of the C pro-
gramming language. Algorithms can also be seen as the abstract models of the com-
putational engines provided by computer programs, which are just descriptions of
algorithms in appropriate executable languages. Algorithms can be evaluated accord-
ing to two major criteria: (i) the quality of the solution and (ii) the computational
effort required to achieve the solution. In order to quantify these two factors, some
additional clarifications are needed.
Consider all instances of decision or optimization problems of a given type. An
exact algorithm always provides the exact solution. It is obvious that exact algorithms
are desirable and indeed they can be conceived for all decidable problems. Unfortu-
nately, some exact algorithms may have such a high computational cost that their use
can be prohibitive on problems of typical size. Hence there is a need for approximation
algorithms, which are not guaranteed to find the exact solution to all instances of the
problem but can provide good approximations to the exact solutions. These good but
inexact solutions may be valid for practical applications. Approximation algorithms
are often called heuristic algorithms, because they use problem-solving techniques
based on experience.
The computational effort of an algorithm is measured in terms of time and
memory-space consumption for both the worst and average cases. Often only the
worst-case time complexity is reported, and therefore when we mention complex-
ity, we shall be referring to worst-case time complexity unless otherwise specified.
Complexity is measured in terms of elementary operations which are repeated in
the algorithm. It is convenient to measure the growth in the number of elementary
operations as a function of the problem size, i.e., as the size of some input to the
algorithm. Let us denote by n the relevant input size. We say that the complex-
ity is of the order of f(n) if there exists a constant c such that c·f(n) is an upper bound on the number of elementary operations. We denote the order as O(f(n)), e.g., O(n), O(n log n), O(n²), O(2ⁿ).
It is often believed that algorithms with polynomial complexity are viable, while
exponential ones are not. This is generally true, but attention must also be paid to
the constant c. For example, a cubic algorithm with a very large constant may not be viable for problems of large size and may perform worse than an exponential algorithm on problems of small size. It is also important to bear in mind that the
average-case complexity may be much lower than the worst-case complexity.
The efficiency of an exact algorithm can be measured by comparing its com-
plexity to that of the problem being solved. The complexity of the problem is a lower bound on the number of operations required to solve it. For example, searching for the largest element in an unordered set of integers of cardinality n requires O(n) operations.
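As a small illustration of this bound (a sketch we add here; the data are arbitrary), a linear scan finds the largest element with exactly n - 1 comparisons:

#include <stdio.h>

/* returns the largest element of a[0..n-1]; each element is
   inspected once, hence O(n) elementary comparisons */
int max_element(const int a[], int n) {
    int best = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > best)
            best = a[i];
    return best;
}

int main(void) {
    int a[] = {3, 9, 2, 7};
    printf("%d\n", max_element(a, 4));    /* prints 9 */
    return 0;
}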
Example 2.3.1. A vertex cover of a graph is a subset of vertices such that each edge has an end-point in the set. A minimum cover is a cover of minimum cardinality. A vertex cover is minimal with respect to containment when there is no vertex of the cover that can be removed while preserving the covering property (see Figure 2.13).
Example 2.3.2. Consider the problem of arranging n objects in a sequence while minimizing the associated cost. An optimal solution, with respect to a single pairwise swap, is an arrangement whose cost is lower than or equal to the cost of any other arrangement reachable from that arrangement by a single pairwise swap.
As an example, consider the linear arrangement of n = 3 objects {a, b, c}. There are 3! = 6 arrangements. Figure 2.6 shows a graph representing the solution space. Each vertex corresponds to an arrangement. A directed edge between two vertices denotes that the head vertex is an arrangement that can be derived from the tail arrangement by a pairwise swap.
For the sake of the example, we assume an arbitrary cost of the arrangements, as shown in Figure 2.6. Edges are shown in Figure 2.6 with solid lines when a swap increases the cost and with dashed lines when it decreases the cost. It is straightforward to verify that configuration (a, b, c) is a global minimum, whereas configurations {(a, b, c), (c, a, b), (b, c, a)} are local minima.
Consider the class of algorithms which perform a local search and whose moves are pairwise swaps. Some of these algorithms can guarantee an optimal solution with respect to a single pairwise swap. For example, an algorithm may attempt to improve upon the cost of a current arrangement by considering all swaps decreasing its cost and choosing the local best, i.e., maximizing the cost decrease. If such an algorithm stops when no swap can decrease the cost, then the solution is a local optimum (a C sketch of this strategy is shown after Figure 2.6).
On the other hand, consider an algorithm that chooses swaps at random and that stops when a move increases the cost. Then, the solution may not be optimal with respect to a single pairwise swap.
FIGURE 2.6
Solution space for an optimal linear arrangement problem of n = 3 objects. Dashed edges denote cost-decreasing swaps; solid edges denote cost-increasing swaps.
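The best-improvement local search described above can be sketched in a few lines of C. The cost function below is invented for the illustration (it is not the cost function of Figure 2.6); the sketch only shows the structure of a local search whose moves are pairwise swaps:

#include <stdio.h>

#define N 3

/* illustrative weighted linear arrangement cost (invented weights) */
int cost(const int arr[]) {
    static const int w[N] = {3, 1, 2};
    int c = 0;
    for (int i = 0; i < N; i++)
        c += (i + 1) * w[arr[i]];
    return c;
}

int main(void) {
    int arr[N] = {0, 1, 2};    /* initial arrangement */
    for (;;) {
        int bi = -1, bj = -1, best = cost(arr);
        /* examine all pairwise swaps; keep the one decreasing the cost most */
        for (int i = 0; i < N; i++)
            for (int j = i + 1; j < N; j++) {
                int t = arr[i]; arr[i] = arr[j]; arr[j] = t;
                int c = cost(arr);
                if (c < best) { best = c; bi = i; bj = j; }
                t = arr[i]; arr[i] = arr[j]; arr[j] = t;    /* undo the swap */
            }
        if (bi < 0) break;    /* no swap decreases the cost: local optimum */
        int t = arr[bi]; arr[bi] = arr[bj]; arr[bj] = t;
    }
    printf("local optimum: %d %d %d (cost %d)\n", arr[0], arr[1], arr[2], cost(arr));
    return 0;
}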
The discrete nature of the problem makes it intractable. A lower bound on the solution of an ILP can be found by relaxing the integer constraint 2.7. Note that rounding a real solution vector does not guarantee the achievement of an optimum. This is especially true when ILPs model decision problems and the variables are restricted to be binary valued, i.e., 0 or 1. These ILPs are called 0-1-ILPs, or ZOLPs, or binary linear programs.
Integer linear programs are solved by a variety of algorithms [27, 28], including the branch-and-bound algorithm described in the next section. The solution of the linear program derived by relaxing the integer constraint is often used as a starting point for these algorithms. We refer the reader to references [27, 28] for details.
Definition 2.3.1. An optimal sequence of decisions has the property that whatever the initial state and decisions are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision.
trees cover the subject tree without covering any edge more than once. A minimum cover is one where the sum of the cost of the covering patterns is minimum.
Example 2.3.4. Consider the subject graph in Figure 2.9 (a) and a set of pattern trees shown in Figure 2.9 (b) and labeled {t1, t2, t3, t4}. The first two patterns are the basic ones, and they are sufficient to cover any binary tree. Other patterns can yield lower cost solutions. An example of a cover is shown in Figure 2.9 (c).
ALGORITHM 2.3.2
Example 2.3.5. Consider again the subject and pattern trees of Figure 2.9 (a,b). Assume that the costs of the pattern trees ti, i = 1, 2, 3, 4 are {2, 3, 4, 5}, respectively.
Vertex u can be matched by t2 only. Its cost is 3. Vertex t can be matched by t3, with cost 4, or by t1, with cost equal to the cost of t1 (i.e., 2) plus the cost of u (i.e., 3), yielding an overall cost of 5. The first choice is minimum and thus the cost of t is set to 4. Vertex s can be matched by t1 only and its cost is 2. The root vertex v can be matched by t2 with cost 3 + 2 + 4 = 9. It can also be matched by t4 with cost 5 + 3 = 8. The second match is optimum. An optimum cover is shown in Figure 2.9 (c).
We consider here as an example the sequencing problem with release times and deadlines. The problem can be stated as follows [14]. We are given a set of tasks T and for each task t ∈ T a positive length l(t) ∈ Z⁺, a release time r(t) ∈ Z⁺ and a deadline d(t) ∈ Z⁺. We would like to find an ordering of the tasks, i.e., a schedule on a single processor such that the release times and deadlines are satisfied. This decision problem can be transformed into an optimization problem by requiring the completion time of the last task to be minimum.
Example 2.3.6. Consider a set of three tasks T = {a, b, c} with release times r(a) = 1; r(b) = 1; r(c) = 3 and deadlines d(a) = 4; d(b) = ∞; d(c) = 6. There is only one sequence satisfying the deadlines: (a, c, b), as shown in Figure 2.10.
We consider now a greedy scheduling algorithm (Algorithm 2.3.3) for the sequencing problem that assigns a start time to the tasks. We assume (arbitrarily) that the tasks start at time 1.
The algorithm increments the time counter i until there are tasks that can be scheduled. Then it schedules the most constrained task, i.e., the one with the smallest deadline. If at any time i we are left with unscheduled tasks whose deadline can no longer be met, the algorithm terminates, stating that no schedule exists. The algorithm has no backtracking capability, because scheduled tasks cannot be rescheduled. The complexity of the algorithm is O(|T|).
Unfortunately the algorithm is not exact in the general case. It always constructs
a schedule satisfying the release times and deadlines when it terminates by returning
TRUE. Nevertheless, it may happen that it returns FALSE even though a solution to the
problem exists. The algorithm just missed it, because of its greed. In addition, the
computed sequence may not yield a minimum completion time for the last task. This
result should not be surprising, because sequencing with release times and deadlines
is intractable in the general case.
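A C sketch consistent with the description of the greedy scheduler above follows. The task lengths l(a) = 1, l(b) = 2 and l(c) = 3 are inferred from the arithmetic of Examples 2.3.7 and 2.3.8, and for brevity the deadline test is applied only to the selected task:

#include <stdio.h>

#define NT 3
#define INF 1000000

/* tasks of Example 2.3.6: lengths, release times, deadlines */
int len[NT]  = {1, 2, 3};
int rel[NT]  = {1, 1, 3};
int dead[NT] = {4, INF, 6};

int main(void) {
    int done[NT] = {0}, ndone = 0, i = 1;    /* tasks start at time 1 */
    while (ndone < NT) {
        /* among released, unscheduled tasks pick the smallest deadline */
        int q = -1;
        for (int t = 0; t < NT; t++)
            if (!done[t] && rel[t] <= i && (q < 0 || dead[t] < dead[q]))
                q = t;
        if (q < 0) { i++; continue; }        /* no task released yet */
        if (i + len[q] > dead[q]) {          /* deadline can no longer be met */
            printf("no schedule exists\n");
            return 1;
        }
        printf("task %d starts at time %d\n", q, i);
        done[q] = 1; ndone++;
        i += len[q];
    }
    return 0;
}

On the data of Example 2.3.6 this sketch reproduces the run of Example 2.3.7 and terminates without completing a valid sequence.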
Example 2.3.7. Consider the tasks of Example 2.3.6. At time i = 1 the algorithm can schedule Q = {a, b}. It chooses a because it has the earliest deadline and it sets i = 1 + l(a) = 2. At time i = 2 the algorithm schedules Q = {b}. It sets i = 2 + l(b) = 4. At time i = 4, Q = {c}. Since i + l(c) = 7 > d(c) = 6, the algorithm terminates without completing a valid sequence.
Example 2.3.8. Consider again the tasks of Example 2.3.6, but assume that all release times are 1. At time i = 1 the algorithm can schedule Q = {a, b, c}. It chooses again a because it has the earliest deadline and it sets i = 1 + l(a) = 2. At time i = 2 the algorithm can schedule Q = {b, c}. It chooses c because it has the earliest deadline. It sets i = 2 + l(c) = 5. At time i = 5 it schedules Q = {b} and terminates successfully.
    si = min { sk + wk,i : (vk, vi) ∈ E },    i = 1, 2, . . . , n    (2.9)
The simplest case for solving Bellman's equations occurs when the graph is acyclic. Then the vertices can be ordered in a way consistent with the partial order represented by the dag, and the equations can be solved in that order. Finding a consistent vertex enumeration is called topological sort and can be achieved with complexity O(|V| + |E|) ≤ O(n²) [1, 12].
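When the vertices of the dag are already numbered in topological order (vertex 0 being the source), one pass over the vertices solves Bellman's equations, as the following C sketch shows; the weight matrix is an invented example and INF marks a missing edge:

#include <stdio.h>

#define NV 4
#define INF 1000000

int main(void) {
    /* w[k][i] is the weight of edge (v_k, v_i), INF when absent;
       vertices 0..3 are assumed to be in topological order */
    int w[NV][NV];
    for (int k = 0; k < NV; k++)
        for (int i = 0; i < NV; i++)
            w[k][i] = INF;
    w[0][1] = 3; w[0][2] = 1; w[1][3] = 2; w[2][3] = 5;

    int s[NV];
    s[0] = 0;
    for (int i = 1; i < NV; i++) {
        s[i] = INF;
        /* Bellman's equation 2.9: minimize s_k + w_k,i over edges (v_k, v_i) */
        for (int k = 0; k < i; k++)
            if (w[k][i] < INF && s[k] + w[k][i] < s[i])
                s[i] = s[k] + w[k][i];
    }
    for (int i = 0; i < NV; i++)
        printf("s[%d] = %d\n", i, s[i]);
    return 0;
}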
When the graph has cycles, Bellman's equations are cross-coupled and harder to solve. In the special case when all weights are positive, the shortest path problem can be solved by the greedy algorithm proposed by Dijkstra [1]. The algorithm keeps a list of tentative shortest paths, which are then iteratively refined. Initially, all vertices are unmarked and {si = w0,i, i = 1, 2, . . . , n}. Thus, the path weights to each vertex are either the weights on the edges from the source or infinity. Then the algorithm iterates the following steps until all vertices are marked. It selects and marks the vertex that is the head of a path from the source whose weight is minimal among those paths whose heads are unmarked. The corresponding tentative path is declared final. It updates the other (tentative) path weights by computing the minimum between the previous (tentative) path weights and the sum of the (final) path weight to the newly marked vertex plus the weights on the edges from that vertex (see Algorithm 2.4.1).
DIJKSTRA(G(V, E, W)) {
    s0 = 0;
    for (i = 1 to n)    /* path weight initialization */
        si = w0,i;
    repeat {
        Select an unmarked vertex vq such that sq is minimal;
        Mark vq;
        foreach (unmarked vertex vi)
            si = min(si, (sq + wq,i));
    }
    until (all vertices are marked);
}
ALGORITHM 2.4.1
BELLMAN_FORD(G(V, E, W)) {
    s0^1 = 0;
    for (i = 1 to n)    /* path weight initialization */
        si^1 = w0,i;
    for (j = 1 to n) {
        for (i = 1 to n) {
            si^(j+1) = min( si^j, min over k ≠ i of (sk^j + wk,i) );    /* path weight update */
        }
        if (si^(j+1) == si^j for all i) return (TRUE);    /* convergence */
    }
    return (FALSE);    /* inconsistent problem */
}
ALGORITHM 2.4.2
The greedy strategy pays off by providing an exact solution with computational complexity O(|E| + |V| log |V|) ≤ O(n²). Details and examples are provided in several textbooks [1, 22].
Let us consider now the general case where the graph can have cycles and the sign of the weights is not restricted. The Bellman-Ford algorithm, seen in Algorithm 2.4.2, can be used to detect if the problem is consistent and to compute the shortest paths for consistent problems. The Bellman-Ford algorithm solves Bellman's equations by relaxation. It initializes the shortest path weights to upper bounds provided by the weights on the edges from the source. It then refines all path weights iteratively. When the problem is consistent, the estimates of the path weights (denoted by superscripts) converge to stable values, corresponding to the weights of the shortest paths. If convergence is not achieved in n = |V| - 1 outer iterations, the problem is provably inconsistent.
The complexity of the Bellman-Ford algorithm is O(|V||E|) ≤ O(n³).
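A direct C rendering of this relaxation scheme is sketched below; the graph and its weights (one of them negative) are invented for the illustration, and the superscripted estimates of the pseudo-code become the arrays s and t:

#include <stdio.h>

#define NV 4    /* vertices 0..3, vertex 0 is the source */
#define INF 1000000

int w[NV][NV];    /* w[k][i]: weight of edge (v_k, v_i), INF when absent */

int main(void) {
    for (int k = 0; k < NV; k++)
        for (int i = 0; i < NV; i++)
            w[k][i] = INF;
    w[0][1] = -3; w[1][2] = 4; w[2][3] = -1; w[0][3] = 2;

    int s[NV], t[NV];
    s[0] = 0;
    for (int i = 1; i < NV; i++)
        s[i] = w[0][i];                      /* initialization to upper bounds */

    for (int j = 1; j <= NV; j++) {          /* at most |V| outer iterations */
        int changed = 0;
        for (int i = 1; i < NV; i++) {
            t[i] = s[i];
            for (int k = 0; k < NV; k++)     /* relaxation step */
                if (w[k][i] < INF && s[k] < INF && s[k] + w[k][i] < t[i])
                    t[i] = s[k] + w[k][i];
            if (t[i] != s[i]) changed = 1;
        }
        for (int i = 1; i < NV; i++) s[i] = t[i];
        if (!changed) {                      /* convergence */
            for (int i = 0; i < NV; i++) printf("s[%d] = %d\n", i, s[i]);
            return 0;
        }
    }
    printf("inconsistent problem\n");        /* negative cycle detected */
    return 1;
}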
FIGURE 2.11
(a) Graph with cycles and mixed-sign weights. (b) Graph with opposite weights.
Example 2.4.1. Consider the graph of Figure 2.11 (a). The Bellman-Ford algorithm initializes the shortest paths to s0^1 = 0; s1^1 = -3; s2^1 = -1; s3^1 = ∞.
At the first iteration (i.e., j = 1), the path weights are updated as follows:
Since the path weights match those computed at the previous iteration, the algorithm
terminates successfully.
We consider next the longest path problem. A longest path problem can be
transformed into a shortest path problem by changing the sign of the weights. Thus it
can be solved by the same family of algorithms described before. Alternatively, the
shortest path algorithms can be transformed into longest path algorithms by replacing
the min operator with the max operator. The following points are worth noticing. As
with the shortest path problem, directed acyclic graphs are the simplest cases, because
a consistent enumeration of the vertices can be computed by topological sort. Dijkstra's
LIAO_WONG(G(V, E ∪ F, W)) {
    for (j = 1 to |F| + 1) {
        foreach vertex vi
            li^(j+1) = longest path in G(V, E, WE);    /* compute longest */
                                                       /* path in G(V, E, WE) */
        flag = TRUE;
        foreach edge (vp, vq) ∈ F {    /* verify each feedback edge */
            if (lq^(j+1) < lp^(j+1) + wp,q) {    /* constraint violation */
                flag = FALSE;
                E = E ∪ (v0, vq);    /* add edge */
                w0,q = (lp^(j+1) + wp,q);
            }
        }
        if (flag) return (TRUE);    /* convergence */
    }
    return (FALSE);    /* inconsistent problem */
}
ALGORITHM 2.4.3
algorithm can be applied to graphs where all weights are negative. CAD problems
modeled by negative-weighted graphs are rare, and therefore Dijkstra's algorithm is
not often used for computing the longest paths. The Bellman-Ford algorithm can be
applied to both the shortest and longest path problems mutatis mutandis.
Example 2.4.2. Consider the problem of computing the longest paths in the graph of Figure 2.11 (b). The shortest path algorithm can be applied to the graph of Figure 2.11 (a), where the signs are reversed. The weights of the longest paths in Figure 2.11 (b) are the opposite of the shortest paths of the graph of Figure 2.11 (a).
We describe now a special case of the longest path problem that has several practical applications in CAD. Let us consider a graph G(V, E ∪ F, W) where E is the subset of the edges with non-negative weights and F is the subset of the remaining ones, called feedback edges because each one closes a cycle. Moreover, let E induce the dag G(V, E, WE) and let |E| >> |F|.
Liao and Wong [25] proposed a variation of the Bellman-Ford algorithm that is specific to this situation and that has lower computational complexity than the Bellman-Ford algorithm. The algorithm exploits the specific knowledge that the edges in F close the cycles. We denote in Algorithm 2.4.3 the set of longest path weights from the source to each vertex as {li; i = 1, 2, . . . , n}.
The rationale of the algorithm is the following. It attempts to neglect the feedback edges and to derive a solution in terms of the forward edges only. If this solution happens to be correct when the feedback edges are reintroduced, then the algorithm terminates successfully. Otherwise, it corrects the problem by setting a lower bound on the weight of the incorrectly computed paths. The bound can be modeled by adding weighted edges from the source. The process is iterated |F| + 1 times.
Algorithm 2.4.3 computes the longest path weights by construction when it terminates favorably. Interestingly enough, Liao and Wong [25] were able to prove that the problem is inconsistent when the algorithm fails.
The algorithm performs at most |F| + 1 longest path computations of the acyclic graph G(V, E, WE), where at most |F| edges can be added. Hence the overall complexity is O((|V| + |E| + |F|)|F|) ≤ O(n³). Note that when |F| << |E|, the algorithm is much more efficient than Bellman-Ford's.
Example 2.4.3. Consider the longest path problem of Figure 2.11 (b), repeated again for convenience in Figure 2.12 (a). At the first iteration, the directed acyclic subgraph induced by the edges with positive weights is shown in Figure 2.12 (b). The longest paths from the source in G(V, E, WE) are l1^1 = 3; l2^1 = 1; l3^1 = 5. When the algorithm examines the feedback edges, it finds a constraint violation on edge (v1, v2). Namely l2^1 = 1 < l1^1 + w1,2 = 3 - 1 = 2. Thus, the algorithm adds edge (v0, v2) to E with weight w0,2 = l1^1 + w1,2 = 3 - 1 = 2.
At the second iteration the subgraph induced by the edges with positive weights is shown in Figure 2.12 (c). The longest paths from the source in G(V, E, WE) are l1^2 = 3; l2^2 = 2; l3^2 = 6. When the algorithm examines the feedback edges, it does not find any constraint violation and terminates successfully.
It is interesting to relate the longest (or shortest) path problem to the solution of a linear program. Many design problems require the solution to a set of linear inequality constraints of the type:

    Aᵀx ≥ b    (2.11)

where x ∈ Rⁿ; b ∈ Rᵐ; m equals the number of constraints; the entries of b are the weights wi,j for the values of i, j where a constraint is specified; and A ∈ Rⁿˣᵐ. Thus, a set of linear inequalities can be expressed as the feasibility constraint of a linear program.
FIGURE 2.12
(a) Graph with cycles and mixed-sign weights. (b) Directed acyclic graph obtained by deleting the feedback edges. (c) Directed acyclic graph obtained by deleting the feedback edges and adding an edge resolving the constraint violation.
Matrix A can be viewed as the incidence matrix of a directed graph with n vertices
and m edges, each one weighted by the corresponding entry in b. Such a graph is
called the constraint graph of the problem.
Example 2.4.4. Let A be the incidence matrix of the graphs shown in Figure 2.11 and let b = [3, 1, 1, -1, 1, 4, -6]ᵀ. Then the set of inequalities can be written as Aᵀx ≥ b.
The longest path problem can then be modeled as a solution to the minimization of cᵀx, where c = 1 (i.e., a vector of 1s), under the constraint 2.11. Note that one vertex of the graph must be chosen as the source and the corresponding entry in x be set to zero to have a unique solution. Conversely, a shortest path problem can be modeled by a set of inequalities with the reversed directions and solved by maximizing cᵀx.
It is often important to know whether a set of inequalities 2.11 is satisfiable.
The following key theorem relates the satisfiability of a set of linear inequalities to
the properties of the corresponding constraint graph, which has A as the incidence
matrix and b as weights.
Theorem 2.4.1. A set of linear inequalities Aᵀx ≥ b is satisfiable if and only if the corresponding constraint graph has no positive cycles.
Example 2.4.5. Consider the graphs of Figure 2.13 (a). Minimum covers are shown
in Figure 2.13 (b) and non-minimum irredundant covers in Figure 2.13 (c). Redundant
covers are shown in Figure 2.13 (d).
FIGURE 2.13
(a) Two graphs. (b) Minimum covers. (c) Irredundant covers. (d) Redundant covers.
VERTEX_COLOR(G(V, E)) {
    for (i = 1 to |V|) {
        c = 1;
        while (∃ a vertex adjacent to vi with color c) do {
            c = c + 1;
        }
        Label vi with color c;
    }
}
ALGORITHM 2.4.6
Most algorithms for graph coloring are based on a sequential scan of the vertex set, where vertices are colored one at a time. In Algorithm 2.4.6 we consider first a simple heuristic algorithm for minimum coloring [3]. Colors are represented by positive integers.
The total number of colors is the maximum value that c attains. It is larger than (or equal to) the chromatic number χ(G(V, E)). Unfortunately it may be much larger than χ(G(V, E)), and it is sensitive to the order in which the vertices are colored. The
algorithm can be modified to cope with the coloring decision problem by providing
an exit (with negative response) when the color counter c exceeds a given threshold.
Unfortunately, due to the heuristic nature of the algorithm, a negative response can
be given even if a proper coloring exists with c colors.
Some improvements to this basic scheme have been proposed. An example is to allow color interchange between two (already colored) vertices when this helps in not increasing the number of colors. We refer the interested reader to reference [3] for details and examples.
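A compact C version of the sequential scan of VERTEX_COLOR is sketched below; the 6-vertex graph is invented for the illustration:

#include <stdio.h>

#define NV 6

int adj[NV][NV];    /* adjacency matrix of an undirected graph */

void add_edge(int u, int v) { adj[u][v] = adj[v][u] = 1; }

int main(void) {
    int color[NV] = {0};    /* 0 means "not yet colored" */
    add_edge(0, 1); add_edge(1, 2); add_edge(2, 3);
    add_edge(3, 0); add_edge(2, 4); add_edge(4, 5);

    for (int i = 0; i < NV; i++) {    /* sequential scan of the vertex set */
        int c = 1, clash;
        do {    /* find the smallest color not used by a colored neighbor */
            clash = 0;
            for (int j = 0; j < NV; j++)
                if (adj[i][j] && color[j] == c) { clash = 1; c++; break; }
        } while (clash);
        color[i] = c;
    }
    for (int i = 0; i < NV; i++)
        printf("vertex %d: color %d\n", i, color[i]);
    return 0;
}

As discussed above, the number of colors used depends on the scan order and may exceed the chromatic number.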
Exact algorithms for coloring have been proposed with computational complexity above polynomial. Most techniques are enumerative in nature. For example, the algorithm to solve the coloring decision problem reported in reference [19] is also based
on a sequential scan and coloring of the vertices. When the number of possible colors
is exhausted, the algorithm backtracks and recolors previously colored vertices. In the
worst case, all color assignments to the vertices are considered.
Example 2.4.6. Consider the graph of Figure 2.14 (a). A minimum coloring is shown in Figure 2.14 (b), where three colors are used.
Consider now the application of algorithm VERTEX_COLOR to this graph, with the vertices sorted according to their identifiers. The first four vertices are labeled with alternating colors. A third color is required for v5 and a fourth for v6. Thus the solution, shown in Figure 2.14 (c), is not optimum.
An optimum solution would require some recoloring of previously colored vertices. For example, if we want to color the graph with three colors and we fail at coloring v6 properly, then we can backtrack and try to recolor v5 (which is not possible) or v4 (which is possible). Once v4 is recolored, then a solution with three colors can be achieved, as shown in Figure 2.14 (b).
A heuristic alternative to backtracking is to swap colors in colored vertex pairs. In this case, the colors of v4 and v5 should have been swapped.
FIGURE 2.14
(a) Graph. (b) Minimum coloring. (c) Non-minimum coloring.
The coloring problem is solvable in polynomial time for chordal graphs, which are perfect graphs. Vertex coloring is tractable in this case, because chordal graphs have a perfect vertex elimination scheme [15]. Briefly, a perfect vertex elimination scheme is a linear order of the vertices, denoted by σ, such that, for each vertex {vi : i = 1, 2, . . . , |V|}, the vertex subset {vj ∈ V : {vi, vj} ∈ E and σ(vj) > σ(vi)} induces a clique. For example, when considering the graph of Figure 2.5 (a), the sequence (v2, v1, v3, v4) is perfect while (v1, v2, v3, v4) is not.
Coloring chordal graphs can be done by considering the vertices according to a perfect elimination scheme. This specific order guarantees that no backtracking is necessary to find an optimum coloring, and thus optimum coloring can be achieved by algorithms with polynomial complexity. In particular, checking whether a graph is chordal, computing a perfect elimination scheme and coloring can be done in O(|V| + |E|) time. Moreover, scanning the vertices according to a perfect elimination scheme allows us to determine all maximal cliques with no additional effort, and hence we can identify a clique cover. We refer the interested reader to Golumbic's book [15] for details.
Let us now consider interval graphs, which model many relevant circuit optimization problems. Since interval graphs are chordal, coloring and searching for all maximal cliques can be done by exploiting a perfect elimination scheme, as mentioned before. Very often, problems are directly specified by the sets of intervals I = {[lj, rj] : j = 1, 2, . . . , |I|} that induce the interval graph. Since intervals correspond to vertices, coloring the intervals is equivalent to coloring the vertices.
A well-known algorithm for coloring interval graphs is the LEFTEDGE algorithm (Algorithm 2.4.7), which was proposed by Hashimoto and Stevens for channel routing [17], where intervals correspond to wire segments to be arranged in a channel. The name stems from the selection of the intervals based on their left-edge coordinate, i.e., the minimum value of the interval. The algorithm first sorts the intervals by their left edge. It then considers one color at a time and assigns as many intervals as possible to that color by scanning a list of intervals in ascending order of the left edge before incrementing the color counter.
LEFT_EDGE(I) {
    Sort elements of I in a list L in ascending order of li;
    c = 0;
    while (some interval has not been colored) do {
        S = ∅;
        r = 0;    /* initialize coordinate of rightmost edge in S */
        while (∃ an element in L whose left edge coordinate is larger than r) do {
            s = first element in the list L with ls > r;
            S = S ∪ {s};
            r = rs;    /* update coordinate of rightmost edge in S */
            Delete s from L;
        }
        c = c + 1;
        Label elements of S with color c;
    }
}
ALGORITHM 2.4.7
The algorithm yields a provably minimum coloring of G(V, E), i.e., the maximum value of c is the chromatic number χ(G(V, E)). The complexity of the algorithm is O(|V| log |V|) [17].
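The following C sketch applies the left-edge strategy to the interval data of Example 2.4.7 below. Ties among equal left edges are broken here by list position and non-overlap is taken as l ≥ r (a boundary convention we chose), so the color classes may differ from those of Figure 2.15 while still using three colors:

#include <stdio.h>

#define NI 7

int main(void) {
    /* intervals of Example 2.4.7 (left and right edges of v1..v7) */
    int l[NI] = {0, 3, 6, 0, 7, 0, 2};
    int r[NI] = {3, 5, 8, 7, 8, 2, 6};
    int idx[NI], color[NI] = {0};

    for (int i = 0; i < NI; i++) idx[i] = i;
    for (int i = 0; i < NI; i++)    /* sort indices by ascending left edge */
        for (int j = i + 1; j < NI; j++)
            if (l[idx[j]] < l[idx[i]]) { int t = idx[i]; idx[i] = idx[j]; idx[j] = t; }

    int c = 0, left = NI;
    while (left > 0) {              /* one color (track) per outer iteration */
        c++;
        int rend = -1;              /* rightmost edge of the current track */
        for (int k = 0; k < NI; k++) {
            int s = idx[k];
            if (!color[s] && l[s] >= rend) {
                color[s] = c;       /* interval fits after the last one placed */
                rend = r[s];
                left--;
            }
        }
    }
    for (int i = 0; i < NI; i++)
        printf("v%d: color %d\n", i + 1, color[i]);
    return 0;
}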
Example 2.4.7. Consider the set of intervals specified in the following table. Each interval is associated with a vertex, and the corresponding interval graph is shown in Figure 2.15 (a). Recall that edges correspond to intersections of intervals, according to the definition of the interval graph:

Vertex    Left edge    Right edge
v1        0            3
v2        3            5
v3        6            8
v4        0            7
v5        7            8
v6        0            2
v7        2            6

The intervals, sorted by ascending order of the left edge, are shown in Figure 2.15 (b). The algorithm will assign the first color to v1, v2 and v3; the second to v6, v7 and v5; and the last one to v4. The colored graph is shown in Figure 2.15 (c). The intervals can then be packed into as many windows (tracks) as the chromatic number, as shown in Figure 2.15 (d).
FIGURE 2.15
(a) Interval graph. (b) Sorted intervals. (c) Colored graph. (d) Packed intervals.
Example 2.4.8. The complement of the graph of Figure 2.14 (a), repeated again for convenience in Figure 2.16 (a), is shown in Figure 2.16 (b). Emboldened edges correspond to cliques. Three cliques cover the graph.
When directly solving the clique partitioning problem, a solution can be deter-
mined by iteratively searching for the maximum clique of the graph and then deleting
it from the graph until there are no more vertices, as seen in Algorithm 2.4.8.
Therefore the problem is reduced to that of finding the maximum clique, which
is also intractable [14]. A maximum clique corresponds to a maximum independent
set in the complement of the graph. A maximum independent set is the subset of
FIGURE 2.16
(a) Colored graph. (b) Clique partitioning of the complement of the graph.
CLIQUE_PARTITION(G(V, E)) {
    Π = ∅;    /* initialize partition */
    while (G(V, E) not empty) do {
        C = MAX_CLIQUE(G(V, E));    /* compute maximum clique C ⊆ V in G(V, E) */
        Π = Π ∪ C;
        Delete C from G(V, E);
    }
}
ALGORITHM 2.4.8
vertices not included in a vertex cover. Thus, exact and heuristic algorithms for vertex cover can be applied to the maximum clique problem.
Example 2.4.9. Consider the graph of Figure 2.17 (a). Its complement is shown in Figure 2.17 (b), and it can be covered by vertices {v2, v4}, which is a minimum vertex cover. Thus the maximum independent set {v1, v3, v5} identifies a maximum clique in the original graph.
FIGURE 2.17
(a) Graph. (b) Vertex cover of the complement of the graph.
MAX_CLIQUE(G(V, E)) {
    C = vertex with largest degree;
    repeat {
        U = {u ∈ V : u ∉ C and adjacent to all vertices of C};
        if (U = ∅)
            return (C);
        else {
            Select vertex u ∈ U;
            C = C ∪ {u};
        }
    }
}
ALGORITHM 2.4.9
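In C, the greedy growth of a clique performed by MAX_CLIQUE reads as follows (the 5-vertex graph is invented; the clique constructed is maximal, though not necessarily maximum, since the procedure is a greedy heuristic):

#include <stdio.h>

#define NV 5

int adj[NV][NV];

void add_edge(int u, int v) { adj[u][v] = adj[v][u] = 1; }

int main(void) {
    int in_c[NV] = {0};
    /* illustrative graph: triangle 0-1-2 plus a path 2-3-4 */
    add_edge(0, 1); add_edge(1, 2); add_edge(0, 2);
    add_edge(2, 3); add_edge(3, 4);

    /* seed C with a vertex of largest degree */
    int best = 0, bestdeg = -1;
    for (int v = 0; v < NV; v++) {
        int deg = 0;
        for (int u = 0; u < NV; u++) deg += adj[v][u];
        if (deg > bestdeg) { bestdeg = deg; best = v; }
    }
    in_c[best] = 1;

    for (;;) {
        /* pick any vertex adjacent to all vertices currently in C */
        int u = -1;
        for (int v = 0; v < NV && u < 0; v++) {
            if (in_c[v]) continue;
            int ok = 1;
            for (int k = 0; k < NV; k++)
                if (in_c[k] && !adj[v][k]) { ok = 0; break; }
            if (ok) u = v;
        }
        if (u < 0) break;    /* U is empty: C is a maximal clique */
        in_c[u] = 1;
    }
    for (int v = 0; v < NV; v++)
        if (in_c[v]) printf("v%d is in the clique\n", v);
    return 0;
}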
When binary variables are associated with the dimensions of the Boolean space,
a point can be identified by the values of the corresponding variables. A literal is an
instance of a variable or of its complement. A product of n literals denotes a point in
the Boolean space: it is a zero-dimensional cube. Often, products of literals are called
cubes.
FIGURE 2.18
The three-dimensional Boolean space.
notation. An incompletely specified scalar Boolean function is defined over a subset of Bⁿ. The points where the function is not defined are called don't care conditions. They are related to the input patterns that can never occur and to those for which the output is not observed. In the case of multiple-output functions (i.e., m > 1), don't care points may differ for each component, because different outputs may be sampled under different conditions. Therefore, incompletely specified functions are represented as f : Bⁿ → {0, 1, *}ᵐ, where the symbol * denotes a don't care condition. For each output, the subsets of the domain for which the function takes the values 0, 1 and * are called the off set, on set and dc set, respectively.
A Boolean relation is a relation between Boolean spaces. It can be seen as a generalization of a Boolean function, where a point in the domain can be associated with more than one point in the co-domain. Boolean relations and their optimization play an important role in multiple-level logic synthesis and optimization, as shown in Chapter 8.
We review some definitions and properties of Boolean functions. For the sake of simplicity, we consider single-output functions. Let f(x1, x2, . . . , xn) be a Boolean function of n variables. The set {x1, x2, . . . , xn} is called the support of the function.
Definition 2.5.1. The cofactor of f(x1, x2, . . . , xi, . . . , xn) with respect to variable xi is fxi = f(x1, x2, . . . , 1, . . . , xn). The cofactor of f(x1, x2, . . . , xi, . . . , xn) with respect to variable xi' is fxi' = f(x1, x2, . . . , 0, . . . , xn).
In this book, we shall use mainly the short notation for cofactors as in the definition, e.g., fxi. In Chapter 8 we shall use an extended notation with a bar and an assignment, e.g., f|xi=1, to avoid confusion with the labeling of a function.
Boole's expansion of a function f, often called Shannon's expansion, is shown by the following theorem, whose proof is reported in several textbooks, e.g., reference [9].
Theorem 2.5.1. Let f : Bⁿ → B. Then f = xi · fxi + xi' · fxi' for any variable xi.
Note that stating fxi ≥ fxi' is equivalent to saying that the set of minterms of fxi includes the set of minterms of fxi', i.e., fxi ≥ fxi' for all possible assignments to the other variables xj : j ≠ i, j = 1, 2, . . . , n.
Example 2.5.3. The function f = a + b + c' is positive unate with respect to variable a, because fa = 1 ≥ fa' = b + c' for any assignment to variables b and c. Alternatively, the set of minterms associated with fa, i.e., {bc, b'c, bc', b'c'}, contains the set of minterms associated with fa', i.e., {bc, bc', b'c'}. By a similar argument, f is positive unate with respect to variable b and negative unate with respect to variable c.
Definition 2.5.3. The Boolean difference, or Boolean derivative, of f(x1, x2, . . . , xi, . . . , xn) with respect to variable xi is ∂f/∂xi = fxi ⊕ fxi'.
Definition 2.5.4. The consensus of f(x1, x2, . . . , xi, . . . , xn) with respect to variable xi is Cxi(f) = fxi · fxi'.
Definition 2.5.5. The smoothing of f(x1, x2, . . . , xi, . . . , xn) with respect to variable xi is Sxi(f) = fxi + fxi'.
The smoothing of a function with respect to a variable corresponds to dropping
that variable from further consideration. Informally, it corresponds to deleting all
appearances of that variable.
FIGURE 2.19
(a) The function f = ab + bc + ac. (b) The Boolean difference ∂f/∂a. (c) The consensus Ca(f). (d) The smoothing Sa(f).
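As a direct application of Definitions 2.5.3-2.5.5, consider the function of Figure 2.19, f = ab + bc + ac, and take cofactors with respect to a (a short calculation added here for concreteness):

    fa = b + bc + c = b + c;    fa' = bc
    ∂f/∂a = fa ⊕ fa' = (b + c) ⊕ bc = bc' + b'c
    Ca(f) = fa · fa' = (b + c) · bc = bc
    Sa(f) = fa + fa' = (b + c) + bc = b + c

These are the functions depicted in Figures 2.19 (b), (c) and (d), respectively. Note that the smoothing drops variable a: Sa(f) no longer depends on it.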
cofactor is the usual cofactor. The generalized cofactor may not be unique and satisfies the following bounds [9]: f · φi ≤ fφi ≤ f + φi'.
Example 2.5.5. Consider function f = ab + bc + ac. Let φ1 = ab and φ2 = a' + b'. Then ab ≤ fφ1 ≤ 1 and a'bc + ab'c ≤ fφ2 ≤ ab + bc + ac. By choosing fφ1 = 1 and fφ2 = a'bc + ab'c, we can write:
f = φ1 fφ1 + φ2 fφ2
  = ab · 1 + (a' + b')(a'bc + ab'c)
Theorem 2.5.2. Let f, g be two Boolean functions expressed by an expansion with respect to the same orthonormal basis φi, i = 1, 2, . . . , k. Let ⊙ be an arbitrary binary operator representing a Boolean function of two arguments. Then:

    f ⊙ g = φ1 · (fφ1 ⊙ gφ1) + · · · + φk · (fφk ⊙ gφk)

The following corollary is widely used, and it relates to restricting the orthonormal expansion to Boole's expansion.
TABULAR FORMS. Tabular forms are two-dimensional tables. The tables can be partitioned into two parts, corresponding to the inputs and outputs of the function. Tabular forms are also called personality matrices of a function.
The simplest tabular form is the truth table. A truth table is a complete listing of all points in the Boolean input space and of the corresponding values of the outputs. Therefore the input part is the set of all row vectors in Bⁿ and it can be omitted if a specific ordering of the points of the domain is assumed. The output part is the set of corresponding row vectors in {0, 1, *}ᵐ. Since the size of a truth table is exponential in the number of inputs, truth tables are used only for functions of small size.
The most common tabular form is a list of multiple-output implicants. A multiple-output implicant has input and output parts. The input part represents a cube in the domain and is a row vector in {0, 1, *}ⁿ. The output part has a symbol in correspondence to each output, denoting whether the corresponding input implies a value in the set {0, 1, *} or not. There are different formats possible for multiple-output implicants; those relevant to logic optimization will be described in detail in Chapter 7. Since each implicant can represent a subset of the domain, the overall number of implicants needed to represent a function can be made smaller than 2ⁿ.
We show next a list of implicants for the same function that represents the on set of each output. Thus a 1 entry in the output part denotes that the corresponding input implies a TRUE value of that output.

abc  xy
001  10
*11  11
101  01
11*  11
Definition 2.5.6. A factored form is one and only one of the following:
• A literal.
• A sum of factored forms.
• A product of factored forms.
In other words, factored forms are arbitrary parenthesized expressions with the only constraint being that complementation can be applied only to single variables. Note that factored forms include sum of products and product of sums forms and therefore they can represent arbitrary functions.
Example 2.5.8. Let us consider again the majority function f = ab + ac + bc. The function cannot be expressed in single-level form, but it can of course be represented in a two-level form as a sum of products (as above) or as a product of sums [e.g., f = (a + b)(a + c)(b + c)]. An example of a factored form representation is f = a(b + c) + bc. Note that many factored form representations of the same function may exist.
The sum and product of logic functions represented by factored forms are straightforward. Complementation of a Boolean function can be achieved by applying De Morgan's law to its expression form. It is often important to perform operations on two-level forms that yield results in two-level forms. In this case, it is best to transform the two-level representations into implicant tables first, operate on the tables and then translate the result back into a two-level expression form.
FIGURE 2.20
Binary decision diagrams for f = (a + b)c: (a) OBDD for the variable order (a, b, c). (b) OBDD for the variable order (a, c, b). (c) ROBDD for the variable order (a, b, c).
reasonable examples of logic functions, a variable order can be found such that the size of the OBDD is tractable. Therefore, for most practical cases, OBDDs provide a computationally efficient means of manipulating Boolean functions.
There have been many additions to the OBDD model. We consider first the original model presented by Bryant [10], then its extension to the tabular representation supporting multiple-output functions and last, other additional features used by software programs supporting OBDD models and computations.
To review the properties of OBDDs and to introduce operations on OBDDs, a formal definition is required [10].
Definition 2.5.7. An OBDD is a rooted directed graph with vertex set V. Each non-leaf vertex has as attributes a pointer index(v) ∈ {1, 2, . . . , n} to an input variable in the set {x1, x2, . . . , xn}, and two children low(v), high(v) ∈ V. A leaf vertex v has as an attribute a value value(v) ∈ B.
For any vertex pair {v, low(v)} (and {v, high(v)}) such that no vertex is a leaf, index(v) < index(low(v)) (and index(v) < index(high(v))).
Note that the restriction on the variable ordering guarantees that the graph is
acyclic. A function can be associated with an OBDD as follows.
Example 2.5.9. An ordered binary decision diagram is shown in Figure 2.20 (a). The diagram corresponds to the function f = (a + b)c. The vertices are ordered in a way that the top level corresponds to variable a, the second to b and the third to c. For the sake of clarity, the vertices are labeled in the figure with the corresponding variables.
A rigorous interpretation of the definition would require us to label the vertices as {v1, v2, v3, v4, v5} and the variables as x1 = a, x2 = b, x3 = c. Assume that v1 is the root. Then index(v1) = 1, which just means that vertex v1 is related to the first variable in the order, i.e., to x1 = a. And so on.
Figure 2.20 (b) shows the OBDD for a different variable order, namely (a, c, b).
Two OBDDs are isomorphic if there is a one-to-one mapping between the vertex sets that preserves adjacency, indices and leaf values. Thus two isomorphic OBDDs represent the same function. Whereas an OBDD uniquely defines a Boolean function, the converse is not true. To make an OBDD canonical, any redundancy in the OBDD must be removed.
Example 2.5.10. Consider again function f = (a + b)c. Figure 2.20 (c) shows the ROBDD corresponding to the OBDD of Figure 2.20 (a). Note that redundancies have been eliminated from the diagram. This representation is unique for that function and the variable order (a, b, c).
Bryant proved that ROBDDs are canonical forms [10] and proposed an algorithm to reduce an OBDD, which is described next. The algorithm visits the OBDD bottom up, and labels each vertex v ∈ V with an identifier id(v). An ROBDD is identified by a subset of vertices with different identifiers.
The OBDD traversal is done as follows. Let us assume that the portion of the OBDD corresponding to vertices with index larger than i has been processed and the identifiers assigned. Let us consider the subset of vertices with index i, called V(i). There are two conditions for redundancy. If id(low(v)) = id(high(v)), then vertex v is redundant and we set id(v) = id(low(v)). Similarly, if there are two vertices u and v such that id(low(u)) = id(low(v)) and id(high(u)) = id(high(v)), then the two vertices are roots of isomorphic graphs and we set id(u) = id(v). In the remaining cases, a different identifier is given to each vertex at level i. The algorithm terminates when the root is reached.
In Algorithm 2.5.1 we present a simplified form of Bryant's reduction algorithm [10] that extends the previous ideas for the sake of efficiency. Namely, instead of considering all pairs of vertices at each level and checking whether their children match, a key is formed using the children's identifiers. The elements of V(i) are then sorted by these keys. If a key matches the key of a previously considered vertex, the vertex is discarded. Otherwise, the identifier (stored in variable nextid) is incremented and assigned to the current vertex. At this point the vertex can be saved as part of the ROBDD. The complexity of the algorithm is dominated by sorting the vertex subsets, i.e., O(|V| log |V|).
Example 2.5.11. Let us apply the reduction algorithm to the OBDD of Figure 2.20 (a), repeated for convenience in Figure 2.21 (a). Figure 2.21 (b) shows the same OBDD with numerically labeled vertices, so that we can refer to them in this example. The identifiers id(v), ∀v ∈ V, are also shown.
First, the algorithm labels the leaves as 1 and 2 and initializes the ROBDD with two distinguished leaves. Then, the vertices with index = 3 are visited, i.e., those corresponding to variable c and labeled v4 and v5. A key is constructed for these vertices from the labels of the children. The key for both vertices is (1, 2), because both have the same right (i.e., high) and left (i.e., low) child. Since the key is the same, the first vertex being sorted, say v4, is assigned the identifier id = 3 and is added to the ROBDD with edges to the leaves. The other vertex (e.g., v5) gets the same identifier but is not part of the ROBDD, because it is redundant. (Its key matches the key of the previously sorted vertex.)
Next, the algorithm visits the vertices with index = 2, i.e., those corresponding to variable b and labeled v2 and v3. The left and the right children of v3 have the same identifier. Therefore v3 inherits their identifier, i.e., id = 3, and it is discarded. Vertex v2 receives identifier id = 4 and is added to the ROBDD, with edges to the 0-leaf and to v4, because the identifiers of the children of v2 are 1 and 3.
Finally, the vertex with index = 1 (i.e., the root) is visited. Since its children have different identifiers, the root is added to the ROBDD, with edges to v2 and to v4, because
REDUCE(OBDD) {
    Set id(v) = 1 to all leaves v ∈ V with value(v) = 0;
    Set id(v) = 2 to all leaves v ∈ V with value(v) = 1;
    Initialize ROBDD with two leaves with id = 1 and id = 2, respectively;
    nextid = 2;    /* nextid is the next available identifier value */
    for (i = n to 1 with i = i - 1) {
        V(i) = {v ∈ V : index(v) = i};
        foreach (v ∈ V(i)) {    /* consider vertices at level i */
            if (id(low(v)) == id(high(v))) {
                id(v) = id(low(v));    /* redundant vertex */
                Drop v from V(i);
            }
            else
                key(v) = (id(low(v)), id(high(v)));
                /* define key(v) as the identifier pair of v's children */
        }
        oldkey = (0, 0);    /* initial key that cannot be matched by any vertex */
        foreach (v ∈ V(i) sorted by key(v)) {
            if (key(v) == oldkey)    /* graph rooted at v is redundant */
                id(v) = nextid;
            else {    /* nonredundant vertex to receive new identifier value */
                nextid = nextid + 1;
                id(v) = nextid;
                oldkey = key(v);
                Add v to ROBDD with edges to vertices in ROBDD
                    whose id equal those of low(v) and high(v);
            }
        }
    }
}
ALGORITHM 2.5.1
the identifiers of the children of v1 are 4 and 3. Figure 2.21 (c) shows the completed ROBDD.
FIGURE 2.21
Binary decision diagrams for f = (a + b)c: (a) OBDD for the variable order (a, b, c). (b) OBDD with identifiers. (c) ROBDD for the variable order (a, b, c).
redundant vertex is added to the table. As a result, the unique table represents an ROBDD.
The unique table has been shown to be a strong canonical form¹ [5, 6]. Each key can be labeled by a unique identifier, and two Boolean expressions (cast in this form and with the same variable order) can be checked for equivalence by comparing the corresponding identifiers. Strong canonicity is important for developing efficient algorithms that operate on Boolean functions. These algorithms are designed to keep the ROBDDs in reduced form, thus preserving canonicity. The unique table can also be used to represent multiple-output functions, because it can model multiple ROBDDs with shared subgraphs. These are also called multi-rooted decision diagrams or shared ROBDDs.
Example 2.5.12. Consider first the ROBDD of Figure 2.22 (a), representing function f = (a + b)c with the variable order (a, b, c), associated with the vertex with id = 5. Assuming that constants 0 and 1 have identifiers 1 and 2, respectively, the unique table is:

                         Key
Identifier    Variable    Left child    Right child
5             a           4             3
4             b           1             3
3             c           1             2

Consider now the function pair f = (a + b)c; g = bcd with the variable order (d, a, b, c), where f is associated with the vertex with id = 5, and g is associated with the vertex with id = 6. A multi-rooted decision diagram for f and g is shown in Figure 2.22 (b). The unique table is:

                         Key
Identifier    Variable    Left child    Right child
6             d           1             4
5             a           4             3
4             b           1             3
3             c           1             2

FIGURE 2.22
(a) ROBDD for f = (a + b)c with variable order (a, b, c). (b) Multi-rooted ROBDD for f = (a + b)c; g = bcd with variable order (d, a, b, c).

¹A strong canonical form is a form of pre-conditioning which reduces the complexity of an equivalence test between elements in a set. A unique identifier is assigned to each element in the set, so that an equivalence check is just a test on the equality of the identifiers.
The ROBDD construction and manipulation can be done with the ite operator,
which takes its name from its operative definition. Given three scalar Boolean functions
f, g and h, ite(f, g, h) = f · g + f' · h, which is equivalent to: if f then g else h.
In particular, functions f, g and h can be Boolean variables or constants.

We assume that all variables are ordered and we call the top variable of a
function (or set of functions) the first variable in the support set (or in the union of
the support sets). Let z = ite(f, g, h) and let x be the top variable of functions f, g
and h. Then function z is associated with the vertex whose variable is x and whose
children implement ite(f_x, g_x, h_x) and ite(f_x', g_x', h_x'). The reason can be explained
by considering the identity:

    ite(f, g, h) = x · ite(f_x, g_x, h_x) + x' · ite(f_x', g_x', h_x')

TABLE 2.2
Boolean functions of two arguments and equivalent representation in terms of the ite operator.

    Operator      Equivalent ite form
    0             0
    f · g         ite(f, g, 0)
    f · g'        ite(f, g', 0)
    f             f
    f' · g        ite(f, 0, g)
    g             g
    f ⊕ g         ite(f, g', g)
    f + g         ite(f, 1, g)
    (f + g)'      ite(f, 0, g')
    (f ⊕ g)'      ite(f, g, g')
    g'            ite(g, 0, 1)
    f + g'        ite(f, 1, g')
    f'            ite(f, 0, 1)
    f' + g        ite(f, g, 1)
    (f · g)'      ite(f, g', 1)
    1             1
and by the fact that x is the top variable of f, g and h. The relevant terminal cases of
this recursion are ite(f, 1, 0) = f, ite(1, g, h) = g, ite(0, g, h) = h, ite(f, g, g) = g
and ite(f, 0, 1) = f'. Other terminal cases can be reduced to these, because they are
equivalent [5, 6].
The usefulness of the ite operator stems from the fact that all Boolean functions
of two arguments can be represented in terms of ite, as shown in Table 2.2. Thus, the
ite operator can be used to construct the ROBDD of a function, which is represented
as the application of binary operators to variables.
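The rows of Table 2.2 can be verified exhaustively over scalar values; the following sketch checks a few of them by truth table, using Python booleans and no BDD machinery:

    # Truth-table check of some rows of Table 2.2: each two-argument
    # Boolean operator equals an instance of ite(f, g, h).
    def ite(f, g, h):
        return (f and g) or ((not f) and h)

    for f in (False, True):
        for g in (False, True):
            assert (f and g) == ite(f, g, False)      # f . g   = ite(f, g, 0)
            assert (f or g)  == ite(f, True, g)       # f + g   = ite(f, 1, g)
            assert (f != g)  == ite(f, not g, g)      # f XOR g = ite(f, g', g)
            assert (not f)   == ite(f, False, True)   # f'      = ite(f, 0, 1)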
Example 2.5.13. Note first that f · g = ite(f, g, 0) and that f + g = ite(f, 1, g) for
any function pair f, g, as shown in Table 2.2.

Consider the construction of the ROBDD of the function f = ac + bc with variable
order (a, b, c). Figures 2.23 (a,b,c) show elementary ROBDDs related to variables a, b
and c, respectively. The ROBDD of ac can be computed as ite(a, c, 0) and is shown in
Figure 2.23 (d). Similarly, the ROBDD of bc can be computed as ite(b, c, 0) and is shown
in Figure 2.23 (e). Finally, the ROBDD of f = ac + bc can be computed as ite(ac, 1, bc).
Since ite(ac, 1, bc) = ite(a, ite(c, 1, bc), ite(0, 1, bc)) = ite(a, c, bc), the ROBDD has
a as top variable, the ROBDD of bc [Figure 2.23 (e)] as the left child of the root and the
ROBDD of c [Figure 2.23 (c)] as the right child of the root. Moreover, since a vertex
corresponding to variable c is already present within the ROBDD representing bc, which
is now the left child of the root, this vertex is used also as a right child of the root.
Therefore the OBDD is reduced.

It is interesting to note that the same ROBDD will be constructed from other
equivalent representations of the same function, e.g., f = (a + b)c, provided that the
same order of the variables is used.
FIGURE 2.23
(a) ROBDD of a. (b) ROBDD of b. (c) ROBDD of c. (d) ROBDD of ac. (e) ROBDD of bc. (f) ROBDD of ac + bc.
The ITE algorithm seen in Algorithm 2.5.2 implements the ite operator and
allows us to construct an ROBDD for arbitrary sets of Boolean functions over an
ordered set of variables. The ITE algorithm uses two tables. The first one is the
unique table, which stores the ROBDD information in a strong canonical form. A
second table, called the computed table, is used to improve the performance of the
algorithm. The computed table stores the mapping of any triple (f, g, h) to the vertex
implementing ite(f, g, h) once this has been computed. This allows the algorithm to
use results from previous invocations of ITE in subsequent calls.

The worst-case complexity of the ITE algorithm can be derived under the as-
sumption that look-up and insertion in the unique and computed tables can be done
in constant time. ITE will be called at most once for each distinct triple of ver-
tices of the ROBDDs of f, g and h, and therefore the complexity is of the order of
the product of the sizes (i.e., vertex cardinalities) of the corresponding graphs. The
average complexity is much lower.
The ITE algorithm can be used to apply Boolean operators (e.g., AND, OR, et
cetera) to ROBDDs. For example, it can be used to check the implication of two
functions, e.g., f ⇒ g or, equivalently, if f' + g is a tautology. This test can be
done by verifying whether ite(f, g, 1) has an identifier equal to that of the leaf with
value 1. Alternatively, a specialized algorithm can be used which checks directly for
tautology of ite(f, g, h). Such an algorithm is based on the following observation.
The function associated with an ROBDD vertex is a tautology if both its children
represent a tautology. Hence the algorithm can return a negative answer as soon as
this condition is not satisfied at some vertex [5, 6].
ITE(f, g, h) {
    if (terminal case)
        return (r = trivial result);
    else {                               /* exploit previous information */
        if (computed table has entry {(f, g, h), r})
            return (r from computed table);
        else {
            x = top variable of f, g, h;
            t = ITE(f_x, g_x, h_x);
            e = ITE(f_x', g_x', h_x');
            if (t == e)                  /* children with isomorphic OBDDs */
                return (t);
            r = find_or_add_unique_table(x, t, e);
                                         /* add r to unique table if not present */
            Update computed table with {(f, g, h), r};
            return (r);
        }
    }
}

ALGORITHM 2.5.2
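The recursion can be made concrete with a toy Python rendition. The node store, table layouts and helper names below are illustrative assumptions (a stand-in for a real BDD package), not the book's implementation:

    # Sketch of ITE over an explicit node store. The unique and computed
    # tables are plain dictionaries; node ids 0 and 1 denote the two leaves.
    nodes = {}                      # id -> (var, low_id, high_id)
    unique = {}                     # (var, low_id, high_id) -> id
    computed = {}                   # (f, g, h) -> result id
    ORDER = {'a': 0, 'b': 1, 'c': 2}

    def mk(var, lo, hi):            # find_or_add_unique_table
        if lo == hi:
            return lo               # redundant vertex
        key = (var, lo, hi)
        if key not in unique:
            unique[key] = len(nodes) + 2
            nodes[unique[key]] = key
        return unique[key]

    def var_of(f):
        return nodes[f][0] if f in nodes else None

    def cof(f, x, phase):           # cofactor, valid when x is at/above f's root
        if f in nodes and nodes[f][0] == x:
            return nodes[f][2] if phase else nodes[f][1]
        return f

    def ITE(f, g, h):
        if f == 1: return g                     # terminal cases
        if f == 0: return h
        if g == 1 and h == 0: return f
        if g == h: return g
        if (f, g, h) in computed:               # computed-table hit
            return computed[(f, g, h)]
        x = min((v for v in (var_of(f), var_of(g), var_of(h)) if v is not None),
                key=ORDER.get)                  # top variable of f, g, h
        t = ITE(cof(f, x, 1), cof(g, x, 1), cof(h, x, 1))
        e = ITE(cof(f, x, 0), cof(g, x, 0), cof(h, x, 0))
        r = t if t == e else mk(x, e, t)        # isomorphic children: drop vertex
        computed[(f, g, h)] = r
        return r

    a, b, c = mk('a', 0, 1), mk('b', 0, 1), mk('c', 0, 1)
    ac = ITE(a, c, 0)                           # ROBDD of a . c
    f = ITE(ac, 1, ITE(b, c, 0))                # ROBDD of ac + bc
    assert f == mk('a', ITE(b, c, 0), c)        # root a, low child bc, high child c

The final assertion reproduces Example 2.5.13: the resulting ROBDD has a as top variable, bc as its left (low) child and c as its right (high) child.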
The ITE algorithm can also be used for functional composition, i.e., for replacing
a variable of a Boolean expression by another expression, because f|_{x=g} = f_x g +
f_x' g' = ite(g, f_x, f_x'). A more efficient implementation of this operation can be
achieved by another specialized algorithm, called COMPOSE [6, 10].

Whereas the consensus and the smoothing operations can be computed by ITE,
there is also a specific algorithm for these tasks [5, 6]. This algorithm is called here
QUANTIFY and has a structure similar to ITE. The algorithm can perform both univer-
sal and existential quantification (i.e., consensus and smoothing) of a function f with
respect to variables in a list varlist. The algorithm calls procedure OP(t, e), which
performs either the conjunction of its arguments in the case of consensus or their dis-
junction in the case of smoothing. Procedure OP(t, e) is implemented as ITE(t, e, 0)
in the former case and as ITE(t, 1, e) in the latter.
The algorithm is recursive and makes use of a computed table that stores the
quantification results. Thus, the table is checked before the actual computation is
performed. If the table does not contain the desired entry, the top variable x in f
is selected and the cofactors with respect to this variable are computed. (This step
is trivial, because the cofactors are just represented by the children of the vertex
representing f.)

Next the algorithm computes recursively the quantification of the cofactors,
which are named t and e, respectively (see Algorithm 2.5.3). If x is included in
varlist, then either r = t · e or r = t + e is computed, according to the goal of using
QUANTIFY to derive either the consensus or the smoothing of f. If x is not included
in varlist, the result is expressed by ITE(x, t, e).
QUANTIFY(f, varlist) {
    if (f is constant)
        return (f);
    else {
        if (computed table has entry {(f, varlist), r})
                                         /* exploit previous information */
            return (r from computed table);
        else {
            x = top variable of f;
            g = f_x;
            h = f_x';
            t = QUANTIFY(g, varlist);
                                         /* compute quantification recursively on g */
            e = QUANTIFY(h, varlist);
                                         /* compute quantification recursively on h */
            if (x is in varlist)
                r = OP(t, e);
            else
                r = ITE(x, t, e);
            Update computed table with {(f, varlist), r};
            return (r);
        }
    }
}

ALGORITHM 2.5.3
Example 2.5.14. Consider function f = ab + bc + ac and let us compute C_a(f). Then
varlist = (a). Assume the variables are ordered as (a, b, c). When QUANTIFY(f, a) is
invoked, the top variable is a and the cofactors are computed as g = f_a = b + c and
h = f_a' = bc. The recursive calls return t = g = b + c and e = h = bc, because
g and h do not depend on a. Since a is in varlist, r = OP(t, e) = ITE(t, e, 0) =
ITE(b + c, bc, 0) = bc. Thus the result is C_a(f) = bc, as shown in Figure 2.19 (c).
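The example can be replayed with the toy ITE sketch given after Algorithm 2.5.2. The simplified quantify below is an assumption of the sketch: it handles a single variable and relies on x being the top variable of f, so the cofactors are just the children of f's root vertex:

    # Single-variable consensus/smoothing, reusing mk, cof and ITE above.
    def quantify(f, x, universal=True):
        t = cof(f, x, 1)                        # f_x
        e = cof(f, x, 0)                        # f_x'
        return ITE(t, e, 0) if universal else ITE(t, 1, e)

    f = ITE(a, ITE(b, 1, c), ITE(b, c, 0))      # f = ab + bc + ac, order (a, b, c)
    assert quantify(f, 'a') == ITE(b, c, 0)     # C_a(f) = bc, as in Example 2.5.14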
Example 2.5.15. Consider the following product of sums form: (a + b)(a' + b' + c). The
expression can be satisfied by choosing a = 1; b = 1; c = 1. Note that this assignment
is not the only one that satisfies the expression.
The satisfiability problem was shown to be intractable by Cook's theorem [14]
and it is considered the root of all intractable problems. Indeed, the intractability
of a problem can be shown by a transformation (or a chain of transformations) of
satisfiability into that problem. For example, the proof of intractability of the ZOLP
can be done by formulating a satisfiability problem as a ZOLP.
Example 2.5.16. Consider again the product of sums form: (a + b)(a' + b' + c). The
corresponding satisfiability problem can be modeled as a ZOLP:

    a + b ≥ 1
    (1 - a) + (1 - b) + c ≥ 1
    a, b, c ∈ B
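With only three binary variables, the correspondence between the ZOLP and the product of sums form can be checked exhaustively; the following sketch enumerates all assignments:

    # Brute-force check of the ZOLP above: the assignments satisfying the
    # two linear constraints coincide with those satisfying the POS form.
    from itertools import product

    sols = [(a, b, c) for a, b, c in product((0, 1), repeat=3)
            if a + b >= 1 and (1 - a) + (1 - b) + c >= 1]
    assert (1, 1, 1) in sols                 # the assignment of Example 2.5.15
    assert all((a or b) and (not a or not b or c) for a, b, c in sols)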
Example 2.5.17. Consider the second graph of Figure 2.13 (a), reported again for con-
venience in Figure 2.24 (a), where edges are labeled {a, b, c, d, e}. The corresponding
covering problem is set up as follows.

FIGURE 2.24
(a) Graph. (b) Minimum vertex cover.
In the vertex cover problem, we search for the minimum number of vertices covering all
the edges. Thus the edge set corresponds to S and the vertex set to C. Note that each
vertex can be identified by the group of edges incident to it. Therefore A is the transpose
of the (vertex/edge) incidence matrix and c = 1.

Examples of covers are represented by the following vectors: x1 = [10010]^T,
x2 = [01101]^T and x3 = [01111]^T. Note that Ax ≥ 1 for x = x1, x2, x3. Vector x1 is a
minimum cover that is shown in Figure 2.24 (b).
Example 2.5.18. Consider the hypergraph shown in Figure 2.25 (a), with five ver-
tices {v_i, i = 1, 2, ..., 5} and five edges {a, b, c, d, e}. The corresponding (vertex/edge)
incidence matrix is:

    A = [ 1 0 1 0 0 ]
        [ 1 1 0 0 1 ]
        [ 0 1 1 0 1 ]
        [ 0 0 0 1 0 ]
        [ 0 1 1 1 0 ]

We search now for the minimum number of edges that cover all the vertices. Thus the
vertex set corresponds to S and the edge set to C of the covering problem formulation.
Therefore the covering matrix is the incidence matrix itself and c = 1. Edge covers
satisfy Ax ≥ 1.

An example of a minimum cover is given by edges {a, b, d}, or equivalently by
vector x = [11010]^T. The cover is shown in Figure 2.25 (b).

Let us extend the problem by associating weights with each edge. Let c =
[1, 2, 1, 1, 1]^T. Then, it is easy to verify that a minimum-cost cover is {a, c, d}, cor-
responding to vector x = [10110]^T with cost 3.
Note that a minimum vertex cover problem can be defined in an analogous way.

Covering problems can be cast as minimum-cost satisfiability problems, by as-
sociating a selection variable with each group (element of C) and a clause with each
element of S. Each clause represents those groups that can cover the element, i.e., it
is the disjunction of the variables related to the corresponding groups. It is important
to note that the product of the clauses is a unate expression, and thus this problem
is often referred to as unate cover.
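This translation is mechanical, as the following sketch shows for the incidence matrix given above for Example 2.5.18 (column j corresponds to selection variable x_{j+1}; the 0-based indexing is a convention of the sketch):

    # Covering as satisfiability: each row of A yields one unate clause,
    # the disjunction of the selection variables of its 1-entries.
    A = [[1, 0, 1, 0, 0],
         [1, 1, 0, 0, 1],
         [0, 1, 1, 0, 1],
         [0, 0, 0, 1, 0],
         [0, 1, 1, 1, 0]]

    clauses = [[j for j, entry in enumerate(row) if entry == 1] for row in A]

    def is_cover(x):             # x is a 0-1 selection vector over the columns
        return all(any(x[j] for j in clause) for clause in clauses)

    assert is_cover([1, 1, 0, 1, 0])        # {a, b, d} covers every vertex
    assert not is_cover([0, 1, 0, 1, 0])    # {b, d} leaves v1 uncovered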
FIGURE 2.25
(a) Hypergraph. (b) Minimum edge cover.
Example 2.5.19. Consider the covering problem of Example 2.5.18. The first covering
clause (x1 + x3) denotes that vertex v1 must be covered by edge a or c. The overall
product of sums expression to be satisfied is:

    (x1 + x3)(x1 + x2 + x5)(x2 + x3 + x5) x4 (x2 + x3 + x4)
The covering problem can be generalized by assuming that the choice of a group
implies the choice of another one. This additional constraint can be represented by
an implication clause. For example, if the selection of group a implies the choice
of b, then the clause (a' + b) is added. Note that the implication clause makes the
product of sums form binate in variable a, because a (uncomplemented) is also part
of some covering clauses. Therefore this class of problems is referred to as binate
covering, or covering with closure. Binate covering problems are minimum-cost satis-
fiability problems. Nevertheless, some minimum-cost satisfiability problems exist that
do not represent covering problems even with implications (e.g., when all clauses
have variables with mixed polarity).
Example 2.5.20. Consider again the covering problem of Example 2.5.18. Assume that
the selection of edge a implies the selection of edge b. Then the clause (x1' + x2) must
also be considered. The overall product of sums expression to be satisfied is:

    (x1 + x3)(x1 + x2 + x5)(x2 + x3 + x5) x4 (x2 + x3 + x4)(x1' + x2)

It is easy to verify that x = [11010]^T is a minimum cover and that x = [10110]^T is not
a cover.
Example 2.5.21. The binate covering problem of Example 2.5.20 can be described by a
matrix A ∈ {-1, 0, 1}^{6×5}, with one row per clause, where a 1 (-1) entry denotes that
the corresponding variable appears uncomplemented (complemented) in the clause:

    A = [  1  0  1  0  0 ]
        [  1  1  0  0  1 ]
        [  0  1  1  0  1 ]
        [  0  0  0  1  0 ]
        [  0  1  1  1  0 ]
        [ -1  1  0  0  0 ]

Vector [11010]^T denotes a solution, because the submatrix identified by the first, second
and fourth columns has at least a 1 in all rows. Vector [10110]^T is not a solution, because
the last clause is not satisfied. Indeed, there is no 1 in the last row in correspondence
to the first, third and fourth columns, and no -1 in correspondence with the second and
fifth columns.
Whereas both unate and binate covering problems are intractable, binate cover-
ing is intrinsically more difficult than unate covering. Indeed, the addition of an ele-
ment to a feasible solution may make it infeasible if its implications are not satisfied.
A classic example of unate covering is that related to exact two-level optimization of
logic functions (Section 7.2.2). Notable examples of binate covering are those related
to the exact minimization of Boolean relations (Section 7.6), to state minimization for
finite-state machines (Section 9.2.1) and to cell-library binding (Chapter 10).
An essential column is a column having the only 1 entry of some row. Graphi-
cally, it corresponds to an essential edge, i.e., to an edge which is the only one incident
to a vertex. Essential columns (edges) must be part of any cover.

A column dominates another column if the entries of the former are larger than
or equal to the corresponding entries of the latter. Equivalently, an edge dominates
another edge when it is incident to at least all vertices incident to the other. Dominated
columns (edges) can be discarded from consideration.

A row dominates another row if the entries of the former are larger than or equal
to the corresponding entries of the latter. Equivalently, a vertex dominates another
vertex when it is incident to at least all edges incident to the other. Dominant rows
(vertices) can be neglected, because any cover of the dominated ones is also a cover
of the complete set.
Example 2.5.22. Consider matrix A of Example 2.5.18. The fourth column is essential,
because it is the only one with a 1 entry in the fourth row. Equivalently, edge d is
essential, because it is the only one incident to vertex v4. Hence it must be part of a
cover.

The second column ([01101]^T) dominates the fifth column ([01100]^T). Equiva-
lently, selecting edge b corresponds to covering all vertices covered by edge e. Thus
edge e can be dropped from consideration.

The fifth row ([01110]) dominates the fourth row ([00010]). Equivalently, vertex
v5 dominates vertex v4, because it is incident to a superset of the edges incident to v4.
Thus, v5 can be dropped from consideration, because any solution covering v4 covers v5
as well.

Thus, any optimum solution to the covering problem specified by A has x4 =
1; x5 = 0.
Example 2.5.23. Consider the covering problem specified in Example 2.5.18. As shown
in Example 2.5.22, the fourth column is essential and the fifth is dominated. The fifth
row is dominant. Thus the fourth element of x is set to 1 and the fifth element of x is
set to 0.
EXACT_COVER(A, x, b) {
    Reduce matrix A and update corresponding x;
    if (Current_estimate ≥ |b|) return (b);
    if (A has no rows) return (x);
    Select a branching column c;
    x_c = 1;
    Ã = A after deleting column c and the rows covered by it;
    x̃ = EXACT_COVER(Ã, x, b);
    if (|x̃| < |b|)
        b = x̃;
    x_c = 0;
    Ã = A after deleting column c;
    x̃ = EXACT_COVER(Ã, x, b);
    if (|x̃| < |b|)
        b = x̃;
    return (b);
}

ALGORITHM 2.5.4
The reduced covering problem, where the remaining columns correspond to edges
{a, b, c} and the remaining rows to vertices {v1, v2, v3}, is specified by:

    A = [ 1 0 1 ]
        [ 1 1 0 ]
        [ 0 1 1 ]
The bounding is done by the evaluation of Current_estimate, which is a lower
bound on the cardinality of all covers that can be derived from the current x. The
estimate is the sum of two components: the number of 1s in x (i.e., the number of
columns already committed) plus a bound on the size of the cover of the current
A. Such a bound can be given by the size of a maximal independent set of rows,
i.e., rows that have no 1s in the same column. Pairwise independence can be easily
identified, and a graph can be constructed whose vertices are in one-to-one correspon-
dence with the rows (vertices in the original hypergraph) and whose edges denote
independence. The bound is the clique number of this graph. Computing an exact
bound is difficult, because identifying the maximum clique is as complex as solving
the covering problem itself. However, any lower approximation (which can be computed
by a fast heuristic algorithm) is valuable. The worse the approximation is, the less
pruning the algorithm does. In any case, the algorithm is still exact.
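A simple greedy construction gives such a lower approximation: any set of mutually independent rows found greedily is a valid (if possibly weak) lower bound, since the true bound is the clique number. A sketch, with the matrix of Example 2.5.18 repeated for self-containment:

    # Greedy lower bound from pairwise-independent rows.
    def independent_rows_bound(A):
        chosen = []                     # indices of mutually independent rows
        for i, row in enumerate(A):
            if all(not any(p and q for p, q in zip(row, A[k])) for k in chosen):
                chosen.append(i)
        return len(chosen)

    A = [[1, 0, 1, 0, 0], [1, 1, 0, 0, 1], [0, 1, 1, 0, 1],
         [0, 0, 0, 1, 0], [0, 1, 1, 1, 0]]
    assert independent_rows_bound(A) == 2   # matches Example 2.5.24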
Example 2.5.24. Consider the covering matrix A of Example 2.5.18. The fourth row is
independent from the first, second and third, but the first three rows are dependent. Thus
the maximal set of mutually independent rows has cardinality 2. Hence the bound is 2.

Consider the reduced covering matrix A of Example 2.5.23. There are no indepen-
dent rows. Thus a lower bound on covering that matrix is 1. A lower bound on covering
the original matrix is 1 plus |x| = 1, i.e., 2. When applying the algorithm to the matrix
of Example 2.5.18, the bound is less than |b| = |1| = 5 and the search is not killed.
Example 2.5.25. Consider the reduced covering matrix A shown in Example 2.5.23.
Let us assume that the branching column c is the first one. Then x1 = 1, x = [10010]^T
and:

    Ã = [ 1 1 ]

Consider now the recursive call to EXACT_COVER. Once the dominated column is
deleted, the other column is essential. Assume it corresponds to x2. Then x = [11010]^T.
After removing the row, the matrix has no rows and the call returns x̃ = x. Now
|x̃| = 3 < |b| = 5 and thus the best solution seen so far is updated to b = [11010]^T.

Then the algorithm tries to exclude the first column. Thus x1 = 0, x = [00010]^T,
and:

    Ã = [ 0 1 ]
        [ 1 0 ]
        [ 1 1 ]

Now both columns are essential. Then x = [01110]^T. After removing the rows incident
to the essential columns, the matrix has no rows and the call returns x̃ = x. Now
|x̃| = 3 = |b| = 3 and b is not updated. The algorithm then returns [11010]^T.
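A compact Python rendition of the branch-and-bound may be useful. It omits the reduction step and uses the trivial estimate |x| (a weaker bound than the independent-row count, but the algorithm remains exact); rows are represented as sets of column indices, an encoding assumed only for this sketch:

    # Branch-and-bound unate covering in the spirit of Algorithm 2.5.4.
    def exact_cover(rows, n_cols, chosen=frozenset(), best=None):
        rows = [r for r in rows if not (r & chosen)]       # rows still uncovered
        if best is not None and len(chosen) >= len(best):
            return best                                    # bounding: cannot improve
        if not rows:
            return set(chosen)                             # feasible cover found
        if any(not r for r in rows):
            return best                                    # a row can no longer be covered
        c = max(range(n_cols), key=lambda j: sum(j in r for r in rows))
        best = exact_cover(rows, n_cols, chosen | {c}, best)               # x_c = 1
        best = exact_cover([r - {c} for r in rows], n_cols, chosen, best)  # x_c = 0
        return best

    rows = [{0, 2}, {0, 1, 4}, {1, 2, 4}, {3}, {1, 2, 3}]  # matrix A of Example 2.5.18
    best = exact_cover(rows, 5)
    assert best is not None and len(best) == 3             # e.g., {a, b, d}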
The algorithm can be modified to cope with weighted covering problems as
follows. First, the cost of a cover specified by x is c^T x, which replaces |x| in the
algorithm. Similar considerations apply to the cost of the best solution seen so far, b.
Second, the current estimate computation must reflect the cost of covering a maximal
independent set of rows. Last, column dominance must also reflect the weights, i.e.,
an additional condition for column dominance is that the dominant column must not
have larger weight than the dominated one.
Example 2.5.26. Consider the covering matrix A of Example 2.5.18, with weight vector
c = [1, 2, 1, 1, 1]^T. Then, the initial best solution has cost 6. A lower bound on the cost
of the cover can be derived by considering the two independent rows, each of which
requires at least one unit-weight column to be covered in the best case. Hence the bound
is 2, less than 6, and the search is not killed.

In this case, the fifth column is not dominated by the second, because its cost is
lower. Hence it should not be discarded. Indeed, once the essential column is detected
and the rows incident to it are deleted, the second column can be dropped, because it is
dominated by the fifth column.

At this point, the problem can be represented by a reduced matrix similar to that
used in Example 2.5.25, with the only difference that the columns are related to edges
(a, e, c). All columns have unit weight. A minimum-cost cover is {a, c, d}, represented
by vector x = [10110]^T.
Let us consider now a minimum-cost satisfiability problem that is modeled by a
matrix A ∈ {-1, 0, 1}^{n×m}. This problem can also be solved by the EXACT_COVER
algorithm with some modifications. We highlight here the major differences from the
unate case, and we refer the reader to references [8, 20] for details.

First, note that an instance of a minimum-cost satisfiability problem (in particular
a binate covering problem) may not have a solution, whereas unate covering problems
always have a solution. Second, the matrix reduction step has to account for three-
valued entries. Whereas an essential column is a column having the only nonzero
entry of some row, an essential column must be part of a solution when the entry is 1, and
it must not be part of a solution when the entry is -1. Column and row dominance
can be defined in a similar way. In particular, a row dominates another row if the
corresponding clause is satisfied when the latter is satisfied. Third, the computation of
the bounding function can still be based on the search for a maximal independent set
of rows. In the general case, two rows are independent when they cannot be satisfied
simultaneously by the assignment of a single variable [8].
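Clause satisfaction under the three-valued encoding is easy to state in code; the following sketch checks the closure clause of Example 2.5.21 against the two vectors discussed there:

    # A {-1, 0, 1} row is satisfied if some selected column has a 1 entry
    # or some unselected column has a -1 entry.
    def row_satisfied(row, x):
        return any((a == 1 and xj == 1) or (a == -1 and xj == 0)
                   for a, xj in zip(row, x))

    row = [-1, 1, 0, 0, 0]                          # the clause (x1' + x2)
    assert row_satisfied(row, [1, 1, 0, 1, 0])      # x = [11010]: x2 = 1
    assert not row_satisfied(row, [1, 0, 1, 1, 0])  # x = [10110]: x1 = 1, x2 = 0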
Example 2.5.27. Consider a minimum-cost satisfiability problem specified by the matrix:

    A = [ -1  1  0  0  0  0 ]
        [ -1  1  1  1  0  0 ]
        [  0  0  1 -1  0  0 ]
        [  0  0  0  0  1  0 ]
        [  0  0  0  0  0 -1 ]

Variables x5 and x6 are essential: the former must be set to 1 and the latter to 0. Therefore
the last two rows and columns can be dropped from consideration. Variable x3 dominates
x4; thus x4 can be set to 0, satisfying the clause expressed by the third row. The second
row dominates the first one and can be discarded, because x1' + x2 implies x1' + x2 + x3 + x4.
The problem is thus reduced to satisfying x1' + x2, which can be done by setting both x1
and x2 to 0. A solution is x = [000010]^T.
Example 2.5.28. Consider the problem of Example 2.5.27. Figure 2.26 shows a fragment
of the corresponding OBDD, with all paths to the TRUE leaf. The emboldened path in
the figure has minimum weight. It identifies solution x = [000010]^T.
FIGURE 2.26
OBDD fragment modeling a minimum-cost satisfiability problem.
2.7 REFERENCES

Graph theory is extensively described in the textbooks by Deo [13], Bondy and Murty [4], and Harary [16].
Perfect graphs and algorithms that exploit the properties of perfect graphs are well described in Golumbic's
book [15].

Fundamental algorithms are presented in detail in the recent book by Cormen et al. [12]. A classical
reference is Lawler's book [22]. Linear programs and network algorithms are also described in detail in
Nemhauser and Wolsey's text [27] and surveyed in the more recent book by Papadimitriou and Steiglitz
[28]. Another good reference for fundamental algorithms is the textbook by Horowitz and Sahni [19]. Garey
and Johnson [14] have written an excellent book on tractable and intractable problems with an extended
list of problems and their taxonomy.

Boolean algebra and its applications have been the subject of several textbooks, e.g., Hill and
Peterson's [18]. A good reference for formal properties of Boolean algebra is Brown's book [9]. Binary
decision diagrams have been described in several papers [2, 5, 10, 24]. Most applications of BDDs are
based on the model popularized by Bryant [10, 11]. Recent applications of BDDs are often supported by
the BDD package developed by Brace et al. [5], who introduced the use of the ITE algorithm and the
tabular notation, also credited to Karplus and Bryant.
1. A. Aho, J. Hopcroft and J. Ullman, Data Structures and Algorithms, Addison-Wesley, Reading, MA, 1983.
2. S. Akers, "Binary Decision Diagrams," IEEE Transactions on Computers, Vol. C-27, pp. 509-516, June 1978.
5. K. Brace, R. Rudell and R. Bryant, "Efficient Implementation of a BDD Package," DAC, Proceedings of the Design Automation Conference, pp. 40-45, 1990.
6. K. Brace, "Ordered Binary Decision Diagrams for Optimization in Symbolic Switch-level Analysis of MOS Circuits," Ph.D. Dissertation, Carnegie Mellon University, Pittsburgh, PA, 1993.
7. R. Brayton, R. Rudell, A. Sangiovanni and A. Wang, "MIS: A Multiple-Level Logic Optimization System," IEEE Transactions on CAD/ICAS, Vol. CAD-6, No. 6, pp. 1062-1081, November 1987.
8. R. Brayton and F. Somenzi, "An Exact Minimizer for Boolean Relations," ICCAD, Proceedings of the International Conference on Computer Aided Design, pp. 316-320, 1989.
9. F. Brown, Boolean Reasoning, Kluwer Academic Publishers, Boston, MA, 1990.
10. R. Bryant, "Graph-based Algorithms for Boolean Function Manipulation," IEEE Transactions on Computers, Vol. C-35, No. 8, pp. 677-691, August 1986.
11. R. Bryant, "Symbolic Boolean Manipulation with Ordered Binary-Decision Diagrams," ACM Computing Surveys, Vol. 24, No. 3, September 1992.
12. T. Cormen, C. Leiserson and R. Rivest, Introduction to Algorithms, McGraw-Hill, New York, 1990.
13. N. Deo, Graph Theory with Applications to Engineering and Computer Science, Prentice-Hall, Englewood Cliffs, NJ, 1974.