A Fast Parametric Maximum Flow Algorithm
Abstract. The classical maximum flow problem sometimes occurs in settings in which the arc capacities
are not fixed but are functions of a single parameter, and the goal is to find the value of the parameter such
that the corresponding maximum flow or minimum cut satisfies some side condition. Finding the desired
parameter value requires solving a sequence of related maximum flow problems. In this paper it is shown
that the recent maximum flow algorithm of Goldberg and Tarjan can be extended to solve an important
class of such parametric maximum flow problems, at the cost of only a constant factor in its worst-case
time bound. Faster algorithms for a variety of combinatorial optimization problems follow from the result.
Key words. algorithms, data structures, communication networks, complexity, flow sharing, fractional programming, graphs, knapsack constraint, linear programming, maximum flow, maximum-density subgraphs, network flows, network vulnerability, networks, nonlinear zero-one programming, ratio closure, record segmentation, parallel computations, parametric programming, provisioning, pseudoforest, scheduling, selection, sequencing
* Received by the editors July 20, 1987; accepted for publication (in revised form) February 16, 1988.
This research was partially supported by the National Science Foundation under grants MCS-8113503 and
DCR-8605962, and by the Office of Naval Research under contracts N00014-84-K-0444 and N00014-87-K-
0467.
† Dipartimento di Informatica, Università di Pisa, Pisa, Italy. Some of this work was done while this author was on leave at the Department of Computer Science, Rutgers University, New Brunswick, New Jersey from April to July 1986.
Department of Computer Science, Rutgers University, New Brunswick, New Jersey 08903.
Department of Computer Science, Princeton University, Princeton, New Jersey 08544, and AT&T Bell Laboratories, Murray Hill, New Jersey 07974.
we shall describe computes no more than n - 1 distinct minimum cuts, no matter how many values of λ are given. For all of our applications, l = O(n).
We shall now extend the preflow algorithm to the parametric maximum flow
problem. Suppose that for some value λ_i of the parameter we have computed a maximum flow f and a valid labeling d for f. What is the effect of changing the value of the parameter to λ_{i+1}? The capacity of each arc (s, v) may increase; that of each arc (v, t) may decrease. Suppose we modify f by replacing f(v, t) by min {c_{λ_{i+1}}(v, t), f(v, t)} for each arc (v, t) ∈ E, and replacing f(s, v) by max {c_{λ_{i+1}}(s, v), f(s, v)} for each arc (s, v) ∈ E such that d(v) < n. The modified f is a preflow, since e(v) for v ∉ {s, t} can only have increased. Furthermore, d is a valid labeling for the modified f, since the only new residual arcs are of the form (s, v) for d(v) ≥ n and (v, s) for d(v) < n. This means that we can compute a maximum flow and a minimum cut for λ_{i+1} by applying the preflow algorithm beginning with the modified f and the current d.
G. GALLO, M. D. GRIGORIADIS, AND R. E. TARJAN
This idea leads to the following parametric preflow algorithm. The algorithm finds
a maximum flow f_i and a minimum cut (X_i, X̄_i) for each value λ_i of the parameter. It consists of initializing f = 0, d(s) = n, d(v) = 0 for v ≠ s, and i = 0, and repeating the following three steps l times:
Step 1. (Update preflow.) Replace i by i + 1. For (v, t) ∈ E, replace f(v, t) by min {c_{λ_i}(v, t), f(v, t)}. For (s, v) ∈ E with d(v) < n, replace f(s, v) by max {c_{λ_i}(s, v), f(s, v)}.
Step 2. (Find maximum flow.) Apply the preflow algorithm to the network with arc capacities corresponding to λ_i, beginning with the current f and d. Let f_i and d be the resulting flow and final valid labeling.
Step 3. (Find minimum cut.) Redefine d(v) = min {d_f(v, s) + n, d_f(v, t)} for each v ∈ V. The cut (X_i, X̄_i) is then given by X_i = {v | d(v) ≥ n}.
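The three steps admit a compact runnable sketch. The following Python is our own illustration, not the paper's implementation: it uses a plain push/relabel loop with a simple discharge order and brute-force bookkeeping in place of dynamic trees, and all function and variable names are ours.

```python
from collections import deque

def parametric_min_cuts(vertices, s, t, cap_fn, lambdas):
    """Sketch of Steps 1-3 above.  cap_fn(u, v, lam) gives the capacity of
    arc (u, v) at parameter value lam (0 if the arc is absent); it must be
    nondecreasing in lam on arcs out of s, nonincreasing on arcs into t,
    constant elsewhere, and lambdas must be increasing.  Returns the source
    side X_i of a minimum cut for each lambda_i."""
    n = len(vertices)
    flow = {}                       # antisymmetric convention: f(u,v) = -f(v,u)
    def fl(u, v):
        return flow.get((u, v), 0)
    def push(u, v, delta):
        flow[(u, v)] = fl(u, v) + delta
        flow[(v, u)] = fl(v, u) - delta
    d = {v: 0 for v in vertices}
    d[s] = n
    cuts = []
    for lam in lambdas:
        c = {(u, v): cap_fn(u, v, lam) for u in vertices for v in vertices}
        # Step 1: trim flow on arcs into t; saturate arcs out of s when d(v) < n.
        for v in vertices:
            if v != t and fl(v, t) > c[(v, t)]:
                push(v, t, c[(v, t)] - fl(v, t))
            if v != s and d[v] < n and fl(s, v) < c[(s, v)]:
                push(s, v, c[(s, v)] - fl(s, v))
        # Step 2: push/relabel until no internal vertex has excess.
        def excess(v):
            return sum(fl(u, v) for u in vertices)
        queue = deque(v for v in vertices if v not in (s, t) and excess(v) > 0)
        while queue:
            v = queue.popleft()
            while excess(v) > 0 and d[v] < 2 * n:
                for w in vertices:
                    if c[(v, w)] - fl(v, w) > 0 and d[v] == d[w] + 1:
                        push(v, w, min(excess(v), c[(v, w)] - fl(v, w)))
                        if w not in (s, t) and w not in queue:
                            queue.append(w)
                        if excess(v) == 0:
                            break
                else:
                    d[v] += 1       # relabel: no admissible residual arc
        # Step 3: X_i is the set of vertices with no residual path to t.
        reach, frontier = {t}, deque([t])
        while frontier:
            w = frontier.popleft()
            for u in vertices:
                if u not in reach and c[(u, w)] - fl(u, w) > 0:
                    reach.add(u)
                    frontier.append(u)
        X = frozenset(v for v in vertices if v not in reach)
        for v in X:
            d[v] = max(d[v], n)     # keeps the labeling valid for the next lambda
        cuts.append(X)
    return cuts
```

On a small network with c(s, a) = λ and the other capacities constant, successive calls return nested source sides, in accordance with the nesting property discussed below.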
The minimum cuts produced by the algorithm have a nesting property that was
previously observed in the context of various applications by Eisner and Severance
[9], Stone [43], and perhaps others. Megiddo [27] has also noted a similar property in a related problem. Here, the result follows directly from our algorithm.
LEMMA 2.4. For a given on-line sequence of parameter values λ_1 ≤ λ_2 ≤ ··· ≤ λ_l, the parametric preflow algorithm correctly computes maximum flows f_1, f_2, ···, f_l and minimum cuts (X_1, X̄_1), (X_2, X̄_2), ···, (X_l, X̄_l) such that X_1 ⊆ X_2 ⊆ ··· ⊆ X_l.
Proof. The correctness of the algorithm is immediate. For any vertex v, the label d(v) cannot decrease in Step 3, which implies that d(v) never decreases during the running of the algorithm. This means that X_1 ⊆ X_2 ⊆ ··· ⊆ X_l, which in turn implies that there can be at most n - 1 distinct sets X_i. □
2.4. Analysis of the parametric preflow algorithm. In view of Lemma 2.4, Lemmas
2.1 and 2.2 hold without change for the parametric preflow algorithm. Furthermore,
the time spent in Steps 1 and 3 is O(m) per iteration, for a total of O(lm) time. Thus the parametric preflow algorithm runs in O((n + l)m) time plus O(1) time per nonsaturating push.
The number of nonsaturating pushes depends on the order in which push/relabel
steps are performed. We shall analyze three versions of the parametric preflow
algorithm. For each, we show that the time bound for the nonparametric case extends
to the parametric case with an increase of at most a constant factor. The proofs of the
following three theorems are analogous to those given in [13].
We first analyze the generic version, in which push/relabel steps take place in any
order.
THEOREM 2.5. If the order of push/relabel steps is arbitrary, then the number of nonsaturating push steps is O(n^2(l + m)). The total running time of the parametric preflow algorithm is O(n^2(l + m)), or O(n^2 m) if l = O(n).
Proof. Let Φ = Σ {d(v) | v is active} if some vertex is active, and Φ = 0 otherwise. A nonsaturating push step decreases Φ by at least one. The function Φ is always in the range 0 to 2n^2. Step 1 can increase Φ by at most 2n^2, for a total over all iterations of Step 1 of O(ln^2). A relabeling step increases Φ by the amount by which the label changes. Thus the total increase in Φ due to relabeling steps is O(n^2). A saturating push step can increase Φ by at most 2n. Thus the total increase in Φ due to such steps is O(n^2 m). These are the only ways in which Φ can increase. The total number of nonsaturating push steps is bounded by the total increase in Φ over the algorithm, which is O(n^2(l + m)). □
Next we consider the first-in first-out (FIFO) version of the preflow algorithm, which solves the nonparametric problem in O(n^3) time [13]. In this version, a queue
Q is used to select vertices for push/relabel steps. Initially Q is empty. At the beginning
of Step 2 of the parametric preflow algorithm, every active vertex is appended to Q.
The FIFO algorithm consists of repeating the following step until Q is empty:
Discharge. Remove the vertex v on the front of Q. Apply push/relabel steps to v
until v is no longer active or v is relabeled. If a push from v to another vertex w makes
w active, add w to the rear of Q.
THEOREM 2.6. For the FIFO version of the parametric preflow algorithm, the number of nonsaturating push steps is O(n^3 + ln^2). The total running time is O(n^3 + ln^2), or O(n^3) if l = O(n).
Proof. We define passes over the queue Q as follows. The first pass during an iteration of Step 2 consists of the discharge steps applied to the vertices initially on Q. Each pass after the first in an iteration of Step 2 consists of the discharge steps applied to the vertices added to Q during the previous pass. There is at most one nonsaturating push step per vertex v per pass, since such a step reduces e(v) to zero. We claim that the total number of passes is O(n^2 + ln), from which the theorem follows.
To establish the claim, we define Φ = max {d(v) | v ∈ Q} if Q ≠ ∅, and Φ = 0 if Q = ∅. Consider the effect on Φ of a pass over Q. If v ∈ Q at the beginning of a pass and d(v) = Φ, then v ∉ Q at the end of the pass unless a relabel step occurs during the pass; thus a pass that contains no relabel step decreases Φ by at least one. Relabel steps increase Φ by a total of at most 2n^2 over the entire algorithm, and each iteration of Step 2 can begin with Φ at most 2n, for a total increase of O(ln) over the l iterations. Since Φ is always nonnegative and at most O(n^2) passes contain a relabel step, the total number of passes is O(n^2 + ln). □
Our first observation concerns variants of the parametric maximum flow problem. Our algorithm remains valid if the arc capacity functions are nonincreasing on arcs out of s and nondecreasing on arcs into t, and the values of the parameter λ are given in decreasing order. To see this, merely substitute -λ for λ. The algorithm also applies if the arc capacity functions are nondecreasing on arcs out of s and nonincreasing on arcs into t, and the values of λ are given in decreasing order. In this case, reverse the
directions of all the arcs, exchange the source and sink, and apply the original algorithm
to this reversed network, which we shall denote by G^R. Each minimum cut (X̄, X) generated for G^R will correspond to a minimum cut (X, X̄) in the original (nonreversed) network that will have the source side of minimum size instead of the sink side. Successively generated minimum cuts in G^R will correspond to cuts with successively smaller source sides in the original network.
If we are only interested in computing minimum cuts, there is a variant of the preflow algorithm that does less computation, although it has the same asymptotic time bound [13]. This variant, here called the min-cut preflow algorithm, computes a preflow of maximum value and a minimum cut, but not a maximum flow. It differs from the original algorithm in that a vertex v is considered to be active only if e(v) > 0 and d(v) < n. The algorithm terminates having computed a maximum preflow. (A preflow f is maximum if and only if for every vertex v, v = s or e(v) = 0 or d_f(v, t) = ∞.) If this variant is used to compute minimum cuts, a maximum flow for a desired parameter value can be computed by beginning with the corresponding maximum preflow and converting it into a maximum flow using the original preflow algorithm. Most of the applications we shall consider only require the computation of a sequence of minimum cuts or even minimum cut values and not maximum flows. We shall refer to the variant of our parametric preflow algorithm that computes minimum cuts and maximum preflows as the min-cut parametric algorithm.
So far we have required all arc capacities to be nonnegative, but if we are only
interested in computing minimum cuts, we can allow negative capacities on arcs out
of s and on arcs into t. This is because there is a simple transformation that makes
such arc capacities nonnegative without affecting minimum cuts [35]. For a given vertex v, suppose we add a constant Δ(v) to c(s, v) and c(v, t). Then the minimum cuts do not change, since the capacity of every cut is increased by exactly Δ(v): any cut (X, X̄) contains exactly one of the two arcs (s, v) and (v, t), according to whether v ∈ X̄ or v ∈ X.
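This invariance is easy to verify by brute force; in the sketch below (a toy network of our own, with negative capacities only on arcs out of s and into t), adding Δ(a) = 5 and Δ(b) = 4 makes all capacities nonnegative and leaves the set of minimum cuts unchanged.

```python
import itertools

def min_cuts(caps, nodes, s, t):
    """All minimum s-t cut source sides, by enumerating vertex subsets."""
    inner = [v for v in nodes if v not in (s, t)]
    vals = {}
    for r in range(len(inner) + 1):
        for sub in itertools.combinations(inner, r):
            X = frozenset({s, *sub})
            vals[X] = sum(cp for (u, v), cp in caps.items()
                          if u in X and v not in X)
    best = min(vals.values())
    return {X for X, val in vals.items() if val == best}

nodes = ['s', 'a', 'b', 't']
caps = {('s', 'a'): -2, ('a', 'b'): 1, ('a', 't'): 3,
        ('s', 'b'): 2, ('b', 't'): -1}       # negatives only on (s,v), (v,t)
shifted = dict(caps)
for v, delta in (('a', 5), ('b', 4)):        # Delta(a) = 5, Delta(b) = 4
    shifted[('s', v)] += delta
    shifted[(v, 't')] += delta
assert all(cp >= 0 for cp in shifted.values())
assert min_cuts(caps, nodes, 's', 't') == min_cuts(shifted, nodes, 's', 't')
```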
By adding a suitably large Δ(v) to c(s, v) and c(v, t) for each v, we can make all the arc capacities positive. In the parametric problem, we can choose a new function Δ on the vertices of G for each new value λ_i of λ without affecting the O((n + l)m log (n^2/m)) time bound for our algorithm. It suffices to choose Δ_i(v) so that every transformed capacity is nonnegative and c_{λ_i}(v, t) + Δ_i(v) does not depend on i.
With this choice, the transformed arc capacities are nondecreasing functions of λ on
arcs leaving s and constant on arcs that enter t. Although the same effect could be
obtained by adding a sufficiently large constant to the capacities of these arcs, the
modification we have described has the additional advantage of keeping capacities as
small as possible. In subsequent sections, when discussing minimum cut problems, we
shall allow arbitrary capacities, positive or negative, on arcs leaving s and arcs entering t.
The minimum cuts corresponding to various parameter values have a nesting
property that is a strengthening of Lemma 2.4. The following lemma is an extension
of known results [10, p. 13], [43] that we shall need in the next section. To prove
the lemma, we shall use the min-cut preflow algorithm discussed above.
LEMMA 2.8. Let (X, X̄) be any minimum cut for λ = λ_1, and let (Y, Ȳ) be any minimum cut for λ = λ_2 such that λ_1 ≤ λ_2. Then (X ∩ Y, X̄ ∪ Ȳ) is a minimum cut for λ = λ_1, and (X ∪ Y, X̄ ∩ Ȳ) is a minimum cut for λ = λ_2.
Proof. Run the min-cut parametric algorithm for λ = λ_1, followed by λ = λ_2. At the beginning of the computation for λ = λ_2, all vertices v ∈ X - {s} have d(v) ≥ n. Thus, after a maximum preflow is computed for λ = λ_2, all arcs (v, w) with v ∈ X, w ∈ X̄ are saturated (their flow has not changed during the computation for λ = λ_2). Since (Y, Ȳ) is a minimum cut for λ = λ_2, all arcs (v, w) with v ∈ Y, w ∈ Ȳ are saturated. Furthermore, if v ∈ Ȳ - {t} then e(v) = 0, since the net flow across (Y, Ȳ) must be equal to the excess at t.
Now consider the cut Z = X ∪ Y, Z̄ = X̄ ∩ Ȳ. Any arc (v, w) with v ∈ Z, w ∈ Z̄ must be saturated. Since v ∈ Z̄ - {t} implies e(v) = 0, the cut (Z, Z̄) must have capacity e(t), and hence it must be a minimum cut.
The proof for (X ∩ Y, X̄ ∪ Ȳ) is similar: proceed on G^R, and run the min-cut parametric algorithm for λ = λ_2, followed by λ = λ_1. □
A direct consequence of this lemma is the following corollary, which we shall
need in the next section.
COROLLARY 2.9. Let (X_1, X̄_1) be a minimum cut for λ = λ_1, let (X_2, X̄_2) be a minimum cut for λ = λ_2 such that X_1 ⊆ X_2 and λ_1 ≤ λ_2, and let λ_3 be such that λ_1 ≤ λ_3 ≤ λ_2. Then there is a cut (X_3, X̄_3), minimum for λ_3, such that X_1 ⊆ X_3 ⊆ X_2.
Proof. Let (X, X̄) be any minimum cut for λ = λ_3. Take X_3 = (X ∪ X_1) ∩ X_2, X̄_3 = V - X_3, and apply Lemma 2.8 twice. □
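Lemma 2.8 can be sanity-checked by enumerating cuts on a tiny parametric network (the data below are our own illustration: capacity λ on both arcs out of s, constant capacity on arcs into t). At λ = 3 all four cuts tie, so the check also exercises the interesting case of incomparable minimum cuts.

```python
import itertools

# Brute-force check of Lemma 2.8: intersections of lambda_1-minimum source
# sides with lambda_2-minimum source sides are minimum for lambda_1, and
# unions are minimum for lambda_2 (lambda_1 <= lambda_2).
NODES = ('s', 'a', 'b', 't')

def caps(lam):
    return {('s', 'a'): lam, ('s', 'b'): lam, ('a', 't'): 3, ('b', 't'): 3}

def cut_values(lam):
    vals = {}
    for r in range(3):
        for sub in itertools.combinations(('a', 'b'), r):
            X = frozenset({'s', *sub})
            vals[X] = sum(c for (u, v), c in caps(lam).items()
                          if u in X and v not in X)
    return vals

for lam1, lam2 in [(1, 3), (3, 3), (3, 5)]:
    v1, v2 = cut_values(lam1), cut_values(lam2)
    mins1 = {X for X, v in v1.items() if v == min(v1.values())}
    mins2 = {X for X, v in v2.items() if v == min(v2.values())}
    for X in mins1:
        for Y in mins2:
            assert v1[X & Y] == min(v1.values())  # intersection: minimum for lambda_1
            assert v2[X | Y] == min(v2.values())  # union: minimum for lambda_2
```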
Our last observation is that if the graph G is bipartite, the O((n + l)m log (n^2/m)) time bound for computing parametric maximum flows can be improved slightly. Suppose V = A ∪ B, A ∩ B = ∅, and every arc in G has one vertex in A and one in B. Let n_A = |A| and n_B = |B|, and suppose that n_A ≤ n_B. Then the time to compute maximum flows for l ordered values of λ can be reduced to O(n_A m log (n_A^2/m + 2)) if l = O(n_A). This requires modifying the preflow algorithm so that only vertices in A are active, and modifying the use of the dynamic tree data structure so that such a tree contains as many vertices in A as in B. The bound is slightly worse if l is not O(n_A). The details can be found in [42].
A minimum cut (X_0, X̄_0) for λ = λ_0 gives an equation for a line that contributes a line segment to the function κ(λ) at λ = λ_0. This line is L_{X_0}(λ) = α_0 + β_0 λ, where α_0 = c_{λ_0}(X_0, X̄_0) - λ_0 β_0 and β_0 = Σ_{v ∈ X̄_0} a_1(v) - Σ_{v ∈ X_0} b_1(v). (Recall from §2 that c_λ(X, X̄) = Σ_{v ∈ X, w ∈ X̄} c_λ(v, w). Also note that a_1(v) = 0 if (s, v) ∉ E, and b_1(v) = 0 if (v, t) ∉ E.)
3.1. Computing the smallest breakpoint of κ(λ). To compute the smallest breakpoint of κ(λ) we use an algorithm stated by Gusfield [18] for an application involving scheduling transmissions in a communication network, discussed in more detail in §4.1. The algorithm, an adaptation of Newton's method, consists of the following two steps.
Step 0. Compute λ_1, λ_2 such that the smallest breakpoint λ_0 satisfies λ_1 ≤ λ_0 ≤ λ_2. Compute a cut (X_1, X̄_1) that is a minimum for λ_1. Go to Step 1.
Step 1. Compute a cut (X_2, X̄_2) that is a minimum for λ_2. If L_{X_1}(λ_2) = L_{X_2}(λ_2), stop: λ_2 is the smallest breakpoint. Otherwise, replace λ_2 by the value of λ such that L_{X_1}(λ) = L_{X_2}(λ) and repeat Step 1. (The appropriate value of λ is (α_2 - α_1)/(β_1 - β_2).)
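With minimum cuts computed by brute-force enumeration rather than the preflow algorithm, Steps 0 and 1 can be sketched as follows; the tiny network, its linear capacities α + βλ, and the chosen bounds are our own illustration. Exact rational arithmetic sidesteps floating-point pitfalls in the equality test.

```python
from fractions import Fraction
import itertools

# Each arc capacity is alpha + beta*lambda: nondecreasing out of s,
# constant elsewhere in this example.
ARCS = {('s', 'a'): (0, 2), ('s', 'b'): (0, 1), ('a', 'b'): (1, 0),
        ('a', 't'): (4, 0), ('b', 't'): (3, 0)}
NODES = ['s', 'a', 'b', 't']

def cut_line(X):
    """(alpha, beta) of the line L_X(lambda) contributed by cut X."""
    al = sum(a for (u, v), (a, b) in ARCS.items() if u in X and v not in X)
    be = sum(b for (u, v), (a, b) in ARCS.items() if u in X and v not in X)
    return al, be

def min_cut(lam):
    inner = [v for v in NODES if v not in ('s', 't')]
    best = None
    for r in range(len(inner) + 1):
        for sub in itertools.combinations(inner, r):
            X = frozenset({'s', *sub})
            al, be = cut_line(X)
            if best is None or al + be * lam < best[0]:
                best = (al + be * lam, X)
    return best[1]

def smallest_breakpoint(lam1, lam2):
    # Step 0: lam1 below the smallest breakpoint, lam2 above it.
    a1, b1 = cut_line(min_cut(lam1))
    while True:
        # Step 1: a cut minimum for the current lam2.
        a2, b2 = cut_line(min_cut(lam2))
        if a1 + b1 * lam2 == a2 + b2 * lam2:
            return lam2                       # lam2 is the smallest breakpoint
        lam2 = Fraction(a2 - a1, b1 - b2)     # intersection of the two lines

assert smallest_breakpoint(Fraction(0), Fraction(10)) == Fraction(7, 3)
```

Here κ(λ) = min(3λ, 7), and a single Newton step from λ_2 = 10 lands exactly on the breakpoint 7/3.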
The values of λ_2 generated by this algorithm are strictly decreasing; thus the parametric preflow algorithm of §2 applied to G^R performs all iterations of Step 1 in O(nm log (n^2/m)) time, saving a factor of n over Gusfield's algorithm [18].
In Step 0, it suffices to select λ_1 sufficiently small so that for each vertex v such that (s, v) or (v, t) is of nonconstant capacity, c_{λ_1}(s, v) + Σ_{u ∈ V-{s,t}} c(u, v) < c_{λ_1}(v, t). A simple computation shows that a suitable value of λ_1 is
(3.1)    λ_1 = min_{v ∈ V-{s,t}} { (b_0(v) - a_0(v) - Σ_{u ∈ V-{s,t}} c(u, v)) / (a_1(v) + b_1(v)) : a_1(v) + b_1(v) > 0 } - 1.
Similarly, it suffices to select λ_2 sufficiently large so that for each vertex v such that (s, v) or (v, t) is of nonconstant capacity, c_{λ_2}(v, t) + Σ_{w ∈ V-{s,t}} c(v, w) < c_{λ_2}(s, v). A suitable value of λ_2 is
(3.2)    λ_2 = max_{v ∈ V-{s,t}} { (b_0(v) - a_0(v) + Σ_{w ∈ V-{s,t}} c(v, w)) / (a_1(v) + b_1(v)) : a_1(v) + b_1(v) > 0 } + 1.
Essentially the same algorithm can be used for computing the largest breakpoint: instead of successively decreasing λ_2 and using G^R, successively increase λ_1 and use G.
3.2. Finding a maximum of κ(λ). Our algorithm for finding the value of λ that maximizes κ(λ) is based on a simple method of iterative interval contraction for computing the maximum f(λ*) of a strictly concave and continuously differentiable function f(λ) on a nonempty interval [λ_1, λ_3] of the real line. The method is as follows. First, compute the function values and the tangents of f(λ) at each end of the given interval. Second, compute the point λ_2 ∈ [λ_1, λ_3] where the two tangent lines intersect, and also compute f'(λ_2). Then, if f'(λ_2) < 0 replace λ_3 by λ_2 and repeat; if f'(λ_2) > 0 replace λ_1 by λ_2 and repeat; if f'(λ_2) = 0 accept λ_2 as the solution. Of course this algorithm need not terminate, but it will converge to the maximum.
The method is seldom used in this general setting because it is inferior to several
other algorithms for one-dimensional maximization. But it can be specialized in the
obvious way to handle the piecewise-linear concave function κ(λ) efficiently. A maximum of κ(λ) can be computed in as many function evaluations as there are linear segments that comprise κ(λ), namely n - 1 or fewer. Using the notation introduced above, λ_2 = (α_3 - α_1)/(β_1 - β_3) if the line segments of κ(λ) at λ_1 and at λ_3 are distinct.
Otherwise, the search terminates with a line segment of zero slope and λ* = λ_1 (or λ* = λ_3). The algorithm will compute a maximum of κ(λ) in O(n^2 m log (n^2/m)) time, since at most n - 1 minimum cut problems must be solved.
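A minimal sketch of this tangent-intersection search, with the minimum cut at each λ again found by enumeration (illustrative network; arc capacities are α + βλ). Rather than carrying λ_1 and λ_3 explicitly, the loop tracks the two tangent lines, which is equivalent here.

```python
from fractions import Fraction
import itertools

ARCS = {('s', 'a'): (0, 1), ('a', 'b'): (2, 0), ('b', 't'): (5, -1)}
NODES = ['s', 'a', 'b', 't']

def min_cut_line(lam):
    """(alpha, beta) of the minimum cut's line at this lambda."""
    inner = [v for v in NODES if v not in ('s', 't')]
    best = None
    for r in range(len(inner) + 1):
        for sub in itertools.combinations(inner, r):
            X = {'s', *sub}
            al = sum(a for (u, v), (a, b) in ARCS.items() if u in X and v not in X)
            be = sum(b for (u, v), (a, b) in ARCS.items() if u in X and v not in X)
            if best is None or al + be * lam < best[0]:
                best = (al + be * lam, (al, be))
    return best[1]

def argmax_kappa(lam1, lam3):
    a1, b1 = min_cut_line(lam1)   # tangent at the left end (positive slope)
    a3, b3 = min_cut_line(lam3)   # tangent at the right end (negative slope)
    while True:
        lam2 = Fraction(a3 - a1, b1 - b3)     # where the two tangents meet
        a2, b2 = min_cut_line(lam2)
        if b2 == 0 or (a2, b2) in ((a1, b1), (a3, b3)):
            return lam2   # zero-slope segment, or no new segment: maximizer found
        if b2 > 0:
            a1, b1 = a2, b2       # the maximum lies to the right of lam2
        else:
            a3, b3 = a2, b2       # the maximum lies to the left of lam2

assert argmax_kappa(Fraction(0), Fraction(5)) == Fraction(5, 2)
```

In this example κ(λ) = min(λ, 2, 5 - λ); the first intersection point 5/2 already lies on the zero-slope segment, so the search stops immediately.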
The running time of this algorithm can be improved by partitioning the sequence
of successive values of λ_2 into two subsequences, one increasing and the other decreasing. It is then possible to use two concurrent invocations of the parametric preflow algorithm: invocation I, which starts with λ_1 and computes minimum cuts of G for an increasing sequence of λ values, and invocation D, which starts with λ_3 and computes minimum cuts of G^R for a decreasing sequence of λ values. A new value λ_2 is a member of the increasing sequence if β_2 > 0, and a member of the decreasing sequence otherwise. The initial values of λ_1 and λ_3 must be such that all breakpoints lie in the interval [λ_1, λ_3], a property that is assured by using (3.1) to give the initial value of λ_1 and (3.2) to give the initial value of λ_3.
The algorithm to compute a maximum of κ(λ) consists of the following four steps.
Step 0. Compute the initial values λ_1 and λ_3 from (3.1) and (3.2). Start concurrent invocations (I and D) of the parametric preflow algorithm of §2: for λ = λ_1, invocation I computes a minimum cut (X_1, X̄_1) having |X_1| maximum; for λ = λ_3, invocation D computes a minimum cut (X_3, X̄_3) having |X_3| minimum.
Step 1. Compute λ_2 = (α_3 - α_1)/(β_1 - β_3), pass λ_2 to both invocations I and D, and run them concurrently. If invocation I finds a minimum cut (X_2, X̄_2) first, suspend invocation D and go to Step 2 (the other case is symmetric). Compute β_2 = Σ_{v ∈ X̄_2} a_1(v) - Σ_{v ∈ X_2} b_1(v).
Step 2. If β_2 = 0, stop: λ* = λ_2. Otherwise, if β_2 > 0, replace λ_1 by λ_2, back up invocation D to its state before it began processing λ_2, and go to Step 1. Otherwise, go to Step 3.
We also use graph contraction so that recursive invocations of the method compute cuts on smaller and smaller graphs.
If G is a network and X is a set of vertices such that at most one of s and t is in X, we define G(X), the contraction of G by X, to be the network formed by shrinking the vertices in X to a single vertex, eliminating loops, and combining multiple arcs by adding their capacities. The algorithm we present reports only the breakpoints of κ(λ), although it computes cuts corresponding to the line segments of the graph of κ(λ). If the actual cuts are needed, they can either be saved as the computation proceeds or computed in a postprocessing step using one application of the method in §2.3.
Our algorithm uses a recursive procedure called slice. With each network G to which slice is applied, we associate four pieces of information: two values of λ, denoted by λ_1 and λ_3, and two flows f_1 and f_3, such that f_1 is a maximum flow for λ_1, f_3 is a maximum flow for λ_3, the cut ({s}, V - {s}) is the unique minimum cut for λ_1, the cut (V - {t}, {t}) is the unique minimum cut for λ_3, and λ_1 < λ_3. The initial values for λ_1 and λ_3 are computed from (3.1) and (3.2) as before. The breakpoint algorithm consists of the following two steps.
Remarks. The first step is an initialization step that guarantees that the graph G', on which slice is called, has unique minimum cuts ({s}, V - {s}) and (V - {t}, {t}) for λ_1 and λ_3, respectively. The flows f_1 and f_3 are needed as input parameters to slice to guarantee that the initial labeling is such that the time for a sequence of calls to slice is subject to the bound of §2.
The correctness of this algorithm follows from Corollary 2.9. Note that the minimum cuts computed in Step S2 correspond to minimum cuts for λ_2 in the original network, with the correspondence obtained by expanding the contracted vertex sets. Since each vertex of G is in at most one of the two subproblems in Step S4, there are O(n) invocations of slice.
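The recursion at the heart of slice can be sketched without the contraction or preflow machinery: split the interval at the intersection λ_2 of the two endpoint cut lines; either the minimum cut value at λ_2 lies on the left endpoint's line, in which case λ_2 is the only breakpoint inside, or we recurse on both halves. The network below is our own illustration; minimum cuts are found by enumeration.

```python
from fractions import Fraction
import itertools

ARCS = {('s', 'a'): (0, 1), ('s', 'b'): (0, 1),
        ('a', 't'): (2, 0), ('b', 't'): (4, 0)}   # capacities alpha + beta*lambda
NODES = ['s', 'a', 'b', 't']

def min_cut_line(lam):
    inner = [v for v in NODES if v not in ('s', 't')]
    best = None
    for r in range(len(inner) + 1):
        for sub in itertools.combinations(inner, r):
            X = {'s', *sub}
            al = sum(a for (u, v), (a, b) in ARCS.items() if u in X and v not in X)
            be = sum(b for (u, v), (a, b) in ARCS.items() if u in X and v not in X)
            if best is None or al + be * lam < best[0]:
                best = (al + be * lam, (al, be))
    return best[1]

def breakpoints(lam1, line1, lam3, line3):
    """All breakpoints of kappa in (lam1, lam3), recursion skeleton of slice."""
    if line1 == line3:
        return []                      # same segment at both ends: nothing inside
    (a1, b1), (a3, b3) = line1, line3
    lam2 = Fraction(a3 - a1, b1 - b3)  # where the two segments' lines cross
    assert lam1 < lam2 < lam3
    a2, b2 = min_cut_line(lam2)
    if a2 + b2 * lam2 == a1 + b1 * lam2:
        return [lam2]                  # kappa reaches the crossing: one breakpoint
    return (breakpoints(lam1, line1, lam2, (a2, b2))
            + breakpoints(lam2, (a2, b2), lam3, line3))

lo, hi = Fraction(0), Fraction(10)
bps = breakpoints(lo, min_cut_line(lo), hi, min_cut_line(hi))
assert bps == [Fraction(2), Fraction(4)]
```

Here κ(λ) = min(2λ, 2 + λ, 6), and the recursion reports its two breakpoints, λ = 2 and λ = 4, in increasing order.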
3.4. Analysis of the breakpoint algorithm. Two ideas underlie the efficiency of the breakpoint algorithm. To explain them, we need to develop a framework for the analysis of the algorithm. We shall charge to an invocation of slice the time spent in the invocation, not including the time spent in nested invocations. The time charged to one invocation is then O(m) plus the time spent running the preflow algorithm in Step S2. Summing O(m) over all O(n) invocations of slice gives a bound of O(nm). It remains to estimate the time spent running the preflow algorithm.
In our analysis we shall denote by n_0 and m_0 the number of vertices and edges in the original (unshrunken) graph, and by n and m the number of vertices and edges in one of the shrunken graphs passed to slice. We use the dynamic tree version of the preflow algorithm, with the maximum tree size k chosen globally. Specifically, let k_0 = max {2, n_0^2/m_0}. For an invocation of slice on a graph with n vertices and m edges, we let the maximum tree size for this subproblem be k = min {n, k_0}. Then the running time of this invocation of the preflow algorithm is O((nm + n^3/k) log k) = O((nm + n^3/k_0) log k_0).
The first idea contributing to the speed of the algorithm is that the results of §2 allow us to bound the time of a sequence of preflow algorithm applications, not just a single one, by O((nm + n^3/k_0) log k_0). That is, if we charge this much time for an invocation of slice, we can regard certain of the nested invocations as being free. The second idea is that running the preflow algorithm concurrently on G and on G^R allows us to regard the larger of the nested invocations in Step S4 as being free, since the time spent on the larger subproblem invocation is no more than that spent on the smaller, which implies that the total time is at most twice the time spent on all the smaller subproblem invocations. This leads to a recurrence bounding the total running time for all nested invocations whose solution is O((nm + n^3/k_0) log k_0). This gives a total time bound for the breakpoint algorithm of O(n_0 m_0 log (n_0^2/m_0)).
Consider an invocation of slice(G, λ_1, λ_3, f_1, f_3). Let G_1 = G(X̄_2) as computed in Step S4, and let G_2 = G(X_2); let n_1, m_1 and n_2, m_2 be the numbers of vertices and arcs in G_1 and G_2, respectively. We regard this invocation of slice as being a continuation of the algorithm of §2.3 applied to G, with λ_1 the most recently processed value of λ and f_1 the resulting maximum flow. Simultaneously, we regard the invocation as being a continuation of the algorithm of §2.3 applied to G^R, with λ_3 the most recently processed value of λ and f_3 the resulting flow.
With this interpretation we can regard the preflow algorithm applications in Step S2 as being free, but if |X_2| ≤ n/2 we must account for new applications of the algorithm of §2 on G(X̄_2) and G^R(X̄_2), and otherwise (i.e., |X̄_2| < n/2) we must account for new applications of the algorithm of §2 on G(X_2) and G^R(X_2). Thus we obtain the following bound on the time spent in invocations of the preflow algorithm. If G has n vertices and m arcs, the time spent in such invocations during the computation of κ(λ) is at most T(n, m) + O((nm + n^3/k_0) log k_0), where T(n, m) is defined recursively as follows:
T(n, m) = 0 if n ≤ 3, and, for n > 3,
T(n, m) = max {T(n_1, m_1) + T(n_2, m_2) + O((n_1 m_1 + n_1^3/k_0) log k_0) :
    n_1, n_2 ≥ 3; n_1 + n_2 ≤ n + 2; n_1 ≤ n_2; m_1, m_2 ≥ 1; m_1 + m_2 ≤ m + 1}.
Remark. In this analysis, the sequence of preflow algorithm invocations associated with a particular application of the algorithm of §2.3 is on a sequence of successively smaller graphs, but the analysis in §2.4 remains valid. In the definition of T(n, m), the constraint m_1 + m_2 ≤ m + 1 allows for the existence of an arc (s, t), which will appear in both subproblems.
A straightforward proof by induction shows that T(n, m) = O((nm + n^3/k_0) log k_0). By setting n = n_0 and m = m_0, we obtain the following theorem.
THEOREM 3.1. The breakpoint algorithm runs in O(nm log (n^2/m)) time on a graph
with n vertices and m edges.
3.5. Additional observations. We conclude this section with two observations. First, as noted by Stone [43], a complete set of minimum cuts for all values of λ can be represented in O(n) space: store with each vertex v ∉ {s, t} the breakpoint at which v moves from the sink side to the source side of a minimum cut, for a set of minimum cuts whose source sides are nested. The breakpoint algorithm can be augmented to compute this information without affecting its asymptotic time bound. Second, the time bound of the three algorithms in Sections 3.1-3.3 can be improved to O(n_A m log (n_A^2/m + 2)) if G is bipartite and κ(λ) has O(n_A) breakpoints. Here n_A is the size of the smaller half of the bipartite partition of V. This bound follows using the bipartite variant of the preflow algorithm mentioned at the end of §2.5.
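Stone's O(n) representation amounts to one number per internal vertex; a minimal sketch follows (breakpoint values illustrative, with the tie at a breakpoint resolved toward the smaller source side).

```python
# Per-vertex breakpoint representation of all minimum cuts: vertex v sits on
# the sink side until lambda passes move[v], then on the source side.
move = {'a': 2, 'b': 4, 'c': 4}

def source_side(lam):
    return {'s'} | {v for v, bp in move.items() if lam > bp}

assert source_side(1) == {'s'}
assert source_side(3) == {'s', 'a'}
assert source_side(5) == {'s', 'a', 'b', 'c'}
```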
4.1. Flow sharing. Consider a network with a set of sources S = {s_1, s_2, ···, s_k} and a single sink t, in which we want to find a flow from the sources in S to t. We require flow conservation at vertices not in S ∪ {t}. We can model this problem as an ordinary one-source, one-sink problem by adding a supersource s and an arc (s, s_i) of infinite capacity for each i ∈ {1, ···, k}. The resulting network can have many different maximum flows, with different utilizations of the various sources; we define the utilization u_i of source s_i to be the flow through the arc (s, s_i). The question arises of how to compare the quality of such flows. Suppose each source s_i has a positive weight w_i. Several figures of merit have been proposed, leading to the following optimization problems:
(i) Perfect sharing. Among flows with u_i/w_i equal for all i ∈ {1, ···, k}, find one that maximizes the flow value e(t).
(ii) Maximin sharing. Among maximum flows, find one that maximizes the smallest u_i/w_i, i ∈ {1, ···, k}.
(iii) Minimax sharing. Among maximum flows, find one that minimizes the largest u_i/w_i, i ∈ {1, ···, k}.
(iv) Optimal sharing. Among maximum flows, find one that simultaneously maximizes the smallest u_i/w_i and minimizes the largest u_i/w_i, i ∈ {1, ···, k}.
(v) Lexicographic sharing. Among maximum flows, find one that lexicographically maximizes the k-component vector whose jth component is the jth smallest u_i/w_i, i ∈ {1, ···, k}.
Flow-sharing problems with one source and multiple sinks are completely
analogous to the multiple-source case: merely exchange sources and sinks and reverse
the network. For the criteria (ii)-(v), we can even allow multiple sources and multiple
sinks, and simultaneously optimize one criterion for the sources and a possibly different
criterion for the sinks. This is because each of problems (ii)-(v) calls for a maximum
flow, and a multiple-source, multiple-sink problem can be decomposed into a multiple-
source, one-sink problem and a one-source, multiple-sink problem, by finding a
minimum cut of all sources from all sinks, contracting all vertices on the sink side to
give a one-sink problem, and separately contracting all vertices on the source side to
give a one-source problem. This observation is due to Megiddo [27].
Perfect sharing arises in a network transmission problem studied by Itai and Rodeh
[22] and Gusfield [18] and in a network vulnerability model proposed by Cunningham
[5]. We discuss these models below. Brown studied maximin sharing [3], Ichimori,
Ishii, and Nishida [21] formulated minimax and optimal sharing, and Megiddo [27],
[28] studied lexicographic sharing. Motivation for these problems is provided by the
following kind of example, which gives rise to a multiple-sink problem. During a
famine, relief agencies supplying food to the stricken areas want to distribute their
available food supplies so that each person receives a fair share. The weight associated
with each sink (famine area) is the population in that area, possibly adjusted for
differences in food needs between adults and children, and other factors. A perfect
sharing solution gives every person in every famine area the same amount of food,
but it may be too restrictive since it need not allocate all the available and transportable
food supply. A better solution will be provided by solving one of the problems (ii)-(v).
There are analogous industrial interpretations of this model.
We shall show that all five flow-sharing problems can be solved in O(nm log (n^2/m)) time using the algorithms of §§2 and 3. The lexicographic sharing problem requires computing all the breakpoints of a min-cut capacity function by the algorithm of §3.3. The other four problems are easier, and can be solved by the algorithm for finding the smallest (or largest) breakpoint given in §3.1. Our tool for solving all five problems is the following parametric formulation: for each s_i ∈ S, let arc (s, s_i) have capacity w_i λ, where λ is a real-valued parameter. Since all arc capacities are nonnegative, the range of interest of λ is [0, ∞). There are at most k breakpoints of the min-cut capacity function κ(λ), one per source s_i.
Perfect sharing. Find the smallest breakpoint λ_S of κ(λ). Any maximum flow for λ_S solves the perfect sharing problem. This was observed by Gusfield [18] in the context of the network transmission-scheduling problem described below. Another application will arise in §4.2.
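When cuts are enumerated by brute force, the smallest breakpoint can even be read off directly: κ's first segment is the line λ Σ w_i through the cut ({s}, V - {s}), and it ends at the first crossing with the line of any other cut. The two-source example below is our own illustration.

```python
from fractions import Fraction
import itertools

W = {'s1': 2, 's2': 1}                      # source weights w_i
FIXED = {('s1', 't'): 4, ('s2', 't'): 1}    # fixed-capacity arcs
NODES = ['s', 's1', 's2', 't']

def cut_line(X):
    """kappa restricted to cut X is the line alpha + beta*lambda."""
    alpha = sum(c for (u, v), c in FIXED.items() if u in X and v not in X)
    beta = sum(w for si, w in W.items() if si not in X)   # crossing arcs (s, s_i)
    return alpha, beta

total = sum(W.values())
inner = [v for v in NODES if v not in ('s', 't')]
candidates = []
for r in range(1, len(inner) + 1):
    for sub in itertools.combinations(inner, r):
        alpha, beta = cut_line({'s', *sub})
        if beta < total:                     # this line crosses lambda*total
            candidates.append(Fraction(alpha, total - beta))
lam = min(candidates)    # smallest breakpoint: the perfect-sharing ratio
print(lam, lam * total)  # -> 1 3
```

Here source s2 is the bottleneck (w_2 = 1, capacity 1 to t), so the common ratio u_i/w_i is 1 and the perfectly shared flow value is 3.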
Scheduling transmissions. Itai and Rodeh [22] state the following problem of scheduling transmissions in a "circuit-switched" communication network represented by a directed graph G = (V, E) with fixed positive arc capacities. The capacity c(v, w) is the effective transmission rate of the communication channel (v, w) in the direction from v to w, say in bits per second. A central vertex (sink) t ∈ V receives all traffic that originates at a subset of vertices S ⊆ V - {t} called emitters (sources). Each emitter s_i ∈ S has w_i > 0 bits of information that it wishes to send to t. We assume that G has paths from each s_i ∈ S to t. The communication protocol allows the sharing of arc capacities by several paths, but it requires that at least one path from s_i to t be established before transmission from s_i can begin. Clearly, if transmissions are scheduled from each emitter, one at a time, the entire task can be completed in T' = Σ_{s_i ∈ S} w_i/c(X_i, X̄_i) seconds, where (X_i, X̄_i) is a minimum cut separating s_i and t. But since arc capacities can be shared, it may be possible to obtain a lower value for T. The objective is to minimize the time T within which all transmissions can be completed.
To see that the problem is a perfect sharing multiple-source problem, let λ = 1/T, where T is in seconds, and assign a capacity of w_i λ bits per second to each arc (s, s_i) from the supersource s to an emitter s_i ∈ S. Once λ_S and the corresponding maximum flow have been computed by the algorithm above, the actual transmission schedule can be constructed from the flow in O(m) time as described in [22]. Itai and Rodeh proposed two algorithms for this problem, with running times of O(kn^2 m) and O(k^2 nm log n). These are modifications of known maximum-flow algorithms. In comparison, our algorithm runs in O(nm log (n^2/m)) time.
Maximin sharing. Find the largest breakpoint λk of κ(λ). Any maximum flow for
λk solves the maximin sharing problem.
Minimax sharing. Find the smallest breakpoint λ1 of κ(λ), and find a maximum flow f
for λ = λ1. Construct a residual network in which each arc (v, w) with s ∉ {v, w} has capacity
c(v, w) - f(v, w), each arc (s, si) has infinite capacity, and each arc (si, s) has zero
capacity. Find a maximum flow f′ in the residual network. The flow f + f′ is a minimax
flow in the original network.
the correctness of the algorithm for the lexicographic sharing problem. Renumber the
sources if necessary so that λ*_1 ≤ λ*_2 ≤ ⋯ ≤ λ*_k, and let G denote the parametric network.
THEOREM 4.1. On G there is a maximum flow f such that f(s, si) = wiλ*_i for all i.
Such a flow is a lexicographic flow.
Proof. Let i1, i2, ⋯, i_{l-1} be the values of i such that λ*_i < λ*_{i+1}. Let i0 = 0 and
il = k. Then λ*_{i1}, λ*_{i2}, ⋯, λ*_{il} are the distinct breakpoints in increasing order. Let
X0 ⊆ X1 ⊆ ⋯ ⊆ Xl be the sets such that (Xj, X̄j) for 1 ≤ j ≤ l is the minimum cut
with the smallest sink side for λ = λ*_{ij}. Then s_{i_{j-1}+1}, s_{i_{j-1}+2}, ⋯,
s_{ij} ∈ Xj - X_{j-1}. For
1 ≤ j ≤ l, the cut (X_{j-1}, X̄_{j-1}) is a minimum cut for λ*_{ij} as well, specifically the one
with the smallest source side. Thus c_{λ*_{ij}}(X_{j-1}, X̄_{j-1}) = c_{λ*_{ij}}(Xj, X̄j). It follows by induction
on j that for 1 ≤ j ≤ l,

(4.1) c_{λ*_{ij}}(Xj, X̄j) = Σ_{i=1}^{ij} wi λ*_i + Σ_{i=ij+1}^{k} wi λ*_{ij}.
4.2. Fractional programming. The fractional programming problem is

(4.3) λ(x*) = max_{x∈S} {λ(x) = f(x)/g(x)},
where f(x) and g(x) are real-valued functions on a subset S of Rⁿ, and g(x) > 0 for
all x ∈ S. Isbell and Marlow [23] proposed an elegant solution method for the important
case of linear f and g; their approach has since been extended to nonlinear problems
(see, e.g., Dinkelbach [7]) and, more recently, to several classes of combinatorial
problems (see, e.g., Picard and Queyranne [33], [34], Padberg and Wolsey [31], and
Cunningham [5]).
A problem that is intimately related to (4.3) is

(4.4) z(x*, λ) = max_{x∈S} {z(x, λ) = f(x) - λg(x)},

where λ is a real-valued constant. These two problems are related in the sense that x*
solves (4.3) if and only if (x*, λ*) solves (4.4) for λ = λ* = λ(x*), giving the value
z(x*, λ*) = 0. Isbell and Marlow's algorithm generates a sequence of solutions until
this condition is met. We state their algorithm below in a form useful for our purposes
(see, e.g., Gondran and Minoux [14, pp. 636-641]):
ALGORITHM FP.
Step 0. Select some x⁰ ∈ S. Compute λ0 = f(x⁰)/g(x⁰). Set k = 0.
Step 1. Compute x^{k+1}, solving the problem (4.4): z(x^{k+1}, λk) = max_{x∈S} {z(x, λk) =
f(x) - λk g(x)}.
Step 2. If z(x^{k+1}, λk) = 0, stop: x* = x^{k+1}. Otherwise, let λ_{k+1} = f(x^{k+1})/g(x^{k+1}),
replace k by k + 1, and go to Step 1.
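As a concrete illustration, the following Python sketch runs Algorithm FP over a small finite feasible set, solving the subproblem (4.4) by brute-force enumeration rather than by a minimum cut as in the paper's applications; the data are hypothetical.

```python
# A sketch of Algorithm FP over a finite feasible set S, maximizing f(x)/g(x).
# Step 1's subproblem (4.4) is solved here by enumeration; in the paper's
# applications it is a minimum-cut computation. Data are hypothetical.

def algorithm_fp(S, f, g, eps=1e-12):
    """Return (x*, lambda*) maximizing f(x)/g(x) over the finite set S (g > 0)."""
    x = S[0]                                    # Step 0: any feasible point
    lam = f(x) / g(x)
    while True:
        # Step 1: x^{k+1} maximizes z(x, lam) = f(x) - lam*g(x) over S
        x_next = max(S, key=lambda y: f(y) - lam * g(y))
        if f(x_next) - lam * g(x_next) <= eps:  # Step 2: z = 0, so stop
            return x_next, lam
        lam = f(x_next) / g(x_next)             # lambda strictly increases

# Example: S holds (f-value, g-value) pairs; the best ratio here is 2/3.
S = [(1, 4), (3, 5), (2, 3), (5, 9)]
x_star, lam_star = algorithm_fp(S, lambda p: p[0], lambda p: p[1])
```

Note how λ increases monotonically (Theorem 4.2 below) until the subproblem's optimum value reaches zero.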
THEOREM 4.2. Algorithm FP is correct. The sequence of values {λk} generated by
the algorithm is increasing.
Proof. For any particular k, z(x^{k+1}, λk) is nonnegative in Step 1, since
z(x^{k+1}, λk) ≥ z(x^k, λk) = 0. If z(x^{k+1}, λk) = 0, the algorithm halts with x^{k+1}, which solves
(4.4) for λ = λk, and hence solves (4.3). The algorithm continues only if z(x^{k+1}, λk) > 0;
i.e., f(x^{k+1}) - λk g(x^{k+1}) > 0, which implies λ_{k+1} = f(x^{k+1})/g(x^{k+1}) > λk. □
LEMMA 4.3. The sequence {g(x^k)} generated by Algorithm FP is strictly decreasing.
Proof. For k ≥ 1 we have f(x^{k+1}) - λ_{k-1} g(x^{k+1}) ≤ f(x^k) - λ_{k-1} g(x^k) and
f(x^{k+1}) - λk g(x^{k+1}) > 0 = f(x^k) - λk g(x^k). Subtracting, (λk - λ_{k-1}) g(x^{k+1}) < (λk - λ_{k-1}) g(x^k),
which implies that g(x^k) > g(x^{k+1}), since λk > λ_{k-1}. The inequality "≤" above follows
from z(x^{k+1}, λ_{k-1}) ≤ z(x^k, λ_{k-1}), since x^k maximizes z(x, λ_{k-1}). □
The efficiency of Algorithm FP depends on the number of times problem (4.4)
has to be solved, and on the time spent solving it. For continuous functions f and g
defined on a nonempty compact set S, Schaible [38] has shown that the decreasing
sequence {g(x^k)} for k ≥ 1 approaches g(x*) linearly, and the increasing sequence
{λk} approaches λ* superlinearly. Nevertheless, (4.4) may be as hard to solve as the
original fractional problem unless some assumptions are made about f, g, and S.
Fortunately, even the most restrictive assumptions find relevant applications in practice.
For instance, if f and g are linear and S is polyhedral (the case in [23]), the algorithm
consists of solving a finite number of linear programs (4.4), whose solution is implemented
by cost-parametric programming on intervals [λk, λ_{k+1}], for successive k. This
can be specialized to network simplex parametric programming by using the primitives
described by Grigoriadis [15]. If f is a negative semidefinite quadratic form and g is
linear, the sequence of concave quadratic programs defined by (4.4) can be handled
by the parametric algorithm of Grigoriadis and Ritter [16]. If f and g are negative-
and positive-definite quadratic forms, respectively, Ritter’s parametric quadratic pro-
gramming method [37] can be used. Approaches for more general nonlinear problems
are analyzed in [7] and [38].
If S is nonempty and finite, f is real-valued, and g is positive, integer-valued, and
bounded above by some integer p > 0, Lemma 4.3 implies that Algorithm FP will
terminate in p + 1 or fewer iterations. This observation has been used in various
applications where g(x) is a set function, for which usually p = O(n). Such is the case
whether (4.3) is a maximization or a minimization problem.
We shall now describe a number of applications of the generic Algorithm FP. In
each case the sequence of problems (4.4) that arises can be handled by our parametric
preflow algorithm or its min-cut variant described in § 2.5.
Strength of a directed network. This is an application due to Cunningham [5], [6].
Let G = (V, E) be a given directed graph with n vertices, m arcs, nonnegative arc
weights and nonnegative vertex weights, and a distinguished vertex s ∈ V. We assume
that every v ∈ V - {s} is reachable from s in G. The arc weight c(v, w) represents the
cost required to "destroy" the arc (v, w) ∈ E. The vertex weight d_v is the "value" attributed
to having v reachable from s. Destroying a set of arcs A ⊆ E (at a total cost of
f(A) = Σ_{(v,w)∈A} c(v, w)) may cause some subset of vertices V_A ⊆ V - {s} to become
unreachable from s, for a total loss in value of g(A) = Σ_{v∈V_A} d_v. The strength of the
network is the minimum of f(A)/g(A) over all A ⊆ E with g(A) > 0, a fractional program
of the form (4.3), which leads to a sequence of problems (4.4) that Cunningham calls attack problems:

z(A^{k+1}, λk) = min_{A⊆E} {z(A, λk) = f(A) - λk g(A)}.
Each such problem amounts to finding a minimum cut in an expanded network formed
by adding to G a sink t and an arc (v, t) with capacity λk d_v for each v ∈ V - {s}. If
we solve the strength problem using Algorithm FP and use the algorithm of § 2.3 to
compute minimum cuts for the generated parameter values, we obtain an algorithm
running in O(nm log(n²/m)) time; as Cunningham notes, there can be only O(n)
iterations of Step 1. Alternatively, we can make use of his observation that the optimum
value of the attack problem is zero if and only if there is a flow in the expanded network
such that f(v, t) = λk d_v for each v ∈ V - {s}. Equivalently, λ(A*) is the largest value of λ
for which the minimum cut is (V, {t}). That is, the strength problem is a perfect sharing
problem, and it can be solved in O(nm log(n²/m)) time as described in § 4.1. Either
method improves over Cunningham's method, which solves O(n) minimum-cut problems
without making use of their similarity.
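To make the definitions concrete, the following Python sketch computes the strength of a tiny hypothetical network by brute force, enumerating attack sets A directly; the paper, of course, solves the attack problems as minimum cuts with the parametric algorithm.

```python
from itertools import chain, combinations

# Brute-force strength of a small directed network (hypothetical data).
# f(A) = cost of destroying the arc set A; g(A) = total value of the
# vertices thereby made unreachable from s; the strength is the minimum
# of f(A)/g(A) over all A with g(A) > 0.

def reachable(n, arcs, s, removed):
    """Vertices reachable from s after the arcs in `removed` are destroyed."""
    adj = {v: [] for v in range(n)}
    for v, w in arcs:
        if (v, w) not in removed:
            adj[v].append(w)
    seen, stack = {s}, [s]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def strength(n, arcs, cost, value, s):
    best = float("inf")
    for A in chain.from_iterable(combinations(arcs, r) for r in range(len(arcs) + 1)):
        lost = set(range(n)) - {s} - reachable(n, arcs, s, set(A))
        g = sum(value[v] for v in lost)
        if g > 0:
            best = min(best, sum(cost[a] for a in A) / g)
    return best
```

The enumeration is exponential in m and serves only to illustrate the objective the min-cut subroutine optimizes.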
Zero-one fractional programming. An important subclass of (4.3) is the problem
for which f(x) ≥ 0 and g(x) > 0 are given polynomials defined for all x in S =
{0, 1}ⁿ - {0} as follows:
(4.5) f(x) = Σ_{P∈A} a_P Π_{i∈P} x_i + Σ_{i=1}^{n} a_i x_i,

(4.6) g(x) = Σ_{Q∈B} b_Q Π_{i∈Q} x_i + Σ_{i=1}^{n} b_i x_i.
The sets A and B are given collections of nonempty nonsingleton subsets of {1, ⋯, n},
a_P ≥ 0 for each P ∈ A, and b_Q ≤ 0 for each Q ∈ B. Since f(x) ≥ 0 and g(x) > 0 for all
x ∈ S, we have a_i ≥ 0 and b_i > 0 for each i ∈ {1, ⋯, n}. This problem was studied by
Picard and Queyranne [33]. For ease in stating time bounds we assume n = O(|A| + |B|).
Algorithm FP leads to a sequence of problems of the form (4.4) for increasing
values λk ≥ 0 of λ. Each such problem is an instance of the selection or provisioning
problem, characterized by Rhys [36] and Balinski [2] as a minimum-cut problem on
a bipartite graph.
The entire sequence of these problems can be handled as a parametric minimum-cut
problem of the kind studied in § 2. We give two different formulations, one of
which works for the special case of B = ∅ (i.e., g(x) contains no nonlinear terms) and
the other of which works for the general case. All the subsequent applications we
consider fall into the case B = ∅.
If B = ∅, we define a bipartite network G whose vertex set contains one vertex
for each set P ∈ A, one vertex for each i ∈ {1, ⋯, n}, and two additional vertices, a
source s and a sink t. There is an arc (s, v) of capacity a_P for each vertex v corresponding
to a set P ∈ A, an arc (i, t) of capacity λb_i - a_i for every i ∈ {1, ⋯, n}, and an arc (v, i)
of infinite capacity for every vertex v corresponding to a set P ∈ A that has i as one of
its elements. Observe that the capacities of all arcs into t are nondecreasing functions
of λ, and those of all other arcs are constant. The parametric preflow algorithm operates
on G^R instead of G. For a given value of λ, a minimum cut (X, X̄) in G corresponds
to a solution x to (4.4) defined by x_i = 1 if i ∈ X, x_i = 0 if i ∈ X̄.
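The bipartite construction for B = ∅ can be sketched in Python as follows, here with all a_i = 0 for simplicity and the capacities λb_i already folded into fixed item costs, so the optimum of (4.4) equals Σ_P a_P minus the min cut. The max-flow routine is a plain Edmonds-Karp implementation standing in for the parametric preflow algorithm; all data are hypothetical.

```python
from collections import deque
from itertools import combinations

INF = float("inf")

def max_flow(N, cap, s, t):
    """Edmonds-Karp max flow on an N-vertex capacity matrix (modified in place)."""
    total = 0.0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:              # BFS for an augmenting path
            v = q.popleft()
            for w in range(N):
                if w not in parent and cap[v][w] > 0:
                    parent[w] = v
                    q.append(w)
        if t not in parent:
            return total
        b, w = INF, t                             # bottleneck along the path
        while parent[w] is not None:
            b = min(b, cap[parent[w]][w])
            w = parent[w]
        w = t                                     # augment along the path
        while parent[w] is not None:
            cap[parent[w]][w] -= b
            cap[w][parent[w]] += b
            w = parent[w]
        total += b

def selection_value(sets_a, item_cost):
    """max over item sets X of sum(a_P : P within X) - sum(cost_i : i in X),
    via the Rhys-Balinski bipartite network: s -> P (cap a_P),
    P -> i (cap inf), i -> t (cap cost_i)."""
    m, n = len(sets_a), len(item_cost)
    N, s, t = m + n + 2, 0, m + n + 1
    cap = [[0.0] * N for _ in range(N)]
    for j, (P, a_p) in enumerate(sets_a):
        cap[s][1 + j] = a_p
        for i in P:
            cap[1 + j][1 + m + i] = INF
    for i, c in enumerate(item_cost):
        cap[1 + m + i][t] = c
    return sum(a for _, a in sets_a) - max_flow(N, cap, s, t)

def selection_brute(sets_a, item_cost):
    """Reference enumeration of the same objective."""
    n = len(item_cost)
    best = 0.0
    for r in range(n + 1):
        for X in combinations(range(n), r):
            Xs = set(X)
            best = max(best, sum(a for P, a in sets_a if set(P) <= Xs)
                       - sum(item_cost[i] for i in Xs))
    return best
```

The two routines agree on small instances, which is exactly the min-cut equivalence the text invokes.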
In the general case (B ≠ ∅), it is convenient to assume f(x) > 0 for some x (otherwise
the solution to (4.3) is λ(x) = 0, attained for any x) and that Algorithm FP starts with
an x⁰ such that λ0 > 0. Then the entire sequence {λk} is strictly positive. To solve (4.4)
we rewrite it as follows:

z(x*, λ)/λ = max_{x∈S} (f(x)/λ - g(x)).
We define the network G to have a vertex set consisting of one vertex for each
set P ∈ A, one vertex for each set Q ∈ B, one vertex for each i ∈ {1, ⋯, n}, and a source
s and a sink t. There is an arc (s, v) of capacity a_P/λ for each v corresponding to a
set P ∈ A, an arc (s, v) of capacity -b_Q for each v corresponding to a set Q ∈ B, an
arc (v, i) of infinite capacity for each vertex v corresponding to a set P ∈ A or Q ∈ B
that has i as one of its elements, and an arc (i, t) of capacity b_i - a_i/λ for each
i ∈ {1, ⋯, n}. The capacities of arcs out of s are nonincreasing functions of λ and
those of arcs into t are nondecreasing functions of λ; the parametric preflow algorithm
operates on G^R. Minimum cuts in G correspond to solutions exactly as described above.
Remark. This formulation differs from that in [33] because of the division by λ.
The formulation of [33] gives a parametric minimum-cut problem in which the capacities
of arcs out of the source and of arcs into the sink are both nondecreasing functions of λ,
to which the results of § 2 do not apply.
The following analysis is valid for both of the above two cases. The nesting
property of minimum cuts (Lemma 2.4) implies that the number of iterations of Step
1 of Algorithm FP is O(n), a fact also observed by Picard and Queyranne [33]. To
state time bounds, let us denote by n′ and m′ the number of vertices and edges,
respectively, in G; n′ = n + |A| + |B| + 2 and m′ = n + |A| + |B| + Σ_{P∈A} |P| + Σ_{Q∈B} |Q|.
Algorithm FP, in combination with the parametric preflow algorithm of § 2.3,
yields a time bound of O(n′m′ log(n′²/m′)) [or O(nm′ log(n²/m′ + 2))], improving
over the algorithms of Picard and Queyranne [33] and Gusfield, Martel, and
Fernandez-Baca [20].
Maximum-ratio closure problem. This problem was considered by Picard and
Queyranne [34] and independently by Lawler [25], who only considered acyclic graphs
(see the next application). The problem can be solved by a straightforward application
of Algorithm FP. Each problem in the sequence of problems (4.4) is a maximum-weight
closure problem. The maximum-weight closure problem (Picard [32]) is the generalization
to nonbipartite graphs of the selection or provisioning problem [2], [36] mentioned
above.
These closure problems are defined formally as follows. Let G = (V, E) be a
directed graph with vertex weights a_v of arbitrary sign. A subset U ⊆ V is a closure in
G if for each arc (v, w) ∈ E with v ∈ U we also have w ∈ U. A closure U* ⊆ V is of
maximum weight if the sum of its vertex weights is maximum among all closures in
G. To compute a maximum-weight closure, construct the graph G* from G as follows.
Add a source s and a sink t to G. Create an arc (s, v) of capacity a_v and an arc (v, t)
of zero capacity for each v ∈ V. Assign infinite capacity to all arcs in E. A minimum
cut (X, X̄) of G* gives the desired closure U* = X - {s}.
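The closure definition can be checked by brute force on a tiny hypothetical graph; this enumeration is only illustrative, since the point of the construction above is that one minimum cut in G* replaces it.

```python
from itertools import combinations

# Brute-force maximum-weight closure on a hypothetical 4-vertex graph.
# U is a closure if every arc leaving a vertex of U stays inside U.

def is_closure(U, arcs):
    return all(w in U for v, w in arcs if v in U)

def max_weight_closure(n, arcs, weight):
    best_U, best_w = set(), 0     # the empty set is a closure of weight 0
    for r in range(1, n + 1):
        for U in combinations(range(n), r):
            U = set(U)
            if is_closure(U, arcs):
                w = sum(weight[v] for v in U)
                if w > best_w:
                    best_U, best_w = U, w
    return best_U, best_w
```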
Now let a_v ≥ 0 and b_v > 0 be given weights on the vertices of G = (V, E). The
maximum-ratio closure problem is to find a closure U* that maximizes the ratio
a(U)/b(U) over all nonempty closures U ⊆ V. To compute a maximum-ratio closure,
Picard and Queyranne [34] suggest the use of Algorithm FP. This requires the solution
of a sequence of O(n) maximum-weight closure problems, each of which is a minimum-cut
problem. Thus an O(n²m log(n²/m))-time algorithm results. Lawler's algorithm
uses binary search and runs in O(knm log(n²/m)) time, where k =
log(max {n, a_max, b_max}), assuming integer weights.
We can solve the entire sequence of these minimum-cut problems by the parametric
preflow algorithm of § 2.3 as follows. Modify G* so that for each vertex v ∈ V there
is an arc (s, v) of capacity a_v - λb_v and an arc (v, t) of capacity zero. All other arcs
have infinite capacity. We start with U = V ∪ {s}; or, equivalently, with a sufficiently
small value of λ so that the minimum cut is ({s} ∪ V, {t}). Such a value is λ0 = min_i a_i/b_i.
The capacities of arcs out of the source are nonincreasing functions of the parameter,
and the parameter values are given on-line in increasing order. The parametric preflow
algorithm operates on (G*)^R and runs in O(nm log(n²/m)) time, improving the bound
of Picard and Queyranne by a factor of n and that of Lawler by a factor of k.
Remark. Negative arc capacities in the various minimum-cut problems can be
made nonnegative using the transformation suggested in § 2.5. In the maximum-ratio
closure problem, it suffices to assign a capacity of max {0, a_v - λb_v} to each arc (s, v)
and a capacity of max {0, λb_v - a_v} to each arc (v, t).
A job-sequencing application. Lawler [25] applied his algorithm to a problem
studied by Sidney [39] and others: there are n jobs to be scheduled for processing on
a single machine subject to a partial order given as an acyclic graph G = (V, E), where
V is the set of jobs. Each job v has a processing time a_v and a "weight" b_v > 0 that
describes its importance or some measure of profit. Let the completion time of job v
as determined by a given feasible sequence be C_v. It is required to find a sequence
that minimizes Σ_{v∈V} b_v C_v. This problem is NP-complete for an arbitrary partial order,
even when all a_v are equal or all b_v are equal [25]. Sidney offered the following decomposition
procedure. First find a maximum-ratio closure U1 such that ]U1] is minimum. Remove
the subgraph induced by U1 from G, find a maximum-ratio closure U2 of the reduced
graph, and repeat this process until the entire vertex set is partitioned. Sidney and
Lawler call closures initial sets of V. Once such a decomposition is found, an optimal
schedule can be computed by finding an optimal schedule for each closure, for example,
by a branch-and-bound method, and then concatenating the solutions. The algorithm
described above can be used to find each closure. The overall time bound depends on
the size of each closure. (Our algorithm will give closures of minimum cardinality,
since the algorithm is applied to the graph (G*)^R; see § 2.5.)
Maximum-density subgraph. A special case of the fractional programming problem
(4.3) is that of finding a nonempty subgraph of maximum density in an undirected
graph G = (V, E) with n vertices and m edges. The density of a subgraph of G induced
by a subset of vertices V′ ⊆ V is the number of its edges divided by the number of its
vertices. For this application, (4.5) and (4.6) have the simpler forms f(x) = xᵀAx and
g(x) = eᵀx, where e is the vector of all ones, A is the vertex-vertex incidence matrix of
G, and x_i = 1 if vertex i ∈ V′, and x_i = 0 otherwise. Algorithm FP, which yields x* and
the maximum density λ*, can be used to compute a maximum-density subgraph of G.
It is not necessary to construct a bipartite network and solve minimum-cut problems
on it. We can merely modify G by specializing the construction of [35]. Replace each
edge of G by two oppositely directed arcs of unit capacity, add a source s and a sink
t, and create an arc (s, v) of capacity d_v - λ and an arc (v, t) of zero capacity for each
4.3. A knapsack problem with a nonlinear objective. We consider the problem

(4.8) z(x*) = max {f(x): dx ≤ b, x ∈ {0, 1}ⁿ},
where f(x) is given by (4.5), d is a positive n-vector of item weights, and b is a scalar,
the knapsack size, such that Σ_{i∈V} d_i > b > 0, V = {1, ⋯, n}. Thus, in addition to the
benefit a_i obtained for including an individual item i ∈ V in the knapsack, the model
allows the possibility of an additional reward of a_P ≥ 0 for including all of the items
that comprise a given subset P ∈ A. The (linear) knapsack problem is a special case of
(4.8) in which all subsets P ∈ A are singletons.
This NP-complete problem was suggested by Lawler [24]. Because of its many
practical applications there is interest in the fast computation of bounds on z(x*). To
this end we consider the Lagrangian function for (4.8):

L(x, λ) = f(x) - λ(dx - b) for λ ≥ 0,

which, for each λ ≥ 0, has a finite maximum over x ∈ {0, 1}ⁿ. For each λ ≥ 0, we define
the dual function

ω(λ) = max_{x∈{0,1}ⁿ} L(x, λ).

This value is an upper bound on z(x*) and can be used to construct heuristics and
search procedures for computing an approximate or exact solution to (4.8). It can be
evaluated by the above algorithm in O(n′m′ log(n′²/m′)) [or O(nm′ log(n²/m′ + 2))]
time.
A special case of considerable practical importance is the quadratic knapsack
problem, for which f(x) = xᵀAx, where A = [a_ij] is a nonnegative real symmetric matrix
having no null rows. For this case, Gallo, Hammer, and Simeone [11] proposed an
O(n log n)-time algorithm for creating a class of "upper planes" bounding z(x).
Chaillou, Hansen, and Mahieu [4] showed that its Lagrangian dual can be solved as
a sequence of O(n) minimum-cut problems in O(n²m log(n²/m)) time.
The problem of evaluating ω(λ) for a fixed λ can be formulated as a minimum-cut
problem using a graph construction similar to that described earlier for the maximum-density
subgraph problem, thereby avoiding the use of a bipartite graph. Let G = (V, E)
be a directed graph with vertex set V = {1, ⋯, n}, arc set E = {(v, w): a_vw > 0, v,
w ∈ V}, and arc weights a(v, w) = a_vw. We add to G a source s, a sink t, an arc (s, v)
of capacity a_v - λd_v, and an arc (v, t) of zero capacity for each v ∈ V, where a_v =
Σ_w a_vw. Using this network formulation, the above algorithm computes the
Lagrangian relaxation of a quadratic knapsack problem in O(nm log(n²/m)) time.
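For a tiny hypothetical instance, the dual function ω(λ) can be evaluated by brute force and compared with z(x*); the paper evaluates ω(λ) as the minimum cut just described.

```python
from itertools import product

# Brute-force check that omega(lam) bounds z(x*) for a hypothetical
# quadratic knapsack instance: f(x) = x' A x, constraint d.x <= b.

def quad(x, A):
    n = len(x)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def omega(lam, A, d, b):
    """Lagrangian dual function: max over x in {0,1}^n of f(x) - lam*(d.x - b)."""
    return max(quad(x, A) - lam * (sum(di * xi for di, xi in zip(d, x)) - b)
               for x in product((0, 1), repeat=len(d)))

def z_star(A, d, b):
    """Exact optimum of the knapsack problem, by enumeration."""
    return max(quad(x, A)
               for x in product((0, 1), repeat=len(d))
               if sum(di * xi for di, xi in zip(d, x)) <= b)
```

For every λ ≥ 0 the value ω(λ) is an upper bound on z(x*), which is the property the heuristics in the text exploit.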
4.4. Miscellaneous applications. Our last two applications both use the algorithm
developed in § 3.3 for computing the min-cut capacity function κ(λ) of a parametric
minimum-cut problem. The first application is to a problem of computing critical load
factors for modules of a distributed program in a two-processor distributed system
[43]. The second application is to a problem of record segmentation between primary
and secondary memory in large shared databases [9].
Critical load factors in two-processor distributed systems. Stone [43] modeled this
problem by a graph G = (V, E) in which V = {1, ⋯, n} is the set of program modules
and E is the set of pairs of modules that need to communicate with each other. The
capacity of an arc (v, w) ∈ E specifies the communication cost between modules v and
w (it is infinity if the modules must be coresident). The two processors, say A and B,
are represented by the source s and the sink t, respectively, that are appended to the
network. There is an arc (s, v) of capacity λb_v > 0, where b_v is the given cost of executing
program module v on processor B. There is an arc (v, t) of capacity (1 - λ)a_v > 0,
where a_v is the given cost of executing program module v on processor A.
The parameter λ ∈ [0, 1] is the fraction of the time that processor A delivers useful
cycles, commonly known as the load factor. For a fixed value of λ, a minimum cut
(X, X̄) in this network gives an optimum assignment of modules to processors. For
λ = 0, a minimum cut (X, X̄) with |X| minimum has X = {s}, i.e., all modules are
assigned to B. For λ = 1, a minimum cut (X, X̄) with |X| maximum has X̄ = {t}, i.e.,
all modules are assigned to A. The objective is to find the best assignment of program
modules to processors for various values of λ, or to generate these assignments for
each breakpoint of the min-cut capacity function κ(λ). Lemma 2.4 implies that, at
each breakpoint, one or more modules shift from one side of the cut to the other. By
listing, for each module, the breakpoint at which it shifts from one side of the minimum
cut to the other, one can determine what Stone calls the critical load factor for each
module. The operating system can then use this list of critical load factors to do
dynamic assignment of modules to processors. The algorithm of § 3.3 will compute
the critical load factors of the modules in O(nm′ log((n + 2)²/m′)) time, where
m′ = m + 2n. Stone does not actually propose an algorithm for this computation.
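A brute-force version of Stone's model on a hypothetical three-module instance: for a fixed λ the optimal assignment is found by enumerating cuts, and sweeping λ would locate each module's critical load factor approximately (the breakpoint algorithm of § 3.3 finds them exactly).

```python
from itertools import chain, combinations

# Hypothetical instance of Stone's two-processor model. X is the set of
# modules assigned to processor A; the cut capacity is execution cost plus
# the communication cost across the cut.

def cut_cost(X, lam, a, b, comm):
    n = len(a)
    c = sum((1 - lam) * a[v] for v in X)                    # arcs (v, t) cut
    c += sum(lam * b[v] for v in range(n) if v not in X)    # arcs (s, v) cut
    c += sum(w for (u, v), w in comm.items() if (u in X) != (v in X))
    return c

def best_assignment(lam, a, b, comm):
    """Minimum-cut assignment for load factor lam, by enumeration."""
    n = len(a)
    subsets = chain.from_iterable(combinations(range(n), r) for r in range(n + 1))
    return min((set(X) for X in subsets),
               key=lambda X: cut_cost(X, lam, a, b, comm))
```

At the extremes λ = 0 and λ = 1 the optimal cuts place every module on B and on A, respectively, matching the discussion above.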
Record segmentation in large shared databases. Eisner and Severance [9] have
stated a model for segmenting records in a large shared database between primary and
secondary memory. Such a database consists of a set of data items S = {1, ⋯, N}
and serves a set of users T = {1, ⋯, n}. Each user w ∈ T retrieves a nonempty subset
S_w ⊆ S of data items and receives a "value" (satisfaction) of b_w > 0 whenever all of
the items in S_w reside in primary memory. The cost of transporting and storing a data
item v ∈ S in primary memory is λa_v > 0, where a_v > 0. The scalar λ > 0 is a conversion
factor such that λ units of transportation and storage cost equal one unit of user
value. The objective is to find a segmentation that minimizes the total cost minus user
satisfaction.
For a fixed value of λ the problem can be formulated as a selection or provisioning
problem [2], [36] as follows. Construct a bipartite graph having the data items S as
its source part and the users T as its sink part. Construct an arc (v, w), v ∈ S, w ∈ T, of
infinite capacity if data item v belongs to the set of data items S_w retrieved by user w.
Create a supersource s and a supersink t, and append an arc (s, v) of capacity λa_v for
each v ∈ S and an arc (w, t) of capacity b_w for each w ∈ T. A min cut (X, X̄) separating
s and t in this network necessarily partitions S and T into (S ∩ X, S ∩ X̄) and (T ∩ X, T ∩ X̄),
respectively. It is easy to see that

c(X, X̄) = min {λ Σ_{v∈S∩X̄} a_v + Σ_{w∈T∩X} b_w},

where the minimum is over all partitions such that S_w ⊆ S ∩ X̄ for each user w ∈ T ∩ X̄.
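The objective can be checked on a hypothetical instance by enumerating segmentations directly; the min-cut formulation above computes the same minimum.

```python
from itertools import chain, combinations

# Brute-force record segmentation (hypothetical data): choose the set of
# data items resident in primary memory; a user's value b_w is lost unless
# all items it retrieves are resident.

def seg_cost(primary, lam, a, users):
    c = lam * sum(a[v] for v in primary)                   # storage cost
    c += sum(bw for Sw, bw in users if not Sw <= primary)  # lost user value
    return c

def best_segmentation(lam, a, users):
    items = list(a)
    subsets = chain.from_iterable(combinations(items, r)
                                  for r in range(len(items) + 1))
    return min((frozenset(X) for X in subsets),
               key=lambda X: seg_cost(X, lam, a, users))
```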
The value of λ plays an important role in this linear performance measure, and it
depends on the system load. In practice it is necessary to create a list of primary storage
assignments for all critical values of λ. The database inquiry program can then select
and implement the best assignment at appropriate times. This table consists of all the
breakpoints of the min-cut capacity function κ(λ) and, for each data item and user,
the parameter value at which it moves from one side of a minimum cut to the other.
This information can be computed by the breakpoint algorithm of § 3.3 in O((n + N)m
log((n + N)²/m)) [or O(min {n, N} m log((min {n, N})²/m + 2))] time. The algorithm
proposed by Eisner and Severance [9] for solving the parametric problem requires the
solution of O(min {n, N}) minimum-cut problems. Our algorithm improves their
method by a factor of min {n, N}. They also consider a nonlinear performance measure,
for which an algorithm such as that in § 4.3 can be used to derive bounds on an
optimum solution. This bounding method gives an approximate solution, and the
method can be used in a branch-and-bound algorithm to give an exact solution.
5. Remarks. We have shown how to extend the maximum-flow algorithm of
Goldberg and Tarjan to solve a sequence of O(n) related maximum-flow problems at
a cost of only a constant factor over the time to solve one problem. The problems
must be instances of the same parametric maximum flow problem and the corresponding
parameter values must either consistently increase or consistently decrease. We have
further shown how to extend the algorithm to generate the entire min-cut capacity
function of such a parametric problem, assuming that the arc capacities are linear
functions of the parameter.
We have applied our algorithms to solve a variety of combinatorial optimization
problems, deriving improved time bounds for each of the problems considered. Our
list of applications is meant to be illustrative, not exhaustive. We expect that more
applications will be discovered. Although we have only considered a special form of
the parametric maximum-flow problem, most of the parametric maximum-flow prob-
lems we have encountered in the literature can be put into this special form.
We have discussed only sequential algorithms in this paper, but our ideas extend
to the realm of parallel algorithms. Specifically, the preflow algorithm has a parallel
version that runs in O(n² log n) time using n processors on a parallel random-access
machine. This version extends to the parametric preflow algorithm in exactly the same
way as the sequential algorithm. Thus we obtain O(n² log n)-time, n-processor parallel
algorithms for the problems considered in §§ 2 and 3 and for each of the applications
in § 4, where n is the number of vertices in the network.
There are a number of remaining open problems. One is to find additional
applications. Our methods might extend to parametric maximum-flow problems that
do not have the structure considered in this paper. Such problems include computing
the arboricity of a graph [30], [33], computing properties of activity-selection games
[45], and computing processor assignment for a two-processor system in which the
processor speeds vary independently [17]. (This last problem is a two-parameter
generalization of Stone's model [43] discussed in § 4.4.) Gusfield [19] has recently
found a new application, to a problem considered by Cunningham [5], of solving the
sequence of attack problems involved in the computation of the strength of an undirected
graph. (This problem is related to the strength problem considered in § 4.2 but is
harder.)
Another area for research is investigating whether an arbitrary maximum-flow
algorithm can be extended to the parametric problem at a cost of only a constant
factor in running time. One algorithm that we have unsuccessfully tried to extend in
this way is that of Ahuja and Orlin [1]. Working in this direction, Martel [26] has
recently discovered how to modify an algorithm based on the approach of Dinic [6]
so that it solves the parametric problem with only a constant factor increase in running
time.
REFERENCES
[1] R. K. AHUJA AND J. B. ORLIN, A fast and simple algorithm for the maximum flow problem, Working
Paper No. 1905-87, Sloan School of Management, Massachusetts Institute of Technology,
Cambridge, MA, 1987.
[2] M. L. BALINSKI, On a selection problem, Management Sci., 17 (1970), pp. 230-231.
[3] J. R. BROWN, The sharing problem, Oper. Res., 27 (1979), pp. 324-340.
[4] P. CHAILLOU, P. HANSEN, AND Y. MAHIEU, Best network flow bounds for the quadratic knapsack
problem, presented at the NETFLO 83 International Workshop, Pisa, Italy, 1983. Lecture Notes
in Mathematics, Springer-Verlag, Berlin, to appear.
[5] W. H. CUNNINGHAM, Optimal attack and reinforcement of a network, J. Assoc. Comput. Mach., 32
(1985), pp. 549-561.
[6] E. A. DINIC, Algorithm for solution of a problem of maximum flow in networks with power estimation,
Soviet Math. Dokl., 11 (1970), pp. 1277-1280.
[7] W. DINKELBACH, On nonlinear fractional programming, Management Sci., 13 (1967), pp. 492-498.
[8] J. EDMONDS, Minimum partition of a matroid into independent subsets, J. Res. Nat. Bur. Standards,
69B (1965), pp. 67-72.
[9] M. J. EISNER AND D. G. SEVERANCE, Mathematical techniques for efficient record segmentation in
large shared databases, J. Assoc. Comput. Mach., 23 (1976), pp. 619-635.
[10] L. R. FORD, JR. AND D. R. FULKERSON, Flows in Networks, Princeton University Press, Princeton,
NJ, 1962.
[11] G. GALLO, P. HAMMER, AND B. SIMEONE, Quadratic knapsack problems, Math. Programming, 12
(1980), pp. 132-149.
[12] A. V. GOLDBERG, Finding a maximum density subgraph, Tech. Report No. UCB/CSD 84/171, Computer
Science Division (EECS), University of California, Berkeley, CA, 1984.
[13] A. V. GOLDBERG AND R. E. TARJAN, A new approach to the maximum flow problem, Proc. 18th Annual
ACM Symposium on Theory of Computing, 1986, pp. 136-146; J. Assoc. Comput. Mach., 35 (1988).
[14] M. GONDRAN AND M. MINOUX, Graphs and Algorithms (translated by S. Vajda), John Wiley, New
York, 1984.
[15] M. D. GRIGORIADIS, An efficient implementation of the network simplex method, Math. Programming
Study, 26 (1986), pp. 83-111.
[16] M. D. GRIGORIADIS AND K. RITTER, A parametric method for semidefinite quadratic programs, SIAM
J. Control, 7 (1969), pp. 559-577.
[17] D. GUSFIELD, Parametric combinatorial computing and a program of module distribution, J. Assoc.
Comput. Mach., 30 (1983), pp. 551-563.
[18] D. GUSFIELD, On scheduling transmissions in a network, Tech. Report YALEU DCS TR 481, Department of
Computer Science, Yale University, New Haven, CT, 1986.
[19] D. GUSFIELD, Computing the strength of a network in O(|V|³|E|) time, Tech. Report CSE-87-2, Department
of Electrical and Computer Engineering, University of California, Davis, CA, 1987.
[20] D. GUSFIELD, C. MARTEL, AND D. FERNANDEZ-BACA, Fast algorithms for bipartite network flow,
SIAM J. Comput., 16 (1987), pp. 237-251.
[21] T. ICHIMORI, H. ISHII, AND T. NISHIDA, Optimal sharing, Math. Programming, 23 (1982), pp. 341-348.
[22] A. ITAI AND M. RODEH, Scheduling transmissions in a network, J. Algorithms, 6 (1985), pp. 409-429.
[23] J. R. ISBELL AND W. H. MARLOW, Attrition games, Naval Res. Logist. Quart., 2 (1956), pp. 71-93.
[24] E. L. LAWLER, Combinatorial Optimization: Networks and Matroids, Holt, Rinehart and Winston, New
York, 1976.
[25] E. L. LAWLER, Sequencing jobs to minimize total weighted completion time subject to precedence constraints, Ann.
Discrete Math., 2 (1978), pp. 75-90.
[26] C. MARTEL, A comparison of phase and non-phase network algorithms, Tech. Report CSE-87-7, Depart-
ment of Electrical and Computer Engineering, University of California, Davis, CA, 1987.
[27] N. MEGIDDO, Optimal flows in networks with multiple sources and sinks, Math. Programming, 7 (1974),
pp. 97-107.
[28] N. MEGIDDO, A good algorithm for lexicographically optimal flows in multi-terminal networks, Bull. Amer.
Math. Soc., 83 (1979), pp. 407-409.
[29] N. MEGIDDO, Combinatorial optimization with rational objective functions, Math. Oper. Res., 4 (1979), pp.
414-424.
[30] C. ST. J. A. NASH-WILLIAMS, Decomposition of finite graphs into forests, J. London Math. Soc., 39
(1964), p. 12.
[31] M. W. PADBERG AND L. A. WOLSEY, Fractional covers for forests and matchings, Math. Programming,
29 (1984), pp. 1-14.
[32] J.-C. PICARD, Maximal closure of a graph and applications to combinatorial problems, Management Sci.,
22 (1976), pp. 1268-1272.
[33] J.-C. PICARD AND M. QUEYRANNE, A network flow solution to some nonlinear 0-1 programming
problems, with applications to graph theory, Networks, 12 (1982), pp. 141-159.
[34] J.-C. PICARD AND M. QUEYRANNE, Selected applications of minimum cuts in networks, INFOR, 20 (1982), pp. 394-422.
[35] J.-C. PICARD AND H. D. RATLIFF, Minimum cuts and related problems, Networks, 5 (1975), pp. 357-370.
[36] J. M. W. RHYS, A selection problem of shared fixed costs and network flows, Management Sci., 17 (1970),
pp. 200-207.
[37] K. RITTER, Ein Verfahren zur Lösung parameterabhängiger, nichtlinearer Maximumprobleme, Unternehmensforschung,
6 (1962), pp. 149-166.
[38] S. SCHAIBLE, Fractional programming II: On Dinkelbach’s algorithm, Management Sci., 22 (1976), pp.
868-873.
[39] J. B. SIDNEY, Decomposition algorithm for single-machine sequencing with precedence relations and
deferral costs, Oper. Res., 23 (1975), pp. 283-298.
[40] D. D. SLEATOR AND R. E. TARJAN, A data structure for dynamic trees, J. Comput. System Sci., 24
(1983), pp. 362-391.
[41] D. D. SLEATOR AND R. E. TARJAN, Self-adjusting binary search trees, J. Assoc. Comput. Mach., 32 (1985), pp. 652-686.
[42] C. STEIN, Efficient algorithms for bipartite network flow, Unpublished manuscript, Department of
Computer Science, Princeton University, Princeton, NJ, 1986.
[43] H. S. STONE, Critical load factors in two-processor distributed systems, IEEE Trans. Software Engrg., 4
(1978), pp. 254-258.
[44] R. E. TARJAN, Data Structures and Network Algorithms, CBMS-NSF Regional Conference Series in
Applied Mathematics 44, Society for Industrial and Applied Mathematics, Philadelphia, PA, 1983.
[45] D. M. TOPKIS, Activity selection games and the minimum-cut problem, Networks, 13 (1983), pp. 93-105.