
Ford-Fulkerson algorithm

From Wikipedia, the free encyclopedia


The Ford-Fulkerson algorithm (named for L. R. Ford, Jr. and D. R. Fulkerson) computes the maximum flow in a flow network. The name
Ford-Fulkerson is often also used for the Edmonds-Karp algorithm, which is a specialisation of Ford-Fulkerson.
The idea behind the algorithm is very simple: As long as there is a path from the source (start node) to the sink (end node), with available
capacity on all edges in the path, we send flow along one of these paths. Then we find another path, and so on. A path with available capacity is
called an augmenting path.

Algorithm
Given a graph G(V,E), with capacity c(u,v) and flow f(u,v) = 0 for the edge from u to v, we want to find the maximum flow from the source s to the sink t. After every step the following is maintained:

Capacity constraints: f(u,v) ≤ c(u,v). The flow between u and v does not exceed the capacity.

Skew symmetry: f(u,v) = −f(v,u). We maintain the net flow.

Flow conservation: ∑w f(u,w) = 0 for all nodes u except s and t. The amount of flow into a node equals the flow out of the node.

This means that the flow through the network is a legal flow after each round in the algorithm. We define the residual network Gf(V,Ef) to be the network with capacity cf(u,v) = c(u,v) − f(u,v) and no flow. Notice that it is not certain that E = Ef, as sending flow on (u,v) might close (u,v) (it is saturated), but open a new edge (v,u) in the residual network.
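For example, if c(u,v) = 5, c(v,u) = 0 and a flow of 3 is sent along (u,v), then cf(u,v) = 5 − 3 = 2, while cf(v,u) = 0 − (−3) = 3 by skew symmetry: the three units already sent can be pushed back through the new residual edge (v,u).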
Inputs: Graph G with flow capacity c, a source node s, and a sink node t
Output: A flow f such that f is maximal from s to t

1. f(u,v) ← 0 for all edges (u,v)
2. While there is a path from s to t in Gf:
   1. Find a path p = (u1, ..., uk), where u1 = s and uk = t, such that cf(ui, ui+1) > 0
   2. Find m = min(cf(ui, ui+1)) over the edges of p
   3. f(ui, ui+1) ← f(ui, ui+1) + m (send flow along the path)
   4. f(ui+1, ui) ← f(ui+1, ui) − m (the flow might be "returned" later)

The path can be found with, for example, a breadth-first search or a depth-first search in Gf(V,Ef). If the former is used, the algorithm is called Edmonds-Karp.
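The following sketch in Python shows one way the pseudocode above might be realised. The adjacency-dict representation of the graph and the recursive depth-first search are choices made here for illustration; the article itself does not prescribe them.

    def ford_fulkerson(capacity, s, t):
        # capacity: dict mapping each node u to a dict {v: c(u, v)}.
        # Residual capacities cf(u, v) = c(u, v) - f(u, v) are stored directly.
        residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
        for u in capacity:
            for v in capacity[u]:
                residual.setdefault(v, {}).setdefault(u, 0)

        def find_path(u, visited):
            # Depth-first search for an augmenting path from u to t in Gf.
            if u == t:
                return [u]
            visited.add(u)
            for v, cf in residual[u].items():
                if cf > 0 and v not in visited:
                    rest = find_path(v, visited)
                    if rest is not None:
                        return [u] + rest
            return None

        flow = 0
        path = find_path(s, set())
        while path is not None:
            # m = minimum residual capacity over the edges of the path.
            m = min(residual[u][v] for u, v in zip(path, path[1:]))
            for u, v in zip(path, path[1:]):
                residual[u][v] -= m  # send flow along the path
                residual[v][u] += m  # the flow might be "returned" later
            flow += m
            path = find_path(s, set())
        return flow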

Complexity

By adding the flow augmenting path to the flow already established in the graph, the maximum flow will be reached when no more flow
augmenting paths can be found in the graph. However, there is no certainty that this situation will ever be reached, so the best that can be
guaranteed is that the answer will be correct if the algorithm terminates. In the case that the algorithm runs forever, the flow might not even
converge towards the maximum flow. However, this situation only occurs with irrational flow values. When the capacities are integers, the
runtime of Ford-Fulkerson is bounded by O(E*f), where E is the number of edges in the graph and f is the maximum flow in the graph. This is
because each augmenting path can be found in O(E) time and increases the flow by an integer amount which is at least 1.
A variation of the Ford-Fulkerson algorithm with guaranteed termination and a runtime independent of the maximum flow value is the
Edmonds-Karp algorithm, which runs in O(VE²) time.
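Replacing the depth-first search in the sketch above with the breadth-first search below is enough to obtain the Edmonds-Karp behaviour, since each augmenting path found then has the fewest possible edges. The residual-dict layout is the same assumed representation as before.

    from collections import deque

    def find_path_bfs(residual, s, t):
        # Breadth-first search in the residual network: returns a shortest
        # augmenting path from s to t, or None if no such path exists.
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if u == t:
                path = []
                while u is not None:
                    path.append(u)
                    u = parent[u]
                return path[::-1]
            for v, cf in residual[u].items():
                if cf > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        return None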

Example
The following example shows the first steps of Ford-Fulkerson in a flow network with 4 nodes, source A and sink D. The augmenting paths are found with a depth-first search, where neighbours are visited in alphabetical order. This example shows the worst-case behaviour of the algorithm: in each step, only a flow of 1 is sent across the network. Note that if a breadth-first search were used instead, only two steps would be needed.
(Figure: initial flow network)

Path A,B,C,D:
min(cf(A,B), cf(B,C), cf(C,D))
= min(c(A,B) − f(A,B), c(B,C) − f(B,C), c(C,D) − f(C,D))
= min(1000 − 0, 1 − 0, 1000 − 0) = 1

(Figure: resulting flow network)

Path A,C,B,D:
min(cf(A,C), cf(C,B), cf(B,D))
= min(c(A,C) − f(A,C), c(C,B) − f(C,B), c(B,D) − f(B,D))
= min(1000 − 0, 0 − (−1), 1000 − 0) = 1

(Figure: final flow network)

Notice how flow is "pushed back" from C to B when finding the path A,C,B,D.
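As a usage note, the network of this example can be handed directly to the ford_fulkerson sketch from the Algorithm section; the capacities below are read off the computations above (the cross edge B,C has capacity 1, all other edges 1000).

    capacity = {
        "A": {"B": 1000, "C": 1000},
        "B": {"C": 1, "D": 1000},
        "C": {"D": 1000},
        "D": {},
    }
    print(ford_fulkerson(capacity, "A", "D"))  # maximum flow: 2000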

7.1 Optimization with inequality constraints: the Kuhn-Tucker conditions


Many models in economics are naturally formulated as optimization problems with
inequality constraints.
Consider, for example, a consumer's choice problem. There is no reason to insist that a consumer spend all her wealth, so her optimization problem should be formulated with inequality constraints:

max_x u(x) subject to px ≤ w and x ≥ 0.

Depending on the character of the function u and the values of p and w, we may have px < w or px = w at a solution of this problem.

One approach to solving this problem starts by determining which of these two
conditions holds at a solution. In more complex problems, with more than one constraint,
this approach does not work well. Consider, for example, a consumer who faces two
constraints (perhaps money and time). Three examples are shown in the following figure,
which should convince you that we cannot deduce from simple properties of u alone
which of the constraints, if any, are satisfied with equality at a solution.

We consider a problem of the form

max_x f (x) subject to gj(x) ≤ cj for j = 1, ..., m,

where f and gj for j = 1, ..., m are functions of n variables, x = (x1, ..., xn), and cj for j = 1, ..., m are constants.

All the problems we have studied so far may be put into this form.

Equality constraints
We introduce two inequality constraints for every equality constraint. For example, the problem

max_x f (x) subject to h(x) = 0

may be written as

max_x f (x) subject to h(x) ≤ 0 and −h(x) ≤ 0.

Nonnegativity constraints
For a problem with a constraint xi ≥ 0 we let gj(x) = −xi and cj = 0 for some j.

Minimization problems
For a minimization problem we multiply the objective function by −1:

min_x h(x) subject to gj(x) ≤ cj for j = 1, ..., m

is the same as

max_x f (x) subject to gj(x) ≤ cj for j = 1, ..., m,

where f (x) = −h(x).
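As a concrete illustration of these transformations, the problem

min_x (x1² + x2²) subject to x1 + x2 = 1

may be written as

max_x −(x1² + x2²) subject to x1 + x2 ≤ 1 and −x1 − x2 ≤ −1,

which has the general form above with g1(x) = x1 + x2, c1 = 1, g2(x) = −x1 − x2, and c2 = −1.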

To start thinking about how to solve the general problem, first consider a case with a single constraint (m = 1). We can write such a problem as

max_x f (x) subject to g(x) ≤ c.

There are two possibilities for the solution of this problem. In the following figures, the black closed curves are contours of f; values of the function increase in the direction shown by the blue arrows. The downward-sloping red line is the set of points x satisfying g(x) = c; the set of points x satisfying g(x) ≤ c lies below and to the left of the line, and those satisfying g(x) ≥ c lie above and to the right of it.

In each panel the solution of the problem is the point x*. In the left panel the constraint
binds at the solution: a change in c changes the solution. In the right panel, the constraint
is slack at the solution: small changes in c have no effect on the solution.
As before, define the Lagrangean function L by

L(x) = f (x) − λ(g(x) − c).

Then from our previous analysis of problems with equality constraints and with no constraints,

if g(x*) = c (as in the left-hand panel) and the constraint satisfies a regularity condition, then Li′(x*) = 0 for all i;

if g(x*) < c (as in the right-hand panel), then fi′(x*) = 0 for all i.

Now, I claim that in the first case (that is, if g(x*) = c) we have λ ≥ 0. Suppose, to the contrary, that λ < 0. Then we know that a small decrease in c raises the maximal value of f. That is, moving x* inside the constraint raises the value of f, contradicting the fact that x* is the solution of the problem.

In the second case, the value of λ does not enter the conditions, so we can choose any value for it. Given the interpretation of λ, setting λ = 0 makes sense. Under this assumption we have f (x) = L(x) for all x, so that Li′(x*) = 0 for all i.

Thus in both cases we have Li′(x*) = 0 for all i, λ ≥ 0, and g(x*) ≤ c. In the first case we have g(x*) = c and in the second case λ = 0.

We may combine the two cases by writing the conditions as

Li′(x*) = 0 for i = 1, ..., n
λ ≥ 0, g(x*) ≤ c, and either λ = 0 or g(x*) − c = 0.

Now, the product of two numbers is zero if and only if at least one of them is zero, so we can alternatively write these conditions as

Li′(x*) = 0 for i = 1, ..., n
λ ≥ 0, g(x*) ≤ c, and λ[g(x*) − c] = 0.

The argument I have given suggests that if x* solves the problem and the constraint satisfies a regularity condition, then x* must satisfy these conditions.

Note that the conditions do not rule out the possibility that both λ = 0 and g(x*) = c.

The inequalities λ ≥ 0 and g(x*) ≤ c are called complementary slackness conditions; at most one of these conditions is slack (i.e. not an equality).

For a problem with many constraints, as before we introduce one multiplier for each constraint and obtain the Kuhn-Tucker conditions, defined as follows.

Definition
The Kuhn-Tucker conditions for the problem

max_x f (x) subject to gj(x) ≤ cj for j = 1, ..., m

are

Li′(x) = 0 for i = 1, ..., n
λj ≥ 0, gj(x) ≤ cj and λj[gj(x) − cj] = 0 for j = 1, ..., m,

where

L(x) = f (x) − ∑j=1,...,m λj(gj(x) − cj).
These conditions are named in honor of Harold W. Kuhn, an emeritus member of the
Princeton Math Department, and Albert W. Tucker, who first formulated and studied the
conditions.
In the following sections I discuss results that specify the precise relationship between the
solutions of the Kuhn-Tucker conditions and the solutions of the problem. The following
example illustrates the form of the conditions in a specific case.
Example
Consider the problem

max_{x1,x2} [−(x1 − 4)² − (x2 − 4)²] subject to x1 + x2 ≤ 4 and x1 + 3x2 ≤ 9,

illustrated in the following figure.

We have

L(x1, x2) = −(x1 − 4)² − (x2 − 4)² − λ1(x1 + x2 − 4) − λ2(x1 + 3x2 − 9).

The Kuhn-Tucker conditions are


−2(x1 − 4) − λ1 − λ2 = 0
−2(x2 − 4) − λ1 − 3λ2 = 0
x1 + x2 ≤ 4, λ1 ≥ 0, and λ1(x1 + x2 − 4) = 0
x1 + 3x2 ≤ 9, λ2 ≥ 0, and λ2(x1 + 3x2 − 9) = 0.
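These conditions can also be checked mechanically. The following sketch, assuming the SymPy library is available, enumerates the four complementary-slackness cases (each λj is either zero or its constraint binds) and keeps the candidates that satisfy all of the remaining inequalities; it is an added illustration rather than part of the notes.

    import itertools
    import sympy as sp

    x1, x2, l1, l2 = sp.symbols("x1 x2 lam1 lam2", real=True)

    # Stationarity conditions Li'(x) = 0 from the Lagrangean above.
    stationarity = [-2*(x1 - 4) - l1 - l2,
                    -2*(x2 - 4) - l1 - 3*l2]
    g = [x1 + x2 - 4, x1 + 3*x2 - 9]   # g_j(x) - c_j, required <= 0
    lams = [l1, l2]

    # Complementary slackness: for each j, either lam_j = 0 or the
    # constraint binds (g_j(x) - c_j = 0).  Try all four combinations.
    for binding in itertools.product([False, True], repeat=2):
        eqs = stationarity + [g[j] if binding[j] else lams[j] for j in range(2)]
        for sol in sp.solve(eqs, [x1, x2, l1, l2], dict=True):
            if all(gj.subs(sol) <= 0 for gj in g) and all(sol[l] >= 0 for l in lams):
                print(sol)  # only {x1: 2, x2: 2, lam1: 4, lam2: 0} survives

The only candidate that satisfies every condition is x1 = x2 = 2 with λ1 = 4 and λ2 = 0: the first constraint binds and the second is slack.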
