Linear Programming


Chapter 1

Linear Programming
1.1 Introduction

In this chapter, we give an introduction to linear programming. Linear programming (LP) is a mathematical method for finding optimal solutions to problems. It deals with the problem of optimizing (maximizing or minimizing) a linear function, subject to the constraints imposed by a system of linear inequalities. It is widely used in industry and in government. Historically, linear programming was first developed and applied in 1947 by George Dantzig, Marshall Wood, and their associates at the U.S. Department of the Air Force; the early applications of LP were thus in the military field. However, the emphasis in applications has since moved to the general industrial area. Linear programming today is concerned with the efficient use or allocation of limited resources to meet desired objectives.
Before formally defining a LP problem, we define the concepts of linear function
and linear inequality.
Definition 1.1 (Linear Function)
A function Z(x1, x2, . . . , xN) of x1, x2, . . . , xN is a linear function if and only if for some set of constants c1, c2, . . . , cN,
Z(x1, x2, . . . , xN) = c1x1 + c2x2 + ... + cN xN
For example, Z(x1, x2) = 2x1 + 3x2 is a linear function of x1 and x2, but Z(x1, x2) = x1^2 x2 is not a linear function of x1 and x2.

Definition 1.2 (Linear Inequality)


For any function Z(x1, x2, . . . , xN) and any real number b, the inequalities
Z(x1, x2, . . . , xN) ≤ b
and
Z(x1, x2, . . . , xN) ≥ b
are linear inequalities. For example, 5x1 + 4x2 ≤ 3 and 2x1 + 34x2 ≥ 3 are linear inequalities, but x1^2 x2 ≤ 3 is not a linear inequality.

Definition 1.3 (Linear Programming Problem)


A linear programming problem is an optimization problem for which we do the
following:
1. We attempt to maximize or to minimize a linear function of the decision
variables. The function that is to be maximized or minimized is called the
objective function.
2. The values of the decision variables must satisfy a set of constraints. Each
constraint must be a linear equation or linear inequality.
3. A sign restriction is associated with each variable. For any variable xi, the sign restriction specifies that xi must be either nonnegative (xi ≥ 0) or unrestricted in sign.
In the following we will discuss some LP problems involving linear functions, inequality constraints, equality constraints, and sign restrictions.

1.2 General Formulation

Let x1 , x2 , . . . , xN be N variables in a LP problem. The problem is to find the values


of the variables x1 , x2 , . . . , xN to maximize (or minimize) a given linear function of
the variables, subject to a given set of constraints that are linear in the variables.
The general formulation for a LP problem is
Maximize
Z = c1x1 + c2x2 + ... + cN xN        (1.1)

subject to the constraints


a11x1 + a12x2 + ... + a1N xN ≤ b1
a21x1 + a22x2 + ... + a2N xN ≤ b2
...
aM1x1 + aM2x2 + ... + aMN xN ≤ bM        (1.2)
and
x1 ≥ 0, x2 ≥ 0, . . . , xN ≥ 0        (1.3)


where aij (i = 1, 2, . . . , M; j = 1, 2, . . . , N) are constants (called constraint coefficients), bi (i = 1, 2, . . . , M) are constants (called resource values), and cj (j = 1, 2, . . . , N) are constants (called cost coefficients). We call Z the objective function.
In matrix form, the general formulation can be written as
Maximize
Z = cT x        (1.4)
subject to the constraints
Ax ≤ b        (1.5)
and
x ≥ 0        (1.6)
where

A = [ a11  a12  ...  a1N ]
    [ a21  a22  ...  a2N ]
    [ ...  ...       ... ]
    [ aM1  aM2  ...  aMN ]

b = (b1, b2, . . . , bM)T        (1.7)

c = (c1, c2, . . . , cN)T,    x = (x1, x2, . . . , xN)T,    0 = (0, 0, . . . , 0)T        (1.8)

and cT denotes the transpose of the vector c.

1.2.1 Terminology

The following terms are commonly used in linear programming:


Decision variables: variables x1 , x2 , . . . , xN in (1.1)
Objective function: function Z given by (1.1)
Objective function coefficients: constants c1 , c2 , . . . , cN in (1.1)
Constraint coefficients: constants aij in (1.2)
Nonnegativity constraints: constraints given by (1.3)
Feasible solution: set of x1 , x2 , . . . , xN values that satisfy all the constraints
Feasible region: collection of all feasible solutions
Optimal solution: feasible solution that gives an optimal value of the objective
function (that is, the maximum value of Z in (1.1))

1.3 Linear Programming Problems

Example 1.1 (Product-Mix Problem)


The Handy-Dandy Company wishes to schedule the production of a kitchen appliance that requires two resources, labor and material. The company is considering three different models, and its production engineering department has furnished the following data:

                              Model A   Model B   Model C
  Labor (hours per unit)         6         4         5
  Material (pounds per unit)     5         5         6
  Profit ($ per unit)            5         4         4

The supply of raw material is restricted to 250 pounds per day. The daily availability of labor is 200 hours. Formulate a linear programming model to determine the daily production rate of the various models in order to maximize the total profit.
Formulation of Mathematical Model
Step I. Identify the Decision Variables: The unknown activities to be determined are the daily rates of production for the three models. Representing them by algebraic symbols:
xA = daily production of model A
xB = daily production of model B
xC = daily production of model C
Step II. Identify the Constraints: In this problem the constraints are the limited availability of the two resources, labor and material. Model A requires 6 hours of labor for each unit, and its production quantity is xA. Hence, the labor requirement for model A alone will be 6xA hours (assuming a linear relationship). Similarly, models B and C will require 4xB and 5xC hours, respectively. Thus the total requirement of labor will be 6xA + 4xB + 5xC, which should not exceed the available 200 hours. So the labor constraint becomes
6xA + 4xB + 5xC ≤ 200
Similarly, the raw material requirements will be 5xA pounds for model A, 5xB pounds for model B, and 6xC pounds for model C. Thus, the raw material constraint is given by
5xA + 5xB + 6xC ≤ 250


In addition, we restrict the variables xA, xB, xC to have only nonnegative values, that is,
xA ≥ 0,  xB ≥ 0,  xC ≥ 0
These are called the nonnegativity constraints, which the variables must satisfy. Most
practical linear programming problems will have this nonnegative restriction on the
decision variables. However, the general framework of linear programming is not
restricted to nonnegative values.
Step III. Identify the Objective: The objective is to maximize the total profit for
sales. Assuming that a perfect market exists for the product such that all that is
produced can be sold, the total profit from sales becomes
Z = 5xA + 4xB + 4xC
Thus the complete mathematical model for the product-mix problem may now be summarized as follows: find numbers xA, xB, xC which will
Maximize:    Z = 5xA + 4xB + 4xC
subject to the constraints
6xA + 4xB + 5xC ≤ 200
5xA + 5xB + 6xC ≤ 250
xA ≥ 0,  xB ≥ 0,  xC ≥ 0
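As a quick sanity check, the model above can be encoded directly in code. The following sketch (in Python, not part of the original text; the function name `check_plan` is ours) tests whether a candidate daily plan satisfies the labor, material, and nonnegativity constraints and reports its profit.

```python
# Illustrative sketch (not from the text): check a candidate production
# plan against the product-mix model of Example 1.1.
def check_plan(xa, xb, xc):
    """Return (is_feasible, profit) for a daily plan (xa, xb, xc)."""
    labor_ok = 6 * xa + 4 * xb + 5 * xc <= 200      # labor constraint
    material_ok = 5 * xa + 5 * xb + 6 * xc <= 250   # raw-material constraint
    nonneg_ok = xa >= 0 and xb >= 0 and xc >= 0     # nonnegativity
    profit = 5 * xa + 4 * xb + 4 * xc               # objective Z
    return labor_ok and material_ok and nonneg_ok, profit

feasible, z = check_plan(10, 20, 5)   # an arbitrary trial plan
print(feasible, z)                    # -> True 150
```

The trial plan (10, 20, 5) uses 165 labor hours and 180 pounds of material, so it is feasible with a profit of $150; a plan such as (40, 0, 0) would violate the labor limit.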

Example 1.2 (An Inspection Problem)


A company has two grades of inspectors, 1 and 2, who are to be assigned to a quality control inspection. It is required that at least 1800 pieces be inspected per 8-hour day. Grade 1 inspectors can check pieces at the rate of 25 per hour, with an accuracy of 98%. Grade 2 inspectors check at the rate of 15 pieces per hour, with an accuracy of 95%. The wage rate of a Grade 1 inspector is $4.00 per hour, while that of a Grade 2 inspector is $3.00 per hour. Each time an error is made by an inspector, the cost to the company is $2.00. The company has available for the inspection job eight Grade 1 inspectors and ten Grade 2 inspectors. The company wants to determine the optimal assignment of inspectors, which will minimize the total cost of the inspection.
Formulation of Mathematical Model


Let x1 and x2 denote the number of Grade 1 and Grade 2 inspectors assigned to inspection. Since the number of available inspectors in each grade is limited, we have the following constraints:
x1 ≤ 8     (Grade 1)
x2 ≤ 10    (Grade 2)
The company requires at least 1800 pieces to be inspected daily. Thus, we get
8(25)x1 + 8(15)x2 ≥ 1800
or
200x1 + 120x2 ≥ 1800
which can also be written as
5x1 + 3x2 ≥ 45
To develop the objective function, we note that the company incurs two types of costs during inspections: wages paid to the inspectors and the cost of their inspection errors.
The hourly cost of each Grade 1 inspector is
$4 + 2(25)(0.02) = $5 per hour
Similarly, for each Grade 2 inspector
$3 + 2(15)(0.05) = $4.50 per hour
Thus the objective function is to minimize the daily cost of inspection given by
Z = 8(5x1 + 4.5x2 ) = 40x1 + 36x2
The complete formulation of the linear programming problem thus becomes:
Minimize:    Z = 40x1 + 36x2
subject to the constraints
x1 ≤ 8
x2 ≤ 10
5x1 + 3x2 ≥ 45
x1 ≥ 0,  x2 ≥ 0
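The hourly cost figures behind this objective function can be rechecked with a tiny sketch (Python, ours; the helper name `hourly_cost` is an assumption, not from the text): each inspector's hourly cost is the wage plus $2 for every erroneous piece, i.e. wage + 2 × (pieces per hour) × (error rate).

```python
# Hypothetical helper (not from the text): hourly cost of one inspector,
# equal to the wage plus the expected cost of inspection errors.
def hourly_cost(wage, pieces_per_hour, accuracy, error_cost=2.0):
    return wage + error_cost * pieces_per_hour * (1 - accuracy)

grade1 = hourly_cost(4.0, 25, 0.98)   # about $5.00 per hour
grade2 = hourly_cost(3.0, 15, 0.95)   # about $4.50 per hour
```

Multiplying by the 8-hour day gives the objective coefficients 8 × 5 = 40 and 8 × 4.5 = 36 used above.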


1.4 Graphical Solution of LP Models

In the last section two examples were presented to illustrate how practical problems can be formulated mathematically as LP problems. The next step after the formulation is to solve the problem mathematically to obtain the best possible solution. In this section, a graphical procedure to solve LP problems involving only two variables is discussed. Though in practice such small problems are usually not encountered, the graphical procedure is presented to illustrate some of the basic concepts used in solving large LP problems.
Example 1.3 A company manufactures two types of mobiles, model A and model
B. It takes 5 hours and 2 hours to manufacture A and B, respectively. The company
has 900 hours available per week for the production of mobiles. The manufacturing
cost of each model A is $8 and the manufacturing cost of a model B is $10. The
total funds available per week for production are $2800. The profit on each model
A is $3, and the profit on each model B is $2. How many of each type of mobile
should be manufactured weekly to obtain the maximum profit?
Solution. We first find the inequalities that describe the time and monetary constraints. Let the company manufacture x1 of model A and x2 of model B. Then the total manufacturing time is (5x1 + 2x2) hours. There are 900 hours available. Therefore
5x1 + 2x2 ≤ 900
Now the cost of manufacturing x1 of model A at $8 each is $8x1, and the cost of manufacturing x2 of model B at $10 each is $10x2. Thus the total production cost is $(8x1 + 10x2). There is $2800 available for the production of mobiles. Therefore
8x1 + 10x2 ≤ 2800
Furthermore, x1 and x2 represent numbers of mobiles manufactured. These numbers
cannot be negative. Therefore we get two more constraints:
x1 ≥ 0,  x2 ≥ 0

Next we find a mathematical expression for the profit. The weekly profit on x1 mobiles at $3 per mobile is $3x1, and the weekly profit on x2 mobiles at $2 per mobile is $2x2. Thus the total weekly profit is $(3x1 + 2x2). Let the profit function Z be defined as
Z = 3x1 + 2x2
Thus the mathematical model for the given LP problem with the profit function and
the system of linear inequalities may be written as
Maximize

Z = 3x1 + 2x2


subject to the constraints
5x1 + 2x2 ≤ 900
8x1 + 10x2 ≤ 2800
x1 ≥ 0,  x2 ≥ 0

In this problem, we are interested in determining the values of the variables x1 and
x2 that will satisfy all the restrictions and give the maximum value of the objective
function. As a first step in solving this problem, we want to identify all possible
values of x1 and x2 that are nonnegative and satisfy the constraints. Solution of a
linear program is merely finding the best feasible solution (optimal solution) in the
feasible region (set of all feasible solutions). In our example, an optimal solution is
a feasible solution which maximizes the objective function 3x1 + 2x2 . The value of
the objective function corresponding to an optimal solution is called optimal value
of the linear program.
To represent the feasible region in a graph, every constraint is plotted, and all values of x1, x2 that satisfy these constraints are identified. The nonnegativity constraints imply that all feasible values of the two variables lie in the first quadrant. It can be shown that the graph of the constraint 5x1 + 2x2 ≤ 900 consists of points on and below the straight line 5x1 + 2x2 = 900. Similarly, the points that satisfy the inequality 8x1 + 10x2 ≤ 2800 are on and below the straight line 8x1 + 10x2 = 2800.
The feasible region is given by the region ABCO as shown in Figure 1.1. Obviously
[Figure 1.1: Feasible region of Example 1.3, bounded by the lines 5x1 + 2x2 = 900 and 8x1 + 10x2 = 2800, with vertices O(0, 0), A(0, 280), B(100, 200), and C(180, 0).]


there is an infinite number of feasible points in this region. Our objective is to identify the feasible point with the largest value of the objective function Z.
It has been proved that the maximum value of Z will occur at a vertex of the feasible region, namely at one of the points A, B, C, or O; if there is more than one
point at which the maximum occurs, it will be along one edge of the region, such
as AB or BC. Hence, we have only to examine the value of the objective function Z = 3x1 + 2x2 at the vertices A, B, C, and O. These vertices are found by
determining the points of intersection of the lines. We obtain
A(0, 280):    ZA = 3(0) + 2(280)    = 560
B(100, 200):  ZB = 3(100) + 2(200)  = 700
C(180, 0):    ZC = 3(180) + 2(0)    = 540
O(0, 0):      ZO = 3(0) + 2(0)      = 0

The maximum value of Z is 700 at B. Thus the maximum value of Z = 3x1 + 2x2 is 700 when x1 = 100 and x2 = 200. The interpretation of these results is that the maximum weekly profit is $700, and this occurs when 100 model A mobiles and 200 model B mobiles are manufactured.
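The corner-point check carried out above is easy to mechanize. A minimal sketch (Python, ours, not from the text) evaluates Z = 3x1 + 2x2 at the four vertices and picks the largest value:

```python
# Evaluate the objective at each vertex of the feasible region of
# Example 1.3 and keep the best one (corner-point method).
vertices = [(0, 0), (0, 280), (100, 200), (180, 0)]   # O, A, B, C

def Z(v):
    return 3 * v[0] + 2 * v[1]

best = max(vertices, key=Z)
print(best, Z(best))   # -> (100, 200) 700
```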

In using the Optimization Toolbox, linprog solves linear programming problems of the form

    minimize Z'x over x, subject to:
    A*x <= b
    Aeq*x == beq
    lb <= x <= ub

In solving linear programming problems using linprog, we use the following:

Syntax:
>> x = linprog(Z, A, b)
solves min Z'x such that A*x <= b.
>> x = linprog(Z, A, b, Aeq, beq)
solves min Z'x such that A*x <= b and Aeq*x == beq. If no inequalities exist, then set A = [ ] and b = [ ].
>> x = linprog(Z, A, b, Aeq, beq, lb, ub)
defines a set of lower and upper bounds on the design variables x, so that the solution is always in the range lb <= x <= ub. If no equalities exist, then set Aeq = [ ] and beq = [ ].
>> [x, Fval] = linprog(Z, A, b)
returns the value of the objective function at x: Fval = Z'x.
>> [x, Fval, exitflag] = linprog(Z, A, b)
returns a value exitflag that describes the exit condition.
>> [x, Fval, exitflag, output] = linprog(Z, A, b)
returns a structure output containing information about the optimization.
Input Parameters:
Z   : objective function coefficients.
A   : matrix of inequality constraint coefficients.
b   : right-hand side of the inequality constraints.
Aeq : matrix of equality constraint coefficients.
beq : right-hand side of the equality constraints.
lb  : lower bounds on the design variables; -Inf means unbounded below. Empty lb means -Inf on all variables.
ub  : upper bounds on the design variables; Inf means unbounded above. Empty ub means Inf on all variables.

Output Parameters:
x        : optimal design parameters.
Fval     : value of the objective function at x.
exitflag : exit condition of linprog. If exitflag is:
  > 0, then linprog converged to a solution x.
  = 0, then linprog reached the maximum number of iterations without converging to a solution x.
  < 0, then the problem was infeasible or linprog failed. For example:
    if exitflag = -2, then no feasible point was found;
    if exitflag = -3, then the problem is unbounded;
    if exitflag = -4, then a NaN value was encountered during execution of the algorithm;
    if exitflag = -5, then both primal and dual problems are infeasible.
output   : structure whose fields are the number of iterations taken, the type of algorithm used, and the number of conjugate gradient iterations (if used).


Now to reproduce the above results (the Example 1.3) using MATLAB, we do the following:
First, enter the coefficients (negated, since linprog minimizes and we wish to maximize):
>> Z = [-3; -2];
>> A = [5 2; 8 10];
>> b = [900; 2800];
Second, evaluate linprog:
>> [x, Fval, exitflag, output] = linprog(Z, A, b);
MATLAB's answer is:
Optimization terminated successfully.
Now evaluate x and the objective as follows:
>> x =
   100.0000
   200.0000
and
>> Objective = -Fval
   700.0000
Note that the optimization functions in the toolbox minimize the objective function. To maximize a function Z, apply an optimization function to minimize -Z. The resulting point where the maximum of Z occurs is also the point where the minimum of -Z occurs.
Example 1.4 Find the maximum value of
Z = 2x1 + x2
subject to the constraints
3x1 + 5x2 ≤ 20
4x1 + x2 ≤ 15
and
x1 ≥ 0,  x2 ≥ 0

Solution. The constraints are represented graphically by the region ACBO in Figure 1.2. The vertices of the feasible region are found by determining the points of

[Figure 1.2: Feasible region of Example 1.4, bounded by the lines 3x1 + 5x2 = 20 and 4x1 + x2 = 15, with vertices O(0, 0), A(0, 4), C(55/17, 35/17), and B(15/4, 0).]

intersections of the lines. We get

A(0, 4):          ZA = 2(0) + 4          = 4
B(15/4, 0):       ZB = 2(15/4) + 0       = 15/2
C(55/17, 35/17):  ZC = 2(55/17) + 35/17  = 145/17
O(0, 0):          ZO = 0 + 0             = 0

Thus the maximum value of the objective function Z is 145/17, attained when x1 = 55/17 and x2 = 35/17.
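The vertex C above is found by intersecting the two constraint lines. A small sketch (Python, ours; the helper name `intersect` is an assumption) solves the 2 × 2 system 3x1 + 5x2 = 20, 4x1 + x2 = 15 exactly by Cramer's rule:

```python
# Find the intersection of two lines a1*x + b1*y = c1 and a2*x + b2*y = c2
# exactly, using Cramer's rule with rational arithmetic.
from fractions import Fraction

def intersect(a1, b1, c1, a2, b2, c2):
    det = a1 * b2 - a2 * b1            # determinant of the 2x2 system
    x = Fraction(c1 * b2 - c2 * b1, det)
    y = Fraction(a1 * c2 - a2 * c1, det)
    return x, y

x1, x2 = intersect(3, 5, 20, 4, 1, 15)
print(x1, x2, 2 * x1 + x2)   # -> 55/17 35/17 145/17
```

Exact fractions avoid the rounding that floating point would introduce at a vertex like (55/17, 35/17).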

1.4.1 Reversed Inequality Constraints

For cases where some or all of the constraints contain inequalities with the sign reversed (≥ rather than ≤), the ≥ signs can be converted to ≤ signs by multiplying both sides of the constraints by -1. Thus, the constraint
ai1x1 + ai2x2 + ... + aiN xN ≥ bi
is equivalent to
-ai1x1 - ai2x2 - ... - aiN xN ≤ -bi

1.4.2 Equality Constraints

For cases where some or all of the constraints contain equalities, the problem can be reformulated by expressing each equality as two inequalities with opposite signs. Thus the constraint
ai1x1 + ai2x2 + ... + aiN xN = bi
is equivalent to the pair
ai1x1 + ai2x2 + ... + aiN xN ≤ bi
ai1x1 + ai2x2 + ... + aiN xN ≥ bi
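Both this reformulation and the sign reversal of the previous subsection are purely mechanical operations on constraint rows. A sketch (Python, ours; the row representation `(coefficients, sense, rhs)` is an assumed convention, not from the text):

```python
# Represent a constraint as (coefficients, sense, right-hand side).
def flip_ge(coeffs, rhs):
    """Turn sum(a*x) >= b into an equivalent <= constraint (multiply by -1)."""
    return [-a for a in coeffs], "<=", -rhs

def split_eq(coeffs, rhs):
    """Turn sum(a*x) == b into an equivalent pair of inequalities."""
    return [(coeffs, "<=", rhs), (coeffs, ">=", rhs)]

print(flip_ge([2, 3], 6))   # -> ([-2, -3], '<=', -6)
print(split_eq([1, 2], 4))
```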

1.4.3 Minimum Value of a Function

A LP problem that involves determining the minimum value of an objective function Z can be solved by looking for the maximum value of -Z, the negative of Z, over the same feasible region. Thus, the problem
Minimize Z = c1x1 + c2x2 + ... + cN xN
is equivalent to
Maximize -Z = (-c1)x1 + (-c2)x2 + ... + (-cN)xN
Theorem 1.1 (Minimum Value of a Function)
The minimum value of a function Z over a region S occurs at the point(s) of maximum value of -Z, and is the negative of that maximum value.
Proof. Let Z have minimum value ZA at the point A in the region. Then if B is any other point in the region,
ZA ≤ ZB
Multiplying both sides of this inequality by -1 gives
-ZA ≥ -ZB
This implies that A is a point of maximum value of -Z. Furthermore, the minimum value of Z is ZA, the negative of the maximum value -ZA. The above steps can be reversed, proving that the converse holds, thus verifying the result.

In summary, a LP problem in which
1. the objective function is to be minimized (rather than maximized), or
2. the constraints contain equalities (= rather than ≤), or
3. the constraints contain inequalities with the sign reversed (≥ rather than ≤)
can be reformulated in terms of the general formulation given by (1.1), (1.2), and (1.3).
The following diet problem is an example of a general LP problem, in which the objective function is to be minimized and the constraints contain ≥ signs.
Example 1.5 (Diet Problem)
The diet problem arises in the choice of foods for a healthy diet. The problem is to determine the mix of foods in a diet that minimizes total cost per day, subject to constraints that ensure that the minimum daily nutritional requirements are met. Let
M = number of nutrients
N = number of types of food
aij = number of units of nutrient i in food j (i = 1, 2, . . . , M; j = 1, 2, . . . , N)
bi = number of units of nutrient i required per day (i = 1, 2, . . . , M)
cj = cost per unit of food j (j = 1, 2, . . . , N)
xj = number of units of food j in the diet per day (j = 1, 2, . . . , N)
The objective is to find the values of the N variables x1 , x2 , . . . , xN to minimize the
total cost per day, Z. The LP formulation for the diet problem is
Minimize Z = c1x1 + c2x2 + ... + cN xN
subject to the constraints
a11x1 + a12x2 + ... + a1N xN ≥ b1
a21x1 + a22x2 + ... + a2N xN ≥ b2
...
aM1x1 + aM2x2 + ... + aMN xN ≥ bM
and
x1 ≥ 0, x2 ≥ 0, . . . , xN ≥ 0
where aij , bi , cj (i = 1, 2, . . . , M ; j = 1, 2, . . . , N ) are constants.
Example 1.6 Consider the inspection problem given in the Example 1.2:
Minimize:    Z = 40x1 + 36x2
subject to the constraints
x1 ≤ 8
x2 ≤ 10
5x1 + 3x2 ≥ 45
x1 ≥ 0,  x2 ≥ 0

In this problem, we are interested in determining the values of the variables x1 and
x2 that will satisfy all the restrictions and give the least value of the objective function. As a first step in solving this problem, we want to identify all possible values of
x1 and x2 that are nonnegative and satisfy the constraints. For example, a solution
x1 = 8 and x2 = 10 is positive and satisfies all the constraints. In our example,
an optimal solution is a feasible solution which minimizes the objective function
40x1 + 36x2 .
To represent the feasible region in a graph, every constraint is plotted, and all values of x1, x2 that satisfy these constraints are identified. The nonnegativity constraints imply that all feasible values of the two variables lie in the first quadrant. The constraint 5x1 + 3x2 ≥ 45 requires that any feasible solution (x1, x2) to the problem should be on one side of the straight line 5x1 + 3x2 = 45. The proper side is found by testing whether the origin satisfies the constraint or not. The line 5x1 + 3x2 = 45 is first plotted by taking two convenient points (for example, x1 = 0, x2 = 15 and x1 = 9, x2 = 0).
Similarly, the constraints x1 ≤ 8 and x2 ≤ 10 are plotted. The feasible region is given by the region ABC as shown in Figure 1.3. Obviously there is an infinite

[Figure 1.3: Feasible region of Example 1.6, bounded by the lines x1 = 8, x2 = 10, and 5x1 + 3x2 = 45, with vertices A(8, 5/3), B(8, 10), and C(3, 10); the objective function lines Z = 300 and Z = 600 are also shown.]


number of feasible points in this region. Our objective is to identify the feasible point with the lowest value of the objective function Z.
Observe that the objective function, given by Z = 40x1 + 36x2, represents a straight line if the value of Z is fixed a priori. Changing the value of Z essentially translates the entire line to another straight line parallel to itself. In order to determine an optimal solution, the objective function line is drawn for a convenient value of Z such that it passes through one or more points in the feasible region. Initially Z is chosen as 600. By moving this line closer to the origin the value of Z is further decreased (see Figure 1.3). The only limitation on this decrease is that the straight line Z = 40x1 + 36x2 must contain at least one point in the feasible region ABC. This clearly occurs at the corner point A given by x1 = 8, x2 = 5/3. This is the best feasible point, giving the lowest value of Z as 380. Hence, x1 = 8, x2 = 5/3 is an optimal solution, and Z = 380 is the optimal value for the linear program.
Thus for the inspection problem the optimal utilization is achieved by using eight Grade 1 inspectors and 1.67 Grade 2 inspectors. The fractional value x2 = 5/3 suggests that one of the Grade 2 inspectors is only utilized 67% of the time. If this is not possible, the normal practice is to round off the fractional values to get an integer solution, here x1 = 8, x2 = 2. (In general, rounding off the fractional values will not produce an optimal integer solution.)
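The effect of rounding can be quantified with a short sketch (Python, ours; the helper names `cost` and `feasible` are assumptions, not from the text):

```python
# Compare the optimal fractional solution of the inspection problem
# (Example 1.6) with the rounded integer solution.
from fractions import Fraction

def cost(x1, x2):                 # objective Z = 40*x1 + 36*x2
    return 40 * x1 + 36 * x2

def feasible(x1, x2):             # constraints of Example 1.6
    return (0 <= x1 <= 8) and (0 <= x2 <= 10) and (5 * x1 + 3 * x2 >= 45)

z_frac = cost(8, Fraction(5, 3))  # optimal value, 380
z_int = cost(8, 2)                # rounded solution, 392
print(feasible(8, Fraction(5, 3)), z_frac, feasible(8, 2), z_int)
```

Here rounding happens to preserve feasibility, but it raises the daily cost from $380 to $392.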

Example 1.7 Find the minimum value of
Z = 2x1 - 3x2
subject to the constraints
x1 + 2x2 ≤ 10
2x1 + x2 ≤ 11
and
x1 ≥ 0,  x2 ≥ 0

Solution. The feasible region is shown in Figure 1.4. To find the minimum value of Z over the region, let us determine the maximum value of -Z. The vertices are found, and the value of -Z computed at each vertex:

A(0, 5):     -ZA = -2(0) + 3(5)     = 15
B(4, 3):     -ZB = -2(4) + 3(3)     = 1
C(11/2, 0):  -ZC = -2(11/2) + 3(0)  = -11
O(0, 0):     -ZO = -2(0) + 3(0)     = 0

The maximum value of -Z is 15 at A. Thus the minimum value of Z = 2x1 - 3x2 is -15 when x1 = 0 and x2 = 5.
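Theorem 1.1 can be checked numerically on this example with a small sketch (Python, ours, not from the text): the minimum of Z over the vertices equals the negative of the maximum of -Z.

```python
# Numerical check of Theorem 1.1 on Example 1.7: min Z = -(max of -Z)
# over the vertices of the feasible region.
from fractions import Fraction

vertices = [(0, 0), (0, 5), (4, 3), (Fraction(11, 2), 0)]  # O, A, B, C

def Z(v):
    return 2 * v[0] - 3 * v[1]

z_min = min(Z(v) for v in vertices)
neg_z_max = max(-Z(v) for v in vertices)
print(z_min, neg_z_max)   # -> -15 15
```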


[Figure 1.4: Feasible region of Example 1.7, bounded by the lines x1 + 2x2 = 10 and 2x1 + x2 = 11, with vertices O(0, 0), A(0, 5), B(4, 3), and C(11/2, 0).]

Unique Optimal Solution
In the Example 1.6, the solution x1 = 8, x2 = 5/3 is the only feasible point with the lowest value of Z. In other words, the values of Z corresponding to the other feasible solutions in Figure 1.3 exceed the optimal value of 380. Hence for this problem, the solution x1 = 8, x2 = 5/3 is the unique optimal solution.
Alternative Optimal Solutions
In some LP problems, there may exist more than one feasible solution whose objective value equals the optimal value of the linear program. In such cases, all of these feasible solutions are optimal solutions, and the linear program is said to have alternative or multiple optimal solutions. To illustrate this, consider the following LP problem:
Example 1.8 Find the maximum value of
Z = x1 + 2x2
subject to the constraints
x1 + 2x2 ≤ 10
x1 + x2 ≥ 1
x2 ≤ 4
and
x1 ≥ 0,  x2 ≥ 0

[Figure 1.5: Feasible region of Example 1.8, with vertices A(1, 0), B(10, 0), C(2, 4), D(0, 4), and E(0, 1); the objective function lines x1 + 2x2 = 2, x1 + 2x2 = 6, and x1 + 2x2 = 10 are also shown.]


Solution. The feasible region is shown in Figure 1.5. The objective function lines are drawn for Z = 2, 6, and 10. The optimal value for the linear program is 10, and the corresponding objective function line x1 + 2x2 = 10 coincides with the side BC of the feasible region. Thus, the corner-point feasible solutions x1 = 10, x2 = 0 (B) and x1 = 2, x2 = 4 (C), and all other points on the line BC, are optimal solutions.
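That every point of the segment BC is optimal can be verified directly with a sketch (Python, ours; the parametrization helper `on_segment` is an assumption):

```python
# Check alternative optima in Example 1.8: every point on the segment
# from B(10, 0) to C(2, 4) gives the same objective value Z = 10.
from fractions import Fraction

def on_segment(t):           # t in [0, 1]: B at t = 0, C at t = 1
    x1 = 10 + t * (2 - 10)
    x2 = 0 + t * (4 - 0)
    return x1, x2

values = []
for k in range(5):
    x1, x2 = on_segment(Fraction(k, 4))
    values.append(x1 + 2 * x2)   # objective Z = x1 + 2*x2
print(values)                    # every entry equals 10
```

Algebraically, along the segment x1 = 10 - 8t and x2 = 4t, so Z = (10 - 8t) + 2(4t) = 10 for every t.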
Unbounded Solution
Some LP problems may not have an optimal solution. In other words, it is possible to keep finding better feasible solutions, continuously improving the objective function value. This would have been the case if the constraint x1 + 2x2 ≤ 10 were not given in the Example 1.8. In that case, moving farther away from the origin increases the objective function x1 + 2x2, and the maximum of Z would be +∞. When there exists no finite optimum, the linear program is said to have an unbounded solution.
It is inconceivable for a practical problem to have an unbounded solution, since this would imply that one can make infinite profit from a finite amount of resources. If such a solution is obtained in a practical problem, it generally means that one or more constraints have been omitted inadvertently during the initial formulation of the problem. These constraints would have prevented the objective function from assuming infinite values.


Theorem 1.2 If there exists an optimal solution to a LP problem, then at least


one of the corner points of the feasible region will always qualify to be an optimal
solution.

Notice that each feasible region we have discussed is such that the whole of the segment of a straight line joining two points within the region lies within that region. Such a region is called convex. A theorem states that the feasible region in a LP problem is convex (see Figure 1.6).
[Figure 1.6: Convex and nonconvex sets in R2: a set is convex when the whole segment joining any two points p1 and p2 of the set lies within the set.]


In the coming section we will use an iterative procedure called the simplex method
for solving LP problem is based on the Theorem 1.2. Even though the feasible region
of a LP problem contains an infinite number of points, an optimal solution can be
determined by merely examining the finite number of corner points in the feasible
region. Before we discuss the simplex method, we discuss the canonical and standard
forms of linear program.

1.4.4 Linear Program in Canonical Form

The general linear programming problem can always be put in the following form, which is referred to as the canonical form:

Maximize Z = Σ_{j=1}^{N} cj xj

subject to the constraints

Σ_{j=1}^{N} aij xj ≤ bi,    i = 1, 2, . . . , M

xj ≥ 0,    j = 1, 2, . . . , N

The characteristics of this form are:
1. All decision variables are nonnegative.
2. All the constraints are of the ≤ type.
3. The objective function is to be maximized.
4. All the right-hand sides satisfy bi ≥ 0, i = 1, 2, . . . , M.
5. The matrix A contains the M columns of an M × M identity matrix I.
6. The objective function coefficients corresponding to those M identity columns are zero.
Note that the variables corresponding to the M identity columns are called basic
variables and the remaining variables are called nonbasic variables. The feasible
solution obtained by setting the nonbasic variables equal to zero and using the
constraint equations to solve for the basic variables is called the basic feasible
solution.

1.4.5 Linear Program in Standard Form

The standard form of a LP problem with M constraints and N variables can be represented as follows:

Maximize (Minimize) Z = c1x1 + c2x2 + ... + cN xN        (1.9)

subject to the constraints

a11x1 + a12x2 + ... + a1N xN = b1
a21x1 + a22x2 + ... + a2N xN = b2
...
aM1x1 + aM2x2 + ... + aMN xN = bM        (1.10)

and

x1 ≥ 0, x2 ≥ 0, . . . , xN ≥ 0
b1 ≥ 0, b2 ≥ 0, . . . , bM ≥ 0        (1.11)

The main features of the standard form are:


The objective function is of the maximization or minimization type.


All constraints are expressed as equations.
All variables are restricted to be nonnegative.
The right-hand side constant of each constraint is nonnegative.
In matrix-vector notation, the standard LP problem can be expressed as:

Maximize (Minimize) Z = cx        (1.12)

subject to the constraints

Ax = b        (1.13)

and

x ≥ 0,  b ≥ 0        (1.14)

where A is an M × N matrix, x is an N × 1 column vector, b is an M × 1 column vector, and c is a 1 × N row vector. In other words,

A = [ a11  a12  ...  a1N ]
    [ a21  a22  ...  a2N ]
    [ ...  ...       ... ]
    [ aM1  aM2  ...  aMN ]        (1.15)

b = (b1, b2, . . . , bM)T,    c = (c1, c2, . . . , cN),    x = (x1, x2, . . . , xN)T        (1.16)

In practice, A is called the coefficient matrix, x the decision vector, b the requirement vector, and c the profit (cost) vector of the LP problem.
Note that to convert a LP problem into standard form, each inequality constraint must be replaced by an equality constraint by introducing new variables, namely slack variables or surplus variables. We illustrate this procedure using the following problem.
Example 1.9 (Leather Limited)
Leather Limited manufactures two types of belts: the deluxe model and the regular
model. Each type requires 1 square yard of leather. A regular belt requires 1 hour of skilled labor, and a deluxe belt requires 2 hours. Each week, 40 square yards of leather and 60 hours of skilled labor are available. Each regular belt contributes $3 to profit and each deluxe belt $4. Formulate the LP problem.
Solution. Let x1 be the number of deluxe belts and x2 the number of regular belts produced weekly. Then the appropriate LP problem takes the form:
Maximize Z = 4x1 + 3x2
subject to the following constraints
x1 + x2 ≤ 40
2x1 + x2 ≤ 60
x1 ≥ 0,  x2 ≥ 0

To convert the above inequality constraints to equality constraints, we define for each
constraint a slack variable ui (ui = slack variable for ith constraint), which is the
amount of the resource unused in the ith constraint. Because x1 + x2 square yards
of leather being used, and 40 square yards are available, we define u1 by
u1 = 40 x1 x2

or

x1 + x2 + u1 = 40

or

2x1 + x2 + u2 = 60

Similarly, we define u2 by
u2 = 60 2x1 x2

Observe that a point (x1 , x2 ) satisfies the ith constraint if and only if ui ≥ 0. Thus
the converted LP problem
Maximize Z = 4x1 + 3x2
subject to the following constraints
x1 + x2 + u1 = 40
2x1 + x2 + u2 = 60
x1 ≥ 0,  x2 ≥ 0,  u1 ≥ 0,  u2 ≥ 0
is in standard form.
In summary, if constraint i of an LP problem is a ≤ constraint, then we convert it
to an equality constraint by adding a slack variable ui to the ith constraint and adding
the sign restriction ui ≥ 0.
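As an illustration (not part of the original text), this conversion is easy to mechanize. The following Python sketch, with a function name of our own choosing, appends one slack column per ≤ constraint:

```python
def add_slacks(A, b):
    """Convert constraints A x <= b into equality form [A | I] x' = b
    by appending one slack variable per constraint."""
    m = len(A)
    rows = []
    for i, row in enumerate(A):
        # slack column: coefficient 1 in constraint i, 0 elsewhere
        slack = [1 if j == i else 0 for j in range(m)]
        rows.append(list(row) + slack)
    return rows, list(b)

# Leather Limited constraints: x1 + x2 <= 40, 2x1 + x2 <= 60
A_eq, b_eq = add_slacks([[1, 1], [2, 1]], [40, 60])
print(A_eq)  # [[1, 1, 1, 0], [2, 1, 0, 1]]
```

The two appended columns are exactly the slack variables u1 and u2 of the example.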


Now we illustrate how a ≥ constraint can be converted to an equality constraint.
Let us consider the diet problem discussed in Example 1.5. To convert the ith ≥
constraint to an equality constraint, we define an excess variable (surplus variable)
vi (vi will always be the excess variable for the ith constraint). We define vi to be
the amount by which the ith constraint is oversatisfied. Thus, for the diet problem,
v1 = a11 x1 + a12 x2 + ... + a1N xN − b1
or
a11 x1 + a12 x2 + ... + a1N xN − v1 = b1
We do the same for the remaining constraints; the converted standard form of
the diet problem, after adding the sign restrictions vi ≥ 0 (i = 1, 2, . . . , M ), may be
written as
Minimize Z = c1 x1 + c2 x2 + ... + cN xN
subject to the constraints
a11 x1 + a12 x2 + ... + a1N xN − v1 = b1
a21 x1 + a22 x2 + ... + a2N xN − v2 = b2
...
aM1 x1 + aM2 x2 + ... + aMN xN − vM = bM
and
xj ≥ 0 (j = 1, 2, . . . , N),  vi ≥ 0 (i = 1, 2, . . . , M)

A point (x1 , x2 , . . . , xN ) satisfies the ith constraint if and only if vi is nonnegative.
In summary, if the ith constraint of an LP problem is a ≥ constraint, then it can be
converted to an equality constraint by subtracting an excess variable vi from the ith
constraint and adding the sign restriction vi ≥ 0.
If an LP problem has both ≤ and ≥ constraints, then we simply apply the procedures
we have described to the individual constraints. For example, the following LP
problem
Maximize Z = 30x1 + 20x2
subject to the following constraints
x1 ≤ 50
x2 ≤ 60
10x1 + 15x2 ≤ 80
20x1 + 10x2 ≥ 100
x1 ≥ 0,  x2 ≥ 0


can be easily transformed into standard form by adding slack variables u1 , u2 , and
u3 , respectively, to the first three constraints and subtracting an excess variable v4
from the fourth constraint. Then we add the sign restrictions
u1 ≥ 0,  u2 ≥ 0,  u3 ≥ 0,  v4 ≥ 0

This yields the following LP problem in standard form:
Maximize Z = 30x1 + 20x2
subject to the following constraints
x1 + u1 = 50
x2 + u2 = 60
10x1 + 15x2 + u3 = 80
20x1 + 10x2 − v4 = 100
x1 ≥ 0, x2 ≥ 0, u1 ≥ 0, u2 ≥ 0, u3 ≥ 0, v4 ≥ 0
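The whole conversion for mixed constraints can also be sketched in code. The helper below (a hypothetical to_standard_form, not from the text) appends a +1 slack column for each ≤ row and a −1 excess column for each ≥ row:

```python
def to_standard_form(constraints):
    """constraints: list of (coeffs, sense, rhs) with sense in {'<=', '>='}.
    Returns (rows, rhs) of the equality system: one slack (+1) or
    excess (-1) column is appended per constraint."""
    m = len(constraints)
    rows, rhs = [], []
    for i, (coeffs, sense, b) in enumerate(constraints):
        extra = [0] * m
        extra[i] = 1 if sense == '<=' else -1   # slack u_i or excess v_i
        rows.append(list(coeffs) + extra)
        rhs.append(b)
    return rows, rhs

# The four constraints of the example above:
cons = [([1, 0], '<=', 50), ([0, 1], '<=', 60),
        ([10, 15], '<=', 80), ([20, 10], '>=', 100)]
rows, rhs = to_standard_form(cons)
print(rows[3])  # [20, 10, 0, 0, 0, -1]  (the >= row gets -v4)
```

The last column of the fourth row carries the −1 coefficient of the excess variable v4, matching the standard form just derived.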

1.4.6 Some Important Definitions

Let us review the basic definitions using the standard form of an LP problem given
by
Maximize Z = cx
subject to the constraints
Ax = b
x ≥ 0
1. Feasible Solution. A feasible solution is a nonnegative vector x satisfying
the constraints Ax = b.
2. Feasible Region. The feasible region, denoted by S, is the set of all feasible
solutions. Mathematically,
S = {x | Ax = b, x ≥ 0}
If the feasible set S is empty, then the linear program is said to be infeasible.
3. Optimal Solution. An optimal solution is a vector x0 such that it is feasible
and its value of the objective function (cx0 ) is larger than that of any other
feasible solution. Mathematically, x0 is optimal if and only if x0 ∈ S and
cx0 ≥ cx for all x ∈ S.


4. Optimal Value. The optimal value of a linear program is the value of the
objective function corresponding to the optimal solution. If Z0 is the optimal
value, then Z0 = cx0 .
5. Alternate Optimum. When a linear program has more than one optimal
solution, it is said to have alternate optimal solutions. In this case, there exists
more than one feasible solution having the same optimal value (Z0 ) of the
objective function.
6. Unique Optimum. The optimal solution of a linear program is said to be
unique when there exists no other optimal solution.
7. Unbounded Solution. When a linear program does not possess a finite optimum (that is, Zmax → ∞), it is said to have an unbounded solution.
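Feasibility in the sense of Definition 1 can be checked mechanically. The following sketch (the function name is our own) tests x ≥ 0 and Ax = b up to a tolerance:

```python
def is_feasible(A, b, x, tol=1e-9):
    """Check x >= 0 and A x = b (the definition of a feasible solution)."""
    if any(xi < -tol for xi in x):
        return False
    for row, bi in zip(A, b):
        if abs(sum(a * xi for a, xi in zip(row, x)) - bi) > tol:
            return False
    return True

# Leather Limited in standard form: x = (x1, x2, u1, u2)
A = [[1, 1, 1, 0], [2, 1, 0, 1]]
b = [40, 60]
print(is_feasible(A, b, [20, 20, 0, 0]))  # True: both equations hold
print(is_feasible(A, b, [40, 10, 0, 0]))  # False: not a solution of Ax = b
```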

1.5 The Simplex Method

The graphical method of solving an LP problem introduced in the last section has
its limitations. The method demonstrated for two variables can be extended to LP
problems involving three variables, but for problems involving more than three variables the graphical approach becomes impractical. Here we introduce another
approach called the simplex method, an algebraic method that can be used for
any number of variables. This method was developed by George B. Dantzig in 1947.
It can be used to solve maximization or minimization problems with any standard
constraints.
Before proceeding further with our discussion with the simplex algorithm, we must
define the concept of a basic solution to a linear system (1.13).
Basic and Nonbasic Variables
Consider a linear system Ax = b of M linear equations in N variables (assume
N ≥ M ).
Definition 1.4 (Basic Solution)
A basic solution to Ax = b is obtained by setting N − M variables equal to 0 and
solving for the values of the remaining M variables. This assumes that setting the
N − M variables equal to 0 yields unique values for the remaining M variables or,
equivalently, that the columns for the remaining M variables are linearly independent.


To find a basic solution to Ax = b, we choose a set of N − M variables (the nonbasic
variables) and set each of these variables equal to 0. Then we solve for the values of
the remaining N − (N − M ) = M variables (the basic variables) that satisfy Ax = b.
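This procedure, fixing the nonbasic variables at 0 and solving the remaining M × M system, can be sketched as follows, assuming the chosen columns are linearly independent (the function name is ours):

```python
def basic_solution(A, b, basic):
    """Set the nonbasic variables to 0 and solve the M x M subsystem
    formed by the columns listed in `basic` (Gauss-Jordan elimination
    with partial pivoting)."""
    m = len(A)
    T = [[float(A[i][j]) for j in basic] + [float(b[i])] for i in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(T[r][col]))
        if abs(T[piv][col]) < 1e-12:
            raise ValueError("chosen columns are linearly dependent")
        T[col], T[piv] = T[piv], T[col]
        T[col] = [v / T[col][col] for v in T[col]]
        for r in range(m):
            if r != col:
                f = T[r][col]
                T[r] = [v - f * w for v, w in zip(T[r], T[col])]
    x = [0.0] * len(A[0])
    for i, j in enumerate(basic):
        x[j] = T[i][m]
    return x

# Leather Limited system: x1 + x2 + u1 = 40, 2x1 + x2 + u2 = 60
A = [[1, 1, 1, 0], [2, 1, 0, 1]]
print(basic_solution(A, [40, 60], [2, 3]))  # [0.0, 0.0, 40.0, 60.0]
print(basic_solution(A, [40, 60], [0, 1]))  # [20.0, 20.0, 0.0, 0.0]
```

Each choice of basic columns yields one basic solution; here both happen to be feasible.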
Definition 1.5 (Basic Feasible Solution)
Any basic solution to a linear system (1.13) in which all variables are nonnegative
is a basic feasible solution.

The simplex method deals only with basic feasible solutions in the sense that it
moves from one basic feasible solution to another. Each basic solution is associated with an
iteration. As a result, the maximum number of iterations in the simplex method
cannot exceed the number of basic solutions of the standard form. We can thus
conclude that the maximum number of iterations cannot exceed


C(N, M) = N! / ((N − M)! M!)
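For instance, Example 1.10 below has N = 6 variables and M = 3 equations in standard form, so this bound can be evaluated directly with the standard library:

```python
import math

# Upper bound on simplex iterations: C(N, M) basic solutions.
bound = math.comb(6, 3)   # Example 1.10: N = 6, M = 3
print(bound)              # 20

print(math.comb(4, 2))    # Example 1.11: N = 4, M = 2 -> 6
```

In practice the simplex method visits far fewer basic solutions than this worst-case count.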
The basic-nonbasic swap gives rise to two suggestive names: The entering variable
is a current nonbasic variable that will enter the set of basic variables at the next
iteration. The leaving variable is a current basic variable that will leave the
basic solution at the next iteration.
Definition 1.6 (Adjacent Basic Feasible Solution)
For any LP problem with M constraints, two basic feasible solutions are said to be
adjacent if their sets of basic variables have M − 1 basic variables in common.
In other words, an adjacent basic feasible solution differs from the present basic feasible solution in exactly one basic variable.

We now give a general description of how the simplex algorithm solves LP problems.
The Simplex Algorithm
1. Set up the initial simplex tableau.
2. Locate the negative element in the last row, other than the last element, that
is largest in magnitude (if two or more entries share this property, any one of
them can be selected). If there are no negative entries, the tableau is in
final form.
3. Divide each positive element in the column defined by this negative entry into
the corresponding element of the last column.


4. Select the divisor that yields the smallest quotient. This element is called the
pivot element (if two or more elements share this property, any one of them
can be selected as the pivot).
5. Use row operations to create a 1 in the pivot location and zeros elsewhere in
the pivot column.
6. Repeat steps 2–5 until all negative elements have been eliminated from
the last row. The final matrix is called the final simplex tableau and it
leads to the optimal solution.
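Steps 1–6 can be sketched compactly for a maximization problem whose constraints are all ≤ with nonnegative right-hand sides (so the slack variables supply the initial basis). This is an illustrative implementation only, with no anti-cycling safeguard:

```python
def simplex_max(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0 (assuming b >= 0).
    Returns (optimal value, solution vector). Dense tableau method."""
    m, n = len(A), len(c)
    # Step 1: tableau [A | I | b] with the objective row -c at the bottom.
    T = [list(map(float, A[i])) + [1.0 if j == i else 0.0 for j in range(m)]
         + [float(b[i])] for i in range(m)]
    T.append([-ci for ci in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))           # slacks start as basic variables
    while True:
        # Step 2: most negative entry of the last row picks the pivot column.
        col = min(range(n + m), key=lambda j: T[m][j])
        if T[m][col] >= -1e-9:
            break                            # tableau is in final form
        # Steps 3-4: ratio test over positive entries picks the pivot row.
        rows = [i for i in range(m) if T[i][col] > 1e-9]
        if not rows:
            raise ValueError("problem is unbounded")
        row = min(rows, key=lambda i: T[i][-1] / T[i][col])
        # Step 5: row operations make the pivot 1 and clear its column.
        p = T[row][col]
        T[row] = [v / p for v in T[row]]
        for i in range(m + 1):
            if i != row and abs(T[i][col]) > 1e-12:
                f = T[i][col]
                T[i] = [v - f * w for v, w in zip(T[i], T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i][-1]
    return T[m][-1], x

# Leather Limited (Example 1.9): maximize 4x1 + 3x2
z, x = simplex_max([4, 3], [[1, 1], [2, 1]], [40, 60])
print(z, x)  # 140.0 [20.0, 20.0]
```

Applied to the Leather Limited problem, the sketch finds the optimal weekly profit 140 at x1 = x2 = 20.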
Example 1.10 Determine the maximum value of the function
Z = 3x1 + 5x2 + 8x3
subject to the following constraints
x1 + x2 + x3 ≤ 100
3x1 + 2x2 + 4x3 ≤ 200
x1 + 2x2 ≤ 150
x1 ≥ 0,  x2 ≥ 0,  x3 ≥ 0

Solution. We introduce three slack variables u1 , u2 , and u3 , which must be added to
the given constraints to obtain the standard constraints of the LP problem:
x1 + x2 + x3 + u1 = 100
3x1 + 2x2 + 4x3 + u2 = 200
x1 + 2x2 + u3 = 150
x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, u1 ≥ 0, u2 ≥ 0, u3 ≥ 0
The objective function Z = 3x1 + 5x2 + 8x3 is rewritten in the form
−3x1 − 5x2 − 8x3 + Z = 0
Thus the entire problem now becomes that of determining the solution to the following system of equations:
x1 + x2 + x3 + u1 = 100
3x1 + 2x2 + 4x3 + u2 = 200
x1 + 2x2 + u3 = 150
−3x1 − 5x2 − 8x3 + Z = 0


Since the simplex algorithm starts with an initial basic feasible solution, by
inspection we see that if we set the nonbasic variables x1 = x2 = x3 = 0, we can
solve for the values of the basic variables u1 , u2 , u3 . So the basic feasible solution for
the basic variables is
u1 = 100, u2 = 200, u3 = 150, x1 = x2 = x3 = 0
It is important to observe that each basic variable may be associated with the row
of the canonical form in which the basic variable has a coefficient of 1. Thus, for the
initial canonical form, u1 may be thought of as the basic variable for row 1, as may
u2 for row 2, and u3 for row 3. To perform the simplex algorithm, we also need a
basic (although not necessarily nonnegative) variable for the last row. Since Z appears
in the last row with a coefficient of 1, and Z does not appear in any other row, we use Z
as its basic variable. With this convention, the basic feasible solution for our initial
canonical form has
basic variables {u1 , u2 , u3 , Z}

and

nonbasic variables {x1 , x2 , x3 }

For this basic feasible solution


u1 = 100, u2 = 200, u3 = 150, Z = 0, x1 = x2 = x3 = 0
Note that a slack variable can be used as a basic variable for an equation if the
right-hand side of the constraint is nonnegative.
Thus the simplex tableaux are as follows (the pivot element is shown in brackets):

basis |  x1    x2    x3    u1    u2    u3    Z  | constants
 u1   |   1     1     1     1     0     0    0  |    100
 u2   |   3     2    [4]    0     1     0    0  |    200
 u3   |   1     2     0     0     0     1    0  |    150
 Z    |  -3    -5    -8     0     0     0    1  |      0

basis |  x1    x2    x3    u1    u2    u3    Z  | constants
 u1   |  1/4   1/2    0     1   -1/4    0    0  |     50
 x3   |  3/4   1/2    1     0    1/4    0    0  |     50
 u3   |   1    [2]    0     0     0     1    0  |    150
 Z    |   3    -1     0     0     2     0    1  |    400

basis |  x1    x2    x3    u1    u2    u3    Z  | constants
 u1   |   0     0     0     1   -1/4  -1/4   0  |    25/2
 x3   |  1/2    0     1     0    1/4  -1/4   0  |    25/2
 x2   |  1/2    1     0     0     0    1/2   0  |     75
 Z    |  7/2    0     0     0     2    1/2   1  |    475

Since all negative elements have been eliminated from the last row, the
final tableau gives the following system of equations:

u1 − (1/4)u2 − (1/4)u3 = 25/2
(1/2)x1 + x3 + (1/4)u2 − (1/4)u3 = 25/2
(1/2)x1 + x2 + (1/2)u3 = 75
(7/2)x1 + 2u2 + (1/2)u3 + Z = 475

The constraints are
x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, u1 ≥ 0, u2 ≥ 0, u3 ≥ 0
The final equation, under the constraints, implies that Z has a maximum value of 475
when x1 = 0, u2 = 0, u3 = 0. On substituting these values back into the equations,
we get
x2 = 75,  x3 = 25/2,  u1 = 25/2
Thus Z = 3x1 + 5x2 + 8x3 has a maximum value of 475 at
x1 = 0,  x2 = 75,  x3 = 25/2.
Note that the reasoning leading to this maximum value of Z implies that the element in
the last row and the last column of the final tableau always corresponds to the
maximum value of the objective function Z.
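This result can be cross-checked by brute force: enumerate all C(6,3) = 20 choices of basic variables, keep the basic feasible solutions, and take the best objective value. A sketch (our own, using exact rational arithmetic):

```python
from itertools import combinations
from fractions import Fraction as F

def solve_square(M, rhs):
    """Exact Gauss-Jordan solve of a square system; None if singular."""
    n = len(M)
    T = [[F(v) for v in row] + [F(r)] for row, r in zip(M, rhs)]
    for c in range(n):
        piv = next((r for r in range(c, n) if T[r][c] != 0), None)
        if piv is None:
            return None
        T[c], T[piv] = T[piv], T[c]
        T[c] = [v / T[c][c] for v in T[c]]
        for r in range(n):
            if r != c and T[r][c] != 0:
                f = T[r][c]
                T[r] = [v - f * w for v, w in zip(T[r], T[c])]
    return [T[r][n] for r in range(n)]

# Example 1.10 in standard form: columns x1 x2 x3 u1 u2 u3
A = [[1, 1, 1, 1, 0, 0],
     [3, 2, 4, 0, 1, 0],
     [1, 2, 0, 0, 0, 1]]
b = [100, 200, 150]
c = [3, 5, 8, 0, 0, 0]

best = None
for basis in combinations(range(6), 3):
    sol = solve_square([[A[i][j] for j in basis] for i in range(3)], b)
    if sol is None or any(v < 0 for v in sol):
        continue  # singular column choice, or infeasible basic solution
    x = [F(0)] * 6
    for j, v in zip(basis, sol):
        x[j] = v
    z = sum(ci * xi for ci, xi in zip(c, x))
    if best is None or z > best[0]:
        best = (z, x)

print(best[0], best[1][:3])  # optimal value 475 at x = (0, 75, 25/2)
```

The enumeration confirms that no basic feasible solution does better than 475.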

In the following example, we will illustrate the application of the simplex method
when there are many optimal solutions.


Example 1.11 Determine the maximum value of the function
Z = 8x1 + 2x2
subject to the following constraints
4x1 + x2 ≤ 32
4x1 + 3x2 ≤ 48
x1 ≥ 0,  x2 ≥ 0

Solution. We introduce two slack variables u1 and u2 , which must be added to the
given constraints to obtain the standard constraints of the LP problem:
4x1 + x2 + u1 = 32
4x1 + 3x2 + u2 = 48
x1 ≥ 0, x2 ≥ 0, u1 ≥ 0, u2 ≥ 0
The objective function Z = 8x1 + 2x2 is rewritten in the form
−8x1 − 2x2 + Z = 0
Thus the entire problem now becomes that of determining the solution to the following system of equations:
4x1 + x2 + u1 = 32
4x1 + 3x2 + u2 = 48
−8x1 − 2x2 + Z = 0
The simplex tableaux are as follows (the pivot element is shown in brackets):

basis |  x1    x2    u1    u2    Z  | constants
 u1   |  [4]    1     1     0    0  |     32
 u2   |   4     3     0     1    0  |     48
 Z    |  -8    -2     0     0    1  |      0

basis |  x1    x2    u1    u2    Z  | constants
 x1   |   1    1/4   1/4    0    0  |      8
 u2   |   0     2    -1     1    0  |     16
 Z    |   0     0     2     0    1  |     64


Since all negative elements have been eliminated from the last row, the
final tableau gives the following system of equations:
x1 + (1/4)x2 + (1/4)u1 = 8
2x2 − u1 + u2 = 16
2u1 + Z = 64
with constraints
x1 ≥ 0, x2 ≥ 0, u1 ≥ 0, u2 ≥ 0
The last equation implies that Z has a maximum value of 64 when u1 = 0. On
substituting this value back into the equations, we get
x1 + (1/4)x2 = 8
2x2 + u2 = 16
with constraints
x1 ≥ 0,  x2 ≥ 0,  u2 ≥ 0
Any point (x1 , x2 ) that satisfies these conditions is an optimal solution. Thus Z =
8x1 + 2x2 has a maximum value of 64. This is achieved at any point on the line
x1 + (1/4)x2 = 8 between (6, 8) and (8, 0).
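A quick numerical spot-check of the alternate optima, using the two endpoints and the midpoint of the segment:

```python
# Endpoints and midpoint of the optimal segment x1 + (1/4)x2 = 8:
for x1, x2 in [(8, 0), (6, 8), (7, 4)]:
    # both original constraints must hold at each candidate point
    assert 4 * x1 + x2 <= 32 and 4 * x1 + 3 * x2 <= 48
    print(8 * x1 + 2 * x2)  # 64 each time
```

All three feasible points attain the same objective value 64, as expected for alternate optima.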

To use the simplex method, set LargeScale to 'off' and Simplex to 'on' in the options structure:
>> options = optimset('LargeScale', 'off', 'Simplex', 'on');
Then call the function linprog with options as an input argument. Since linprog
minimizes its objective, the coefficients of the function to be maximized are entered
with reversed signs:
>> f = [-8; -2];
>> A = [4 1; 4 3];
>> b = [32; 48];
>> lb = [0; 0]; ub = [20; 20];
>> [x, Fval, exitflag, output] = linprog(f, A, b, [], [], lb, ub, [], options);
Simplex Method for Minimization Problems
In the last two examples we used the simplex method for finding the maximum
value of the objective function Z. In the following we apply the method to a
minimization problem.


Example 1.12 Determine the minimum value of the function
Z = −2x1 + x2
subject to the following constraints
2x1 + x2 ≤ 20
x1 − x2 ≤ 4
−x1 + x2 ≤ 5
x1 ≥ 0,  x2 ≥ 0

Solution. We can solve this LP problem using two different approaches.

First Approach: Put Z1 = −Z; then minimizing Z is equivalent to maximizing
Z1 . For this we first find Z1max ; then Zmin = −Z1max . Let
Z1 = −Z = 2x1 − x2
Then the problem reduces to maximizing Z1 = 2x1 − x2 under the same constraints.
Introducing slack variables, we have
2x1 + x2 + u1 = 20
x1 − x2 + u2 = 4
−x1 + x2 + u3 = 5
−2x1 + x2 + Z1 = 0
x1 ≥ 0, x2 ≥ 0, u1 ≥ 0, u2 ≥ 0, u3 ≥ 0
The simplex tableaux are as follows (the pivot element is shown in brackets):

basis |  x1    x2    u1    u2    u3    Z1 | constants
 u1   |   2     1     1     0     0    0  |     20
 u2   |  [1]   -1     0     1     0    0  |      4
 u3   |  -1     1     0     0     1    0  |      5
 Z1   |  -2     1     0     0     0    1  |      0

basis |  x1    x2    u1    u2    u3    Z1 | constants
 u1   |   0    [3]    1    -2     0    0  |     12
 x1   |   1    -1     0     1     0    0  |      4
 u3   |   0     0     0     1     1    0  |      9
 Z1   |   0    -1     0     2     0    1  |      8

basis |  x1    x2    u1    u2    u3    Z1 | constants
 x2   |   0     1    1/3  -2/3    0    0  |      4
 x1   |   1     0    1/3   1/3    0    0  |      8
 u3   |   0     0     0     1     1    0  |      9
 Z1   |   0     0    1/3   4/3    0    1  |     12

Thus the final tableau gives the following system of equations:

x2 + (1/3)u1 − (2/3)u2 = 4
x1 + (1/3)u1 + (1/3)u2 = 8
u2 + u3 = 9
(1/3)u1 + (4/3)u2 + Z1 = 12

with constraints
x1 ≥ 0, x2 ≥ 0, u1 ≥ 0, u2 ≥ 0, u3 ≥ 0
The last equation implies that Z1 has a maximum value of 12 when u1 = u2 = 0.
Thus Z1 = 2x1 − x2 has a maximum value of 12 at x1 = 8 and x2 = 4. Since
Z = −Z1 = −12, the minimum value of the objective function Z = −2x1 + x2
is −12.
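The sign relationship between the two problems can be spot-checked at the optimum just found:

```python
# Feasibility of the optimum (x1, x2) = (8, 4) for the original constraints:
x1, x2 = 8, 4
assert 2 * x1 + x2 <= 20 and x1 - x2 <= 4 and -x1 + x2 <= 5

Z1 = 2 * x1 - x2    # the maximized substitute objective
Z = -2 * x1 + x2    # the original objective
print(Z1, Z)        # 12 -12
assert Z == -Z1     # Zmin = -Z1max
```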
Second Approach: To decrease Z, we have to pick out the largest positive entry
in the bottom row to find the pivotal column. Thus the problem now becomes that of
determining the solution to the following system of equations:
2x1 + x2 + u1 = 20
x1 − x2 + u2 = 4
−x1 + x2 + u3 = 5
2x1 − x2 + Z = 0

The simplex tableaux are as follows (the pivot element is shown in brackets):

basis |  x1    x2    u1    u2    u3    Z  | constants
 u1   |   2     1     1     0     0    0  |     20
 u2   |  [1]   -1     0     1     0    0  |      4
 u3   |  -1     1     0     0     1    0  |      5
 Z    |   2    -1     0     0     0    1  |      0

basis |  x1    x2    u1    u2    u3    Z  | constants
 u1   |   0    [3]    1    -2     0    0  |     12
 x1   |   1    -1     0     1     0    0  |      4
 u3   |   0     0     0     1     1    0  |      9
 Z    |   0     1     0    -2     0    1  |     -8

basis |  x1    x2    u1    u2    u3    Z  | constants
 x2   |   0     1    1/3  -2/3    0    0  |      4
 x1   |   1     0    1/3   1/3    0    0  |      8
 u3   |   0     0     0     1     1    0  |      9
 Z    |   0     0   -1/3  -4/3    0    1  |    -12

Thus Z = −2x1 + x2 has a minimum value of −12 at x1 = 8 and x2 = 4.

1.5.1 Unrestricted-in-Sign Variables

In solving an LP problem with the simplex method, we used the ratio test to determine
the row in which the entering variable becomes a basic variable. Recall that the ratio test
depended on the fact that any feasible point requires all variables to be nonnegative.
Thus, if some variables are allowed to be unrestricted in sign, the ratio test and
therefore the simplex method are no longer valid. Here, we show how an LP problem
with unrestricted-in-sign variables can be transformed into an LP problem in which all
variables are required to be nonnegative.
For each unrestricted-in-sign variable xi , we begin by defining two new variables
xi' and xi''. Then we substitute xi' − xi'' for xi in each constraint and in the objective
function, and add the sign restrictions xi' ≥ 0 and xi'' ≥ 0. Now all the variables
are nonnegative, so we can use the simplex method. Note that each basic
feasible solution can have either xi' > 0 (and xi'' = 0), or xi'' > 0 (and xi' = 0), or
xi' = xi'' = 0 (so that xi = 0).
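The decomposition can be sketched as follows; split_unrestricted is our own name for the standard positive-part/negative-part split:

```python
def split_unrestricted(x):
    """Write a free variable x as x = xp - xn with xp, xn >= 0
    (positive-part / negative-part decomposition)."""
    return max(x, 0), max(-x, 0)

for v in (-5, 0, 7):
    xp, xn = split_unrestricted(v)
    assert xp >= 0 and xn >= 0 and xp - xn == v
    assert xp == 0 or xn == 0   # at most one of the pair is positive
print(split_unrestricted(-5))   # (0, 5)
```

A negative value of the original variable shows up entirely in the second component, as with x2 = −5 in the example below.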
Example 1.13 Consider the following LP problem:
Maximize Z = 30x1 − 4x2
subject to the following constraints
5x1 − x2 ≤ 30
x1 ≤ 5
x1 ≥ 0,  x2 unrestricted

Solution. Since x2 is unrestricted in sign, we replace x2 by x2' − x2'' in the
objective function and in the first constraint, obtaining
Maximize Z = 30x1 − 4x2' + 4x2''
subject to the following constraints
5x1 − x2' + x2'' ≤ 30
x1 ≤ 5
x1 ≥ 0,  x2' ≥ 0,  x2'' ≥ 0
Now we convert the problem into standard form by adding two slack variables u1 and
u2 to the first and second constraints, respectively:
Maximize Z = 30x1 − 4x2' + 4x2''
subject to the following constraints
5x1 − x2' + x2'' + u1 = 30
x1 + u2 = 5
x1 ≥ 0, x2' ≥ 0, x2'' ≥ 0, u1 ≥ 0, u2 ≥ 0
The simplex tableaux are as follows (the pivot element is shown in brackets):

basis |  x1    x2'   x2''   u1    u2    Z  | constants
 u1   |   5    -1     1      1     0    0  |     30
 u2   |  [1]    0     0      0     1    0  |      5
 Z    | -30     4    -4      0     0    1  |      0

basis |  x1    x2'   x2''   u1    u2    Z  | constants
 u1   |   0    -1    [1]     1    -5    0  |      5
 x1   |   1     0     0      0     1    0  |      5
 Z    |   0     4    -4      0    30    1  |    150

basis |  x1    x2'   x2''   u1    u2    Z  | constants
 x2'' |   0    -1     1      1    -5    0  |      5
 x1   |   1     0     0      0     1    0  |      5
 Z    |   0     0     0      4    10    1  |    170

We now have an optimal solution to the linear program given by x1 = 5, x2' = 0,
x2'' = 5, u1 = u2 = 0, and maximum Z = 170. Hence x2 = x2' − x2'' = −5.
Note that the variables x2' and x2'' will never both be basic variables in the same
tableau.

1.6 Finding a Feasible Basis

A major requirement of the simplex method is the availability of an initial basic
feasible solution in canonical form. Without it the initial simplex tableau cannot be
found. There are two basic approaches to finding an initial basic feasible solution:


1. By Trial and Error
Here a basic variable is chosen arbitrarily for each constraint, and the system is
reduced to canonical form with respect to those basic variables. If the resulting
canonical system gives a basic feasible solution (that is, the right-hand side constants are nonnegative), then the initial tableau can be set up to start the simplex
method. It is also possible that during the canonical reduction some of the right-hand side constants may become negative. In that case the basic solution obtained will
be infeasible, and the simplex method cannot be started. Of course, one can repeat
the process by trying a different set of basic variables for the canonical reduction
and hope for a basic feasible solution. It is clearly obvious that the trial and
error method is very inefficient and expensive. In addition, if a problem does not
possess a feasible solution, it will take a long time to realize this.
2. Use of Artificial Variables
This is a systematic way of getting a canonical form with a basic feasible solution
when none is available by inspection. First an LP problem is converted to standard
form such that all the variables are nonnegative, the constraints are equations, and
all the right-hand side constants are nonnegative. Then each constraint is examined
for the existence of a basic variable. If none is available, a new variable is added to
act as the basic variable in that constraint. In the end, all the constraints will have
a basic variable, and by definition we have a canonical system. Since the right-hand
side elements are nonnegative, an initial simplex tableau can be formed readily. Of
course the additional variables have no meaning in the original problem. They are
merely added so that we will have a ready canonical system to start the simplex
method. Hence these variables are termed artificial variables, as opposed to the
real decision variables in the problem. Eventually they will be forced to zero lest
they unbalance the equations. To illustrate the use of artificial variables, consider
the following LP problem:
Example 1.14 Consider the minimization problem
Minimize Z = −3x1 + x2 + x3
subject to the following constraints
x1 − 2x2 + x3 ≤ 11
−4x1 + x2 + 2x3 ≥ 3
−2x1 + x3 = 1
x1 ≥ 0,  x2 ≥ 0,  x3 ≥ 0


First the problem is converted to the standard form as follows:
Minimize Z = −3x1 + x2 + x3
subject to the following constraints
x1 − 2x2 + x3 + u1 = 11
−4x1 + x2 + 2x3 − v2 = 3
−2x1 + x3 = 1
x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, u1 ≥ 0, v2 ≥ 0
The slack variable u1 in the first constraint equation is a basic variable. Since there
are no basic variables in the other constraint equations, we add artificial variables
w3 and w4 to the second and third constraint equations, respectively. To retain the
standard form, w3 and w4 will be restricted to be nonnegative. Thus we now have an
artificial system given by:
x1 − 2x2 + x3 + u1 = 11
−4x1 + x2 + 2x3 − v2 + w3 = 3
−2x1 + x3 + w4 = 1
x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, u1 ≥ 0, v2 ≥ 0, w3 ≥ 0, w4 ≥ 0
The artificial system has a basic feasible solution in canonical form given by
x1 = x2 = x3 = 0, u1 = 11, v2 = 0, w3 = 3, w4 = 1
But this is not a feasible solution to the original problem due to the presence of the
artificial variables w3 and w4 at positive values.

On the other hand, it is easy to see that any basic feasible solution to the artificial
system in which the artificial variables (as w3 and w4 in the above example) are
zero is automatically a basic feasible solution to the original problem. Hence the
object is to reduce the artificial variables to zero as soon as possible. This can
be accomplished in two ways, and each one gives rise to a variant of the simplex
method, the Big M simplex method and the Two-Phase simplex method.

1.6.1 Big M Simplex Method

In this approach, the artificial variables are assigned a very large cost in the objective
function. The simplex method, while trying to improve the objective function, will
find the artificial variables uneconomical to maintain as basic variables with positive
values. Hence they will be quickly replaced in the basis by the real variables with
smaller costs. For hand calculations it is not necessary to assign a specific cost value
to the artificial variables. The general practice is to assign the letter M as the cost
in a minimization problem, and −M as the profit in a maximization problem, with
the assumption that M is a very large positive number.
The following steps describe the Big M simplex method.
1. Modify the constraints so that the right-hand side of each constraint is nonnegative. This requires that each constraint with a negative right-hand side
be multiplied through by −1.
2. Convert each inequality constraint to standard form. This means that if
constraint i is a ≤ constraint, we add a slack variable ui , and if constraint i
is a ≥ constraint, we subtract a surplus variable vi .
3. If (after step 1 has been completed) constraint i is a ≥ or = constraint, add
an artificial variable wi . Also add the sign restriction wi ≥ 0.
4. Let M denote a very large positive number. If the LP problem is a minimization
problem, add (for each artificial variable) M wi to the objective function. If
the LP problem is a maximization problem, add (for each artificial variable)
−M wi to the objective function.
5. Since each artificial variable will be in the starting basis, all artificial variables
must be eliminated from the last row before beginning the simplex method. This
ensures that we begin with canonical form. In choosing the entering variable,
remember that M is a very large positive number. Now solve the transformed
problem by the simplex method. If all artificial variables are equal to zero
in the optimal solution, we have found the optimal solution to the original
problem. If any artificial variable is positive in the optimal solution, the
original problem is infeasible.
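Step 5, eliminating the artificial variables from the last row, can be carried out numerically with a concrete large M. The sketch below redoes the substitution for Example 1.14 (columns ordered x1, x2, x3, u1, v2, w3, w4; the value M = 10^6 is an arbitrary illustrative choice):

```python
M = 1e6                                 # arbitrary "very large" stand-in
cost = [-3, 1, 1, 0, 0, M, M]           # minimize; artificials cost M each
rows = [[ 1, -2, 1, 1,  0, 0, 0, 11],   #  x1 - 2x2 +  x3 + u1           = 11
        [-4,  1, 2, 0, -1, 1, 0,  3],   # -4x1 +  x2 + 2x3 - v2 + w3     =  3
        [-2,  0, 1, 0,  0, 0, 1,  1]]   # -2x1       +  x3          + w4 =  1
z_row = list(cost) + [0.0]
# Eliminate w3 (basic in row 1) and w4 (basic in row 2) from the cost row:
for art_col, row in ((5, rows[1]), (6, rows[2])):
    f = z_row[art_col]
    z_row = [zi - f * ri for zi, ri in zip(z_row, row)]
print(z_row)
```

Negating the result reproduces the tableau Z-row of the worked example that follows, with entries (3 − 6M), (−1 + M), (−1 + 3M), −M, and constant 4M.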
Example 1.15 To illustrate the Big M simplex method, let us consider the standard
form of Example 1.14:
Minimize Z = −3x1 + x2 + x3
subject to the constraints
x1 − 2x2 + x3 + u1 = 11
−4x1 + x2 + 2x3 − v2 = 3
−2x1 + x3 = 1
x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, u1 ≥ 0, v2 ≥ 0


Solution. In order to drive the artificial variables to zero, a large cost is
assigned to w3 and w4 , so that the objective function becomes:
Minimize Z = −3x1 + x2 + x3 + M w3 + M w4
where M is a very large positive number. Thus the LP problem with its artificial
variables becomes:
Minimize Z = −3x1 + x2 + x3 + M w3 + M w4
subject to the constraints
x1 − 2x2 + x3 + u1 = 11
−4x1 + x2 + 2x3 − v2 + w3 = 3
−2x1 + x3 + w4 = 1
x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, u1 ≥ 0, v2 ≥ 0, w3 ≥ 0, w4 ≥ 0
Notice the reason behind the use of the artificial variables. We have three equations
and seven unknowns. Hence the starting basic solution must include 7 − 3 = 4
zero variables. If we put x1 , x2 , x3 , and v2 at zero level, we immediately obtain
the solution u1 = 11, w3 = 3, and w4 = 1, which is the required starting feasible
solution. Having constructed a starting feasible solution, we must condition the
problem so that when we put it in tabular form, the right-hand side column will
render the starting solution directly. This is done by using the constraint equations
to substitute out w3 and w4 in the objective function. Thus
w3 = 3 + 4x1 − x2 − 2x3 + v2
w4 = 1 + 2x1 − x3
The objective function thus becomes
Z = −3x1 + x2 + x3 + M (3 + 4x1 − x2 − 2x3 + v2 ) + M (1 + 2x1 − x3 )
or
Z = (−3 + 6M )x1 + (1 − M )x2 + (1 − 3M )x3 + M v2 + 4M
and the Z-equation now appears in the tableau as
Z − (−3 + 6M )x1 − (1 − M )x2 − (1 − 3M )x3 − M v2 = 4M
Now we can see that at the starting solution, with x1 = x2 = x3 = v2 = 0, the value
of Z is 4M , as it should be when u1 = 11, w3 = 3, and w4 = 1.
The sequence of tableaus leading to the optimum solution is shown in the following:


basis |  x1      x2      x3     u1    v2    w3     w4    Z | constants
 u1   |   1      -2       1      1     0     0      0    0 |    11
 w3   |  -4       1       2      0    -1     1      0    0 |     3
 w4   |  -2       0      [1]     0     0     0      1    0 |     1
 Z    | 3-6M   -1+M   -1+3M      0    -M     0      0    1 |    4M

basis |  x1      x2      x3     u1    v2    w3     w4    Z | constants
 u1   |   3      -2       0      1     0     0     -1    0 |    10
 w3   |   0      [1]      0      0    -1     1     -2    0 |     1
 x3   |  -2       0       1      0     0     0      1    0 |     1
 Z    |   1    -1+M       0      0    -M     0   1-3M    1 |   M+1

basis |  x1      x2      x3     u1    v2    w3     w4    Z | constants
 u1   |  [3]      0       0      1    -2     2     -5    0 |    12
 x2   |   0       1       0      0    -1     1     -2    0 |     1
 x3   |  -2       0       1      0     0     0      1    0 |     1
 Z    |   1       0       0      0    -1   1-M   -1-M    1 |     2
Now both the artificial variables w3 and w4 have been reduced to zero. Thus Tableau
3 represents a basic feasible solution to the original problem. Of course, this is not
an optimal solution, since x1 can reduce the objective function further by replacing
u1 in the basis.
basis |  x1    x2    x3    u1     v2      w3      w4    Z | constants
 x1   |   1     0     0    1/3   -2/3    2/3    -5/3    0 |     4
 x2   |   0     1     0     0     -1      1      -2     0 |     1
 x3   |   0     0     1    2/3   -4/3    4/3    -7/3    0 |     9
 Z    |   0     0     0   -1/3   -1/3   1/3-M  2/3-M    1 |    -2

Tableau 4 is optimal, and the unique optimal solution is given by x1 = 4, x2 =
1, x3 = 9, u1 = 0, v2 = 0, and minimum Z = −2.

Note that an artificial variable is added merely to act as a basic variable in a particular equation. Once it is replaced by a real (decision) variable, there is no need
to retain the artificial variable in the simplex tableaus. In other words, we could
have omitted the column corresponding to the artificial variable w4 in Tableaus 2,
3, and 4. Similarly, the column corresponding to w3 could have been dropped from
Tableaus 3 and 4.
When the Big M simplex method terminates with an optimal tableau, it is sometimes possible for one or more artificial variables to remain as basic variables at
positive values. This implies that the original problem is infeasible since no basic
feasible solution is possible to the original system if it includes even one artificial


variable at a positive value. In other words, the original problem without artificial
variables does not have a feasible solution. Infeasibility is due to the presence of
inconsistent constraints in the formulation of the problem. In economic terms, this
means that the resources of the system are not sufficient to meet the expected demands.
Also, note that for computer solutions, M has to be assigned a specific value.
Usually the largest value that can be represented in the computer is used.

1.6.2 Two-Phase Simplex Method

A drawback of the Big M simplex method is the possible computational error that
could result from assigning a very large value to the constant M, which can
create computational problems on a digital computer. The Two-Phase method is
designed to alleviate this difficulty. Although the artificial variables are added in
the same manner employed in the Big M simplex method, the use of the constant M
is eliminated by solving the problem in two phases (hence the name Two-Phase
method). These two phases are outlined as follows:
Phase 1. This phase consists of finding an initial basic feasible solution to the
original problem. In other words, the removal of the artificial variables is taken
up first. For this an artificial objective function is created which is the sum of all
the artificial variables. The artificial objective function is then minimized using the
simplex method. If the minimum value of the artificial problem is zero, then all the
artificial variables have been reduced to zero, and we have a basic feasible solution
to the original problem. Go to Phase 2. Otherwise, if the minimum is positive, the
problem has no feasible solution. Stop.
Phase 2. The basic feasible solution found is optimized with respect to the original
objective function. In other words, the final tableau of Phase 1 becomes the initial
tableau for Phase 2 after changing the objective function. The simplex method is
once again applied to determine the optimal solution.
The following steps describe the Two-Phase simplex method. Note that steps 1–3
of the Two-Phase simplex method are similar to steps 1–3 of the Big M simplex
method.
1. Modify the constraints so that the right-hand side of each constraint is nonnegative. This requires that each constraint with a negative right-hand side
be multiplied through by −1.
2. Convert each inequality constraint to standard form. This means that if
constraint i is a ≤ constraint, we add a slack variable ui , and if constraint i
is a ≥ constraint, we subtract a surplus variable vi .


3. If (after step 1 has been completed) constraint i is a or = constraint, add


an artificial variable wi . Also add the sign restriction wi 0.
4. Let for now, ignore the original LPs objective function. Instead solve a LP
problem whose objective function is M inimize W =(sum of all the artificial
variables). This is called the Phase 1 LP problem. The act of solving the
Phase 1 LP problem will force the artificial variables to be zero.
Note that:
If the optimal value of W is equal to zero and no artificial variables are in the optimal Phase 1 basis, then we drop all columns in the optimal Phase 1 tableau that correspond to the artificial variables. We now combine the original objective function with the constraints from the optimal Phase 1 tableau. This yields the Phase 2 LP problem, and the optimal solution to the Phase 2 LP problem is the optimal solution to the original LP problem.
If the optimal value of W is greater than zero, then the original LP problem has no feasible solution.
If the optimal value of W is equal to zero and at least one artificial variable is in the optimal Phase 1 basis, then we can still find the optimal solution to the original LP problem, provided that at the end of Phase 1 we drop from the optimal Phase 1 tableau all nonbasic artificial variables and any variable from the original problem that has a negative coefficient in the last row of the optimal Phase 1 tableau.
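As a quick sanity check of these steps, the sketch below drives Phase 1 and Phase 2 with scipy.optimize.linprog as the inner solver. The function name two_phase and the tiny test LP are illustrative assumptions, not part of the text:

```python
import numpy as np
from scipy.optimize import linprog

def two_phase(c, A, b):
    """Sketch of the two-phase idea for A x = b, x >= 0 with b >= 0
    (standard form). For simplicity an artificial variable is appended
    to every row; a hand computation would reuse existing slacks."""
    m, n = A.shape
    # Phase 1: minimize W = sum of artificials, subject to [A | I][x; w] = b
    phase1 = linprog(np.r_[np.zeros(n), np.ones(m)],
                     A_eq=np.hstack([A, np.eye(m)]), b_eq=b)
    if phase1.fun > 1e-9:      # W > 0: the original LP is infeasible
        return None
    # Phase 2: a feasible basis exists, so optimize the real objective
    phase2 = linprog(c, A_eq=A, b_eq=b)
    return phase2.x, phase2.fun

# Illustrative LP: minimize 2x1 + 3x2 with x1 + x2 - v1 = 4 (a >= row
# already converted to standard form); the optimum is x1 = 4, Z = 8.
sol = two_phase(np.array([2.0, 3.0, 0.0]),
                np.array([[1.0, 1.0, -1.0]]),
                np.array([4.0]))
print(sol[1])   # optimal value Z = 8
```

A production code would of course detect unboundedness and reuse slack columns as the starting basis, but the two solves above mirror the two stages described in the steps.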
Example 1.16 To illustrate the Two-Phase simplex method, let us consider again the standard form of Example 1.14:

Minimize Z = -3x1 + x2 + x3

subject to the constraints

    x1 - 2x2 + x3 + u1 = 11
    -4x1 + x2 + 2x3 - v2 = 3
    -2x1 + x3 = 1
    x1 >= 0, x2 >= 0, x3 >= 0, u1 >= 0, v2 >= 0
Solution.
Phase 1 Problem:
Since we need artificial variables w3 and w4 in the second and third equations, the Phase 1 problem reads as

Minimize W = w3 + w4

subject to the constraints

    x1 - 2x2 + x3 + u1 = 11
    -4x1 + x2 + 2x3 - v2 + w3 = 3
    -2x1 + x3 + w4 = 1
    x1 >= 0, x2 >= 0, x3 >= 0, u1 >= 0, v2 >= 0, w3 >= 0, w4 >= 0

Because w3 and w4 are in the starting solution, they must be substituted out in the objective function as follows:

    W = w3 + w4
      = (3 + 4x1 - x2 - 2x3 + v2) + (1 + 2x1 - x3)
      = 4 + 6x1 - x2 - 3x3 + v2

and the W-equation now appears in the tableau as

    W - 6x1 + x2 + 3x3 - v2 = 4
The initial basic feasible solution for the Phase 1 problem is given in Tableau 1 below; the pivot elements are marked with *.

Tableau 1
  basis |  x1   x2   x3   u1   v2   w3   w4 |  W | constants
  u1    |   1   -2    1    1    0    0    0 |  0 |    11
  w3    |  -4    1    2    0   -1    1    0 |  0 |     3
  w4    |  -2    0    1*   0    0    0    1 |  0 |     1
  W     |  -6    1    3    0   -1    0    0 |  1 |     4

Tableau 2
  basis |  x1   x2   x3   u1   v2   w3   w4 |  W | constants
  u1    |   3   -2    0    1    0    0   -1 |  0 |    10
  w3    |   0    1*   0    0   -1    1   -2 |  0 |     1
  x3    |  -2    0    1    0    0    0    1 |  0 |     1
  W     |   0    1    0    0   -1    0   -3 |  1 |     1

Tableau 3
  basis |  x1   x2   x3   u1   v2   w3   w4 |  W | constants
  u1    |   3    0    0    1   -2    2   -5 |  0 |    12
  x2    |   0    1    0    0   -1    1   -2 |  0 |     1
  x3    |  -2    0    1    0    0    0    1 |  0 |     1
  W     |   0    0    0    0    0   -1   -1 |  1 |     0
We now have an optimal solution to the Phase 1 linear program, given by x1 = 0, x2 = 1, x3 = 1, u1 = 12, v2 = 0, w3 = 0, w4 = 0, and minimum W = 0. Since the artificial variables w3 = 0 and w4 = 0, Tableau 3 represents a basic feasible solution to the original problem.


Phase 2 Problem: The artificial variables have now served their purpose and must be dispensed with in all subsequent computations. This means that the equations of the optimum tableau in Phase 1 can be written as

    3x1 + u1 - 2v2 = 12
    x2 - v2 = 1
    -2x1 + x3 = 1

These equations are exactly equivalent to those in the standard form of the original problem (before artificial variables are added). Thus the original problem can be written as

Minimize Z = -3x1 + x2 + x3

subject to the constraints

    3x1 + u1 - 2v2 = 12
    x2 - v2 = 1
    -2x1 + x3 = 1
    x1 >= 0, x2 >= 0, x3 >= 0, u1 >= 0, v2 >= 0

As we can see, the principal contribution of the Phase 1 computations is to provide a ready starting solution to the original problem. Since the problem has three equations and five variables, by setting 5 - 3 = 2 variables to zero, namely x1 = v2 = 0, we immediately obtain the starting basic feasible solution u1 = 12, x2 = 1, and x3 = 1. To solve the problem, we need to substitute out the basic variables x2 and x3 in the objective function. This is accomplished by using the constraint equations as follows:

    Z = -3x1 + x2 + x3
      = -3x1 + (1 + v2) + (1 + 2x1)
      = 2 - x1 + v2
Thus the starting tableau for Phase 2 becomes:

  basis |  x1   x2   x3    u1    v2 |  Z | constants
  u1    |   3*   0    0     1    -2 |  0 |    12
  x2    |   0    1    0     0    -1 |  0 |     1
  x3    |  -2    0    1     0     0 |  0 |     1
  Z     |   1    0    0     0    -1 |  1 |     2

Pivoting on the marked element brings x1 into the basis:

  basis |  x1   x2   x3    u1    v2 |  Z | constants
  x1    |   1    0    0   1/3  -2/3 |  0 |     4
  x2    |   0    1    0     0    -1 |  0 |     1
  x3    |   0    0    1   2/3  -4/3 |  0 |     9
  Z     |   0    0    0  -1/3  -1/3 |  1 |    -2


An optimal solution has been reached, and it is given by x1 = 4, x2 = 1, x3 = 9, u1 = 0, v2 = 0, and minimum Z = -2.
Comparing the Big M simplex method and the Two-Phase simplex method, we observe the following:
The basic approach of both methods is the same. Both add the artificial variables to get the initial canonical system and then drive them to zero as soon as possible.
The sequence of tableaus and the basis changes are identical.
The number of iterations is the same.
The Big M simplex method solves the linear problem in one pass, while the Two-Phase simplex method solves it in two stages, as two linear programs.

1.7 Duality

From both the theoretical and practical points of view, the theory of duality is one of the most important and interesting concepts in linear programming. Each LP problem has a related LP problem called the dual problem; the original LP problem is called the primal problem. For the primal problem defined by (1.1)-(1.3) above, the corresponding dual problem is to find the values of the M variables y1, y2, . . . , yM to solve the following:

Minimize V = b1 y1 + b2 y2 + ... + bM yM        (1.17)

subject to the constraints

    a11 y1 + a21 y2 + ... + aM1 yM >= c1
    a12 y1 + a22 y2 + ... + aM2 yM >= c2
    ...
    a1N y1 + a2N y2 + ... + aMN yM >= cN        (1.18)

and

    y1 >= 0, y2 >= 0, . . . , yM >= 0           (1.19)


In matrix notation, the primal and the dual problems are formulated as

  Primal:                          Dual:
  Maximize Z = c^T x               Minimize V = b^T y
  subject to the constraints       subject to the constraints
      A x <= b                         A^T y >= c
      x >= 0                           y >= 0

where A is the M x N matrix of the coefficients aij,

    b = (b1, b2, . . . , bM)^T,  c = (c1, c2, . . . , cN)^T,
    x = (x1, x2, . . . , xN)^T,  y = (y1, y2, . . . , yM)^T,

and c^T denotes the transpose of the vector c.


The concept of a dual can be introduce with the help of the following LP problem.
Example 1.17 Write the dual of the following linear problem:
Primal Problem:
maximize

Z = x1 + 2x2 3x3 + 4x4

subject to the following constraints


x1 + 2x2 + 2x3 3x4 25
2x1 + x2 3x3 + 2x4 15
x1 0,

x2 0,

x3 0,

x4 0

The above linear problem has two constraints and four variables. The dual of this
primal problem is written as:
Dual Problem:
minimize

V = 25y1 + 15y2

47

Chapter One Linear Programming

subject to the following constraints


y1 + 2y2
2y1 + 2y2
2y1 3y2
3y1 + 2y2
y1 0,

1
2
3
4
y2 0

where y1 and y2 are called the dual variables.

1.7.1 Comparison of Primal and Dual Problems

Comparing the primal and the dual problems, we observe the following relationships:
1. The objective function coefficients of the primal problem have become the right-hand side constants of the dual. Similarly, the right-hand side constants of the primal have become the cost coefficients of the dual.
2. The inequalities have been reversed in the constraints.
3. The objective function is changed from maximization in the primal to minimization in the dual.
4. Each column in the primal corresponds to a constraint (row) in the dual. Thus the number of dual constraints is equal to the number of primal variables.
5. Each constraint (row) in the primal corresponds to a column in the dual. Hence there is one dual variable for every primal constraint.
6. The dual of the dual is the primal problem.
In both the primal and the dual problems, the variables are nonnegative and the constraints are inequalities. Such problems are called symmetric dual linear programs.
Definition 1.7 (Symmetric Form)
A linear program is said to be in symmetric form if all the variables are restricted to be nonnegative and all the constraints are inequalities (in a maximization problem the inequalities must be in less-than-or-equal-to form, while in a minimization problem they must be in greater-than-or-equal-to form).

The general rules for writing the dual of a linear program in symmetric form are summarized below:


1. Define one (nonnegative) dual variable for each primal constraint.


2. Make the cost vector of the primal the right-hand side constants of the dual.
3. Make the right-hand side vector of the primal the cost vector of the dual.
4. The transpose of the coefficient matrix of the primal becomes the constraint
matrix of the dual.
5. Reverse the direction of the constraint inequalities.
6. Reverse the optimization direction, that is, changing minimizing to maximizing
and vice versa.
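The six rules above can be sketched in code for a symmetric-form maximization primal; the helper name dual_of and the data layout are our own, not the text's:

```python
import numpy as np

# Rules 1-6 above, for a symmetric-form maximization primal
#   maximize c^T x  subject to  A x <= b, x >= 0.
# The returned triple describes the dual
#   minimize b^T y  subject to  A^T y >= c, y >= 0
# (rule 6, reversing the optimization direction, is carried by the comment).
def dual_of(c, A, b):
    A = np.asarray(A)
    # dual cost vector, dual constraint matrix, dual right-hand side
    return np.asarray(b), A.T, np.asarray(c)

# The primal of Example 1.17 (signs as reconstructed here):
cost, matrix, rhs = dual_of([1, 2, -3, 4],
                            [[1, 2, 2, -3], [2, 1, -3, 2]],
                            [25, 15])
print(cost, matrix.shape)   # [25 15] (4, 2): four dual constraints, two dual variables
```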
Example 1.18 Write the following linear problem in symmetric form and then find its dual:

minimize Z = 2x1 + 4x2 + 3x3 + 5x4 + 3x5 + 4x6

subject to the following constraints

    x1 + x2 + x3 <= 300
    x4 + x5 + x6 <= 600
    x1 + x4 >= 200
    x2 + x5 >= 300
    x3 + x6 >= 400
    x1 >= 0, x2 >= 0, x3 >= 0, x4 >= 0, x5 >= 0, x6 >= 0

Solution. For the above linear program (minimization) to be in symmetric form, all the constraints must be in greater-than-or-equal-to form. Hence we multiply the first two constraints by -1; the primal problem then reads

minimize Z = 2x1 + 4x2 + 3x3 + 5x4 + 3x5 + 4x6

subject to the following constraints

    -x1 - x2 - x3 >= -300
    -x4 - x5 - x6 >= -600
    x1 + x4 >= 200
    x2 + x5 >= 300
    x3 + x6 >= 400
    x1 >= 0, x2 >= 0, x3 >= 0, x4 >= 0, x5 >= 0, x6 >= 0

The dual of the above primal problem becomes:

maximize V = -300y1 - 600y2 + 200y3 + 300y4 + 400y5

subject to the following constraints

    -y1 + y3 <= 2
    -y1 + y4 <= 4
    -y1 + y5 <= 3
    -y2 + y3 <= 5
    -y2 + y4 <= 3
    -y2 + y5 <= 4
    y1 >= 0, y2 >= 0, y3 >= 0, y4 >= 0, y5 >= 0

1.7.2 Primal-Dual Problems in Standard Form

In most LP problems, the dual is defined for various forms of the primal depending
on the types of the constraints, the signs of the variables, and the sense of optimization. Now we introduce a definition of the dual that automatically accounts for
all forms of the primal. It is based on the fact that any LP problem must be put
in the standard form before the model is solved by the simplex method. Since all
the primal-dual computations are obtained directly from the simplex tableau, it is
logical to define the dual in a way that is consistent with the standard form of the
primal.
Example 1.19 Write the standard form of the primal-dual problem of the following linear problem:

maximize Z = 5x1 + 12x2 + 4x3

subject to the following constraints

    x1 + 2x2 + x3 <= 10
    2x1 - x2 + x3 = 8
    x1 >= 0, x2 >= 0, x3 >= 0

Solution. The given primal can be put in the standard primal as

maximize Z = 5x1 + 12x2 + 4x3

subject to the following constraints

    x1 + 2x2 + x3 + u1 = 10
    2x1 - x2 + x3 = 8
    x1 >= 0, x2 >= 0, x3 >= 0, u1 >= 0

Notice that u1 is a slack in the first constraint. Now its dual form can be written as

minimize V = 10y1 + 8y2

subject to the following constraints

    y1 + 2y2 >= 5
    2y1 - y2 >= 12
    y1 + y2 >= 4
    y1 >= 0, y2 unrestricted
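With a solver at hand one can check numerically that this primal-dual pair attains equal optimal values (the signs are as reconstructed here, so treat the data as assumptions). Note how the unrestricted y2, which prices the equality constraint, is passed with open bounds:

```python
from scipy.optimize import linprog

# Primal of Example 1.19 (signs as reconstructed here):
#   maximize 5x1 + 12x2 + 4x3
#   s.t. x1 + 2x2 + x3 <= 10, 2x1 - x2 + x3 = 8, x >= 0
primal = linprog([-5, -12, -4], A_ub=[[1, 2, 1]], b_ub=[10],
                 A_eq=[[2, -1, 1]], b_eq=[8])

# Dual: minimize 10y1 + 8y2 with y1 >= 0 and y2 unrestricted;
# the >= rows are handed to linprog as negated <= rows.
dual = linprog([10, 8],
               A_ub=[[-1, -2], [-2, 1], [-1, -1]], b_ub=[-5, -12, -4],
               bounds=[(0, None), (None, None)])

print(-primal.fun, dual.fun)   # the two optimal values coincide
```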

Example 1.20 Write the standard form of the primal-dual problem of the following linear problem:

minimize Z = 5x1 - 2x2

subject to the following constraints

    x1 + x2 >= -3
    2x1 + 3x2 <= 5
    x1 >= 0, x2 >= 0

Solution. Multiplying the first constraint by -1 and adding the slack variables u1 and u2, the given primal can be put in the standard primal as

minimize Z = 5x1 - 2x2

subject to the following constraints

    -x1 - x2 + u1 = 3
    2x1 + 3x2 + u2 = 5
    x1 >= 0, x2 >= 0, u1 >= 0, u2 >= 0

Notice that u1 and u2 are slacks in the first and second constraints. Its dual form is

maximize V = 3y1 + 5y2

subject to the following constraints

    -y1 + 2y2 <= 5
    -y1 + 3y2 <= -2
    y1 <= 0
    y2 <= 0
    y1, y2 unrestricted


Theorem 1.3 (Duality Theorem)
If the primal problem has an optimal solution, then the dual problem also has an optimal solution, and the optimal values of their objective functions are equal, that is,

    Maximize Z = Minimize V

It can be shown that when the primal problem is solved by the simplex method, the final tableau contains the optimal solution to the dual problem in the objective row under the columns of the slack variables. That is, the first dual variable is found in the objective row under the first slack variable, the second is found under the second slack variable, and so on.
Example 1.21 Find the dual of the following linear problem

maximize Z = 12x1 + 9x2 + 15x3

subject to the following constraints

    2x1 + x2 + x3 <= 30
    x1 + x2 + 3x3 <= 40
    x1 >= 0, x2 >= 0, x3 >= 0

and then find its optimal solution.

Solution. The dual of this problem is

minimize V = 30y1 + 40y2

subject to the constraints

    2y1 + y2 >= 12
    y1 + y2 >= 9
    y1 + 3y2 >= 15
    y1 >= 0, y2 >= 0

Introducing the slack variables u1 and u2 in order to convert the given linear problem to the standard form, we obtain

maximize Z = 12x1 + 9x2 + 15x3

subject to the following constraints

    2x1 + x2 + x3 + u1 = 30
    x1 + x2 + 3x3 + u2 = 40
    x1 >= 0, x2 >= 0, x3 >= 0, u1 >= 0, u2 >= 0
We now apply the simplex method, obtaining the following tableaux; the pivot elements are marked with *.

  basis |  x1    x2    x3    u1    u2 |  Z | constants
  u1    |   2     1     1     1     0 |  0 |    30
  u2    |   1     1     3*    0     1 |  0 |    40
  Z     | -12    -9   -15     0     0 |  1 |     0

  basis |  x1    x2    x3    u1    u2 |  Z | constants
  u1    |  5/3*  2/3    0     1  -1/3 |  0 |  50/3
  x3    |  1/3   1/3    1     0   1/3 |  0 |  40/3
  Z     |  -7    -4     0     0     5 |  1 |   200

  basis |  x1    x2    x3    u1    u2 |  Z | constants
  x1    |   1    2/5*   0   3/5  -1/5 |  0 |    10
  x3    |   0    1/5    1  -1/5   2/5 |  0 |    10
  Z     |   0   -6/5    0  21/5  18/5 |  1 |   270

  basis |  x1    x2    x3    u1    u2 |  Z | constants
  x2    |  5/2    1     0   3/2  -1/2 |  0 |    25
  x3    | -1/2    0     1  -1/2   1/2 |  0 |     5
  Z     |   3     0     0     6     3 |  1 |   300

Thus the optimal solution to the given primal problem is

    x1 = 0, x2 = 25, x3 = 5

and the optimal value of the objective function Z is 300.

The optimal solution to the dual problem is found in the objective row under the slack-variable columns u1 and u2:

    y1 = 6 and y2 = 3

Thus the optimal value of the dual objective function is

    V = 30(6) + 40(3) = 300

which is what we expect from the Duality Theorem 1.3.
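As a quick check (a sketch using scipy.optimize.linprog rather than hand pivoting), solving the primal and the dual of Example 1.21 independently should return the same optimal value, 300:

```python
from scipy.optimize import linprog

# Primal of Example 1.21: maximize 12x1 + 9x2 + 15x3 (minimize its negative)
primal = linprog([-12, -9, -15],
                 A_ub=[[2, 1, 1], [1, 1, 3]], b_ub=[30, 40])

# Dual: minimize 30y1 + 40y2 with A^T y >= c, written as -A^T y <= -c
dual = linprog([30, 40],
               A_ub=[[-2, -1], [-1, -1], [-1, -3]], b_ub=[-12, -9, -15])

print(-primal.fun, dual.fun)   # both optimal values equal 300
print(dual.x)                  # the dual solution, y1 = 6 and y2 = 3
```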


In the following we give another important duality theorem, which gives the relationship between the primal and the dual solutions.
Theorem 1.4 (Weak Duality Theorem)
Consider the symmetric primal-dual linear problems:

  Primal:                          Dual:
  Maximize Z = c^T x               Minimize V = b^T y
  subject to the constraints       subject to the constraints
      A x <= b                         A^T y >= c
      x >= 0                           y >= 0

The value of the objective function of the minimization problem (dual) for any feasible solution is always greater than or equal to that of the maximization problem (primal).

Example 1.22 Consider the following LP problem:
Primal:

Maximize Z = x1 + 2x2 + 3x3 + 4x4

subject to the following constraints

    x1 + 2x2 + 2x3 + 3x4 <= 20
    2x1 + x2 + 3x3 + 2x4 <= 20
    x1 >= 0, x2 >= 0, x3 >= 0, x4 >= 0

Its dual form is:
Dual:

Minimize V = 20y1 + 20y2

subject to the following constraints

    y1 + 2y2 >= 1
    2y1 + y2 >= 2
    2y1 + 3y2 >= 3
    3y1 + 2y2 >= 4
    y1 >= 0, y2 >= 0

The point x1 = x2 = x3 = x4 = 1 is feasible for the primal, and y1 = y2 = 1 is feasible for the dual. The value of the primal objective is

    Z = c^T x = 10

and the value of the dual objective is

    V = b^T y = 40

Note that c^T x < b^T y, which satisfies the Weak Duality Theorem 1.4.
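The check above is easy to script; the data below restates Example 1.22, and the assertions mirror the theorem (numpy only):

```python
import numpy as np

# Example 1.22: a feasible (not optimal) point for each problem.
c = np.array([1, 2, 3, 4])
b = np.array([20, 20])
A = np.array([[1, 2, 2, 3],
              [2, 1, 3, 2]])
x = np.ones(4)   # primal-feasible point
y = np.ones(2)   # dual-feasible point

assert np.all(A @ x <= b)      # A x <= b holds
assert np.all(A.T @ y >= c)    # A^T y >= c holds
print(c @ x, b @ y)            # 10.0 40.0: c^T x <= b^T y, as Theorem 1.4 asserts
```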

1.8 Sensitivity Analysis in Linear Programming

Sensitivity analysis refers to the study of the changes in the optimal solution and in the optimal value of the objective function Z due to changes in the input data coefficients. The need for such an analysis arises in various circumstances. Often management is not entirely sure about the values of the constants and wants to know the effects of changes. There may be different kinds of modifications:
1. Changes in the right-hand side constants, bi
2. Changes in the objective function coefficients ci
3. Changes in the elements aij of the coefficient matrix A
4. Introducing additional constraints or deleting some of the existing constraints
5. Adding or deleting decision variables
We will discuss here only changes in the right-hand side constants bi, which are the most common subject of sensitivity analysis.
Example 1.23 A small towel company makes two types of towels, standard and
deluxe. Both types have to go through two processing departments, cutting and
sewing. Each standard towel needs 1 minute in the cutting department and 3 minutes in the sewing department. The total available time in cutting is 160 minutes
for a production run. Each deluxe towel needs 2 minutes in the cutting department
and 2 minutes in the sewing department. The total available time in sewing is 240
minutes for a production run. The profit on each standard towel is $1.00, whereas

55

Chapter One Linear Programming

the profit on each deluxe towel is $1.50. Determine the number of towels of each
type to produce to maximize profit.
Solution. Let x1 and x2 be the number of standard towels and deluxe towels, respectively. Then the LP problem is

Maximize Z = x1 + 1.5x2

subject to the constraints

    x1 + 2x2 <= 160     (cutting dept.)
    3x1 + 2x2 <= 240    (sewing dept.)
    x1 >= 0, x2 >= 0

After converting the problem into the standard form and then applying the simplex method, one can easily get the final tableau as

  basis |  x1   x2    u1    u2 |  Z | constants
  x2    |   0    1   3/4  -1/4 |  0 |    60
  x1    |   1    0  -1/2   1/2 |  0 |    40
  Z     |   0    0   5/8   1/8 |  1 |   130

The optimal solution is

    x1 = 40, x2 = 60, u1 = u2 = 0, Zmax = 130

Now let us ask a typical sensitivity analysis question:

Suppose we increase the maximum number of minutes at the cutting department by 1 minute; that is, if the maximum at the cutting department is 161 minutes instead of 160, what would be the optimal solution?
Then the revised LP problem will be

Maximize Z = x1 + 1.5x2

subject to the constraints

    x1 + 2x2 <= 161     (cutting dept.)
    3x1 + 2x2 <= 240    (sewing dept.)
    x1 >= 0, x2 >= 0

Of course, we can again solve this revised problem using the simplex method. However, since the modification is not drastic, we might wonder whether there is an easy way to utilize the final tableau for the original problem instead of going through all the iteration steps for the revised problem. There is a way, and it is the key idea of sensitivity analysis.
1. The slack variable for the cutting department is u1, so use the u1-column.


2. Modify the rightmost column (constants) using the u1-column as shown below, giving the final tableau for the revised problem.

  basis |  x1   x2    u1    u2 |  Z | constants
  x2    |   0    1   3/4  -1/4 |  0 |  60 + 1(3/4)
  x1    |   1    0  -1/2   1/2 |  0 |  40 + 1(-1/2)
  Z     |   0    0   5/8   1/8 |  1 |  130 + 1(5/8)

(where in the last column, the first term is the original entry, the middle factor is the one-unit (minute) increase, and the last factor is the u1-column entry), that is

  basis |  x1   x2    u1    u2 |  Z | constants
  x2    |   0    1   3/4  -1/4 |  0 |  60 3/4
  x1    |   1    0  -1/2   1/2 |  0 |  39 1/2
  Z     |   0    0   5/8   1/8 |  1 |  130 5/8

then the optimal solution for the revised problem is

    x1 = 39 1/2, x2 = 60 3/4, u1 = u2 = 0, Zmax = 130 5/8

Let us try one more revised problem:

Assume that the maximum number of minutes at the sewing department is reduced by 8, making the maximum 240 - 8 = 232 minutes. The final tableau for this revised problem will be given as follows:

  basis |  x1   x2    u1    u2 |  Z | constants
  x2    |   0    1   3/4  -1/4 |  0 |  60 + (-8)(-1/4) = 62
  x1    |   1    0  -1/2   1/2 |  0 |  40 + (-8)(1/2) = 36
  Z     |   0    0   5/8   1/8 |  1 |  130 + (-8)(1/8) = 129

then the optimal solution for the revised problem is

    x1 = 36, x2 = 62, u1 = u2 = 0, Zmax = 129

The bottom-row entry 5/8 represents the net profit increase for a one-unit (minute) increase of the available time at the cutting department. It is called the shadow price at the cutting department. Similarly, the other bottom-row entry, 1/8, is called the shadow price at the sewing department.
In general, the shadow price for a constraint is defined as the change in the optimal value of the objective function when the right-hand side of the constraint is increased by one unit.
A negative entry in the bottom row represents the net profit increase when one unit of the variable in that column is introduced. For example, if a negative entry in the x1 column were -1/4, then introducing one unit of x1 would result in a $(1/4) = 25 cents net profit gain. Therefore, the bottom-row entry 5/8 in the preceding tableau represents a net profit loss of $(5/8) when one unit of u1 is introduced, keeping the constraint constant, 160, the same.
Now, suppose the constraint at the cutting department is changed from 160 to 161. If this increment of 1 minute is credited to u1 as a slack, or unused time at the cutting department, the total profit will remain the same, because the unused time will not contribute to a profit increase. However, if this u1 = 1 is given up, or reduced (which is the opposite of introduced), it will yield a net profit gain of $(5/8).
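The shadow-price claim is easy to confirm by re-solving the towel LP with one extra cutting minute (a sketch with scipy.optimize.linprog; the helper name is ours):

```python
from scipy.optimize import linprog

# Towel LP of Example 1.23: maximize Z = x1 + 1.5*x2
# subject to x1 + 2x2 <= cutting, 3x1 + 2x2 <= 240, x >= 0.
def best_profit(cutting_minutes):
    res = linprog([-1, -1.5],                 # linprog minimizes, so negate Z
                  A_ub=[[1, 2], [3, 2]],
                  b_ub=[cutting_minutes, 240])
    return -res.fun

base = best_profit(160)
bumped = best_profit(161)
print(base, bumped - base)   # about 130, and about 5/8 = 0.625: the shadow price
```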

1.9 Summary

In this chapter we gave a brief introduction to the idea of linear programming.


Problems are described by systems of linear inequalities. One can see that small
systems can be solved in a graphical manner but that large systems are solved using
row operations on matrices by means of the simplex method. For finding the basic
feasible solution to artificial systems, we discussed the Big M simplex method and
the Two-Phase simplex method.
In this chapter we also discussed the concept of duality in linear programming. Since the optimal primal solution can be obtained directly from the optimal dual tableau (and vice versa), it is advantageous computationally to solve the dual when it has fewer constraints than the primal. Duality provides an economic interpretation that sheds light on the unit worth, or shadow price, of the different resources. It also explains the condition of optimality by introducing the new economic definition of imputed costs for each activity. We closed the chapter with a presentation of the important technique of sensitivity analysis, which gives linear programming the dynamic characteristic of modifying the optimum solution to reflect changes in the model.


1.10 Problems

1. The Oakwood Furniture Company has 12.5 units of wood on hand from which to manufacture tables and chairs. Making a table uses two units of wood and making a chair uses one unit. Oakwood's distributor will pay $20 for each table and $15 for each chair, but he will not accept more than eight chairs and he wants at least twice as many chairs as tables. How many tables and chairs should the company produce so as to maximize its revenue? Formulate this as a linear programming problem.
2. The Mighty Silver Ball Company manufactures three kinds of pinball machines, each requiring a different manufacturing technique. The Super Deluxe
Machine requires 17 hours of labor, 8 hours of testing, and yields a profit of
$300. The Silver Ball Special requires 10 hours of labor, 4 hours of testing,
and yields a profit of $200. The Bumper King requires 2 hours of labor, 2
hours of testing, and yields a profit of $100. There are 1000 hours of labor
and 500 hours of testing available.
In addition, a marketing forecast has shown that the demand for the Super Deluxe is no more than 50 machines, demand for the Silver Ball Special is no more than 80, and demand for the Bumper King is no more than 150. The manufacturer wants to determine the optimal production schedule that will maximize his total profit. Formulate this as a linear programming problem.
3. Consider a diet problem in which a college student is interested in finding a
minimum cost diet that provides at least 21 units of Vitamin A and 12 units
of Vitamin B from five foods of the following properties:
  Food                    1    2    3    4    5
  Vitamin A content       1    0    1    1    2
  Vitamin B content       0    1    2    1    1
  Cost per unit (cents)  20   20   31   11   12
Formulate this as a linear programming problem.


4. Consider a problem of scheduling the weekly production of a certain item for
the next 4 weeks. The production cost of the item is $10 for the first two weeks,
and $15 for the last two weeks. The weekly demands are 300, 700, 900, and
800 units, which must be met. The plant can produce a maximum of 700 units
each week. In addition the company can employ overtime during the second
and third weeks. This increases the weekly production by an additional 200
units, but the cost of production increases by $5 per unit. Excess production
can be stored at a cost of $3 an item per week. How should the production


be scheduled so as to minimize the total cost? Formulate this as a linear


programming problem.
5. An oil refinery can blend three grades of crude oil to produce regular and super
gasoline. Two possible blending processes are available. For each production
run the older process uses 5 units of crude A, 7 units of crude B, and 2 units
of crude C to produce 9 units of regular and 7 units of super gasoline. The
newer process uses 3 units of crude A, 9 units of crude B, and 4 units of
crude C to produce 5 units of regular and 9 units of super gasoline for each
production run. Because of prior contract commitments, the refinery must
produce at least 500 units of regular gasoline and at least 300 units of super
for the next month. It has available 1500 units of crude A, 1900 units of crude
B, and 1000 units of crude C. For each unit of regular gasoline produced the
refinery receives $6, and for each unit of super it receives $9. Determine how
to use resources of crude oil and two blending processes to meet the contract
commitments and, at the same time, to maximize revenue. Formulate this as
a linear programming problem.
6. A tailor has 80 square yards of cotton material and 120 square yards of woolen material. A suit requires 2 square yards of cotton and 1 square yard of wool. A dress requires 1 square yard of cotton and 3 square yards of wool. How many of each garment should the tailor make to maximize income, if a suit and a dress each sell for $90? What is the maximum income? Formulate this as a linear programming problem.
7. A trucking firm ships the containers of two companies, A and B. Each container from company A weighs 40 pounds and is 2 cubic feet in volume. Each
container from company B weighs 50 pounds and is 3 cubic feet in volume.
The trucking firm charges company A $2.20 for each container shipped and
charges company B $3.00 for each container shipped. If one of the firm's trucks cannot carry more than 37,000 pounds and cannot hold more than 2000 cubic feet, how many containers from companies A and B should a truck carry to maximize the shipping charges?
8. A company produces two types of cowboy hats. Each hat of the first type
requires twice as much labor time as does each hat of the second type. If all
hats are of the second type only, the company can produce a total of 500 hats
a day. The market limits daily sales of the first and second types to 150 and
200 hats. Assume that the profit per hat is $8 for type 1 and $5 for type 2.
Determine the number of hats of each type to produce to maximize profit.
9. A company manufactures two types of hand calculators, of model A and model
B. It takes 1 hour and 4 hours in labor time to manufacture each A and B,


respectively. The cost of manufacturing the A is $30 and that of manufacturing the B is $20. The company has 1,600 hours of labor time available and $18,000 in running costs. The profit on each A is $10 and on each B is $8. What should the production schedule be to ensure maximum profit?
10. A clothes manufacturer has 10 square yards of cotton material, 10 square yards of wool material, and 6 square yards of silk material. A pair of slacks requires 1 square yard of cotton, 2 square yards of wool, and 1 square yard of silk. A skirt requires 2 square yards of cotton, 1 square yard of wool, and 1 square yard of silk. The net profit on a pair of slacks is $3 and the net profit on a skirt is $4. How many skirts and how many slacks should be made to maximize profit?
11. A manufacturer produces sacks of chicken feed from two ingredients, A and B. Each sack is to contain at least 10 ounces of nutrient N1, at least 8 ounces of nutrient N2, and at least 12 ounces of nutrient N3. Each pound of ingredient A contains 2 ounces of nutrient N1, 2 ounces of nutrient N2, and 6 ounces of nutrient N3. Each pound of ingredient B contains 5 ounces of nutrient N1, 3 ounces of nutrient N2, and 4 ounces of nutrient N3. If ingredient A costs 8 cents per pound and ingredient B costs 9 cents per pound, how much of each ingredient should the manufacturer use in each sack of feed to minimize his cost?
12. The Apple Company has made a contract with the government to supply 1200 microcomputers this year and 2500 next year. The company has the production capacity to make 1400 microcomputers each year, and it has already committed its production line for this level. Labor and management have agreed that the production line can be used for at most 80 overtime shifts each year, each shift costing the company an additional $20,000. In each overtime shift, 50 microcomputers can be manufactured. Units made this year but used to meet next year's demand must be stored at a cost of $100 per unit. How should the production be scheduled so as to minimize cost?
13. Solve each of the following linear programming problems using the graphical method.
(a) Maximize: Z = 2x1 + x2
    Subject to: 4x1 + x2 <= 36
                4x1 + 3x2 <= 60
                x1 >= 0, x2 >= 0
(b) Maximize: Z = 2x1 + x2
    Subject to: 4x1 + x2 <= 16
                x1 + x2 <= 7
                x1 >= 0, x2 >= 0
(c) Maximize: Z = 4x1 + x2
    Subject to: 2x1 + x2 <= 4
                6x1 + x2 <= 8
                x1 >= 0, x2 >= 0
(d) Maximize: Z = x1 - 4x2
    Subject to: x1 + 2x2 <= 5
                x1 + 6x2 <= 7
                x1 >= 0, x2 >= 0

14. Solve each of the following linear programming problems using the graphical method.
(a) Maximize: Z = 6x1 - 2x2
    Subject to: x1 - x2 <= 1
                3x1 - x2 <= 6
                x1 >= 0, x2 >= 0
(b) Maximize: Z = 4x1 + 4x2
    Subject to: 2x1 + 7x2 <= 21
                7x1 + 2x2 <= 49
                x1 >= 0, x2 >= 0
(c) Maximize: Z = 3x1 + 2x2
    Subject to: 2x1 + x2 <= 2
                3x1 + 4x2 >= 12
                x1 >= 0, x2 >= 0
(d) Maximize: Z = 5x1 + 2x2
    Subject to: x1 + x2 <= 10
                4x1 = 5
                x1 >= 0, x2 >= 0

15. Solve each of the following linear programming problems using the graphical method.
(a) Minimize: Z = 3x1 - x2
    Subject to: x1 + x2 >= 150
                4x1 + x2 <= 450
                x1 >= 0, x2 >= 0
(b) Minimize: Z = 2x1 + x2
    Subject to: 2x1 + 3x2 >= 14
                4x1 + 5x2 >= 16
                x1 >= 0, x2 >= 0
(c) Minimize: Z = 2x1 + x2
    Subject to: 2x1 + x2 >= 440
                4x1 + x2 >= 680
                x1 >= 0, x2 >= 0
(d) Maximize: Z = x1 - x2
    Subject to: x1 + x2 <= 6
                x1 - x2 >= 0
                x1 - x2 <= 3
                x1 >= 0, x2 >= 0

16. Solve each of the following linear programming problems using the graphical method.
(a) Minimize: Z = 3x1 + 5x2
    Subject to: 3x1 + 2x2 >= 36
                3x1 + 5x2 >= 45
                x1 >= 0, x2 >= 0
(b) Minimize: Z = 3x1 - 8x2
    Subject to: 2x1 - x2 <= 4
                3x1 + 11x2 <= 33
                3x1 + 4x2 <= 24
                x1 >= 0, x2 >= 0
(c) Minimize: Z = 3x1 - 5x2
    Subject to: 2x1 - x2 <= 2
                4x1 - x2 >= 0
                x2 <= 3
                x1 >= 0, x2 >= 0
(d) Minimize: Z = 3x1 + 2x2
    Subject to: 3x1 - x2 <= 5
                x1 + x2 >= 1
                2x1 + 4x2 <= 12
                x1 >= 0, x2 >= 0

17. Solve each of the following linear programming problems using the simplex method.
(a) Maximize: Z = 3x1 + 2x2
    Subject to: x1 + 2x2 <= 4
                3x1 + 2x2 <= 14
                x1 - x2 <= 3
                x1 >= 0, x2 >= 0
(b) Maximize: Z = x1 + 3x2
    Subject to: 4x1 <= 5
                x1 + 2x2 <= 10
                x2 <= 4
                x1 >= 0, x2 >= 0
(c) Maximize: Z = 10x1 + 5x2
    Subject to: x1 + x2 <= 180
                3x1 + 2x2 <= 480
                x1 >= 0, x2 >= 0
(d) Maximize: Z = x1 - 4x2
    Subject to: x1 + 2x2 <= 5
                x1 + 6x2 <= 7
                x1 >= 0, x2 >= 0

18. Solve Problem 13 using the simplex method.


19. Solve each of the following linear programming problems using the simplex method.
(a) Maximize: Z = 2x1 + 4x2 + x3
    Subject to: x1 + 2x2 + 3x3 <= 6
                x1 + 4x2 + 5x3 <= 5
                x1 + 5x2 + 7x3 <= 7
                x1 >= 0, x2 >= 0, x3 >= 0
(b) Minimize: Z = 2x1 - x2 + 2x3
    Subject to: x1 + x2 + x3 <= 4
                x1 + 2x2 + 3x3 <= 3
                x1 + x2 - x3 <= 6
                x1 >= 0, x2 >= 0, x3 >= 0
(c) Maximize: Z = 100x1 + 200x2 + 50x3
    Subject to: 5x1 + 5x2 + 10x3 <= 1000
                10x1 + 8x2 + 5x3 <= 2000
                10x1 + 5x2 <= 500
                x1 >= 0, x2 >= 0, x3 >= 0
(d) Minimize: Z = 6x1 + 3x2 + 4x3
    Subject to: x1 >= 30
                2x2 >= 50
                x3 >= 20
                x1 + x2 + x3 <= 120
                x1 >= 0, x2 >= 0, x3 >= 0

20. Solve each of the following linear programming problems using the simplex method.
(a) Minimize: Z = x1 + 2x2 + 3x3 + 4x4
    Subject to: x1 + 2x2 + 2x3 + 3x4 >= 20
                2x1 + x2 + 3x3 + 2x4 >= 20
                x1 >= 0, x2 >= 0, x3 >= 0, x4 >= 0
(b) Maximize: Z = x1 + 2x2 + 4x3 - x4
    Subject to: 5x1 + 4x3 + 6x4 <= 20
                4x1 + 2x2 + 2x3 + 8x4 <= 40
                x1 >= 0, x2 >= 0, x3 >= 0, x4 >= 0
(c) Minimize: Z = 3x1 + x2 + x3 + x4
    Subject to: 2x1 + 2x2 + x3 + 2x4 >= 4
                3x1 + x2 + 2x3 + 4x4 >= 6
                x1 >= 0, x2 >= 0, x3 >= 0, x4 >= 0
(d) Minimize: Z = x1 + 2x2 - x3 + 3x4
    Subject to: 2x1 + 4x2 + 5x3 + 6x4 <= 24
                4x1 + 4x2 + 2x3 + 2x4 <= 4
                x1 >= 0, x2 >= 0, x3 >= 0, x4 >= 0

21. Use the Big M simplex method to solve each of the following linear programming problems.
(a) Minimize: Z = 4x1 + 4x2 + x3
    Subject to: x1 + x2 + x3 <= 2
                2x1 + x2 + 0x3 <= 3
                2x1 + x2 + 3x3 >= 3
                x1 >= 0, x2 >= 0, x3 >= 0
(b) Maximize: Z = 3x1 + x2
    Subject to: x1 + x2 >= 3
                2x1 + x2 <= 4
                x1 + x2 = 3
                x1 >= 0, x2 >= 0
(c) Minimize: Z = 3x1 - 2x2
    Subject to: x1 + x2 = 10
                x1 + 0x2 <= 4
                x1 >= 0, x2 >= 0
(d) Minimize: Z = 2x1 + 3x2
    Subject to: 2x1 + x2 >= 4
                x1 - x2 >= 1
                x1 >= 0, x2 >= 0

22. Use the Two-Phase simplex method to solve Problem 21.
23. Write the dual of each of the following linear programming problems.
(a) Maximize: Z = 5x1 + 2x2
    Subject to: x1 + x2 <= 3
                2x1 + 3x2 <= 5
                x1 >= 0, x2 >= 0
(b) Maximize: Z = 5x1 + 6x2 + 4x3
    Subject to: x1 + 4x2 + 6x3 <= 12
                2x1 + x2 + 2x3 <= 11
                x1 >= 0, x2 >= 0, x3 >= 0
(c) Minimize: Z = 6x1 + 3x2
    Subject to: 6x1 - 3x2 + x3 >= 2
                3x1 + 4x2 + x3 >= 5
                x1 >= 0, x2 >= 0, x3 >= 0
(d) Minimize: Z = 2x1 + x2 + 2x3
    Subject to: x1 + x2 + x3 = 4
                x1 + x2 - x3 >= 6
                x1 >= 0, x2 >= 0, x3 >= 0

24. Write the duals of each of the following linear programming problems.
(a) Minimize: Z = 3x1 + 4x2 + 6x3
    Subject to: x1 + x2 >= 10
                x1 >= 0, x2 >= 0, x3 >= 0
(b) Maximize: Z = 5x1 + 2x2 + 3x3
    Subject to: x1 + 5x2 + 2x3 = 30
                x1 - x2 - 6x3 <= 40
                x1 >= 0, x2 >= 0, x3 >= 0
(c) Minimize: Z = x1 + 2x2 + 3x3 + 4x4
    Subject to: 2x1 + 2x2 + 2x3 + 3x4 >= 30
                2x1 + x2 + 3x3 + 2x4 >= 20
                x1 >= 0, x2 >= 0, x3 >= 0, x4 >= 0
(d) Maximize: Z = x1 + x2
    Subject to: 2x1 + x2 = 5
                3x1 - x2 = 6
                x1 >= 0, x2 >= 0
(e) Maximize: Z = 2x1 + 4x2
    Subject to: 4x1 + 3x2 = 50
                2x1 + 5x2 = 75
                x1 >= 0, x2 >= 0
(f) Maximize: Z = x1 + 2x2 + 3x3
    Subject to: 2x1 + x2 + x3 = 5
                x1 + 3x2 + 4x3 = 3
                x1 + 4x2 + x3 = 8
                x1 >= 0, x2 >= 0, x3 >= 0