PROGRAMMING WITH LINEAR FRACTIONAL FUNCTIONALS

A. Charnes

Northwestern University

and

W. W. Cooper
Carnegie Institute of Technology

INTRODUCTION
The problems that we shall deal with may be called by the one name, “programming
with linear fractional functionals.” Members of this class have been encountered in a variety
of contexts. One such occurrence [3] involved situations in which the more usual sensitivity
analyses were extended to problems involving plans for optimal data changes. In these
instances, linear programming inequalities were to be considered relative to a functional
formulated as a ratio of two variables wherein one variable, in the numerator, represented
the change in total cost and the other variable, in the denominator, represented the volume
changes that might attend the possible variations of a particular cost coefficient. Another
example was dealt with by M. Klein in [6]. There a problem in optimal maintenance and
repair policies was encountered in the context of a Markoff process formulation (see [3] also).
In that context a ratio of homogeneous linear functions was to be maximized subject to a single
homogeneous linear equation and a norming condition on non-negative variables. To handle
this problem, Klein applied a square-root transformation (which he attributed to C. Derman [4])
in order to effect a reduction to an equivalent linear programming problem. Finally, a special
instance of our general case was treated by J. R. Isbell and W. H. Marlow in their article on
“Attrition Games,” [5]. In considering a ratio of (possibly) nonhomogeneous linear forms subject
to general linear inequality constraints, Isbell and Marlow were able to establish a convergent
iterative process which involved replacing the ratio by the problem of optimizing a sequence
of different linear functionals. The linear functional at any stage in the iterations was deter-
mined by optimization of the linear functional at the preceding stage.
The objective of the present paper is to replace any “linear fractional programming
problem” with, at most, two straightforward linear programming problems that differ from
each other by only a change in sign in the functional and in one constraint. Also, the variable
transformations to be utilized will be simpler than the square-root transformations employed
in [6]. Our transformations are also homeomorphisms, from which the globality of
local optima with linear fractional functionals follows.


GENERAL LINEAR FRACTIONAL MODELS


The general class of linear fractional models is conveniently rendered in the follow-
ing form:

(1.1)   maximize    (c^T x + α)/(d^T x + β)

        subject to  Ax ≤ b,
                    x ≥ 0,

where A is an m × n matrix and b is an m × 1 vector, so that the two sets of constants for the
constraints are related by the n × 1 vector of variables, x. Similarly, c^T and d^T are trans-
poses of the n × 1 vectors of coefficients, c and d, respectively, while α and β are arbitrary
scalar constants.
It is assumed, unless otherwise noted, that the constraints of (1.1) are regular¹ so
that the solution set

(1.2)   X ≡ {x : Ax ≤ b, x ≥ 0}

is nonempty and bounded.
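For concreteness, a small hypothetical instance of (1.1) (ours, not the authors') is

        maximize    (2x_1 + x_2 + 1)/(x_1 + 3x_2 + 2)
        subject to  x_1 + x_2 ≤ 4,   x_1, x_2 ≥ 0,

i.e., c^T = (2, 1), α = 1, d^T = (1, 3), β = 2, A = (1  1), and b = 4. Here X is nonempty and bounded, and the denominator is positive throughout X. This instance is reused in the computational sketches below.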
The following transformation of variables is now introduced:

(2.1)   y ≡ tx,

where t ≥ 0 is to be chosen so that

(2.2)   t(d^T x + β) = γ,

where γ ≠ 0 is a specified number. On multiplying the numerator and denominator and the system
of inequalities in (1.1) by t and taking (2.2) into account, we obtain the linear programming
problem

(3)     maximize    c^T y + αt ≡ L(y, t)

        subject to  Ay − bt ≤ 0,
                    d^T y + βt = γ,
                    y, t ≥ 0.
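To make the reduction concrete, the following sketch (not part of the original paper) assembles problem (3) with γ = 1 for the hypothetical instance given after (1.2) and solves it with SciPy's linprog; the data, variable names, and the choice of solver are illustrative assumptions, not the authors' method of computation.

```python
# Sketch only: build and solve problem (3) for a hypothetical instance of (1.1),
# taking gamma = 1 and using scipy.optimize.linprog (which minimizes, so we negate).
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: maximize (2x1 + x2 + 1)/(x1 + 3x2 + 2) s.t. x1 + x2 <= 4, x >= 0.
A = np.array([[1.0, 1.0]])
b = np.array([4.0])
c = np.array([2.0, 1.0]); alpha = 1.0
d = np.array([1.0, 3.0]); beta = 2.0
gamma = 1.0

m, n = A.shape
# Decision vector z = (y, t); maximize c'y + alpha*t  <=>  minimize -(c'y + alpha*t).
obj  = -np.concatenate([c, [alpha]])
A_ub = np.hstack([A, -b.reshape(m, 1)])            # A y - b t <= 0
b_ub = np.zeros(m)
A_eq = np.concatenate([d, [beta]]).reshape(1, -1)  # d'y + beta*t = gamma
b_eq = np.array([gamma])

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)  # y, t >= 0 by default
y, t = res.x[:n], res.x[n]
x_star = y / t                                     # recover x* = y*/t* (Lemma 1 gives t > 0)
print(x_star, (c @ x_star + alpha) / (d @ x_star + beta))
# For this instance the optimum is x* = (4, 0) with ratio 1.5.
```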

We now proceed to prove

LEMMA 1: Every (y, t) satisfying the constraints of (3) has t > 0.

PROOF: Suppose (ŷ, 0) satisfied the constraints of (3). Let x̂ be any element of X.
Then x = x̂ + μŷ is in X for every μ > 0, since Aŷ ≤ 0 and ŷ ≥ 0. But then X is unbounded, contrary
to the regularity hypothesis imposed on X. Q.E.D.

¹If necessary, regularization procedures are available to bring about the indicated conditions. See [6].

THEOREM 1: If

    (i) 0 < sgn(γ) = sgn(d^T x* + β) for x* an optimal solution of (1.1), and
    (ii) (y*, t*) is an optimal solution of (3),

then y*/t* is an optimal solution of (1.1).

PROOF: Suppose the theorem were false, i.e., assume that there exists an optimal
x* ∈ X such that

    (c^T x* + α)/(d^T x* + β) > (c^T(y*/t*) + α)/(d^T(y*/t*) + β).

By condition (i),

    d^T x* + β = θγ

for some θ > 0. Consider

    ŷ = θ⁻¹x*,   t̂ = θ⁻¹.

Then

    θ⁻¹(d^T x* + β) = d^T ŷ + β t̂ = γ,

and (ŷ, t̂) also satisfies Aŷ − b t̂ ≤ 0, ŷ, t̂ ≥ 0. But

    (c^T x* + α)/(d^T x* + β) = θ⁻¹(c^T x* + α)/[θ⁻¹(d^T x* + β)] = (c^T ŷ + α t̂)/(d^T ŷ + β t̂) = (c^T ŷ + α t̂)/γ.

Also,

    (c^T(y*/t*) + α)/(d^T(y*/t*) + β) = (c^T y* + α t*)/(d^T y* + β t*) = (c^T y* + α t*)/γ.

But now

    (c^T x* + α)/(d^T x* + β) > (c^T(y*/t*) + α)/(d^T(y*/t*) + β),

and since, by hypothesis (i), γ > 0, we have

    c^T ŷ + α t̂ > c^T y* + α t*,

a contradiction to (y*, t*) being optimal for (3). Q.E.D.


If sgn(d^T x* + β) < 0 for x* an optimal solution of (1.1), then replacing (c^T, α) and
(d^T, β) by their negatives leaves the functional unaltered, and for the new (d^T, β) we would have
sgn(d^T x* + β) > 0. Thus, we may state
THEOREM 2: For any X regular, to solve the problem (1.1) it suffices to solve the
two ordinary linear programming problems,

(4.1)   maximize    c^T y + αt

        subject to  Ay − bt ≤ 0,
                    d^T y + βt = 1,
                    y, t ≥ 0,

and

(4.2)   maximize    −c^T y − αt

        subject to  Ay − bt ≤ 0,
                    −d^T y − βt = 1,
                    y, t ≥ 0.
REMARK 1: It should be observed that the same reduction can be made using the
numerator instead of the denominator since

(5)     max (c^T x + α)/(d^T x + β) ≡ max (−1)(d^T x + β)/(c^T x + α).
REMARK 2: Thus, if one knows the sign of either the numerator or the denominator
for the functional, at an optimum, one need only solve a single ordinary linear programming
problem, i.e., one of either (4.1) or (4.2), in place of the linear fractional programming
problem (1.1). (In particular, non-vanishing of the denominator over the set X implies uni-
signance there, hence only one problem to solve.)
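In the hypothetical instance introduced after (1.2), for example, d ≥ 0 and β = 2 > 0, so the denominator is positive over all of X; accordingly only (4.1) need be solved for that instance.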
Now we proceed to exhaust all of the remaining possibilities in order to show that the
linear programming problems (4.1) and (4.2) continue to correctly characterize the situations
for the linear fractional problem. First, we prove
THEOREM 3: If for all x ∈ X, d^T x + β = 0, then problems (4.1) and (4.2) are both
inconsistent.

PROOF: If d^T x + β = 0, it is impossible to obtain

    ±t(d^T x + β) = ±(d^T y + βt) = 1.  Q.E.D.

Next, we observe that if there are points of X in both d^T x + β = 0 and d^T x + β ≠ 0 then, by con-
vexity, any point in d^T x + β = 0 is a limit of a sequence of points {x^n} for which d^T x^n + β
= ε_n ≠ 0 and ε_n → 0. Observe further, then, what must be happening on the corresponding
sequence {(y^n, t_n)}:

(5.1)    ±t_n(d^T x^n + β) = ±(d^T y^n + β t_n) = ±t_n ε_n = 1.²

Thus t_n = ±1/ε_n → ∞. If an optimum of R(x) ≡ (c^T x + α)/(d^T x + β) (here intended to include also max R(x) = ∞) is
approached by approaching a point of d^T x + β = 0, then the corresponding sequence involves
t_n → ∞. Since the linear programming problem (4.1) or (4.2), being computationally solved,
will have been regularized, this behavior will be evidenced in the attainment of an optimum
that involves the artificial bound, U.³ Thus, we may extend the previous developments to
include every possibility as follows:
THEOREM 4: The following corresponding statements are equivalent:

        Linear Fractional                                   Linear Programming

  (i)   All x ∈ X satisfy c^T x + α = d^T x + β = 0.        (4.1) and (4.2) are inconsistent.

  (ii)  There exists {x^n} such that R(x^n) → max R(x)      t* in (4.1) or (4.2) involves the
        with d^T x^n + β = ε_n ≠ 0, ε_n → 0.                artificial bound U.

  (iii) R(x*) = max R(x) with d^T x* + β ≠ 0.               x* = y*/t* from (4.1) or (4.2).

CONCLUSION
It may be noted that the interesting situations of Derman, Klein, et al., involve a
fortiori only (4.1) in (iii). There are also further extensions which we shall treat elsewhere.
The most important case is one which involves a separable concave function for the numerator
and a separable convex function for the denominator. If these are also piecewise linear, we
can reduce them to the linear fractional programming analysis that has just been concluded.

ACKNOWLEDGMENT
Part of the research underlying this paper was undertaken for the project Temporal
Planning and Management Decision under Risk and Uncertainty at Northwestern University and
part for the project Planning and Control of Industrial Operations at Carnegie Institute of
Technology. Both projects are under contract with the U.S. Office of Naval Research. Repro-
duction of this paper in whole or in part is permitted for any purpose of the United States
Government. Contract Nonr-1228(10), Project NR 047-021, and Contract Nonr-760(01), Project
NR 047-011.

²The "±" are entered in lieu of the rather obvious, but extended, verbalizations that would be
needed to deal with all of the sign possibilities and cross references to (4.1) and (4.2).
³Cf. [6].

BIBLIOGRAPHY
[1] Barlow, R. E., and Hunter, L. C., "Mathematical Models for System Reliability," The
    Sylvania Technologist XIII, 1 and 2 (1960).
[2] Charnes, A., and Cooper, W. W., Management Models and Industrial Applications of Linear
    Programming (John Wiley and Sons, Inc., New York, 1961).
[3] Charnes, A., and Cooper, W. W., "Systems Evaluation and Repricing Theorems," O.N.R.
    Memorandum No. 31, September 1960.
[4] Derman, C., "On Sequential Decisions and Markov Chains" (to appear in Management
    Science*).
[5] Isbell, J. R., and Marlow, W. H., "Attrition Games," Naval Research Logistics Quarterly
    3, 1 and 2, 71-93 (1956).
[6] Klein, M., "Inspection-Maintenance-Replacement Schedule under Markovian Deterioration,"
    Statistical Engineering Group Technical Report 14, New York: Columbia (to appear in
    Management Science*).

*Referee's note.
