An Inexact Potential Reduction Method for Linear
Programming
Lukas Schork ([email protected])
Jacek Gondzio ([email protected])
School of Mathematics, University of Edinburgh, Edinburgh EH9 3FD, Scotland, UK
Technical Report, July 2016
Abstract
A class of interior point methods using inexact directions is analysed. The linear
system arising in interior point methods for linear programming is reformulated such
that the solution is less sensitive to perturbations in the right-hand side. For the new
system an implementable condition is formulated that controls the relative error in
the solution. Based on this condition, a feasible and an infeasible potential reduction
method are described which retain the convergence and complexity bounds known for
exact directions.
1 Introduction
The primal-dual interior point method (IPM) is one of the most widely used methods for
solving large linear programming problems. The method can be analysed and implemented
as a path-following algorithm, in which the iterates follow a central trajectory toward the
solution set, or as a potential reduction algorithm, which makes progress by systematically
reducing a potential function. Most implementations make use of the path-following concept
[3].
This paper analyses a variant of the IPM that works with inexactly computed step directions. Inexact directions occur in implementations which solve the linear equation systems
by iterative methods. The analysis given here is closely related to such an implementation
and provides criteria to control the level of inexactness in the computation.
The paper introduces two interior point algorithms that work with inexact directions.
The first algorithm requires a strictly feasible starting point and keeps all iterates feasible.
The second algorithm can start from an infeasible point and achieves feasibility in the limit.
Both algorithms are formulated and analysed as potential reduction methods. It is proved
that in both cases the inexact methods retain the convergence and complexity bounds of
the exact ones.
The linear program is stated in standard form as the primal-dual pair
\[
\text{minimize}\ c^T x \quad \text{subject to}\ Ax = b,\ x \ge 0, \tag{1}
\]
\[
\text{maximize}\ b^T y \quad \text{subject to}\ A^T y + z = c,\ z \ge 0, \tag{2}
\]
in which A is an m × n matrix of full row rank. An IPM generates a sequence of iterates
x^k, y^k, z^k by taking steps along the Newton direction to the nonlinear system
\[
F(x, y, z) := \begin{pmatrix} Ax - b \\ A^T y + z - c \\ Xz - \mu e \end{pmatrix} = 0, \tag{3}
\]
in which X := diag(x), e is the n-vector of ones and µ > 0 is a parameter that is gradually
reduced to zero. The step directions are computed from the linear system
A
0
0
∆x∗
b − Axk
∗
T k
k
0 AT
(4)
I
∆y = c − A y − z ,
Zk
0
Xk
−X k z k + µe
∆z ∗
in which X k := diag(xk ) and Z k := diag(z k ). The step sizes are chosen to keep xk and z k
positive.
The potential reduction method is a particular instance of the IPM. It sets µ = (x^k)^T z^k/(n + ν) for a constant ν ≥ √n and chooses a step size to decrease a potential function by at least a certain constant. This paper uses the Tanabe-Todd-Ye potential function [9, 10]
\[
\phi(x, z) := (n + \nu) \ln(x^T z) - \sum_{i=1}^{n} \ln(x_i z_i) - n \ln n.
\]
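The potential function is simple to evaluate numerically; the following sketch (Python with NumPy, illustrative only and not taken from the paper) computes φ for positive x and z:

```python
# A minimal sketch of the Tanabe-Todd-Ye potential function.
# x and z are positive NumPy vectors and nu >= sqrt(n).
import numpy as np

def potential(x, z, nu):
    n = x.size
    return (n + nu) * np.log(x @ z) - np.sum(np.log(x * z)) - n * np.log(n)
```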
The inexact methods work with step directions of the form
\[
\begin{pmatrix} A & 0 & 0 \\ 0 & A^T & I \\ Z^k & 0 & X^k \end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta y \\ \Delta z \end{pmatrix}
=
\begin{pmatrix} b - Ax^k \\ c - A^T y^k - z^k \\ -X^k z^k + \mu e + \xi' \end{pmatrix}, \tag{5}
\]
in which a residual ξ′ remains in the complementarity equations. The primal and dual feasibility equations must be satisfied exactly. Conditions will be imposed on ξ′ to guarantee that the step decreases φ sufficiently.
2 The Inexact Potential Reduction Method
Considering one iterate x^k, y^k, z^k, we define diagonal matrices
\[
D := (X^k)^{1/2} (Z^k)^{-1/2}, \qquad W := (X^k Z^k)^{1/2},
\]
and w := We. To analyse the step directions it is convenient to write the Newton system (4) in the scaled quantities Δu* := D^{-1}Δx* and Δv* := DΔz*, which is
\[
\begin{pmatrix} AD & 0 & 0 \\ 0 & DA^T & I \\ I & 0 & I \end{pmatrix}
\begin{pmatrix} \Delta u^* \\ \Delta y^* \\ \Delta v^* \end{pmatrix}
=
\begin{pmatrix} b - Ax^k \\ D(c - A^T y^k - z^k) \\ -w + \mu W^{-1} e \end{pmatrix}
=: \begin{pmatrix} p \\ q \\ r \end{pmatrix}. \tag{6}
\]
The inexact solution corresponding to a residual ξ in the scaled system then satisfies
\[
\begin{pmatrix} AD & 0 & 0 \\ 0 & DA^T & I \\ I & 0 & I \end{pmatrix}
\begin{pmatrix} \Delta u \\ \Delta y \\ \Delta v \end{pmatrix}
=
\begin{pmatrix} p \\ q \\ r + \xi \end{pmatrix}. \tag{7}
\]
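For concreteness, the following sketch (hypothetical dense NumPy code, not the authors' implementation) forms the scaled data of (6) and computes the exact direction by eliminating Δv and Δu, which reduces (6) to the normal equations AD²AᵀΔy = p + AD(q − r). In the inexact method the direct solve would be replaced by an iterative method; keeping the feasibility rows exact so that the residual ξ appears only in the third block row of (7) calls for preconditioning techniques of the kind cited in Section 5.

```python
# A sketch of solving the scaled Newton system (6) with dense NumPy data.
import numpy as np

def scaled_newton_step(A, b, c, xk, yk, zk, mu):
    d = np.sqrt(xk / zk)             # diagonal of D = X^{1/2} Z^{-1/2}
    w = np.sqrt(xk * zk)             # diagonal of W = (X Z)^{1/2}
    p = b - A @ xk
    q = d * (c - A.T @ yk - zk)
    r = -w + mu / w                  # r = -w + mu W^{-1} e
    AD = A * d                       # the matrix A D (columns scaled by d)
    dy = np.linalg.solve(AD @ AD.T, p + AD @ (q - r))  # normal equations
    dv = q - AD.T @ dy               # second block row of (6)
    du = r - dv                      # third block row of (6)
    return du, dy, dv                # unscale via dx = d*du, dz = dv/d
```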
The inexact potential reduction algorithms make use of the following conditions on the residual, in which κ ∈ [0, 1) and ‖·‖ is the Euclidean norm:
\begin{align}
-r^T \xi &\le \kappa \|r\|^2, \tag{8a} \\
\|\xi\| &\le \kappa \min(\|\Delta u\|, \|\Delta v\|), \tag{8b} \\
-w^T \xi &\le \frac{\kappa n}{n + \nu} \|w\|^2. \tag{8c}
\end{align}
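When the first two block rows of (7) hold exactly, the residual is recoverable from the computed direction as ξ = (Δu + Δv) − r, so conditions (8a)–(8c) can be tested directly. A sketch of such a test (the interface is illustrative, not from the paper; Algorithm 1 below omits (8c)):

```python
# A sketch of verifying conditions (8a)-(8c) for a candidate direction.
import numpy as np

def residual_acceptable(r, du, dv, w, nu, kappa, need_8c=True):
    n = w.size
    xi = (du + dv) - r                                   # third row of (7)
    ok_a = -(r @ xi) <= kappa * (r @ r)                  # condition (8a)
    ok_b = np.linalg.norm(xi) <= kappa * min(
        np.linalg.norm(du), np.linalg.norm(dv))          # condition (8b)
    ok_c = -(w @ xi) <= kappa * n / (n + nu) * (w @ w)   # condition (8c)
    return ok_a and ok_b and (ok_c or not need_8c)
```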
Algorithm 1 is the inexact version of the feasible potential reduction method described in [4, 11]. All iterates belong to the strictly feasible set
\[
\mathcal{F}^o := \{ (x, y, z) : Ax = b,\ A^T y + z = c,\ (x, z) > 0 \},
\]
which is assumed to be nonempty. The algorithm does not require condition (8c).
Algorithm 1. Given x^0, y^0, z^0 ∈ F^o and ε > 0. Choose ν ≥ √n and κ ∈ [0, 1). Set δ := 0.15(1 − κ)^4 and k := 0.
1. If (x^k)^T z^k ≤ ε then stop.
2. Let µ := (x^k)^T z^k/(n + ν). Compute the solution to (7) with residual ξ that satisfies (8a)–(8b). Set Δx := DΔu and Δz := D^{-1}Δv.
3. Find step size α^k such that
\[
\phi(x^k + \alpha^k \Delta x,\ z^k + \alpha^k \Delta z) \le \phi(x^k, z^k) - \delta. \tag{9}
\]
4. Set (x^{k+1}, y^{k+1}, z^{k+1}) := (x^k, y^k, z^k) + α^k(Δx, Δy, Δz), k := k + 1 and go to 1.
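Assembled in code, Algorithm 1 might look as follows (an illustrative sketch; inexact_solve is a hypothetical oracle returning a direction for (7) with p = q = 0 whose residual ξ = Δu + Δv − r satisfies (8a)–(8b)). The sketch uses the fixed step size α = w_min(1 − κ)³/(2‖r‖), which Lemma 4 below shows always satisfies (9):

```python
# A sketch of Algorithm 1 (feasible inexact potential reduction).
import numpy as np

def algorithm1(A, x, y, z, inexact_solve, eps, nu, kappa):
    delta = 0.15 * (1 - kappa) ** 4          # guaranteed decrease of phi
    while x @ z > eps:                       # step 1
        mu = (x @ z) / (x.size + nu)         # step 2
        d, w = np.sqrt(x / z), np.sqrt(x * z)
        r = -w + mu / w
        du, dy, dv = inexact_solve(A, d, r)  # residual obeys (8a)-(8b)
        dx, dz = d * du, dv / d
        # Step 3: by Lemma 4 this alpha decreases phi by at least delta
        # and keeps the iterates strictly positive.
        alpha = w.min() * (1 - kappa) ** 3 / (2 * np.linalg.norm(r))
        x, y, z = x + alpha * dx, y + alpha * dy, z + alpha * dz  # step 4
    return x, y, z
```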
The following theorem, which is proved in Section 3, states that Algorithm 1 retains the
complexity bound of the exact version analysed in [4, 11].
Theorem 1. Let x^0, y^0, z^0 ∈ F^o and L ≥ 0 such that φ(x^0, z^0) = O(νL). Suppose that ln(1/ε) = O(L). Then Algorithm 1 terminates in O(νL) iterations provided that κ is chosen independently of n.
Algorithm 2 is an infeasible inexact potential reduction method, as its sequence of iterates does not, in general, belong to F^o. It extends Algorithm 1 from [6] to work with inexact directions. Given positive constants ρ and ε, it finds ε-accurate approximations to solutions x* to (1) and (y*, z*) to (2), if they exist, such that ‖(x*, z*)‖∞ ≤ ρ.
Algorithm 2. Given ρ > 0 and ε > 0. Set (x^0, y^0, z^0) = ρ(e, 0, e). Choose √n ≤ ν ≤ 2n and κ ∈ [0, 1). Set δ := (1 − κ)^4/(1600(n + ν)^2) and k := 0.
1. If (x^k)^T z^k ≤ ε then stop.
2. Let µ := (x^k)^T z^k/(n + ν). Compute the solution to (7) with residual ξ that satisfies (8a)–(8c). Set Δx := DΔu and Δz := D^{-1}Δv.
3. Find step size α^k such that
\begin{align}
\phi(x^k + \alpha^k \Delta x,\ z^k + \alpha^k \Delta z) &\le \phi(x^k, z^k) - \delta, \tag{10a} \\
(x^k + \alpha^k \Delta x)^T (z^k + \alpha^k \Delta z) &\ge (1 - \alpha^k)(x^k)^T z^k. \tag{10b}
\end{align}
If no such step size exists then stop.
4. Set (x^{k+1}, y^{k+1}, z^{k+1}) := (x^k, y^k, z^k) + α^k(Δx, Δy, Δz), k := k + 1 and go to 1.
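Step 3 of Algorithm 2 must satisfy the decrease condition (10a) and the complementarity condition (10b) simultaneously. One possible search is sketched below (illustrative; the trial grid is an assumption, with the smallest candidate being the value guaranteed by Lemma 6 below, so returning None legitimately triggers the stop in step 3):

```python
# A sketch of the step-size search in step 3 of Algorithm 2;
# phi is a callable evaluating the potential function.
import numpy as np

def find_step(x, z, dx, dz, nu, kappa, phi):
    n = x.size
    delta = (1 - kappa) ** 4 / (1600 * (n + nu) ** 2)
    # w_min^2 = min_i x_i z_i; alpha0 is the candidate from Lemma 6.
    alpha0 = (1 - kappa) ** 3 * (x * z).min() / (200 * (n + nu) * (x @ z))
    for alpha in (100 * alpha0, 10 * alpha0, alpha0):    # largest first
        xp, zp = x + alpha * dx, z + alpha * dz
        if ((xp > 0).all() and (zp > 0).all()
                and phi(xp, zp) <= phi(x, z) - delta     # (10a)
                and xp @ zp >= (1 - alpha) * (x @ z)):   # (10b)
            return alpha
    return None  # no admissible step size found: stop in step 3
```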
The following theorem, which is proved in Section 4, states that Algorithm 2 retains the
complexity bound of the exact infeasible potential reduction method [6].
Theorem 2. Let L ≥ ln n such that ρ = O(L). Suppose that ln(1/ε) = O(L). Then Algorithm 2 terminates in O(ν(n + ν)^2 L) iterations provided that κ is chosen independently of n. If the algorithm stops in step 1 then the iterate is an ε-approximate solution; otherwise it stops in step 3, showing that there are no optimal solutions x* to (1) and (y*, z*) to (2) such that ‖(x*, z*)‖∞ ≤ ρ.
The following lemma is key to the analysis of the inexact potential reduction methods
given in the next two sections. It exploits the particular form of the scaled Newton system
to prove that condition (8b) bounds the relative error in the inexact solution.
Lemma 1. Given solutions to (6) and (7), suppose that (8b) holds for κ ∈ [0, 1). Then
\begin{align}
\frac{\|\Delta u - \Delta u^*\|}{\|\Delta u^*\|} &\le \frac{\kappa}{1 - \kappa}, \tag{12a} \\
\frac{\|\Delta v - \Delta v^*\|}{\|\Delta v^*\|} &\le \frac{\kappa}{1 - \kappa}. \tag{12b}
\end{align}
Proof. Denoting P := DA^T(AD^2 A^T)^{-1} AD, the solution to (6) is
\begin{align*}
\Delta u^* &= DA^T (AD^2 A^T)^{-1} p - (I - P) q + (I - P) r, \\
\Delta y^* &= (AD^2 A^T)^{-1} (p + ADq - ADr), \\
\Delta v^* &= -DA^T (AD^2 A^T)^{-1} p + (I - P) q + P r.
\end{align*}
It follows that
\[
\Delta u - \Delta u^* = (I - P)\xi, \qquad \Delta v - \Delta v^* = P \xi.
\]
Because P and (I − P) are projection operators, ‖P‖ ≤ 1 and ‖I − P‖ ≤ 1. Therefore the absolute errors are bounded by the norm of the residual,
\[
\|\Delta u - \Delta u^*\| \le \|\xi\|, \qquad \|\Delta v - \Delta v^*\| \le \|\xi\|.
\]
On the other hand, it follows from the triangle inequality and (8b) that
\begin{align*}
\|\Delta u^*\| &= \|\Delta u - (I - P)\xi\| \ge \|\Delta u\| - \|\xi\| \ge (1 - \kappa)\|\Delta u\|, \\
\|\Delta v^*\| &= \|\Delta v - P\xi\| \ge \|\Delta v\| - \|\xi\| \ge (1 - \kappa)\|\Delta v\|.
\end{align*}
Combining both inequalities and (8b) gives
\begin{align*}
\frac{\|\Delta u - \Delta u^*\|}{\|\Delta u^*\|} &\le \frac{\|\xi\|}{\|\Delta u^*\|} \le \frac{\kappa \|\Delta u\|}{(1 - \kappa)\|\Delta u\|} = \frac{\kappa}{1 - \kappa}, \\
\frac{\|\Delta v - \Delta v^*\|}{\|\Delta v^*\|} &\le \frac{\|\xi\|}{\|\Delta v^*\|} \le \frac{\kappa \|\Delta v\|}{(1 - \kappa)\|\Delta v\|} = \frac{\kappa}{1 - \kappa}
\end{align*}
as claimed.
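Lemma 1 is easy to test numerically. The following illustrative sketch (random data, p = 0 as in the feasible case) builds the projector P, perturbs the exact scaled solution by a residual shrunk until (8b) holds, and checks the bound κ/(1 − κ):

```python
# A numerical sanity check of Lemma 1 on random data.
import numpy as np

rng = np.random.default_rng(0)
m, n, kappa = 5, 12, 0.3
AD = rng.standard_normal((m, n))           # stands in for the matrix A D
P = AD.T @ np.linalg.solve(AD @ AD.T, AD)  # orthogonal projector
I = np.eye(n)
q, r = rng.standard_normal(n), rng.standard_normal(n)
du_star = -(I - P) @ q + (I - P) @ r       # exact solution of (6), p = 0
dv_star = (I - P) @ q + P @ r
xi = rng.standard_normal(n)
for _ in range(60):                        # shrink xi until (8b) holds
    du = du_star + (I - P) @ xi            # error (I - P) xi, as in proof
    dv = dv_star + P @ xi                  # error P xi
    if np.linalg.norm(xi) <= kappa * min(np.linalg.norm(du),
                                         np.linalg.norm(dv)):
        break
    xi *= 0.5
rel_u = np.linalg.norm(du - du_star) / np.linalg.norm(du_star)
rel_v = np.linalg.norm(dv - dv_star) / np.linalg.norm(dv_star)
assert max(rel_u, rel_v) <= kappa / (1 - kappa)
```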
3 Proof of Theorem 1
This and the next section use two technical results from Mizuno, Kojima and Todd [6],
which are stated in the following two lemmas.
Lemma 2. For any n-vectors x > 0, z > 0, Δx, Δz and α > 0 such that ‖αX^{-1}Δx‖∞ ≤ τ and ‖αZ^{-1}Δz‖∞ ≤ τ for a constant τ ∈ (0, 1) it holds true that
\[
\phi(x + \alpha \Delta x,\ z + \alpha \Delta z) \le \phi(x, z) + g_1 \alpha + g_2 \alpha^2
\]
with coefficients
\begin{align*}
g_1 &= \Big( \frac{n + \nu}{x^T z}\, e - (XZ)^{-1} e \Big)^T (Z \Delta x + X \Delta z), \\
g_2 &= (n + \nu) \frac{\Delta x^T \Delta z}{x^T z} + \frac{\|X^{-1} \Delta x\|^2 + \|Z^{-1} \Delta z\|^2}{2(1 - \tau)}.
\end{align*}
Lemma 3. For any n-vector w > 0 and ν ≥ √n,
\[
\Big\| W^{-1} e - \frac{n + \nu}{w^T w}\, w \Big\| \ge \frac{\sqrt{3}}{2 w_{\min}},
\]
where W := diag(w) and w_min := min_i w_i.
Applying Lemma 3 to the vector r defined in (6), and noting that 1/µ = (n + ν)/(w^T w), shows that
\[
\|r\| = \| -w + \mu W^{-1} e \| = \mu \Big\| W^{-1} e - \frac{1}{\mu}\, w \Big\| \ge \mu\, \frac{\sqrt{3}}{2 w_{\min}}. \tag{13}
\]
The following lemma extends the analysis of the feasible potential reduction method
given in [11]. It shows that Algorithm 1 finds a step size that reduces φ by at least the
prescribed value in each iteration.
Lemma 4. In the k-th iteration of Algorithm 1, (9) holds for
\[
\alpha := \frac{w_{\min}}{2 \|r\|} (1 - \kappa)^3,
\]
where w_min := min_i \sqrt{x_i^k z_i^k}.
Proof. It follows from the first two block equations in (7) and p = 0, q = 0 that
\[
\Delta u^T \Delta v = -\Delta u^T D A^T \Delta y = -(AD \Delta u)^T \Delta y = 0,
\]
and analogously (Δu*)^T Δv* = 0 from (6). Therefore ‖Δu*‖^2 + ‖Δv*‖^2 = ‖r‖^2, and from (12a), (12b) and the definition of α
\begin{align*}
\|\alpha X^{-1} \Delta x\|_\infty &\le \alpha \|W^{-1}\| \|\Delta u\| \le \frac{\alpha}{w_{\min}} \frac{\|\Delta u^*\|}{1 - \kappa} \le \frac{\alpha}{w_{\min}} \frac{\|r\|}{1 - \kappa} \le \frac{1}{2}, \\
\|\alpha Z^{-1} \Delta z\|_\infty &\le \alpha \|W^{-1}\| \|\Delta v\| \le \frac{\alpha}{w_{\min}} \frac{\|\Delta v^*\|}{1 - \kappa} \le \frac{\alpha}{w_{\min}} \frac{\|r\|}{1 - \kappa} \le \frac{1}{2}.
\end{align*}
Therefore τ := 1/2 satisfies the assumptions of Lemma 2, so that
\[
\phi(x^k + \alpha \Delta x,\ z^k + \alpha \Delta z) - \phi(x^k, z^k) \le g_1 \alpha + g_2 \alpha^2
\]
with coefficients
\[
g_1 = \Big( \frac{n + \nu}{w^T w}\, e - W^{-2} e \Big)^T W (\Delta u + \Delta v), \qquad
g_2 = \|W^{-1} \Delta u\|^2 + \|W^{-1} \Delta v\|^2.
\]
To show that φ is sufficiently reduced along the direction (Δx, Δz) it is necessary to show that g_1 is negative and bounded away from zero, while g_2 is bounded. From the definition of r and condition (8a) it follows that
\begin{align}
g_1 &= \Big( \frac{n + \nu}{w^T w}\, w - W^{-1} e \Big)^T (\Delta u + \Delta v) = -\frac{n + \nu}{w^T w}\, r^T (r + \xi) \tag{14a} \\
&\le -(1 - \kappa)\, \frac{n + \nu}{w^T w}\, \|r\|^2. \tag{14b}
\end{align}
For the second order term it follows from (12a), (12b) that
\[
g_2 = \|W^{-1} \Delta u\|^2 + \|W^{-1} \Delta v\|^2 \le \frac{\|\Delta u\|^2 + \|\Delta v\|^2}{w_{\min}^2} \le \frac{\|\Delta u^*\|^2 + \|\Delta v^*\|^2}{w_{\min}^2 (1 - \kappa)^2} = \frac{\|r\|^2}{w_{\min}^2 (1 - \kappa)^2}.
\]
Inserting the bounds on g_1 and g_2 into the quadratic form and using the definition of α gives
\begin{align*}
\phi(x^k + \alpha \Delta x,\ z^k + \alpha \Delta z) - \phi(x^k, z^k)
&\le -(1 - \kappa)\, \frac{n + \nu}{w^T w}\, \|r\|^2 \alpha + \frac{\|r\|^2}{w_{\min}^2 (1 - \kappa)^2}\, \alpha^2 \\
&= -(1 - \kappa)^4\, \frac{n + \nu}{w^T w}\, \frac{w_{\min}}{2}\, \|r\| + \frac{(1 - \kappa)^4}{4}.
\end{align*}
Finally, using the bound on ‖r‖ from (13) gives
\[
\phi(x^k + \alpha \Delta x,\ z^k + \alpha \Delta z) - \phi(x^k, z^k) \le (1 - \kappa)^4 \Big( -\frac{\sqrt{3}}{4} + \frac{1}{4} \Big) \le -0.15 (1 - \kappa)^4 = -\delta
\]
as claimed.
The proof of Theorem 1 is immediate. Since φ(x, z) ≥ ν ln(x^T z), the termination condition ν ln((x^k)^T z^k) ≤ ν ln(ε) is satisfied when
\[
\phi(x^k, z^k) \le \phi(x^0, z^0) - k\delta \le \nu \ln(\varepsilon). \tag{15}
\]
Since under the assumption of the theorem φ(x^0, z^0) = O(νL) and ln(1/ε) = O(L), and since δ is independent of n, (15) holds for k ≥ K = O(νL).
4 Proof of Theorem 2
The proof of the theorem is based on Mizuno, Kojima and Todd [6]. We define a sequence {θ_k} by
\[
\theta_0 := 1 \quad \text{and} \quad \theta_{k+1} := (1 - \alpha^k)\theta_k \quad \text{for } k \ge 0. \tag{16}
\]
Since the first two block equations in (3) are linear and satisfied exactly by a full step of the algorithm,
\[
(Ax^k - b,\ A^T y^k + z^k - c) = \theta_k\, (Ax^0 - b,\ A^T y^0 + z^0 - c).
\]
The following lemma is obtained from Lemma 4 in [6] by setting γ_0 = 1 and γ_1 = 1.
Lemma 5. Let ρ > 0 and suppose that
\begin{gather}
(x^0, y^0, z^0) = \rho\, (e, 0, e), \notag \\
(Ax^k - b,\ A^T y^k + z^k - c) = \theta_k\, (Ax^0 - b,\ A^T y^0 + z^0 - c), \tag{17} \\
(x^k)^T z^k \ge \theta_k\, (x^0)^T z^0. \notag
\end{gather}
If there exist solutions x* to (1) and (y*, z*) to (2) such that ‖(x*, z*)‖∞ ≤ ρ then the solution to (6) at x^k, y^k, z^k satisfies
\[
\|\Delta u^*\| \le \frac{5 (x^k)^T z^k}{w_{\min}}, \qquad \|\Delta v^*\| \le \frac{5 (x^k)^T z^k}{w_{\min}},
\]
where w_min := min_i \sqrt{x_i^k z_i^k}.
The following lemma is based on Lemma 5 in [6]. It shows that when optimal solutions
to (1) and (2) exist, then Algorithm 2 can find a step size in each iteration that satisfies
(10a) and (10b).
Lemma 6. If there exist optimal solutions x* to (1) and (y*, z*) to (2) such that ‖(x*, z*)‖∞ ≤ ρ then (10a) and (10b) hold for
\[
\alpha := \frac{(1 - \kappa)^3\, w_{\min}^2}{200 (n + \nu) (x^k)^T z^k}
\]
in the k-th iteration, where w_min := min_i \sqrt{x_i^k z_i^k}.
Proof. A simple calculation shows that by definition of x^0, z^0 and because of (10b) the assumptions of Lemma 5 are satisfied. Combining the lemma with (12a), (12b) shows that
\[
\|\Delta u\| \le \frac{5 (x^k)^T z^k}{(1 - \kappa) w_{\min}}, \qquad \|\Delta v\| \le \frac{5 (x^k)^T z^k}{(1 - \kappa) w_{\min}}.
\]
It follows that
\begin{align*}
\|\alpha X^{-1} \Delta x\|_\infty &\le \alpha \|W^{-1}\| \|\Delta u\| \le \alpha\, \frac{5 (x^k)^T z^k}{(1 - \kappa) w_{\min}^2} = \frac{(1 - \kappa)^2}{40 (n + \nu)} \le \frac{1}{40}, \\
\|\alpha Z^{-1} \Delta z\|_\infty &\le \alpha \|W^{-1}\| \|\Delta v\| \le \alpha\, \frac{5 (x^k)^T z^k}{(1 - \kappa) w_{\min}^2} = \frac{(1 - \kappa)^2}{40 (n + \nu)} \le \frac{1}{40}.
\end{align*}
Therefore τ := 1/40 satisfies the assumption of Lemma 2, so that
\[
\phi(x^k + \alpha \Delta x,\ z^k + \alpha \Delta z) \le \phi(x^k, z^k) + g_1 \alpha + g_2 \alpha^2
\]
with coefficients
\begin{align*}
g_1 &= \Big( \frac{n + \nu}{w^T w}\, e - W^{-2} e \Big)^T W (\Delta u + \Delta v), \\
g_2 &= (n + \nu)\, \frac{\Delta u^T \Delta v}{w^T w} + \frac{\|W^{-1} \Delta u\|^2 + \|W^{-1} \Delta v\|^2}{2(1 - \tau)}.
\end{align*}
It will be shown that g_1 is negative and bounded away from zero, while g_2 is bounded. Combining (14b) and (13) gives
\[
g_1 \le -(1 - \kappa)\, \frac{1}{\mu}\, \|r\|^2 \le -(1 - \kappa)\, \mu\, \frac{3}{4 w_{\min}^2}.
\]
Next, from the bound on ‖Δu‖ and ‖Δv‖ it follows that
\[
|\Delta u^T \Delta v| \le \|\Delta u\| \|\Delta v\| \le \Big( \frac{5\, w^T w}{(1 - \kappa) w_{\min}} \Big)^2, \tag{18}
\]
which implies that
\[
(n + \nu)\, \frac{\Delta u^T \Delta v}{w^T w} \le \frac{n + \nu}{w^T w} \Big( \frac{5\, w^T w}{(1 - \kappa) w_{\min}} \Big)^2 \le \frac{n + \nu}{n} \Big( \frac{5\, w^T w}{(1 - \kappa) w_{\min}^2} \Big)^2, \tag{19}
\]
where the last inequality is obtained by multiplying with w^T w/(n w_{\min}^2) ≥ 1. Moreover, the bound on ‖Δu‖ and ‖Δv‖ also implies that
\[
\frac{\|W^{-1} \Delta u\|^2 + \|W^{-1} \Delta v\|^2}{2(1 - \tau)} \le \frac{1}{1 - \tau} \Big( \frac{5\, w^T w}{(1 - \kappa) w_{\min}^2} \Big)^2. \tag{20}
\]
Adding up (19) and (20) and using ν ≤ 2n gives
\[
g_2 \le \Big( \frac{n + \nu}{n} + \frac{1}{1 - \tau} \Big) \Big( \frac{5\, w^T w}{(1 - \kappa) w_{\min}^2} \Big)^2 \le 5 \Big( \frac{5\, w^T w}{(1 - \kappa) w_{\min}^2} \Big)^2.
\]
Inserting g_1, g_2 and the definition of α into the quadratic form gives
\begin{align*}
\phi(x^k + \alpha \Delta x,\ z^k + \alpha \Delta z) - \phi(x^k, z^k)
&\le -(1 - \kappa)\, \frac{w^T w}{n + \nu}\, \frac{3}{4 w_{\min}^2}\, \alpha + 5 \Big( \frac{5\, w^T w}{(1 - \kappa) w_{\min}^2} \Big)^2 \alpha^2 \\
&= \frac{(1 - \kappa)^4}{(n + \nu)^2} \Big( -\frac{3}{4 \cdot 200} + 5 \Big( \frac{5}{200} \Big)^2 \Big) = -\delta,
\end{align*}
which shows that α satisfies (10a).
Finally, to verify that α satisfies (10b), a straightforward calculation shows that
\[
\Delta z^T x^k + \Delta x^T z^k = \Delta v^T w + \Delta u^T w = w^T (r + \xi) = \Big( \frac{n}{n + \nu} - 1 \Big) w^T w + w^T \xi
\]
and consequently
\begin{align*}
(x^k + \alpha \Delta x)^T (z^k + \alpha \Delta z) &= (x^k)^T z^k + \alpha (\Delta z^T x^k + \Delta x^T z^k) + \alpha^2 \Delta x^T \Delta z \\
&= (1 - \alpha)\, w^T w + \alpha \Big( \frac{n}{n + \nu}\, w^T w + w^T \xi + \alpha \Delta u^T \Delta v \Big).
\end{align*}
Using (18) and (8c) it follows for the term in parentheses that
\begin{align*}
\frac{n}{n + \nu}\, w^T w + w^T \xi + \alpha \Delta u^T \Delta v
&\ge \frac{(1 - \kappa) n}{n + \nu}\, w^T w - \alpha \Big( \frac{5\, w^T w}{(1 - \kappa) w_{\min}} \Big)^2 \\
&= \frac{(1 - \kappa)\, w^T w}{n + \nu} \Big( n - \frac{1}{8} \Big) > 0.
\end{align*}
Therefore α satisfies (10b), which completes the proof.
Theorem 2 follows from the lemma by the same argument as in [6]. Under the hypothesis of the theorem φ(x^0, z^0) = O(νL) and ln(1/ε) = O(L). Since φ(x, z) ≥ ν ln(x^T z) and the potential function decreases by at least δ in each iteration, Algorithm 2 terminates in O(νL/δ) = O(ν(n + ν)^2 L) iterations. When the algorithm stops in step 1, then (x^k)^T z^k ≤ ε and because of (10b)
\[
\|(Ax^k - b,\ A^T y^k + z^k - c)\| \le \varepsilon\, \|(Ax^0 - b,\ A^T y^0 + z^0 - c)\| / (x^0)^T z^0,
\]
so that the final iterate is indeed an ε-approximate solution. On the other hand, if there exist optimal solutions x* to (1) and (y*, z*) to (2) such that ‖(x*, z*)‖∞ ≤ ρ, then it follows from Lemma 6 that a step size exists which satisfies (10a) and (10b). Therefore, if the algorithm stops in step 3, then there are no such solutions.
Remark 1. Theorem 2 imposes the upper bound ν ≤ 2n, which is not needed in the analysis of the exact potential reduction method. The actual value of this bound, however, is not important: the proof remains valid for any ν = O(n) after adapting α and δ.
5 Discussion
The analysis has given some insight into the conditions (8a)–(8c). It has been seen from (14a) that −r^T ξ < ‖r‖^2 is necessary and sufficient for (Δx, Δz) to be a descent direction for φ, making (8a) a necessary condition in a potential reduction method. Condition (8b) bounds the curvature of φ along (Δx, Δz). When the iterate is feasible this condition can be replaced by ‖ξ‖ ≤ c‖r‖ for an arbitrary constant c, since then
\[
\|\Delta u\|^2 + \|\Delta v\|^2 = \|r + \xi\|^2 \le (1 + c)^2 \|r\|^2
\]
gives the required bound on g_2 in Lemma 4. For an infeasible iterate, however, condition (8b) is needed in its stated form to bound ‖Δu‖ and ‖Δv‖. Finally, condition (8c) guarantees that in the infeasible algorithm the step size restriction (10b) can be satisfied.
Inexact directions of the form (5) have been used and analysed in [1, 7] in the path-following method, which sets µ = σx^T z/n for σ < 1 and chooses the step size to keep x_i z_i ≥ γ x^T z/n for a constant γ ∈ (0, 1). Both papers use a basic-nonbasic splitting of the variables and solve (7) with residual ξ = (ξ_B, ξ_N) = (ξ_B, 0). [7] imposes the condition
\[
\|\xi_B\| \le \frac{(1 - \gamma)\sigma}{4\sqrt{n}} \sqrt{x^T z / n}, \tag{21}
\]
whereas
\[
\|W_B\, \xi_B\|_\infty \le \eta\, x^T z / n \tag{22}
\]
is used in [1] with η < 1 depending on σ and γ. Both conditions seem to require more effort from an iterative method than the conditions used in this paper. Condition (21) obviously becomes restrictive for large problems. Condition (22) is not affected by the problem dimension, but the infinity norm does not tolerate outliers in W_B ξ_B.
Another form of inexact direction has been analysed in [5], which solves the complementarity equations exactly and allows a residual in the primal and dual equations. Due to
the form of the Newton system, solving the complementarity equations exactly is trivial,
whereas computing directions that satisfy primal feasibility requires particular preconditioning techniques [2, 7, 8]. The analysis in [5] shows, however, that a residual in the feasibility
equations must be measured in a norm depending on A, which seems not to be accessible
in an implementation. Therefore this form of inexact direction is hardly useful in practice.
References
[1] G. Al-Jeiroudi and J. Gondzio. Convergence analysis of the inexact infeasible interior-point method for linear optimization. J. Optim. Theory Appl., 141(2):231–247, 2009.
[2] Ghussoun Al-Jeiroudi, Jacek Gondzio, and Julian Hall. Preconditioning indefinite systems in interior point methods for large scale linear optimisation. Optim. Methods
Softw., 23(3):345–363, 2008.
[3] Erling D. Andersen, Jacek Gondzio, Csaba Mészáros, and Xiaojie Xu. Implementation
of interior-point methods for large scale linear programs. In Interior point methods of
mathematical programming, volume 5 of Appl. Optim., pages 189–252. Kluwer Acad.
Publ., Dordrecht, 1996.
[4] Masakazu Kojima, Shinji Mizuno, and Akiko Yoshise. An O(√n L) iteration potential reduction algorithm for linear complementarity problems. Math. Programming, 50(3, Ser. A):331–342, 1991.
[5] Shinji Mizuno and Florian Jarre. Global and polynomial-time convergence of an
infeasible-interior-point algorithm using inexact computation. Math. Program., 84(1,
Ser. A):105–122, 1999.
[6] Shinji Mizuno, Masakazu Kojima, and Michael J. Todd. Infeasible-interior-point primal-dual potential-reduction algorithms for linear programming. SIAM J. Optim., 5(1):52–67, 1995.
[7] R. D. C. Monteiro and J. W. O’Neal. Convergence analysis of a long-step primal-dual infeasible interior-point LP algorithm based on iterative linear solvers. Technical report, School of ISyE, Georgia Tech, USA, 2003.
[8] A. R. L. Oliveira and D. C. Sorensen. A new class of preconditioners for large-scale
linear systems from interior point methods for linear programming. Linear Algebra
Appl., 394:1–24, 2005.
[9] Kunio Tanabe. Centered Newton method for mathematical programming. In System
modelling and optimization (Tokyo, 1987), volume 113 of Lecture Notes in Control and
Inform. Sci., pages 197–206. Springer, Berlin, 1988.
[10] Michael J. Todd and Yinyu Ye. A centered projective algorithm for linear programming.
Math. Oper. Res., 15(3):508–529, 1990.
[11] Stephen J. Wright. Primal-dual interior-point methods. Society for Industrial and
Applied Mathematics (SIAM), Philadelphia, PA, 1997.