Article
A Mean Extragradient Method for Solving Variational Inequalities

Apichit Buakird 1, Nimit Nimana 1,* and Narin Petrot 2,3

1 Department of Mathematics, Faculty of Science, Khon Kaen University, Khon Kaen 40002, Thailand; [email protected] or [email protected]
2 Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand; [email protected]
3 Center of Excellence in Nonlinear Analysis and Optimization, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand
* Correspondence: [email protected]
Abstract: We propose a modified extragradient method for solving the variational inequality problem in a Hilbert space. The method combines the well-known subgradient extragradient method with Mann's mean value method, in which the updated iterate is picked from the convex hull of all previous iterates. We show weak convergence of the mean value iterates to a solution of the variational inequality problem, provided that a condition on the corresponding averaging matrix is fulfilled. Some numerical experiments are given to show the effectiveness of the obtained theoretical result.
Keywords: averaging matrix; extragradient method; mean value iteration; variational inequalities;
weak convergence
Citation: Buakird, A.; Nimana, N.; Petrot, N. A Mean Extragradient Method for Solving Variational Inequalities. Symmetry 2021, 13, 462. https://doi.org/10.3390/sym13030462
Academic Editor: Sun Young Cho
Received: 21 February 2021; Accepted: 9 March 2021; Published: 12 March 2021
1. Introduction
Let $H$ be a real Hilbert space with an inner product $\langle\cdot,\cdot\rangle$ and the associated norm $\|\cdot\|$. Let $C$ be a nonempty closed convex subset of $H$, and let $F : H \to H$ be a monotone operator, that is,
$$\langle x - y, F(x) - F(y)\rangle \geq 0$$
for all $x, y \in H$, which is also $L$-Lipschitz continuous, that is,
$$\|F(x) - F(y)\| \leq L\|x - y\|$$
for all $x, y \in H$. The (Stampacchia) variational inequality problem is to find a point $x^* \in C$ such that
$$\langle F(x^*), z - x^*\rangle \geq 0 \quad \text{for all } z \in C. \tag{1}$$
We will denote the solution set of the considered variational inequality by $\operatorname{VIP}(F, C)$ and assume that it is nonempty. Since $\operatorname{VIP}(F, C)$ has been utilized for modeling many mathematical and practical situations (see [1] for an insightful discussion), many iterative methods have been proposed for solving it. The classical method is due to Goldstein [2] and can be read in the form: for a given $x_1 \in H$, calculate
$$x_{k+1} := P_C(x_k - \tau F(x_k)), \quad k \in \mathbb{N}, \tag{2}$$
where $\tau > 0$ is a step-size parameter and $P_C$ is the metric projection onto $C$. By assuming that $F$ is $\eta$-strongly monotone and $L$-Lipschitz continuous and $\tau \in (0, 2\eta/L^2)$, it has been proved that the sequence generated by (2) converges strongly to the unique solution of $\operatorname{VIP}(F, C)$.
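As a quick illustration of update (2), the following is a minimal Python sketch of the projected-gradient iteration; the operator F, the projection proj_C, the stepsize tau, and the stopping tolerance are placeholders supplied by the reader (the paper's own experiments use MATLAB).

```python
import numpy as np

def projected_gradient(F, proj_C, x1, tau, max_iter=1000, tol=1e-8):
    """Goldstein-type iteration (2): x_{k+1} = P_C(x_k - tau * F(x_k))."""
    x = np.asarray(x1, dtype=float)
    for _ in range(max_iter):
        x_next = proj_C(x - tau * F(x))
        if np.linalg.norm(x_next - x) <= tol:  # simple stopping heuristic
            return x_next
        x = x_next
    return x
```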
As the convergence of the iterative scheme (2) requires the strong monotonicity of $F$, which is quite restrictive, Korpelevich [3] proposed in 1976 the so-called extragradient method (EM, for short), defined as follows: for a given $x_1 \in H$, calculate
$$\begin{cases} y_k := P_C(x_k - \tau F(x_k)), \\ x_{k+1} := P_C(x_k - \tau F(y_k)), \end{cases} \quad k \in \mathbb{N}. \tag{3}$$
In the setting of a finite-dimensional space, it has been proved that the sequence generated by EM (3) converges to a solution of $\operatorname{VIP}(F, C)$ under the Lipschitz continuity and monotonicity of $F$. From this starting point, several variants of Korpelevich's EM have been investigated, see for instance [4–9] and the references cited therein. In particular, we highlight here the work of Censor, Gibali, and Reich [10]. As one can see from the above scheme, EM requires performing two metric projections in each iteration. For this reason, EM is suitable when the constrained set $C$ is simple enough that the metric projection $P_C$ onto $C$ has a closed-form expression; otherwise, one needs to solve a hidden minimization sub-problem. To avoid this situation, Censor, Gibali, and Reich proposed the so-called subgradient extragradient method (SEM), which requires only one metric projection onto $C$ for updating $y_k$, while the other projection is replaced by the metric projection onto a half-space containing $C$ for updating the next iterate $x_{k+1}$. The method essentially has the form:
$$\begin{cases} y_k := P_C(x_k - \tau F(x_k)), \\ x_{k+1} := P_{T_k}(x_k - \tau F(y_k)), \end{cases} \quad k \in \mathbb{N}, \tag{4}$$
where
$$T_k := \{w \in H : \langle (x_k - \tau F(x_k)) - y_k, w - y_k\rangle \leq 0\}.$$
It is worth noting that the closed-form expression of $P_{T_k}$ is explicitly given in the literature (see Formula (7) for further details). The weak convergence result is also given in the paper [10]. Several variants of SEM have been investigated, see for instance [11–16]. Note that, even though SEM has the advantage of removing the metric projection onto $C$ when computing $x_{k+1}$, the metric projection onto $C$ is still needed when evaluating $y_k$; in this situation, an inner-loop iteration remains whenever the constrained set $C$ is not simple enough, for example, when it is the intersection of a finite number of nonempty closed convex simple sets.
On the other hand, let us move to another aspect of nonlinear problems. Let $T : H \to H$ be a nonlinear operator; the celebrated fixed-point problem is to find $x^* \in \operatorname{Fix} T := \{x \in H : x = Tx\} \neq \emptyset$. In order to solve this problem, we recall the classical Picard iteration, which updates the next iterate $x_{k+1}$ by using the information of the current iterate $x_k$, that is,
$$x_{k+1} := Tx_k, \quad k \in \mathbb{N}.$$
This kind of method is known as a memoryless scheme, and it is well known in the literature that the sequence generated by Picard's iterative method may fail to converge to a point in $\operatorname{Fix} T$. In 1953, Mann [17] proposed a modified version of Picard's iteration as
$$x_{k+1} := T\bar{x}_k, \quad k \in \mathbb{N},$$
where $\bar{x}_k$ denotes a convex combination of the iterates $\{x_j\}_{j=1}^{k}$; in other words, $\bar{x}_k$ is a point in the convex hull of all previous iterates. This method is known as Mann's mean value iteration. Its advantage is that it can avoid some numerically undesirable situations, for instance, a generated sequence with zig-zag or spiral behavior around the solution set; see [18] for a more insightful discussion. Some works based on the idea of Mann's mean value iteration have been investigated, for instance [18,19].
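For illustration, the sketch below realizes Mann's mean value iteration $x_{k+1} = T\bar{x}_k$ with the simple uniform (Cesàro) weights $\alpha_{k,j} = 1/k$, which form one admissible averaging matrix; the operator T and the iteration count are placeholders.

```python
import numpy as np

def mann_mean_iteration(T, x1, num_iter=100):
    """Mann's mean value iteration x_{k+1} = T(x_bar_k) with uniform
    (Cesaro) weights alpha_{k,j} = 1/k."""
    iterates = [np.asarray(x1, dtype=float)]
    for k in range(1, num_iter + 1):
        x_bar = sum(iterates) / k   # mean of the first k iterates
        iterates.append(T(x_bar))
    return iterates[-1]
```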
In this work, we present an iterative method that combines the ideas of the celebrated SEM and Mann's mean value iteration for solving $\operatorname{VIP}(F, C)$ governed by a monotone and Lipschitz continuous operator. We show that the sequence generated by the proposed method converges weakly to a solution of $\operatorname{VIP}(F, C)$. To demonstrate the numerical behavior of the proposed method, we consider a constrained minimization problem in which the constrained set is the intersection of a finite family of nonempty closed convex simple sets. We present numerical experiments which show that, under suitable parameters, the proposed method outperforms the existing one.
2. Preliminaries
For convenience, we present here some notations which are used throughout the
paper. For more details, the reader may consult the reference books [20,21].
We denote the strong convergence and weak convergence of a sequence $\{x_k\}_{k=1}^{\infty}$ to $x \in H$ by $x_k \to x$ and $x_k \rightharpoonup x$, respectively. We denote the identity operator on $H$ by $\operatorname{Id}$.
Let $C$ be a nonempty, closed, and convex subset of $H$. For each point $x \in H$, there exists a unique nearest point in $C$, denoted by $P_C(x)$, that is,
$$\|x - P_C(x)\| = \inf_{y \in C}\|x - y\|. \tag{5}$$
The mapping $P_C : H \to C$ is called the metric projection of $H$ onto $C$. Note that $P_C$ is a nonexpansive mapping of $H$ onto $C$, i.e.,
$$\|P_C(x) - P_C(y)\| \leq \|x - y\|, \quad \forall x, y \in H.$$
Moreover, the metric projection $P_C$ satisfies the variational property:
$$\langle x - P_C(x), P_C(x) - y\rangle \geq 0, \quad \forall x \in H,\ y \in C. \tag{6}$$
Let $a \in H \setminus \{0\}$ and $\beta \in \mathbb{R}$. We define the hyperplane in $H$ by
$$H(a; \beta) := \{x \in H : \langle a, x\rangle = \beta\},$$
and the half-space in $H$ by
$$H_{\leq}(a; \beta) := \{x \in H : \langle a, x\rangle \leq \beta\}.$$
It is clear that both the hyperplane and the half-space are closed and convex sets. Moreover, it is important to note that the metric projection onto the half-space $H_{\leq}(a; \beta)$ can be computed explicitly by the following formula:
$$P_{H_{\leq}(a;\beta)}(x) := \begin{cases} x - \dfrac{\langle a, x\rangle - \beta}{\|a\|^2}\, a, & \text{if } \langle a, x\rangle > \beta, \\ x, & \text{if } \langle a, x\rangle \leq \beta. \end{cases} \tag{7}$$
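Formula (7) translates directly into code. The following Python sketch covers the finite-dimensional case $H = \mathbb{R}^n$ and is reused in the later algorithmic sketches; the names are illustrative only.

```python
import numpy as np

def project_halfspace(x, a, beta):
    """Metric projection of x onto the half-space {z : <a, z> <= beta}, Formula (7)."""
    gap = np.dot(a, x) - beta
    if gap > 0:                        # x lies strictly outside the half-space
        return x - (gap / np.dot(a, a)) * a
    return x                           # x already belongs to the half-space
```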
For a point $x \in H$ and a nonempty closed convex set $C \subset H$, we say that a point $z \in H$ separates $C$ from $x$ if
$$\langle x - z, z - y\rangle \geq 0, \quad \forall y \in C.$$
We say that an operator $T : C \to H$ is a separator of $C$ if the point $Tx$ separates $C$ from the point $x$ for all $x \in H$. It is clear from relation (6) that the projection $P_C$ is a separator of $C$. It is worth noting that for any $x \notin C$, the hyperplane $H(x - P_C(x);\ \langle P_C(x), x - P_C(x)\rangle)$ cuts the space $H$ into two half-spaces. One contains the element $x$, while the other contains the subset $C$. We also know that
$$C \subset H_{\leq}(x - P_C(x);\ \langle P_C(x), x - P_C(x)\rangle),$$
and the hyperplane $H(x - P_C(x);\ \langle P_C(x), x - P_C(x)\rangle)$ is a supporting hyperplane to $C$ at the point $P_C(x)$.
Let $A : H \to 2^H$ be a set-valued operator. We denote its graph by
$$\operatorname{Gr}(A) := \{(x, u) \in H \times H : u \in Ax\}.$$
We denote the set of all zeros of $A$ by
$$A^{-1}(0) := \{x \in H : 0 \in A(x)\}.$$
The operator $A$ is said to be monotone if
$$\langle x - y, u - v\rangle \geq 0$$
for all $(x, u), (y, v) \in \operatorname{Gr}(A)$, and it is called maximally monotone if its graph is not properly contained in the graph of any other monotone operator. Note that if $A$ is maximally monotone, then $A^{-1}(0)$ is a convex and closed set.
Let $C \subset H$ be a nonempty closed convex set. We denote by $N_C(x)$ the normal cone to $C$ at $x \in C$, i.e.,
$$N_C(x) := \{y \in H : \langle y, z - x\rangle \leq 0, \ \forall z \in C\}.$$
Let $F : H \to H$ be a monotone continuous operator and $C$ be a nonempty closed convex subset of $H$. Define the operator $A : H \to 2^H$ by
$$A(x) := \begin{cases} F(x) + N_C(x), & \text{for } x \in C, \\ \emptyset, & \text{for } x \notin C. \end{cases}$$
Then $A$ is a maximally monotone operator, and the following important property holds:
$$\operatorname{VIP}(F, C) = A^{-1}(0).$$
3. Mann’s Type Mean Extragradient Algorithm
In this section, we present a mean extragradient algorithm for solving the considered
variational inequality problem.
We start by recalling the notion of an averaging matrix. An infinite lower triangular matrix $[\alpha_{k,j}]_{k,j=1}^{\infty}$ is said to be averaging if the following conditions are satisfied:
(A1) for all $k, j \geq 1$, $\alpha_{k,j} \geq 0$;
(A2) for all $k \geq 1$, if $j > k$, then $\alpha_{k,j} = 0$;
(A3) for all $k \geq 1$, $\sum_{j=1}^{k}\alpha_{k,j} = 1$;
(A4) for all $j \geq 1$, $\lim_{k\to+\infty}\alpha_{k,j} = 0$.
For a sequence $\{x_k\}_{k=1}^{\infty} \subset H$ and an averaging matrix $[\alpha_{k,j}]_{k,j=1}^{\infty}$, we denote the mean iterate by
$$\bar{x}_k = \sum_{j=1}^{k}\alpha_{k,j}x_j,$$
for all $k \geq 1$.
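As a small illustration (in Python; all names here are illustrative), the mean iterate can be evaluated from the stored iterates and one row of an averaging matrix as follows.

```python
import numpy as np

def mean_iterate(iterates, weights_row):
    """Compute x_bar_k = sum_j alpha_{k,j} x_j for one row of an averaging matrix.

    iterates    : list of the first k iterates x_1, ..., x_k (NumPy arrays)
    weights_row : the weights alpha_{k,1}, ..., alpha_{k,k} (nonnegative, summing to 1)
    """
    X = np.stack(iterates)                    # shape (k, n)
    w = np.asarray(weights_row, dtype=float)  # shape (k,)
    return w @ X                              # convex combination of the iterates
```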
Now, we are in a position to state the Mann mean extragradient method (Mann-MEM) as Algorithm 1.
Algorithm 1: Mann's type mean extragradient method (Mann-MEM).
Initialization: Select a point $x_1 \in H$, a parameter $\tau > 0$, and an averaging matrix $[\alpha_{k,j}]_{k,j=1}^{\infty}$.
Step 1: Given a current iterate $x_k \in H$, compute the mean iterate
$$\bar{x}_k = \sum_{j=1}^{k}\alpha_{k,j}x_j.$$
Compute
$$y_k = P_C(\bar{x}_k - \tau F(\bar{x}_k)).$$
Step 2: If $y_k = \bar{x}_k$, then $\bar{x}_k \in \operatorname{VIP}(F, C)$ and STOP. If not, construct the half-space $T_k$ defined by
$$T_k := \{w \in H : \langle (\bar{x}_k - \tau F(\bar{x}_k)) - y_k, w - y_k\rangle \leq 0\},$$
and calculate the next iterate
$$x_{k+1} = P_{T_k}(\bar{x}_k - \tau F(y_k)).$$
Update $k = k + 1$ and go to Step 1.
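A minimal Python sketch of Algorithm 1 is given below; it reuses project_halfspace from the sketch after Formula (7), and F, proj_C, tau, and the callable weight_row (returning the k-th row of an averaging matrix) are assumptions supplied by the reader rather than part of the paper.

```python
import numpy as np

def mann_mem(F, proj_C, x1, tau, weight_row, max_iter=500, tol=1e-10):
    """Mann-MEM (Algorithm 1): subgradient extragradient steps taken at the mean iterate."""
    iterates = [np.asarray(x1, dtype=float)]
    for k in range(1, max_iter + 1):
        w = np.asarray(weight_row(k), dtype=float)  # alpha_{k,1..k}, nonnegative, summing to 1
        x_bar = w @ np.stack(iterates)              # Step 1: mean iterate
        y = proj_C(x_bar - tau * F(x_bar))
        if np.linalg.norm(y - x_bar) <= tol:        # Step 2: stopping test
            return x_bar
        a = (x_bar - tau * F(x_bar)) - y            # normal vector of the half-space T_k
        x_next = project_halfspace(x_bar - tau * F(y), a, np.dot(a, y))
        iterates.append(x_next)                     # store for the next mean iterate
    return x_bar
```

Note that when the vector a above is zero, project_halfspace simply returns its argument, which agrees with the observation in Section 4 that $T_k$ becomes the whole space in that degenerate case.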
Remark 1. In the case that the averaging matrix $[\alpha_{k,j}]_{k,j=1}^{\infty}$ is the identity matrix, Mann-MEM reduces to the classical subgradient extragradient method proposed by Censor et al. [10] (Algorithm 4.1).
The following proposition justifies the stopping criterion of Mann-MEM in Step 2.
Proposition 1. Let $\{\bar{x}_k\}_{k=1}^{\infty}$ and $\{y_k\}_{k=1}^{\infty}$ be sequences generated by Mann-MEM. If there is $k_0 \in \mathbb{N}$ such that $y_{k_0} = \bar{x}_{k_0}$, then $\bar{x}_{k_0} \in \operatorname{VIP}(F, C)$.
Proof. Let $k_0 \in \mathbb{N}$ be such that $y_{k_0} = \bar{x}_{k_0}$. Then, by the definition of $y_k$, we have $\bar{x}_{k_0} = y_{k_0} = P_C(\bar{x}_{k_0} - \tau F(\bar{x}_{k_0}))$, which yields $\bar{x}_{k_0} \in C$. For every $z \in C$, we have from inequality (6) that
$$\langle z - \bar{x}_{k_0}, \bar{x}_{k_0} - \tau F(\bar{x}_{k_0}) - \bar{x}_{k_0}\rangle \leq 0,$$
and then
$$\langle z - \bar{x}_{k_0}, F(\bar{x}_{k_0})\rangle \geq 0,$$
which holds by the fact that $\tau > 0$. Hence, we conclude that $\bar{x}_{k_0} \in \operatorname{VIP}(F, C)$, as required.
According to Proposition 1, for the rest of our convergence analysis, we may assume throughout this section that Mann-MEM does not terminate after a finite number of iterations, that is, we assume that $y_k \neq \bar{x}_k$ for all $k \geq 1$.
The following technical lemma is a key tool for proving the convergence of a sequence generated by Mann-MEM.
Lemma 1. Let $\{x_k\}_{k=1}^{\infty}$ be a sequence generated by Mann-MEM. For every $k \geq 1$ and $u \in \operatorname{VIP}(F, C)$, it holds that
$$\|x_{k+1} - u\|^2 \leq \|\bar{x}_k - u\|^2 - (1 - \tau^2 L^2)\|\bar{x}_k - y_k\|^2 \leq \sum_{j=1}^{k}\alpha_{k,j}\|x_j - u\|^2 - (1 - \tau^2 L^2)\|\bar{x}_k - y_k\|^2.$$
Proof. Let $k \geq 1$ and $u \in \operatorname{VIP}(F, C)$ be fixed. Since $F$ is monotone, we note that
$$\langle F(y_k) - F(u), y_k - u\rangle \geq 0,$$
which implies that
$$\langle F(y_k), y_k - u\rangle \geq \langle F(u), y_k - u\rangle \geq 0,$$
where the second inequality holds true by the fact that $y_k \in C$ and $u \in \operatorname{VIP}(F, C)$. Thus, we also have
$$\langle F(y_k), x_{k+1} - u\rangle \geq \langle F(y_k), x_{k+1} - y_k\rangle. \tag{8}$$
Now, invoking the definition of $T_k$, we note that
$$\langle x_{k+1} - y_k, (\bar{x}_k - \tau F(\bar{x}_k)) - y_k\rangle \leq 0,$$
and it follows that
$$\begin{aligned}
\langle x_{k+1} - y_k, \bar{x}_k - \tau F(y_k) - y_k\rangle &= \langle x_{k+1} - y_k, \bar{x}_k - \tau F(\bar{x}_k) - y_k\rangle + \langle x_{k+1} - y_k, -\tau(F(y_k) - F(\bar{x}_k))\rangle \\
&\leq \tau\langle x_{k+1} - y_k, F(\bar{x}_k) - F(y_k)\rangle. 
\end{aligned} \tag{9}$$
Denoting $z_k := \bar{x}_k - \tau F(y_k)$, we note that
$$\|x_{k+1} - u\|^2 = \|P_{T_k}(z_k) - u\|^2 = \|P_{T_k}(z_k) - z_k + z_k - u\|^2 = \|P_{T_k}(z_k) - z_k\|^2 + \|z_k - u\|^2 + 2\langle P_{T_k}(z_k) - z_k, z_k - u\rangle. \tag{10}$$
Note that it follows from the variational property of $P_{T_k}$ that
$$0 \geq 2\langle z_k - P_{T_k}(z_k), u - P_{T_k}(z_k)\rangle = 2\|z_k - P_{T_k}(z_k)\|^2 + 2\langle P_{T_k}(z_k) - z_k, z_k - u\rangle,$$
which yields
$$\|z_k - P_{T_k}(z_k)\|^2 + 2\langle P_{T_k}(z_k) - z_k, z_k - u\rangle \leq -\|z_k - P_{T_k}(z_k)\|^2. \tag{11}$$
By substituting (11) in (10), we obtain
$$\begin{aligned}
\|x_{k+1} - u\|^2 &\leq \|z_k - u\|^2 - \|z_k - P_{T_k}(z_k)\|^2 \\
&= \|\bar{x}_k - \tau F(y_k) - u\|^2 - \|\bar{x}_k - \tau F(y_k) - x_{k+1}\|^2 \\
&= \|\bar{x}_k - u\|^2 + \tau^2\|F(y_k)\|^2 - 2\tau\langle F(y_k), \bar{x}_k - u\rangle - \|\bar{x}_k - x_{k+1}\|^2 - \tau^2\|F(y_k)\|^2 + 2\tau\langle F(y_k), \bar{x}_k - x_{k+1}\rangle \\
&= \|\bar{x}_k - u\|^2 - \|\bar{x}_k - x_{k+1}\|^2 + 2\tau\langle F(y_k), u - x_{k+1}\rangle.
\end{aligned}$$
Thus, from the above inequality and by using (8) and (9), we have
$$\begin{aligned}
\|x_{k+1} - u\|^2 &\leq \|\bar{x}_k - u\|^2 - \|\bar{x}_k - x_{k+1}\|^2 + 2\tau\langle F(y_k), y_k - x_{k+1}\rangle \\
&\leq \|\bar{x}_k - u\|^2 - \|\bar{x}_k - y_k + y_k - x_{k+1}\|^2 + 2\tau\langle F(y_k), y_k - x_{k+1}\rangle \\
&= \|\bar{x}_k - u\|^2 - \|\bar{x}_k - y_k\|^2 - \|y_k - x_{k+1}\|^2 - 2\langle \bar{x}_k - y_k, y_k - x_{k+1}\rangle + 2\tau\langle F(y_k), y_k - x_{k+1}\rangle \\
&= \|\bar{x}_k - u\|^2 - \|\bar{x}_k - y_k\|^2 - \|y_k - x_{k+1}\|^2 - 2\langle y_k - x_{k+1}, \bar{x}_k - y_k - \tau F(y_k)\rangle \\
&\leq \|\bar{x}_k - u\|^2 - \|\bar{x}_k - y_k\|^2 - \|y_k - x_{k+1}\|^2 + 2\tau\langle x_{k+1} - y_k, F(\bar{x}_k) - F(y_k)\rangle \\
&\leq \|\bar{x}_k - u\|^2 - \|\bar{x}_k - y_k\|^2 - \|y_k - x_{k+1}\|^2 + 2\tau\|y_k - x_{k+1}\|\,\|F(\bar{x}_k) - F(y_k)\|.
\end{aligned} \tag{12}$$
By using the $L$-Lipschitz continuity of $F$ and the fact that $2ab \leq a^2 + b^2$ for all $a, b \in \mathbb{R}$, we have
$$\begin{aligned}
\|x_{k+1} - u\|^2 &\leq \|\bar{x}_k - u\|^2 - \|\bar{x}_k - y_k\|^2 - \|y_k - x_{k+1}\|^2 + 2\tau L\|y_k - x_{k+1}\|\,\|\bar{x}_k - y_k\| \\
&\leq \|\bar{x}_k - u\|^2 - \|\bar{x}_k - y_k\|^2 - \|y_k - x_{k+1}\|^2 + \tau^2 L^2\|\bar{x}_k - y_k\|^2 + \|y_k - x_{k+1}\|^2 \\
&= \|\bar{x}_k - u\|^2 - (1 - \tau^2 L^2)\|\bar{x}_k - y_k\|^2.
\end{aligned}$$
Finally, by using the assumption that $[\alpha_{k,j}]_{k,j=1}^{\infty}$ is an averaging matrix and the convexity of $\|\cdot\|^2$, we have
$$\begin{aligned}
\|x_{k+1} - u\|^2 &\leq \|\bar{x}_k - u\|^2 - (1 - \tau^2 L^2)\|\bar{x}_k - y_k\|^2 \\
&= \Big\|\sum_{j=1}^{k}\alpha_{k,j}x_j - \sum_{j=1}^{k}\alpha_{k,j}u\Big\|^2 - (1 - \tau^2 L^2)\|\bar{x}_k - y_k\|^2 \\
&= \Big\|\sum_{j=1}^{k}\alpha_{k,j}(x_j - u)\Big\|^2 - (1 - \tau^2 L^2)\|\bar{x}_k - y_k\|^2 \\
&\leq \sum_{j=1}^{k}\alpha_{k,j}\|x_j - u\|^2 - (1 - \tau^2 L^2)\|\bar{x}_k - y_k\|^2,
\end{aligned}$$
and the proof is complete.
Next, we recall the following concepts, which play a crucial role in the convergence analysis of our work. The following proposition is very useful in our convergence proof; it is due to [22] (Section 3.5, Theorem 4).

Proposition 2. Let $\{\varphi_k\}_{k=1}^{\infty}$ be a real sequence, $r \in \mathbb{R}$, and $[\alpha_{k,j}]_{k,j=1}^{\infty}$ be an averaging matrix. If $\varphi_k \to r$, then $\bar{\varphi}_k := \sum_{j=1}^{k}\alpha_{k,j}\varphi_j \to r$.
An averaging matrix $[\alpha_{k,j}]_{k,j=1}^{\infty}$ is said to be M-concentrating [18] if, for all nonnegative real sequences $\{\varphi_k\}_{k=1}^{\infty}$ and $\{\varepsilon_k\}_{k=1}^{\infty}$ such that $\sum_{k=1}^{\infty}\varepsilon_k < +\infty$ and
$$\varphi_{k+1} \leq \bar{\varphi}_k + \varepsilon_k, \tag{13}$$
where $\bar{\varphi}_k := \sum_{j=1}^{k}\alpha_{k,j}\varphi_j$ for all $k \geq 1$, the limit $\lim_{k\to\infty}\varphi_k$ exists.
Note that, in view of Lemma 1, if we add an additional prior condition on $\tau$ so that the term $-(1 - \tau^2 L^2)\|\bar{x}_k - y_k\|^2$ on the right-hand side is nonpositive, then, together with the assumption that the averaging matrix $[\alpha_{k,j}]_{k,j=1}^{\infty}$ is M-concentrating, this yields the convergence of the sequence $\{\|x_k - u\|^2\}_{k=1}^{\infty}$. Now, we are in a position to formally state the convergence analysis of Mann-MEM.
Theorem 1. Assume that the averaging matrix $[\alpha_{k,j}]_{k,j=1}^{\infty}$ is M-concentrating and $\tau \in (0, 1/L)$. Then any sequence $\{\bar{x}_k\}_{k=1}^{\infty}$ generated by Mann-MEM converges weakly to a solution in $\operatorname{VIP}(F, C)$.
Proof. Let $k \geq 1$ and $u \in \operatorname{VIP}(F, C)$ be fixed. We note from Lemma 1 that
$$\|x_{k+1} - u\|^2 \leq \sum_{j=1}^{k}\alpha_{k,j}\|x_j - u\|^2 - (1 - \tau^2 L^2)\|\bar{x}_k - y_k\|^2. \tag{14}$$
Since $\tau \in (0, 1/L)$, we have
$$0 < 1 - \tau^2 L^2 < 1, \tag{15}$$
and then inequality (14) can be written as
$$\|x_{k+1} - u\|^2 \leq \sum_{j=1}^{k}\alpha_{k,j}\|x_j - u\|^2.$$
Taking $\varphi_k := \|x_k - u\|^2$ and $\varepsilon_k := 0$ for every $k \geq 1$ in (13), and using the assumption that the averaging matrix $[\alpha_{k,j}]_{k,j=1}^{\infty}$ is M-concentrating, we obtain that $\lim_{k\to\infty}\|x_k - u\|^2$ exists, say $r(u) \in \mathbb{R}$. Invoking Proposition 2, the limit $\lim_{k\to\infty}\sum_{j=1}^{k}\alpha_{k,j}\|x_j - u\|^2$ exists and equals $r(u)$ as well, and subsequently, it follows from this together with (14) and $0 < 1 - \tau^2 L^2 < 1$ that
$$\lim_{k\to\infty}\|\bar{x}_k - y_k\| = 0. \tag{16}$$
Moreover, we note from Lemma 1 again that
$$\|x_{k+1} - u\|^2 \leq \|\bar{x}_k - u\|^2 \leq \sum_{j=1}^{k}\alpha_{k,j}\|x_j - u\|^2,$$
so we also have $\lim_{k\to\infty}\|\bar{x}_k - u\|^2 = r(u)$.
Since the sequence $\{\bar{x}_k\}_{k=1}^{\infty}$ is bounded, there are a weak cluster point $x' \in H$ and a subsequence $\{\bar{x}_{k_i}\}_{i=1}^{\infty}$ such that $\bar{x}_{k_i} \rightharpoonup x'$. Thus, it follows from (16) that $y_{k_i} \rightharpoonup x'$.
Now, let us define the operator $A : H \to 2^H$ by
$$A(v) := \begin{cases} F(v) + N_C(v), & \text{for } v \in C, \\ \emptyset, & \text{otherwise.} \end{cases}$$
Then we know that $A$ is a maximally monotone operator and $A^{-1}(0) = \operatorname{VIP}(F, C)$. Further, for $(v, w) \in \operatorname{Gr}(A)$, that is, $w \in A(v) = F(v) + N_C(v)$, we have $w - F(v) \in N_C(v)$, which means that
$$\langle w - F(v), v - y\rangle \geq 0 \quad \forall y \in C. \tag{17}$$
On the other hand, by the variational property of $y_k$, we have
$$\langle \bar{x}_k - \tau F(\bar{x}_k) - y_k, y_k - v\rangle \geq 0,$$
that is,
$$\left\langle \frac{y_k - \bar{x}_k}{\tau} + F(\bar{x}_k), v - y_k \right\rangle \geq 0, \tag{18}$$
for all $k \geq 1$. Hence, by using (17) with $y$ replaced by $y_{k_i}$ and (18) with $k$ replaced by $k_i$, we have
$$\begin{aligned}
\langle w, v - y_{k_i}\rangle &\geq \langle F(v), v - y_{k_i}\rangle \\
&\geq \langle F(v), v - y_{k_i}\rangle - \left\langle \frac{y_{k_i} - \bar{x}_{k_i}}{\tau} + F(\bar{x}_{k_i}), v - y_{k_i} \right\rangle \\
&= \langle F(v) - F(y_{k_i}), v - y_{k_i}\rangle + \langle F(y_{k_i}) - F(\bar{x}_{k_i}), v - y_{k_i}\rangle - \left\langle \frac{y_{k_i} - \bar{x}_{k_i}}{\tau}, v - y_{k_i} \right\rangle.
\end{aligned}$$
Taking the limit as $i \to \infty$, and using the monotonicity of $F$ for the first term together with the Lipschitz continuity of $F$, (16), and the boundedness of $\{y_{k_i}\}_{i=1}^{\infty}$ for the remaining terms, we obtain
$$\langle w, v - x'\rangle \geq 0.$$
Now, since $A$ is a maximally monotone operator, we obtain that $x' \in A^{-1}(0) = \operatorname{VIP}(F, C)$.
Next, we show that the whole sequence converges weakly to $x'$. Assume that there is a subsequence $\{\bar{x}_{k_j}\}_{j=1}^{\infty}$ of $\{\bar{x}_k\}_{k=1}^{\infty}$ that converges weakly to some $y' \neq x'$. By repeating the above arguments, we also obtain that $y' \in \operatorname{VIP}(F, C)$ and that $\lim_{k\to\infty}\|\bar{x}_k - y'\|$ exists. Invoking Opial's condition, we note that
$$\begin{aligned}
\lim_{k\to\infty}\|\bar{x}_k - x'\| &= \liminf_{i\to\infty}\|\bar{x}_{k_i} - x'\| < \liminf_{i\to\infty}\|\bar{x}_{k_i} - y'\| = \lim_{i\to\infty}\|\bar{x}_{k_i} - y'\| \\
&= \lim_{j\to\infty}\|\bar{x}_{k_j} - y'\| < \liminf_{j\to\infty}\|\bar{x}_{k_j} - x'\| = \lim_{k\to\infty}\|\bar{x}_k - x'\|,
\end{aligned}$$
which is a contradiction. Therefore, $x' = y'$, and hence we conclude that $\{\bar{x}_k\}_{k=1}^{\infty}$ converges weakly to $x'$.
Next, we discuss an important example of an M-concentrating averaging matrix. For simplicity, we will make use of the following notions. For a given averaging matrix $[\alpha_{k,j}]_{k,j=1}^{\infty}$, we denote
$$\alpha'_{k,j} := \alpha_{k+1,j} - (1 - \alpha_{k+1,k+1})\alpha_{k,j},$$
$$\rho_j := \max\left\{\sum_{k=j}^{\infty}\alpha_{k,j} - 1,\ 0\right\},$$
$$J_k := \{j \in \mathbb{N} : \alpha_{k,j} > 0\},$$
for all $k, j \geq 1$.
An averaging matrix $[\alpha_{k,j}]_{k,j=1}^{\infty}$ is said to satisfy the generalized segmenting condition [18] if
$$\sum_{k=1}^{\infty}\sum_{j=1}^{k}|\alpha'_{k,j}| < +\infty.$$
If $\alpha'_{k,j} = 0$ for all $k, j \geq 1$, then $[\alpha_{k,j}]_{k,j=1}^{\infty}$ is said to satisfy the segmenting condition.
The following proposition gives a necessary and sufficient condition for an averaging matrix satisfying the generalized segmenting condition to be M-concentrating.

Proposition 3. Let $[\alpha_{k,j}]_{k,j=1}^{\infty}$ be an averaging matrix satisfying the generalized segmenting condition. Then $[\alpha_{k,j}]_{k,j=1}^{\infty}$ is M-concentrating if and only if $\liminf_{k\to\infty}\alpha_{k,k} > 0$.

Proof. The sufficiency can be found in [18] (Example 2.5); the necessity is proved in [23] (Proposition 3.4).
Example 1. An averaging matrix that satisfies the generalized segmenting condition and $\liminf_{k\to\infty}\alpha_{k,k} > 0$ is the infinite matrix $[\alpha_{k,j}]_{k,j=1}^{\infty}$ defined by
$$\alpha_{k,j} := \begin{cases} (1-\alpha)^{k-1}, & \text{if } j = 1 \text{ and } k \geq 1, \\ 0, & \text{if } j \geq 2 \text{ and } k < j, \\ \alpha(1-\alpha)^{k-j}, & \text{if } j \geq 2 \text{ and } k \geq j, \end{cases}$$
where the parameter $\alpha \in (0, 1)$.
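A useful consequence of this particular matrix is that the mean iterate obeys the recursion $\bar{x}_1 = x_1$ and $\bar{x}_{k+1} = (1 - \alpha)\bar{x}_k + \alpha x_{k+1}$, obtained by splitting off the last term of the defining sum; hence $\bar{x}_k$ can be updated without storing all previous iterates. The following short Python sketch (illustrative names only) builds a finite truncation of the matrix and checks condition (A3) numerically.

```python
import numpy as np

def averaging_matrix(alpha, K):
    """K x K truncation of the matrix in Example 1: alpha_{k,1} = (1-alpha)^(k-1),
    alpha_{k,j} = alpha*(1-alpha)^(k-j) for 2 <= j <= k, and 0 above the diagonal."""
    A = np.zeros((K, K))
    for k in range(1, K + 1):
        A[k - 1, 0] = (1 - alpha) ** (k - 1)
        for j in range(2, k + 1):
            A[k - 1, j - 1] = alpha * (1 - alpha) ** (k - j)
    return A

A = averaging_matrix(alpha=0.9, K=6)
print(A.sum(axis=1))   # every row sums to 1, i.e., condition (A3)
# mean iterates for this matrix reduce to the exponential moving average
# x_bar_{k+1} = (1 - alpha) * x_bar_k + alpha * x_{k+1}
```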
4. Numerical Results
In this section, we demonstrate the effectiveness of the proposed algorithm by minimizing the distance from a given point to the intersection of a finite number of linear half-spaces. Let $c \in \mathbb{R}^n$, $a_i \in \mathbb{R}^n$, and $b_i \geq 0$ be given data for all $i = 1, \ldots, m$. In this experiment, we want to solve the constrained minimization problem
$$\begin{aligned}
\text{minimize}\quad & \tfrac{1}{2}\|x - c\|^2 \\
\text{subject to}\quad & \langle a_i, x\rangle \leq b_i, \quad i = 1, \ldots, m.
\end{aligned} \tag{19}$$
Note that the function $f := \tfrac{1}{2}\|\cdot - c\|^2$ is convex and Fréchet differentiable with a $1$-Lipschitz continuous gradient $\nabla f$, and each constraint set $C_i := \{x \in \mathbb{R}^n : \langle a_i, x\rangle \leq b_i\}$, $i = 1, \ldots, m$, is a nonempty closed convex set. Thus, problem (19) fits into the setting of the variational inequality problem (1) with $F = \nabla f$ and $C = \bigcap_{i=1}^{m} C_i$. One can easily see that $F$ is $1$-Lipschitz continuous (indeed, $F(x) = \nabla f(x) = x - c$). In this situation, the obtained theoretical results hold and we can apply Mann-MEM to solve problem (19).
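For this test problem, the pieces required by the algorithms are easy to write down. A possible sketch (in Python rather than the MATLAB used for the reported experiments; names are illustrative) is:

```python
import numpy as np

def F(x, c):
    """Gradient of f(x) = 0.5 * ||x - c||^2; it is 1-Lipschitz continuous."""
    return x - c

def project_Ci(x, a_i, b_i):
    """Projection onto the half-space C_i = {x : <a_i, x> <= b_i}, via Formula (7)."""
    gap = np.dot(a_i, x) - b_i
    return x - (gap / np.dot(a_i, a_i)) * a_i if gap > 0 else x
```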
All the experiments were performed under MATLAB 9.6 (R2019a) running on a MacBook Pro 13-inch, 2019 with a 2.4 GHz Intel Core i5 processor and 8 GB 2133 MHz LPDDR3 memory. All computational times are given in seconds (sec.). In all tables of computational results, SEM denotes the classical subgradient extragradient method [10], while Mann-MEM denotes the Mann type mean extragradient method in which the generalized segmenting averaging matrix $[\alpha_{k,j}]_{k,j=1}^{\infty}$ is given by
$$\alpha_{k,j} := \begin{cases} (1-\alpha)^{k-1}, & \text{if } j = 1 \text{ and } k \geq 1, \\ 0, & \text{if } j \geq 2 \text{ and } k < j, \\ \alpha(1-\alpha)^{k-j}, & \text{if } j \geq 2 \text{ and } k \geq j, \end{cases} \tag{20}$$
where the parameter $\alpha \in (0, 1)$. Note that the set $T_k := H_{\leq}((\bar{x}_k - \tau F(\bar{x}_k)) - y_k;\ \langle y_k, (\bar{x}_k - \tau F(\bar{x}_k)) - y_k\rangle)$ in Mann-MEM is a half-space whose bounding hyperplane supports $C$ at the point $y_k$. In this situation, the metric projection $P_{T_k}$ can be computed explicitly by Formula (7), provided that $(\bar{x}_k - \tau F(\bar{x}_k)) - y_k \neq 0$. Otherwise, if $(\bar{x}_k - \tau F(\bar{x}_k)) - y_k = 0$, the half-space $T_k$ turns out to be the whole space $H$, so that the iterate $x_{k+1}$ is nothing else but $\bar{x}_k - \tau F(y_k)$.
Observe that extragradient type methods require computing the metric projection onto the constrained set $C$, which here is the intersection of a finite number of linear half-spaces. Of course, the metric projection $P_C$ onto this constrained set cannot be computed explicitly, and we need to solve the sub-optimization problem (5) in order to obtain it. To deal with this situation, we make use of the classical Halpern iteration as an inner loop: pick an arbitrary initial point $\varphi_1 \in \mathbb{R}^n$ and a sequence $\{\lambda_i\}_{i=1}^{\infty}$, and compute
$$\varphi_{i+1} := \lambda_i(\bar{x}_k - \tau F(\bar{x}_k)) + (1 - \lambda_i)P_{C_m}P_{C_{m-1}}\cdots P_{C_2}P_{C_1}\varphi_i, \quad \forall i \geq 1. \tag{21}$$
It is well known that if the sequence $\{\lambda_i\}_{i=1}^{\infty} \subset (0, 1)$ satisfies $\lim_{i\to\infty}\lambda_i = 0$, $\sum_{i=1}^{\infty}\lambda_i = +\infty$, and $\sum_{i=1}^{\infty}|\lambda_{i+1} - \lambda_i| < +\infty$, then the sequence $\{\varphi_i\}_{i=1}^{\infty}$ converges to the unique point $P_C(\bar{x}_k - \tau F(\bar{x}_k))$ (see [20], Theorem 30.1), which is nothing else than the point $y_k$ in Mann-MEM. In order to approximate the point $y_k$, in all experiments we use the stopping criterion $\|\varphi_{i+1} - \varphi_i\| / \|\varphi_{i+1}\| \leq 10^{-8}$ for the inner loop. Notice that this strategy is also used when performing SEM.
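The inner loop (21), together with its stopping rule, can be sketched as follows; the anchor point is $u = \bar{x}_k - \tau F(\bar{x}_k)$ (or $x_k - \tau F(x_k)$ for SEM), and the projector list and parameter names are assumptions made only for illustration.

```python
import numpy as np

def halpern_projection(u, projections, lam=1.9, tol=1e-8, max_inner=10**6):
    """Approximate P_C(u) for C = intersection of half-spaces via the Halpern loop (21).

    u           : anchor point, e.g., x_bar_k - tau * F(x_bar_k)
    projections : projectors onto C_1, ..., C_m (e.g., partial applications of project_Ci)
    """
    u = np.asarray(u, dtype=float)
    phi = u.copy()
    for i in range(1, max_inner + 1):
        lam_i = lam / (i + 1)            # lambda_i -> 0 and the series of lambda_i diverges
        z = phi
        for P in projections:            # apply P_{C_1} first, ..., P_{C_m} last, as in (21)
            z = P(z)
        phi_next = lam_i * u + (1 - lam_i) * z
        if np.linalg.norm(phi_next - phi) <= tol * np.linalg.norm(phi_next):
            return phi_next              # inner stopping criterion used in the paper
        phi = phi_next
    return phi
```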
In the first experiment, we considered the behavior of SEM and Mann-MEM in a very simple situation. We chose $n = 2$, $m = 3$, $c = [0.1, 0.1]^{\top}$, $a_1 = [-1.5, 1]^{\top}$, $a_2 = [1, -1]^{\top}$, $a_3 = [1, -2]^{\top}$, and $b_1 = b_2 = b_3 = 0$. It can be noted that the unique solution of the problem is nothing else than the point $[0.1, 0.1]^{\top}$. We start with the influence of the inner-loop stepsize $\lambda_i = \lambda/(i + 1)$ for various choices of the parameter $\lambda \in (0, 2)$ when performing SEM and Mann-MEM. We chose the starting point $x_1 = [-0.2, -0.15]^{\top}$, the stepsize $\tau = 0.5$, and the parameter $\alpha$ in (20) to be $0.9$. We terminated SEM when $\|x_k - c\| \leq 10^{-5}$ or after 100 iterations, whichever came first, and Mann-MEM when $\|\bar{x}_k - c\| \leq 10^{-5}$ or after 100 iterations, whichever came first. We present in Table 1 the influence of the parameters $\lambda \in [1.3, 1.9]$ on the computational time (Time), the number of iterations $k$ (#(Iters)), and the total number of inner iterations $i$ given by (21) (#(Inner)) when the stopping criteria were met.
Table 1. Influence of the inner-loop stepsize $\lambda_i = \lambda/(i + 1)$ for several parameters $\lambda > 0$ when performing the subgradient extragradient method (SEM) and the Mann mean extragradient method (Mann-MEM).

Method      λ     Time      #(Iters)   #(Inner)
SEM         1.3   0.1826    14         47,630
SEM         1.4   0.1501    15         37,476
SEM         1.5   0.1177    15         28,591
SEM         1.6   0.0906    16         23,691
SEM         1.7   >0.2871   >100       >84,555
SEM         1.8   0.0899    30         23,648
SEM         1.9   0.0699    28         17,749
Mann-MEM    1.3   >1.0595   >100       >319,322
Mann-MEM    1.4   >0.7589   >100       >224,422
Mann-MEM    1.5   0.1180    18         33,285
Mann-MEM    1.6   0.0924    18         25,846
Mann-MEM    1.7   0.0906    19         21,495
Mann-MEM    1.8   0.0851    30         23,508
Mann-MEM    1.9   0.0607    23         15,925
It can be seen from Table 1 that, for each of the two tested algorithms, larger values of the parameter $\lambda$ give better performance, that is, the least computational time is achieved when the parameter $\lambda$ is as large as possible. This behavior is probably due to the larger stepsize $\lambda_i$, determined by the parameter $\lambda$, allowing the inner loop (21) to terminate in fewer iterations, so that the algorithmic runtime decreases. However, we can see that SEM with $\lambda = 1.7$ and Mann-MEM with $\lambda = 1.3, 1.4$ need more than 100 iterations to reach the stopping criterion. We observe that the best performance of both SEM and Mann-MEM is obtained by the choice $\lambda = 1.9$; moreover, Mann-MEM with $\lambda = 1.9$ gives the best runtime of 0.0607 seconds.
In Figure 1, we report experiments varying the stepsize $\tau > 0$ in the two tested methods, with the same setting as in the above experiment and the inner-loop stepsize $\lambda_i = 1.9/(i + 1)$ for both SEM and Mann-MEM. We observe that the best computational time for both SEM and Mann-MEM is obtained by the choice $\tau = 0.6$.
Figure 1. Influences of the stepsize τ > 0 when performing SEM and Mann-MEM.
For more insight into the convergence behavior of Mann-MEM, we also consider the influence of the parameter $\alpha$ in Mann-MEM. We set $\lambda_i = 1.9/(i + 1)$ and $\tau = 0.6$; the results are presented in Figure 2. It can be observed that the least computational time and the smallest number of iterations are achieved when the parameter $\alpha$ is quite large, that is, the best performance is obtained by the choice $\alpha = 0.99$.
Figure 2. Influences of the parameter α > 0 for Mann-MEM.
In the next experiment, we also considered solving problem (19) by the aforementioned tested methods. We compare the methods for various dimensions $n$ and numbers of constraints $m$. We take vectors $a_i$, $i = 1, \ldots, m$, whose coordinates are randomly chosen from the interval $[-m, m]$, positive real numbers $b_i = 0.5$, $i = 1, \ldots, m$, and an initial point $x_1$ whose coordinates are randomly chosen from the interval $[0, 1]$. We set the point $c$ to be the vector whose coordinates are all $1$, and choose the best parameters $\lambda = 1.9$ and $\tau = 0.6$ for SEM, and $\lambda = 1.9$, $\tau = 0.6$, and $\alpha = 0.99$ for Mann-MEM. In the following numerical experiments, in order to terminate SEM, we applied the stopping criterion
$$\max\left\{\frac{\|x_{k+1} - x_k\|}{\|x_k\| + 1},\ \|x_k - y_k\|\right\} \leq 10^{-5}, \tag{22}$$
and in order to terminate Mann-MEM, we applied the stopping criterion
$$\max\left\{\frac{\|x_{k+1} - x_k\|}{\|x_k\| + 1},\ \|\bar{x}_k - y_k\|\right\} \leq 10^{-5}. \tag{23}$$
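In code, criteria (22) and (23) amount to a one-line check; a hedged sketch (names illustrative):

```python
import numpy as np

def should_stop(x_next, x, ref, y, eps=1e-5):
    """Criterion (22) with ref = x_k (SEM) or criterion (23) with ref = x_bar_k (Mann-MEM)."""
    return max(np.linalg.norm(x_next - x) / (np.linalg.norm(x) + 1),
               np.linalg.norm(ref - y)) <= eps
```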
We performed 10 independent tests for each combination of dimensions $n = 500, 1000, 2000, 3000$ and numbers of constraints $m = 50, 100, 200$. The results are presented in Table 2, where the average computational runtime and the average number of iterations for each combination of $n$ and $m$ are reported.
Table 2. Behavior of SEM and Mann-MEM for different dimensions (n) and different numbers of constraints (m).

                     SEM                     Mann-MEM
n      m      Time        #(Iters)    Time        #(Iters)
500    50     38.7986     51.0        36.3368     51.2
500    100    94.3647     51.0        88.4383     51.0
500    200    248.4960    50.6        239.0405    51.0
1000   50     61.8089     52.0        58.6253     53.0
1000   100    143.5451    52.0        137.0350    53.0
1000   200    368.2668    52.0        344.8198    52.7
2000   50     123.3731    53.1        118.4089    54.0
2000   100    257.4444    53.0        245.7529    54.0
2000   200    604.0555    53.0        576.3775    54.0
3000   50     247.8855    54.0        242.2706    55.0
3000   100    452.0647    54.0        440.8821    55.0
3000   200    1070.5699   54.0        1031.5349   55.0
It is clear from Table 2 that Mann-MEM is more efficient than SEM in the sense that Mann-MEM requires less average computational runtime than SEM. One notable behavior is that when $m$ is quite large, Mann-MEM requires significantly less average computational runtime. For each dimension, we observe that larger problem sizes need more average computational runtime. This suggests that using the generalized segmenting averaging matrix makes the method more efficient than SEM. In this situation, we note that the essential superiority of Mann-MEM over SEM depends on a suitable choice of the averaging matrix $[\alpha_{k,j}]_{k,j=1}^{\infty}$, which is, in our experiments, the generalized segmenting averaging matrix.
5. Some Concluding Remarks
The objective of this work was to solve a variational inequality problem governed by a monotone and Lipschitz continuous operator. We associated with it the so-called Mann's mean extragradient method and proved weak convergence of the generated sequence of iterates to a solution of the considered problem. Numerical experiments show that, under suitable parameters, the proposed method has better convergence behavior than the classical subgradient extragradient method. For future work, some comments are in order.
(i) The convergence of Mann-MEM requires knowledge of the Lipschitz constant $L$ of the operator $F$; nevertheless, it is sometimes difficult to determine the Lipschitz constant exactly, so that Mann-MEM and its convergence result may not be practically applicable. It would be very interesting to consider a variant of Mann-MEM with a variable stepsize $\{\tau_k\}_{k=1}^{\infty}$ in place of the fixed stepsize $\tau > 0$, for which prior knowledge of $L$ is not required.
(ii) The superiority of Mann-MEM with respect to SEM depends on a suitable choice of the averaging matrix $[\alpha_{k,j}]_{k,j=1}^{\infty}$. It is also very interesting to find more possible examples of averaging matrices satisfying the M-concentrating condition.
Author Contributions: Conceptualization, N.N. and N.P.; funding acquisition, N.N.; investigation,
A.B., N.N. and N.P.; methodology, A.B., N.N. and N.P.; project administration, N.N.; software, N.N.
and N.P.; writing-original draft, A.B., N.N. and N.P.; writing-review and editing, A.B., N.N. and N.P.
All authors have read and agreed to the published version of the manuscript.
Funding: This work is supported by the Thailand Research Fund and Office of the Higher Education
Commission under the Project MRG6280079.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Acknowledgments: The authors are thankful to the Editor and two anonymous referees for comments and remarks which improved the quality and presentation of the paper.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; Academic Press: New York, NY, USA, 1980.
2. Goldstein, A.A. Convex programming in Hilbert space. Bull. Am. Math. Soc. 1964, 70, 709–710.
3. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekon. i Mat. Metody 1976, 12, 747–756.
4. Cho, S.Y. Hybrid algorithms for variational inequalities involving a strict pseudocontraction. Symmetry 2019, 11, 1502, doi:10.3390/sym11121502.
5. Cholamjiak, P.; Thong, D.V.; Cho, Y.J. A novel inertial projection and contraction method for solving pseudomonotone variational inequality problems. Acta Appl. Math. 2020, 169, 217–245.
6. Hieu, D.V.; Cho, Y.J.; Xiao, Y.-B.; Kumam, P. Relaxed extragradient algorithm for solving pseudomonotone variational inequalities in Hilbert spaces. Optimization 2020, 69, 2279–2304.
7. Muangchoo, K.; Alreshidi, N.A.; Argyros, I.K. Approximation results for variational inequalities involving pseudomonotone bifunction in real Hilbert spaces. Symmetry 2021, 13, 182, doi:10.3390/sym13020182.
8. Thong, D.V.; Vinh, N.T.; Cho, Y.J. New strong convergence theorem of the inertial projection and contraction method for variational inequality problems. Numer. Algorithms 2020, 84, 285–305.
9. Yao, Y.; Postolache, M.; Yao, J.-C. Strong convergence of an extragradient algorithm for variational inequality and fixed point problems. UPB Sci. Bull. Ser. A 2020, 82, 3–12.
10. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
11. Gibali, A. A new non-Lipschitzian method for solving variational inequalities in Euclidean spaces. J. Nonlinear Anal. Optim. 2015, 6, 41–51.
12. Kraikaew, R.; Saejung, S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2014, 163, 399–412.
13. Malitsky, Y.; Semenov, V. An extragradient algorithm for monotone variational inequalities. Cybern. Syst. Anal. 2014, 50, 271–277.
14. Thong, D.V.; Hieu, D.V. Modified subgradient extragradient algorithms for variational inequality problems and fixed point problems. Optimization 2018, 67, 83–102.
15. Thong, D.V.; Hieu, D.V. Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems. Numer. Algorithms 2019, 80, 1283–1307.
16. Yang, J.; Liu, H.; Li, G. Convergence of a subgradient extragradient algorithm for solving monotone variational inequalities. Numer. Algorithms 2020, 84, 389–405.
17. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510.
18. Combettes, P.L.; Pennanen, T. Generalized Mann iterates for constructing fixed points in Hilbert spaces. J. Math. Anal. Appl. 2002, 275, 521–536.
19. Combettes, P.L.; Glaudin, L.E. Quasi-nonexpansive iterations on the affine hull of orbits: From Mann's mean value algorithm to inertial methods. SIAM J. Optim. 2017, 27, 2356–2380.
20. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; CMS Books in Mathematics; Springer: Cham, Switzerland, 2017.
21. Cegielski, A. Iterative Methods for Fixed Point Problems in Hilbert Spaces; Lecture Notes in Mathematics 2057; Springer: Berlin, Germany, 2012.
22. Knopp, K. Infinite Sequences and Series; Dover: New York, NY, USA, 1956.
23. Jaipranop, C.; Saejung, S. On the strong convergence of sequences of Halpern type in Hilbert spaces. Optimization 2018, 67, 1895–1922.