
Received December 10, 2018, accepted December 29, 2018, date of publication January 9, 2019, date of current version January 29, 2019.


Digital Object Identifier 10.1109/ACCESS.2019.2891506

Robust Model Predictive Control for a Class of
Discrete-Time Markovian Jump Linear Systems
With Operation Mode Disordering
HONGBIN CAI1, PING LI2, CHENGLI SU2, AND JIANGTAO CAO2
1 School of Automation, Northwestern Polytechnical University, Xi’an 710072, China
2 School of Information and Control Engineering, Liaoning Shihua University, Fushun 113001, China

Corresponding author: Ping Li ([email protected])


This work was supported in part by the National Natural Science Foundation of China under Grant 61673199 and in part by the State Key
Laboratory of Industrial Control Technology, Zhejiang University, China, through the Open Research Project, under Grant ICT1800400.

ABSTRACT For a class of discrete-time Markovian jump linear systems subject to operation mode disordering, a robust model predictive control method is proposed. A bijective mapping between the original random process and a new random process is introduced to cope with the operation mode disordering. At each sampling time, the original ''min–max'' optimization problem is transformed into a convex optimization problem with linear matrix inequalities, so that the complexity of solving the optimization problem is greatly reduced. A sufficient stability condition for the Markovian jump linear systems is derived using Lyapunov stability theory. Moreover, a state feedback control law is obtained that minimizes an infinite prediction horizon performance cost. Furthermore, the cases of uncertain and unknown transition probabilities are also considered in this paper. The simulation results show that the proposed method guarantees the optimal control performance and the stability of the Markovian jump linear systems.

INDEX TERMS Robust model predictive control, Markovian jump linear systems (MJLSs), operation mode
disordering, linear matrix inequalities (LMIs).

I. INTRODUCTION
Markovian jump linear systems (MJLSs) are a particular class of hybrid systems that have become increasingly important because of their many applications and theoretical value [1], [2]. In practice, a great number of systems whose parameters or structures change suddenly can be described by a Markov model, such as electrical systems, networked control systems [3], [4], and so on. The main characteristic of MJLSs is that the system dynamics change in response to both time triggers and event triggers. The discrete event-triggered changes are known as the modes of the system, and the mode switching law can be described by a Markov chain [5]. In the past decades, MJLSs have been the focus of many studies, such as stability analysis [6]–[8], sampled-data control [9], [10], state estimation [11], neural network control [12], H∞ filtering and control [13]–[15], fault detection [16], fault-tolerant control [17], and sliding mode control [18]–[22].

On the other hand, model predictive control (MPC) is a very popular and practical control strategy. MPC has already been used in many applications, for example, in petrochemical processes, flotation processes, pharmaceutical processes, crystallization processes, and networked control systems [23]–[27]. This method is a closed-loop optimization control strategy based on a model such as the T-S model [28]–[30] or the Hammerstein-Wiener model [31]–[33]. The goals of MPC are to predict the future dynamic behavior of the system, to achieve rolling optimization, and to provide feedback correction of the model error [34]–[42]. Moreover, for general uncertainties and disturbances, a large number of results based on the robust model predictive control (RMPC) method are available [43]–[47]. It is known that MJLSs can also be treated as a class of uncertain systems, where the uncertainty conforms to certain statistical probabilities. This probability information can be reasonably included in designing a predictive controller, which can achieve better control performance and reduce conservativeness.

Recently, some important results on MPC for MJLSs have been obtained. In [48], a one-step MPC scheme was proposed for uncertain discrete-time MJLSs whose

transition probabilities were assumed to be convex sets. For discrete-time MJLSs with polytopic uncertainties, a novel multi-step mode-dependent MPC method was proposed and the mean-square stability of three cases was guaranteed [49]. Yang and Karimi [50] studied a novel MPC method for a class of uncertain fuzzy MJLSs with partially unknown transition probabilities. Moreover, for discrete-time non-homogeneous MJLSs with time-varying transition probability matrices, an N-step off-line suboptimal MPC was proposed [51]. Reference [52] studied the stochastic model predictive control (SMPC) of nonlinear MJLSs, where terminal conditions of invariance and stability were used to fulfill the robustness constraint and guarantee mean-square stability. Chitraganti et al. [53] studied a one-step receding horizon control method for discrete-time state-dependent MJLSs subject to probabilistic state constraints and unbounded disturbances. On the other hand, for MJLSs subject to input/state constraints, an MPC method based on a periodic invariant set was designed [54]. Furthermore, an MPC method was developed for a class of discrete-time nonlinear Markovian jump systems with non-homogeneous transition probabilities [55]. Reference [56] studied a robust distributed model predictive control (DMPC) strategy for MJLSs with polytopic uncertainties in both the system matrices and the transition probability matrices. At the same time, stable receding-horizon scenario predictive control of constrained discrete-time MJLSs was also studied [57]. For Markovian jump linear systems with bounded disturbances, Lu et al. [58] studied a constrained model predictive control method to achieve disturbance rejection. For a class of constrained discrete-time Markovian nonlinear stochastic switching systems, Dombrovskii et al. [59] proposed an MPC method and applied it to dynamic investment portfolio selection in the presence of market frictions. Zhang et al. [60] proposed a distributed model predictive control strategy for saturating systems with packet dropouts.

We note that in all of the above studies the MPC method depends on the operation modes of the system: it is assumed that the system modes and the operation modes of the controllers are synchronous and arrive in the right sequence. However, in practice, operation mode disordering is common. For example, in networked control systems, the data packets can travel over multiple paths and thus experience different time delays along these different paths. This can result in packets that were launched earlier reaching the target point later than packets that were sent later; in other words, the data packets arrive in the incorrect order. If the control signals of the MJLSs are transmitted over unreliable networks, disordered operation modes can easily occur. Moreover, this can lead to instability of the control system and poor control performance. Therefore, it is very important to deal with the problem of MJLSs with operation mode disordering [61]. Until now, to the best of our knowledge, discrete-time MJLSs with operation mode disordering have not been sufficiently studied, which motivates the study in this paper.

The main contributions of this paper can be highlighted as follows: (1) For a class of discrete-time MJLSs subject to operation mode disordering, a novel RMPC method is proposed to guarantee the stability and the optimal performance of the system. (2) To better address operation mode disordering in MJLSs, transition probabilities with uncertainty and with incomplete information are also studied, respectively; the problems discussed here are more general than most of those in the existing literature. (3) From a technical perspective, considering transition probabilities with uncertainty and incomplete information greatly increases the complexity of the optimization problem, which is also the challenge and the innovation of this study.

The rest of the paper is arranged as follows. The problem description is given in Section 2. Section 3 proposes the RMPC method for MJLSs with operation mode disordering. Section 4 contains the main results. Section 5 provides numerical examples. Section 6 gives the conclusion and our future work.

Notations: R^n represents the n-dimensional Euclidean space. x(k|k) denotes the measured state at sampling time k. A^T denotes the transpose of matrix A. u(k+i|k) and x(k+i|k) represent the control input and the predicted state at step k + i, predicted at time k, respectively. ||x(k)||_2 represents the Euclidean norm of the state vector x(k). Ω refers to the sample space, F refers to the σ-algebra of subsets of the sample space, and P refers to the probability measure on F. E refers to the mathematical expectation. The symbol ''∗'' denotes the symmetric parts of symmetric matrices. I represents the identity matrix of compatible dimensions, and (G)⋆ = G + G^T.

II. PROBLEM FORMULATION
Consider a class of discrete-time MJLSs defined on a complete probability space (Ω, F, P):

x(k + 1) = A_{θ1(k)} x(k) + B_{θ1(k)} u(k)   (1)

where x(k) ∈ R^{nx} represents the system state, u(k) ∈ R^{nu} represents the control input, and θ1(k) represents the operation mode of the system. A_{θ1(k)} and B_{θ1(k)} are system matrices of known compatible dimensions. The original operation mode {θ1(k), k ∈ Z} is a Markov chain that takes values in the discrete finite set M1 = {1, 2, . . . , N1}. Therefore, the mode-dependent state feedback controller can be written as

u(k) = K_{θ1(k)} x(k)   (2)

However, θ1(k) is transmitted through multiple channels in a networked control system and may therefore suffer from operation mode disordering. The corresponding controller is then described as follows:

u(k) = K_{θ2(k)} x(k)   (3)

where {θ2(k), k ∈ Z} is another Markov chain that takes values in a discrete finite set M2 = {1, 2, . . . , N2}.
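To make the disordering concrete, the short sketch below simulates (1)–(3) for the two-mode example of Section V: the plant switches according to θ1(k), while the controller acts on a received mode θ2(k) generated here by an assumed one-step random packet delay. The delay model and the gains are placeholders for illustration only and are not part of the design.

import numpy as np

# Illustrative sketch of (1)-(3): the plant runs on the true mode theta1,
# the controller on a received mode theta2 that may arrive out of order.
# The one-step random delay and the gains below are assumptions, not the design.
rng = np.random.default_rng(0)
A = [np.array([[-0.2, 0.25], [-0.1, -0.16]]), np.array([[0.1, 0.15], [-0.2, -0.1]])]
B = [np.array([[0.1], [0.2]]), np.array([[0.17], [-0.1]])]
P1 = np.array([[0.2, 0.8], [0.4, 0.6]])                 # transition matrix of theta1
K = [np.array([[0.5, -0.4]]), np.array([[-0.3, 0.2]])]  # placeholder gains

T = 30
theta1 = np.zeros(T, dtype=int)
for k in range(1, T):
    theta1[k] = rng.choice(2, p=P1[theta1[k - 1]])

arrival = np.arange(T) + rng.integers(0, 2, size=T)     # each mode packet delayed 0 or 1 steps
theta2 = theta1.copy()
for k in range(T):
    delivered = np.where(arrival <= k)[0]
    if delivered.size:
        theta2[k] = theta1[delivered.max()]             # controller sees the newest delivered mode

x = np.array([0.2, 0.13])
for k in range(T - 1):
    u = K[theta2[k]] @ x                                # controller (3) uses the received mode
    x = A[theta1[k]] @ x + B[theta1[k]] @ u             # plant (1) evolves with the true mode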


To deal with operation mode disordering, new operation modes θ(k) of the system need to be obtained from the augmented vector {θ1(k), θ2(k)}. Thus, we introduce a bijective mapping between θ(k) and {θ1(k), θ2(k)}. The bijective mapping is described as

Ψ : M1 × M2 → M   (4)

where η = Ψ(η1, η2) with η ∈ M, η1 ∈ M1 and η2 ∈ M2, and

Ψ^{-1} : M → M1 × M2   (5)

where (η1, η2) = Ψ^{-1}(η) with η1 = Ψ1^{-1}(η) ∈ M1 and η2 = Ψ2^{-1}(η) ∈ M2. It is assumed that {θ(k), k ∈ Z} is a stationary ergodic Markov chain that takes values in M = M1 × M2, where M = {1, 2, . . . , N} and the number of elements is N = N1 × N2. Therefore, it can be shown that

x(k + 1) = A_{θ(k)} x(k) + B_{θ(k)} u(k)   (6)

Furthermore, for the operation mode disordering, the non-fragile state feedback controller can be designed as

u(k) = (K_{θ(k)} + ΔK_{θ(k)}) x(k)   (7)

satisfying

ΔK_{θ(k)}^T ΔK_{θ(k)} ≤ W_{θ(k)}   (8)

where

ΔK_{θ(k)} = K_{θ1(k)} − K_{θ(k)}   (9)

and ΔK_{θ(k)} is the control gain fluctuation. W_{θ(k)} is a positive matrix, and we define A_η = A_{η1}, B_η = B_{η1} and K_η = K_{η2} for any θ(k) = η ∈ M. The transition probability matrix of the operation modes θ(k) is defined as Π = (π_ημ) ∈ R^{N×N}, with

Pr{θ(k + 1) = μ | θ(k) = η} = π_ημ   (10)

where 0 < π_ημ < 1 and Σ_{μ=1}^{N} π_ημ = 1 for any η, μ ∈ M.

Remark 1: It is worth mentioning that this formulation differs from the traditional asynchronous controller. The traditional model uses two independent probability distributions, which do not directly deal with the operation mode disordering [62]. In this paper, however, the non-fragile method is used to cope with the asynchronous mode signals, and the augmented Markov modes are also exploited to deal with the asynchronous phenomenon, which sufficiently accounts for the impact of mode disordering.

III. DESIGN OF RMPC FOR MJLS WITH OPERATION MODE DISORDERING
At each sampling time k, the aim of the RMPC scheme is to derive a state feedback control law such that the infinite horizon cost function (11) for MJLSs subject to operation mode disordering is minimized. Consider the following ''min-max'' performance index:

min_{u(k+τ|k), τ≥0} max J∞(k)   (11)

where

J∞(k) = E{ Σ_{τ=0}^{∞} [x^T(k+τ|k) Q_{θ(k+τ)} x(k+τ|k) + u^T(k+τ|k) R_{θ(k+τ)} u(k+τ|k)] }

and Q_{θ(k+τ)} and R_{θ(k+τ)} are positive symmetric weighting matrices of the states and inputs, respectively, of known appropriate dimensions. Obviously, the closed-loop system is described as follows:

x(k+τ+1|k) = [A_{θ(k+τ)} + B_{θ(k+τ)}(K_{θ(k+τ)} + ΔK_{θ(k+τ)})] x(k+τ|k)   (12)

Definition 1: For any initial condition (x0, θ0), the closed-loop system (12) is stochastically stable if

E{ Σ_{k=0}^{∞} x^T(k) x(k) | x0, θ0 } < ∞

Lemma 1 [43]: Assume that the positive symmetric matrices Q(x), R(x) and S(x) are affine functions of x. The linear matrix inequality

[ Q(x)     S(x) ]
[ S^T(x)   R(x) ]  > 0   (13)

is equivalent to

R(x) > 0,  Q(x) − S(x) R^{-1}(x) S^T(x) > 0   (14)

or

Q(x) > 0,  R(x) − S^T(x) Q^{-1}(x) S(x) > 0   (15)

Lemma 2 [63]: Consider any positive definite symmetric matrix P ∈ R^{n×n} and any vectors X, Y ∈ R^n. The following inequality holds:

2X^T P Y ≤ ε X^T P X + ε^{-1} Y^T P Y,  ∀ε > 0   (16)

Remark 2: The idea behind RMPC is to design the non-fragile state feedback controller (7) and substitute it into the optimization problem (11). At each sampling period, the state feedback control gain matrix is obtained by solving the optimization problem (11). The first control input of the obtained control sequence is then applied to the closed-loop system. At the next sampling period, the ''min-max'' optimization problem (11) is solved again with the updated state information to obtain a new control input.

Remark 3: It is well known that one of the characteristics of MPC is the prediction model; the future behavior of the system is predicted based on this model at each sampling time. At time step k + τ, the transition probability matrix of the operation modes θ(k) is assumed to be

Pr{θ(k+τ+1) = l | θ(k+τ) = h} = π_hl   (17)

where 0 < π_hl < 1 and Σ_{l=1}^{N} π_hl = 1 for any h, l ∈ M.
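As an illustration of the mapping (4)–(5), the sketch below implements one convenient bijection, η = η1 + N1(η2 − 1), which is the same formula used in the numerical example of Section V (Ψ(η) = η1 + 2(η2 − 1)); the inverse recovers (η1, η2) by integer division. This is only one possible choice of Ψ.

N1, N2 = 2, 2                              # sizes of M1 and M2

def psi(eta1, eta2):
    """Map (eta1, eta2) in M1 x M2 to the augmented mode eta in {1, ..., N1*N2}."""
    return eta1 + N1 * (eta2 - 1)

def psi_inv(eta):
    """Recover (eta1, eta2) from the augmented mode eta."""
    return (eta - 1) % N1 + 1, (eta - 1) // N1 + 1

# round-trip check over the whole augmented mode set M
assert all(psi(*psi_inv(eta)) == eta for eta in range(1, N1 * N2 + 1))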


In this paper, we write the following variables in a concise form: A_{θ(k+τ)} = A_h, B_{θ(k+τ)} = B_h, K_{θ(k+τ)} = K_h, K_{θ1(k+τ)} = K_{h1}, Q_{θ(k+τ)} = Q_h, R_{θ(k+τ)} = R_h, and W_{θ(k+τ)} = W_h.

However, at each sampling time, it is difficult to solve the optimization problem (11) directly. Therefore, we need to derive an upper bound on the performance index. Corresponding to the closed-loop system (12), the following Lyapunov function is considered:

V(x(k|k), θ(k), k) = x^T(k|k) P_{θ(k)} x(k|k)   (18)

where P_{θ(k)} = P_{θ(k)}^T > 0. At each sampling time k, the following robust stability condition should be satisfied:

E[V(x(k+τ+1|k))] − V(x(k+τ|k)) ≤ −E[x^T(k+τ|k) Q_{θ(k+τ)} x(k+τ|k) + u^T(k+τ|k) R_{θ(k+τ)} u(k+τ|k)]   (19)

Taking the expectation of both sides of (19) and summing from τ = 0 to τ = ∞, one obtains

Σ_{τ=0}^{∞} E[V(x(k+τ+1|k)) − V(x(k+τ|k))] ≤ −Σ_{τ=0}^{∞} E[x^T(k+τ|k) Q_{θ(k+τ)} x(k+τ|k) + u^T(k+τ|k) R_{θ(k+τ)} u(k+τ|k)]   (20)

It is assumed that the closed-loop system is asymptotically stable. Since x(∞|k) = 0, it can be concluded that V(x(∞|k)) = 0. Then,

E{ Σ_{τ=0}^{∞} [V(x(k+τ+1|k)) − V(x(k+τ|k))] } ≤ −J∞(k)   (21)

The upper bound on the performance index can then be derived as follows:

J∞(k) ≤ E{V[x(k|k)]} ≤ γ1   (22)

where γ1 is a given positive scalar. The main results are presented in the following theorems.

IV. MAIN RESULT
Theorem 1: Consider the system (1), and let x(k|k) = x(k) be the measured system state at each sampling time k. For given symmetric matrices Q_h > 0 and R_h > 0 and scalars γ1 > 0, γ2 > 0 and γ3 > 0, there exists a non-fragile state feedback controller (7) such that the performance index is minimized, and the closed-loop system (12) is stochastically stable, if there exist matrices X_h > 0, G > 0, Y_h > 0, Y_{h1} > 0 and W_h > 0 satisfying

min_{X_h, G, Y_h, Y_{h1}, W_h} γ1   (23)

subject to

[ −(G)⋆ + X_h   √(2(γ2+γ3)) G^T   G^T    √2 Y_h^T   √(2π_h1) Ā_h^T   ⋯   √(2π_hN) Ā_h^T ]
[      ∗             −W̄_h          0        0             0          ⋯         0        ]
[      ∗               ∗          −Q̄_h      0             0          ⋯         0        ]
[      ∗               ∗            ∗      −R̄_h           0          ⋯         0        ]
[      ∗               ∗            ∗        ∗           −X_1        ⋯         0        ]  ≤ 0   (24)
[      ⋮               ⋮            ⋮        ⋮             ⋮          ⋱         ⋮        ]
[      ∗               ∗            ∗        ∗             ∗          ⋯       −X_N       ]

[ −γ2 I   √π_h1 B_h^T   ⋯   √π_hN B_h^T ]
[   ∗        −X_1       ⋯        0      ]
[   ⋮         ⋮         ⋱        ⋮      ]  ≤ 0   (25)
[   ∗         ∗         ⋯      −X_N     ]

[ −(G)⋆ + W̄_h   (Y_{h1} − Y_h)^T ]
[      ∗               −I        ]  ≤ 0   (26)

[ −γ1 I   x^T(k|k) ]
[   ∗       −X_η   ]  ≤ 0   (27)

and

R_h ≤ γ3 I   (28)

where

Ā_h = A_h G + B_h Y_h,  W̄_h = W_h^{-1},  Q̄_h = Q_h^{-1},  R̄_h = R_h^{-1}.

Then, the gain K_{h1} of controller (2) and the gain K_h of controller (7) can be obtained as

K_{h1} = Y_{h1} G^{-1}

and

K_h = Y_h G^{-1}
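Before the proof, the sketch below shows one way the convex program (23)–(28) could be assembled for a single sampling instant with CVXPY. It is a reconstruction under stated assumptions (SCS as the solver, strict inequalities relaxed by a small margin, feasibility depending on the data), not the authors' implementation; the helper names arrowhead and theorem1_step are introduced here for illustration, and the variable names mirror the theorem.

import numpy as np
import cvxpy as cp

def arrowhead(diag, top):
    """Symmetric block matrix with first block row [diag[0], top[0], ...],
    block diagonal diag, and zeros elsewhere (the pattern of (24) and (25))."""
    m = len(diag)
    sizes = [d.shape[0] for d in diag]
    rows = [[diag[0]] + list(top)]
    for i in range(1, m):
        row = [top[i - 1].T]
        row += [diag[i] if j == i else np.zeros((sizes[i], sizes[j])) for j in range(1, m)]
        rows.append(row)
    M = cp.bmat(rows)
    return (M + M.T) / 2                      # return in explicitly symmetric form

def theorem1_step(A, B, Q, R, Pi, x, h, gamma2, gamma3, eps=1e-6):
    """Sketch of (23)-(28) for the current mode h and measured state x (numpy vector)."""
    N = len(A)
    n, m = B[0].shape
    Qb, Rb = np.linalg.inv(Q), np.linalg.inv(R)          # \ bar Q_h, \ bar R_h
    g1 = cp.Variable(nonneg=True)                        # gamma_1
    X = [cp.Variable((n, n), symmetric=True) for _ in range(N)]
    G = cp.Variable((n, n))
    Yh, Yh1 = cp.Variable((m, n)), cp.Variable((m, n))
    Wb = cp.Variable((n, n), symmetric=True)             # \ bar W_h = W_h^{-1}
    Ab = A[h] @ G + B[h] @ Yh                            # \ bar A_h
    Gs = G + G.T                                         # (G)*

    assert np.all(np.linalg.eigvalsh(gamma3 * np.eye(m) - R) >= 0), "condition (28)"
    cons = [Xl >> eps * np.eye(n) for Xl in X] + [Wb >> eps * np.eye(n)]
    cons += [arrowhead(                                  # LMI (24)
        [-Gs + X[h], -Wb, -Qb, -Rb] + [-X[l] for l in range(N)],
        [np.sqrt(2 * (gamma2 + gamma3)) * G.T, G.T, np.sqrt(2) * Yh.T]
        + [np.sqrt(2 * Pi[h, l]) * Ab.T for l in range(N)]) << 0]
    cons += [arrowhead(                                  # LMI (25)
        [-gamma2 * np.eye(m)] + [-X[l] for l in range(N)],
        [np.sqrt(Pi[h, l]) * B[h].T for l in range(N)]) << 0]
    cons += [cp.bmat([[-Gs + Wb, (Yh1 - Yh).T],          # LMI (26)
                      [Yh1 - Yh, -np.eye(m)]]) << 0]
    cons += [cp.bmat([[-g1 * np.eye(1), x.reshape(1, -1)],   # LMI (27)
                      [x.reshape(-1, 1), -X[h]]]) << 0]

    cp.Problem(cp.Minimize(g1), cons).solve(solver=cp.SCS)
    Ginv = np.linalg.inv(G.value)                        # G invertible if the problem is solved
    return Yh1.value @ Ginv, Yh.value @ Ginv             # gains K_h1 and K_h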


Proof: Applying the Schur complement to (22), (27) can be obtained. According to (19), the stability condition is equivalent to the following inequality:

[A_h + B_h(K_h + ΔK_h)]^T Σ_{l=1}^{N} π_hl P_l [A_h + B_h(K_h + ΔK_h)] − P_h + Q_h + (K_h + ΔK_h)^T R_h (K_h + ΔK_h) ≤ 0   (29)

Furthermore, it can also be derived that

(A_h + B_h K_h)^T Σ_{l=1}^{N} π_hl P_l (A_h + B_h K_h) + 2(A_h + B_h K_h)^T Σ_{l=1}^{N} π_hl P_l B_h ΔK_h + (B_h ΔK_h)^T Σ_{l=1}^{N} π_hl P_l B_h ΔK_h − P_h + Q_h + K_h^T R_h K_h + 2K_h^T R_h ΔK_h + ΔK_h^T R_h ΔK_h ≤ 0   (30)

Moreover, it follows from Lemma 2 (with ε = 1) that

2(A_h + B_h K_h)^T Σ_{l=1}^{N} π_hl P_l B_h ΔK_h ≤ (A_h + B_h K_h)^T Σ_{l=1}^{N} π_hl P_l (A_h + B_h K_h) + (B_h ΔK_h)^T Σ_{l=1}^{N} π_hl P_l B_h ΔK_h   (31)

and

2K_h^T R_h ΔK_h ≤ ΔK_h^T R_h ΔK_h + K_h^T R_h K_h   (32)

By applying (31) and (32) to (30), inequality (30) can be rewritten as

2(A_h + B_h K_h)^T Σ_{l=1}^{N} π_hl P_l (A_h + B_h K_h) + 2(B_h ΔK_h)^T Σ_{l=1}^{N} π_hl P_l B_h ΔK_h − P_h + Q_h + 2K_h^T R_h K_h + 2ΔK_h^T R_h ΔK_h ≤ 0   (33)

We can see that, with

B_h^T Σ_{l=1}^{N} π_hl P_l B_h ≤ γ2 I   (34)

R_h ≤ γ3 I   (35)

and

ΔK_h^T ΔK_h ≤ W_h   (36)

inequality (33) can be reformulated as

2(A_h + B_h K_h)^T Σ_{l=1}^{N} π_hl P_l (A_h + B_h K_h) + 2(γ2 + γ3) W_h − P_h + Q_h + 2K_h^T R_h K_h ≤ 0   (37)

Multiplying (37) on the right by G > 0 and on the left by its transpose, and defining P_h^{-1} = X_h, Y_h = K_h G and Y_{h1} = K_{h1} G, (37) is equivalent to

2(A_h G + B_h Y_h)^T Σ_{l=1}^{N} π_hl P_l (A_h G + B_h Y_h) + 2(γ2 + γ3) G^T W_h G − G^T P_h G + G^T Q_h G + 2Y_h^T R_h Y_h ≤ 0   (38)

By applying the Schur complement lemma, (38) can be transformed into the following inequality:

[ −G^T P_h G   √(2(γ2+γ3)) G^T   G^T      √2 Y_h^T    √(2π_h1) Ā_h^T   ⋯   √(2π_hN) Ā_h^T ]
[      ∗          −W_h^{-1}       0          0              0          ⋯         0        ]
[      ∗              ∗        −Q_h^{-1}     0              0          ⋯         0        ]
[      ∗              ∗            ∗      −R_h^{-1}         0          ⋯         0        ]
[      ∗              ∗            ∗          ∗            −X_1        ⋯         0        ]  ≤ 0   (39)
[      ⋮              ⋮            ⋮          ⋮              ⋮          ⋱         ⋮        ]
[      ∗              ∗            ∗          ∗              ∗          ⋯       −X_N       ]

As for the nonlinear term −G^T P_h G, since (G − X_h)^T X_h^{-1} (G − X_h) ≥ 0 with X_h = P_h^{-1}, it can be shown that

−G^T P_h G ≤ −(G)⋆ + X_h   (40)

Therefore, (24) can be directly obtained. Similarly, taking (34) into account, (25) can also be guaranteed. On the other hand, by substituting (9) into (36) and multiplying on the right by G and on the left by its transpose, we have

−G^T W_h G + (Y_{h1} − Y_h)^T (Y_{h1} − Y_h) ≤ 0   (41)

As for the nonlinear term −G^T W_h G, we use the fact that

−G^T W_h G ≤ −(G)⋆ + W_h^{-1}   (42)

Finally, it is easy to deduce inequality (26). Therefore, from (19) with τ = 0, one gets

ΔV(x(k), θ(k), k) = E[V(x(k+1))] − V(x(k)) = x^T(k) Λ x(k) ≤ −λ_min(−Λ) x^T(k) x(k) ≤ −ρ x^T(k) x(k)   (43)

where

Λ = [A_η + B_η(K_η + ΔK_η)]^T Σ_{μ=1}^{N} π_ημ P_μ [A_η + B_η(K_η + ΔK_η)] − P_η

λ_min(−Λ) denotes the minimal eigenvalue of (−Λ), and ρ = inf{λ_min(−Λ)} for any η, μ ∈ M. Taking the expectation of both sides of (43) and summing from k = 0 to k = ∞,

E{ Σ_{k=0}^{∞} ΔV(x(k), θ(k), k) } = E[V(x(∞))] − V(x(0)) ≤ −ρ E{ Σ_{k=0}^{∞} x^T(k) x(k) }   (44)

Then the following inequality holds:

Σ_{k=0}^{∞} E{x^T(k) x(k)} ≤ (1/ρ) {V(x(0)) − E[V(x(∞))]} ≤ (1/ρ) V(x(0))   (45)

which implies

E{ Σ_{k=0}^{∞} x^T(k) x(k) | x0, θ0 } ≤ (1/ρ) V(x(0)) < ∞   (46)

From Definition 1, the system is stochastically stable. This completes the proof.
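Both (40) and (42) use the standard completion-of-squares bound −G^T P G ≤ −(G)⋆ + P^{-1} for any P > 0. The short randomized check below is illustrative only:

import numpy as np

# Randomized spot-check of -G^T P G <= -(G + G^T) + P^{-1} for P > 0:
# equivalently, (G + G^T) - P^{-1} - G^T P G must be negative semidefinite.
rng = np.random.default_rng(1)
for _ in range(1000):
    n = 3
    G = rng.standard_normal((n, n))
    M = rng.standard_normal((n, n))
    P = M @ M.T + 0.1 * np.eye(n)                     # random positive definite P
    gap = (G + G.T) - np.linalg.inv(P) - G.T @ P @ G
    assert np.linalg.eigvalsh(gap).max() <= 1e-9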


From the above results, we can see that π_hl plays an important role in designing the RMPC and in guaranteeing the robust stability and the optimal control performance of the closed-loop system. However, in practice, π_hl cannot be obtained exactly. Therefore, it is essential that the RMPC take into account cases in which the transition probabilities cannot be determined with certainty. To this end, let

π_hl = π̃_hl + Δπ̃_hl,  π̃_hl ∈ [0, 1]   (47)

where π̃_hl denotes the estimate of π_hl in accordance with (10), and Δπ̃_hl ∈ [−ξ_hl, ξ_hl], where ξ_hl ∈ [0, 1] denotes the admissible uncertainty. The following theorem can then be obtained.

Theorem 2: Consider the system (1), and let x(k|k) = x(k) be the measured system state at each sampling time k. For given symmetric matrices Q_h > 0 and R_h > 0 and scalars γ1 > 0, γ2 > 0, γ3 > 0, γ4 > 0 and ξ_hl > 0, there exists a non-fragile state feedback controller (7) such that the performance index is minimized, and the closed-loop system (12) is stochastically stable, if there exist matrices X_h > 0, G > 0, Y_h > 0, Y_{h1} > 0, M_h > 0 and W_h > 0 satisfying

[ Θ   √(2(γ3−γ2)) G^T   G^T    √2 Y_h^T   √(2π̃_h1) Ā_h^T  ⋯  √(2π̃_hN) Ā_h^T   √(2ξ_h1) Ā_h^T  ⋯  √(2ξ_hN) Ā_h^T ]
[ ∗        −W̄_h          0        0              0         ⋯         0                0         ⋯        0        ]
[ ∗          ∗          −Q̄_h      0              0         ⋯         0                0         ⋯        0        ]
[ ∗          ∗            ∗      −R̄_h            0         ⋯         0                0         ⋯        0        ]
[ ∗          ∗            ∗        ∗            −X_1        ⋯         0                0         ⋯        0        ]
[ ⋮          ⋮            ⋮        ⋮              ⋮         ⋱         ⋮                ⋮                  ⋮        ]  ≤ 0   (48)
[ ∗          ∗            ∗        ∗              ∗         ⋯       −X_N               0         ⋯        0        ]
[ ∗          ∗            ∗        ∗              ∗         ⋯         ∗              −M̄_h        ⋯        0        ]
[ ⋮          ⋮            ⋮        ⋮              ⋮                   ⋮                ⋮         ⋱        ⋮        ]
[ ∗          ∗            ∗        ∗              ∗         ⋯         ∗                ∗         ⋯      −M̄_h      ]

[ −γ3 I + R_h   √π̃_h1 B_h^T  ⋯  √π̃_hN B_h^T   √ξ_h1 B_h^T  ⋯  √ξ_hN B_h^T ]
[      ∗            −X_1      ⋯       0             0        ⋯       0      ]
[      ⋮             ⋮        ⋱       ⋮             ⋮                ⋮      ]
[      ∗             ∗        ⋯     −X_N            0        ⋯       0      ]  ≤ 0   (49)
[      ∗             ∗        ⋯       ∗           −M̄_h       ⋯       0      ]
[      ⋮             ⋮                ⋮             ⋮        ⋱       ⋮      ]
[      ∗             ∗        ⋯       ∗             ∗        ⋯     −M̄_h    ]

[ −γ2 I   √ξ_h1 B_h^T  ⋯  √ξ_hN B_h^T ]
[   ∗        −X_1      ⋯       0      ]
[   ⋮         ⋮        ⋱       ⋮      ]  ≤ 0   (50)
[   ∗         ∗        ⋯     −X_N     ]

[ −γ4 I   √ξ_h1 Ā_h^T  ⋯  √ξ_hN Ā_h^T ]
[   ∗        −X_1      ⋯       0      ]
[   ⋮         ⋮        ⋱       ⋮      ]  ≤ 0   (51)
[   ∗         ∗        ⋯     −X_N     ]

and

[ −(I)⋆ + M̄_h   I^T  ]
[      ∗        −X_l ]  < 0   (52)

where

Ā_h = A_h G + B_h Y_h,  W̄_h = W_h^{-1},  M̄_h = M_h^{-1},  Θ = −(G)⋆ + X_h − 2γ4 I,  Q̄_h = Q_h^{-1},  R̄_h = R_h^{-1}.
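Theorem 2 presupposes an estimate π̃_hl and an elementwise radius ξ_hl as in (47); the paper does not prescribe how they are obtained. A common choice, sketched below purely as an assumption, is to take π̃_hl as empirical transition frequencies of an observed mode sequence and to attach a heuristic 1/√(count) radius, clipped to [0, 1].

import numpy as np

def estimate_transition_matrix(modes, N, radius_scale=1.0):
    """Empirical estimate (an assumed, not prescribed, procedure) of the matrix
    pi_tilde in (47) from an observed mode sequence, plus a heuristic radius xi."""
    counts = np.zeros((N, N))
    for h, l in zip(modes[:-1], modes[1:]):
        counts[h, l] += 1
    row_totals = counts.sum(axis=1, keepdims=True)
    pi_tilde = counts / np.maximum(row_totals, 1)                    # pi_tilde_{hl}
    xi = np.clip(radius_scale / np.sqrt(np.maximum(row_totals, 1)), 0.0, 1.0)
    return pi_tilde, np.broadcast_to(xi, (N, N)).copy()              # xi_{hl}

modes = [0, 1, 3, 2, 1, 0, 2, 3, 3, 1, 0, 1, 2, 2, 3, 0]            # toy observed sequence
pi_tilde, xi = estimate_transition_matrix(modes, N=4)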


Proof: Taking into account the proof of Theorem 1, it is obvious that condition (29) is affected by the uncertainty (47), so that

Σ_{l=1}^{N} [A_h + B_h(K_h + ΔK_h)]^T (π̃_hl + Δπ̃_hl) P_l [A_h + B_h(K_h + ΔK_h)] − P_h + Q_h + (K_h + ΔK_h)^T R_h (K_h + ΔK_h) ≤ 0   (53)

To address the uncertainty, (53) can be rewritten as

[A_h + B_h(K_h + ΔK_h)]^T [ Σ_{l=1}^{N} (π̃_hl + Δπ̃_hl) P_l + Σ_{l=1}^{N} ξ_hl P_l − Σ_{l=1}^{N} ξ_hl P_l − Σ_{l=1}^{N} (Δπ̃_hl + ξ_hl) M_h + Σ_{l=1}^{N} ξ_hl M_h ] [A_h + B_h(K_h + ΔK_h)] − P_h + Q_h + (K_h + ΔK_h)^T R_h (K_h + ΔK_h) ≤ 0   (54)

Furthermore, it can be concluded that

[A_h + B_h(K_h + ΔK_h)]^T [ Σ_{l=1}^{N} (π̃_hl − ξ_hl) P_l + Σ_{l=1}^{N} (Δπ̃_hl + ξ_hl)(P_l − M_h) + Σ_{l=1}^{N} ξ_hl M_h ] [A_h + B_h(K_h + ΔK_h)] − P_h + Q_h + (K_h + ΔK_h)^T R_h (K_h + ΔK_h) ≤ 0   (55)

This can be guaranteed by

[A_h + B_h(K_h + ΔK_h)]^T [ Σ_{l=1}^{N} (π̃_hl − ξ_hl) P_l + Σ_{l=1}^{N} ξ_hl M_h ] [A_h + B_h(K_h + ΔK_h)] − P_h + Q_h + (K_h + ΔK_h)^T R_h (K_h + ΔK_h) ≤ 0   (56)

and

Σ_{l=1}^{N} (Δπ̃_hl + ξ_hl)(P_l − M_h) < 0   (57)

As for (56), it is equivalent to

[(A_h + B_h K_h) + B_h ΔK_h]^T ( Σ_{l=1}^{N} π̃_hl P_l − Σ_{l=1}^{N} ξ_hl P_l + Σ_{l=1}^{N} ξ_hl M_h ) [(A_h + B_h K_h) + B_h ΔK_h] − P_h + Q_h + (K_h + ΔK_h)^T R_h (K_h + ΔK_h) ≤ 0   (58)

It can also be shown that

(A_h + B_h K_h)^T ( Σ_{l=1}^{N} π̃_hl P_l − Σ_{l=1}^{N} ξ_hl P_l + Σ_{l=1}^{N} ξ_hl M_h ) (A_h + B_h K_h) + 2(A_h + B_h K_h)^T ( Σ_{l=1}^{N} π̃_hl P_l − Σ_{l=1}^{N} ξ_hl P_l + Σ_{l=1}^{N} ξ_hl M_h ) B_h ΔK_h + (B_h ΔK_h)^T ( Σ_{l=1}^{N} π̃_hl P_l − Σ_{l=1}^{N} ξ_hl P_l + Σ_{l=1}^{N} ξ_hl M_h ) B_h ΔK_h − P_h + Q_h + K_h^T R_h K_h + 2K_h^T R_h ΔK_h + ΔK_h^T R_h ΔK_h ≤ 0   (59)

Similar to (31) and (32), the following inequality can be obtained:

2(A_h + B_h K_h)^T Σ_{l=1}^{N} π̃_hl P_l (A_h + B_h K_h) + 2(A_h + B_h K_h)^T Σ_{l=1}^{N} ξ_hl M_h (A_h + B_h K_h) − 2(A_h + B_h K_h)^T Σ_{l=1}^{N} ξ_hl P_l (A_h + B_h K_h) − 2ΔK_h^T B_h^T Σ_{l=1}^{N} ξ_hl P_l B_h ΔK_h + 2ΔK_h^T ( R_h + B_h^T Σ_{l=1}^{N} π̃_hl P_l B_h + B_h^T Σ_{l=1}^{N} ξ_hl M_h B_h ) ΔK_h − P_h + Q_h + 2K_h^T R_h K_h ≤ 0   (60)

where

B_h^T Σ_{l=1}^{N} ξ_hl P_l B_h ≤ γ2 I   (61)

and

R_h + B_h^T Σ_{l=1}^{N} π̃_hl P_l B_h + B_h^T Σ_{l=1}^{N} ξ_hl M_h B_h ≤ γ3 I   (62)

Applying the Schur complement, (50) and (49) can be derived from (61) and (62), respectively. Moreover, (60) can be expressed as follows:

2(A_h + B_h K_h)^T Σ_{l=1}^{N} π̃_hl P_l (A_h + B_h K_h) + 2(A_h + B_h K_h)^T Σ_{l=1}^{N} ξ_hl M_h (A_h + B_h K_h) − 2(A_h + B_h K_h)^T Σ_{l=1}^{N} ξ_hl P_l (A_h + B_h K_h) + 2(γ3 − γ2) W_h − P_h + Q_h + 2K_h^T R_h K_h ≤ 0   (63)

Multiplying (63) on the right by G and on the left by its transpose, (63) is equivalent to

2(A_h G + B_h Y_h)^T Σ_{l=1}^{N} π̃_hl P_l (A_h G + B_h Y_h) + 2(A_h G + B_h Y_h)^T Σ_{l=1}^{N} ξ_hl M_h (A_h G + B_h Y_h) − 2(A_h G + B_h Y_h)^T Σ_{l=1}^{N} ξ_hl P_l (A_h G + B_h Y_h) + 2(γ3 − γ2) G^T W_h G − G^T P_h G + G^T Q_h G + 2Y_h^T R_h Y_h ≤ 0   (64)

where

(A_h G + B_h Y_h)^T Σ_{l=1}^{N} ξ_hl P_l (A_h G + B_h Y_h) ≤ γ4 I   (65)


Similarly, (51) can be derived in an obvious way. Then, (64) can also be rewritten as

2(A_h G + B_h Y_h)^T Σ_{l=1}^{N} π̃_hl P_l (A_h G + B_h Y_h) + 2(A_h G + B_h Y_h)^T Σ_{l=1}^{N} ξ_hl M_h (A_h G + B_h Y_h) − 2γ4 I + 2(γ3 − γ2) G^T W_h G − G^T P_h G + G^T Q_h G + 2Y_h^T R_h Y_h ≤ 0   (66)

By using the Schur complement, inequality (66) can be converted to inequality (48). As for (57), it holds if and only if the following inequality is satisfied:

P_l − M_h < 0   (67)

Similarly, (67) can be written as

[ −M_h   I^T  ]
[   ∗    −X_l ]  < 0   (68)

where the following inequality should also be considered:

−I^T M_h I < −(I)⋆ + M̄_h   (69)

Then, (52) can be obtained, and the proof is complete.

In addition, another general case needs to be considered, namely the case in which the transition probability matrix is partially unknown. In other words, the transition probability matrix is divided into unknown and known parts. For example, for the closed-loop system (12) with four operation modes, the transition probability matrix can be described as

      [  ?    π12    ?    π14 ]
Π  =  [ π21   π22    ?    π24 ]   (70)
      [ π31   π32    ?     ?  ]
      [  ?     ?    π43   π44 ]

where ''?'' denotes the unknown elements. For any h ∈ M and l ∈ M, we define

M_h = M_k^h ∪ M̄_k^h   (71)

where

M_k^h = {l : π_hl is known, l ∈ M},  M̄_k^h = {l : π_hl is unknown, l ∈ M}   (72)

Furthermore, M_k^h = {π_h1, · · · , π_hm}, where {π_hi, i ∈ 1, · · · , m, ∀ 1 ≤ m ≤ N} refers to the known elements in row h, and M̄_k^h = {π̄_h1, · · · , π̄_h(N−m)}, where {π̄_hj, j ∈ 1, · · · , N − m} refers to the unknown elements in row h. Similar to the proof of Theorem 2, the following theorem can be obtained.

Theorem 3: Consider the system (1), and let x(k|k) = x(k) be the measured system state at each sampling time k. For given symmetric matrices Q_h > 0 and R_h > 0 and scalars γ1 > 0, γ2 > 0, γ3 > 0 and γ4 > 0, there exists a non-fragile state feedback controller (7) such that the performance index is minimized, and the closed-loop system (12) is stochastically stable, if there exist matrices X_h > 0, G > 0, Y_h > 0, Y_{h1} > 0, W_h > 0 and V_h > 0 satisfying

[ −(G)⋆ + X_h − 2γ4 I   √(2(γ3−γ2)) G^T   G^T    √2 Y_h^T   √(2π_h1) Ā_h^T  ⋯  √(2π_hm) Ā_h^T   √2 Ā_h^T ]
[          ∗                 −W̄_h          0        0              0        ⋯        0             0     ]
[          ∗                   ∗          −Q̄_h      0              0        ⋯        0             0     ]
[          ∗                   ∗            ∗      −R̄_h            0        ⋯        0             0     ]
[          ∗                   ∗            ∗        ∗            −X_1       ⋯        0             0     ]  ≤ 0   (73)
[          ⋮                   ⋮            ⋮        ⋮              ⋮        ⋱        ⋮             ⋮     ]
[          ∗                   ∗            ∗        ∗              ∗        ⋯      −X_m            0     ]
[          ∗                   ∗            ∗        ∗              ∗        ⋯        ∗           −V̄_h    ]

[ −γ3 I + R_h   √π_h1 B_h^T  ⋯  √π_hm B_h^T   B_h^T ]
[      ∗            −X_1      ⋯       0         0    ]
[      ⋮             ⋮        ⋱       ⋮         ⋮    ]  ≤ 0   (74)
[      ∗             ∗        ⋯     −X_m        0    ]
[      ∗             ∗        ⋯       ∗       −V̄_h   ]

[ −γ2 I   √π_h1 B_h^T  ⋯  √π_hm B_h^T ]
[   ∗        −V̄_h      ⋯       0      ]
[   ⋮         ⋮        ⋱       ⋮      ]  ≤ 0   (75)
[   ∗         ∗        ⋯     −V̄_h     ]

[ −γ4 I   √π_h1 Ā_h^T  ⋯  √π_hm Ā_h^T ]
[   ∗        −V̄_h      ⋯       0      ]
[   ⋮         ⋮        ⋱       ⋮      ]  ≤ 0   (76)
[   ∗         ∗        ⋯     −V̄_h     ]

and

[ −(I)⋆ + V̄_h   I^T  ]
[      ∗        −X_l ]  < 0   (77)

where

Ā_h = A_h G + B_h Y_h,  V̄_h = V_h^{-1},  W̄_h = W_h^{-1},  Q̄_h = Q_h^{-1},  R̄_h = R_h^{-1}.
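The partition (70)–(72) can be represented directly in code by marking the unknown entries. The sketch below (the numeric values are illustrative and only the ''?'' pattern of (70) is reproduced) recovers the index sets M_k^h and M̄_k^h together with the quantity ᾱ_h used later in (80).

import numpy as np

nan = np.nan
Pi = np.array([[nan, 0.3, nan, 0.2],          # '?' entries of (70) marked as NaN;
               [0.2, 0.1, nan, 0.4],          # the numeric values are illustrative only
               [0.5, 0.3, nan, nan],
               [nan, nan, 0.3, 0.4]])

known   = [np.where(~np.isnan(Pi[h]))[0] for h in range(4)]    # M_k^h
unknown = [np.where(np.isnan(Pi[h]))[0] for h in range(4)]     # unknown index set
alpha_bar = [1.0 - np.nansum(Pi[h]) for h in range(4)]         # alpha_bar_h of (80)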


Proof: Taking into account the proof of Theorem 1, it can be seen that the following term in condition (29) is affected in the case of partially unknown transition probabilities:

Σ_{l∈M_h} [A_h + B_h(K_h + ΔK_h)]^T π_hl P_l [A_h + B_h(K_h + ΔK_h)] − P_h + Q_h + (K_h + ΔK_h)^T R_h (K_h + ΔK_h) ≤ 0   (78)

According to (71), one obtains

[A_h + B_h(K_h + ΔK_h)]^T ( Σ_{l∈M_k^h} π_hl P_l + Σ_{l∈M̄_k^h} π_hl P_l ) [A_h + B_h(K_h + ΔK_h)] − P_h + Q_h + (K_h + ΔK_h)^T R_h (K_h + ΔK_h) ≤ 0   (79)

which is also equivalent to

[A_h + B_h(K_h + ΔK_h)]^T [ Σ_{l∈M_k^h} π_hl P_l + Σ_{l∈M̄_k^h} π_hl (P_l − V_h) + ᾱ_h V_h ] [A_h + B_h(K_h + ΔK_h)] − P_h + Q_h + (K_h + ΔK_h)^T R_h (K_h + ΔK_h) ≤ 0   (80)

where

ᾱ_h = Σ_{l∈M̄_k^h} π_hl = 1 − Σ_{l∈M_k^h} π_hl

For inequality (80) to be satisfied, we need

[A_h + B_h(K_h + ΔK_h)]^T [ Σ_{l∈M_k^h} π_hl P_l + ᾱ_h V_h ] [A_h + B_h(K_h + ΔK_h)] − P_h + Q_h + (K_h + ΔK_h)^T R_h (K_h + ΔK_h) ≤ 0   (81)

and

Σ_{l∈M̄_k^h} π_hl (P_l − V_h) < 0   (82)

As for (81), it can be shown to be equivalent to

[(A_h + B_h K_h) + B_h ΔK_h]^T [ Σ_{l∈M_k^h} π_hl P_l + V_h − Σ_{l∈M_k^h} π_hl V_h ] [(A_h + B_h K_h) + B_h ΔK_h] − P_h + Q_h + (K_h + ΔK_h)^T R_h (K_h + ΔK_h) ≤ 0   (83)

Furthermore, (83) can be expanded as

(A_h + B_h K_h)^T ( Σ_{l∈M_k^h} π_hl P_l + V_h − Σ_{l∈M_k^h} π_hl V_h ) (A_h + B_h K_h) + 2(A_h + B_h K_h)^T ( Σ_{l∈M_k^h} π_hl P_l + V_h − Σ_{l∈M_k^h} π_hl V_h ) B_h ΔK_h + (B_h ΔK_h)^T ( Σ_{l∈M_k^h} π_hl P_l + V_h − Σ_{l∈M_k^h} π_hl V_h ) B_h ΔK_h − P_h + Q_h + K_h^T R_h K_h + 2K_h^T R_h ΔK_h + ΔK_h^T R_h ΔK_h ≤ 0   (84)

Similar to (31) and (32), this can be rewritten as

2(A_h + B_h K_h)^T Σ_{l∈M_k^h} π_hl P_l (A_h + B_h K_h) − 2(A_h + B_h K_h)^T Σ_{l∈M_k^h} π_hl V_h (A_h + B_h K_h) + 2(A_h + B_h K_h)^T V_h (A_h + B_h K_h) + 2ΔK_h^T ( R_h + B_h^T Σ_{l∈M_k^h} π_hl P_l B_h + B_h^T V_h B_h ) ΔK_h − 2ΔK_h^T B_h^T Σ_{l∈M_k^h} π_hl V_h B_h ΔK_h − P_h + Q_h + 2K_h^T R_h K_h ≤ 0   (85)

To satisfy this inequality, the following conditions should be guaranteed:

B_h^T Σ_{l∈M_k^h} π_hl V_h B_h ≤ γ2 I   (86)

and

R_h + B_h^T Σ_{l∈M_k^h} π_hl P_l B_h + B_h^T V_h B_h ≤ γ3 I   (87)

Applying the Schur complement lemma, (74) and (75) can be obtained. Then, (85) can be written as

2(A_h + B_h K_h)^T Σ_{l∈M_k^h} π_hl P_l (A_h + B_h K_h) + 2(A_h + B_h K_h)^T V_h (A_h + B_h K_h) − 2(A_h + B_h K_h)^T Σ_{l∈M_k^h} π_hl V_h (A_h + B_h K_h) + 2(γ3 − γ2) W_h − P_h + Q_h + 2K_h^T R_h K_h ≤ 0   (88)

Multiplying (88) on the right by G and on the left by its transpose, we have

2(A_h G + B_h Y_h)^T Σ_{l∈M_k^h} π_hl P_l (A_h G + B_h Y_h) + 2(A_h G + B_h Y_h)^T V_h (A_h G + B_h Y_h) − 2(A_h G + B_h Y_h)^T Σ_{l∈M_k^h} π_hl V_h (A_h G + B_h Y_h) + 2(γ3 − γ2) G^T W_h G − G^T P_h G + G^T Q_h G + 2Y_h^T R_h Y_h ≤ 0   (89)

where

(A_h G + B_h Y_h)^T Σ_{l∈M_k^h} π_hl V_h (A_h G + B_h Y_h) ≤ γ4 I   (90)

Inequality (90) can be converted to (76). Finally,

2(A_h G + B_h Y_h)^T Σ_{l∈M_k^h} π_hl P_l (A_h G + B_h Y_h) + 2(A_h G + B_h Y_h)^T V_h (A_h G + B_h Y_h) − 2γ4 I + 2(γ3 − γ2) G^T W_h G − G^T P_h G + G^T Q_h G + 2Y_h^T R_h Y_h ≤ 0   (91)

Moreover, (73) can be obtained from (91). As for (82), it holds if and only if the following inequality is satisfied:

P_l − V_h < 0   (92)


This is equivalent to

[ −V_h   I^T  ]
[   ∗    −X_l ]  < 0   (93)

Furthermore, it should also be guaranteed that

−I^T V_h I < −(I)⋆ + V̄_h   (94)

Similarly, (77) can be derived, and the proof is complete.

Lemma 3: The feasible solutions of the optimization problem at time k are also feasible solutions for all time instants t > k. Therefore, if the optimization problem is feasible at time k, it is also feasible for all time instants t > k.

Proof: Assume that the optimization problem has a feasible solution at time k. Note that (27) is the only constraint that depends on the states of the system; hence, for all future system states x(k + τ), τ ≥ 1, it is only necessary to guarantee that (27) remains feasible. At time k, when the optimization problem is feasible, x^T(k+1|k) X_η^{-1} x(k+1|k) < γ1 holds. Moreover, at time k + 1, for the measured state x(k+1|k+1) = x(k+1), it follows easily that x^T(k+1|k+1) X_η^{-1} x(k+1|k+1) < γ1 holds. In other words, feasibility is guaranteed by the above analysis. This argument can be continued for times k + 2, k + 3, · · · .

Theorem 4: Consider a class of discrete-time MJLSs subject to operation mode disordering. According to the proof of Lemma 1, the closed-loop system (12) is asymptotically stable with the state feedback gain matrices K_{h1} = Y_{h1} G^{-1} and K_h = Y_h G^{-1}.

Proof: Based on the proof of feasibility, at time k + 1 we can construct the same feasible solution as at time k. At time k, assume that the optimal solution is expressed as S_k^∗ = {X_h, G, Y_h, Y_{h1}, W_h, γ1^∗(k)}. Then, at time k + 1, the feasible solution S_{k+1} = {X_h, G, Y_h, Y_{h1}, W_h, γ1(k+1)} can be constructed, which is the optimal solution at time k. Moreover, it is easy to see that S_{k+1} satisfies the optimization problem. Therefore, based on optimality, we get γ1^∗(k+1) ≤ γ1(k+1) = γ1^∗(k).

On the other hand, the Lyapunov function V(x(k|k), θ(k), k) = x^T(k|k) P_{θ(k)} x(k|k) needs to be established, where P_{θ(k)} is the optimal solution of the optimization problem at time k. Furthermore, based on the proof of feasibility, one has x^T(k+1|k+1) P_{θ(k+1)} x(k+1|k+1) ≤ x^T(k+1|k+1) P_{θ(k)} x(k+1|k+1), because P_{θ(k+1)} is optimal while P_{θ(k)} is only feasible at time k + 1. According to (19) with τ = 0, one gets x^T(k+1|k) P_{θ(k)} x(k+1|k) ≤ x^T(k|k) P_{θ(k)} x(k|k). Meanwhile, x^T(k+1|k+1) P_{θ(k)} x(k+1|k+1) ≤ x^T(k+1|k) P_{θ(k)} x(k+1|k). So the following inequality is obtained: x^T(k+1|k+1) P_{θ(k+1)} x(k+1|k+1) ≤ x^T(k|k) P_{θ(k)} x(k|k). The Lyapunov function is strictly decreasing for the closed-loop system. Therefore, the closed-loop system is asymptotically stable. The proof is completed.

TABLE 1. The bijective mapping relations between (η1, η2) and η.

V. NUMERICAL EXAMPLE
The following numerical example is used to illustrate the effectiveness of the proposed control method. Consider a discrete-time MJLS with two modes.

Mode 1:

A1 = [ −0.2   0.25 ],   B1 = [ 0.1 ]
     [ −0.1  −0.16 ]         [ 0.2 ]

Mode 2:

A2 = [  0.1   0.15 ],   B2 = [  0.17 ]
     [ −0.2  −0.1  ]         [ −0.1  ]

Without loss of generality, the transition probability matrix of the Markov chain is given by

Pr = [ 0.2  0.8 ]
     [ 0.4  0.6 ]

The weighting matrices are chosen as Q_h = diag{0.5, 0.5}, R_h = 6, h ∈ {1, 2, 3, 4}. The initial values are x0 = [0.2  0.13]^T, γ1 = 2, γ2 = 1 and γ3 = 0.2. In this example, to demonstrate the effectiveness of the proposed method, the conventional RMPC method of [43] is used for comparison, and the following three cases are considered. First, consider mode disordering and apply the conventional RMPC method of [43]; the simulation results are presented below.

In this example, the operation mode θ1(k) takes values in M1 = {1, 2}. When θ1(k) is subject to mode disordering, the received mode is described by another Markov process {θ2(k)} with M2 = {1, 2}. Therefore, according to the proposed method, a bijective mapping is defined as Ψ(η) = η1 + 2(η2 − 1). The system corresponding to the operation modes after the bijective mapping is shown in Table 1. The following transition probability matrix is obtained:

     [ 0.1  0.2  0.3  0.4 ]
Pr = [ 0.3  0.1  0.1  0.5 ]
     [ 0.1  0.4  0.2  0.3 ]
     [ 0.2  0.3  0.2  0.3 ]

The simulation results are shown in Figures 1–4. From Figure 2, we can see that the closed-loop system with operation mode disordering cannot guarantee good control performance using the conventional RMPC method of [43]. Although asymptotic stability of the closed-loop system can also be achieved by the conventional RMPC method, the system needs a long time to achieve it. Furthermore, from Figure 4, it is obvious that asymptotic stability of the closed-loop system with operation mode disordering based on the proposed control method is achieved approximately four times faster than with the conventional RMPC method.


FIGURE 1. The original operation modes, θ1, and disordered operation modes, θ2, of the system.

FIGURE 2. The state response of the closed-loop system with mode disordering using the method of [43].

FIGURE 3. The operation modes, θ(k), of the system after the bijective mapping.

FIGURE 4. The state response of the closed-loop system with mode disordering using the proposed method.

FIGURE 5. The state response of the closed-loop system with mode disordering and uncertainty in the transition probability matrix using the proposed method.

FIGURE 6. The state response of the closed-loop system with mode disordering and unknown transition probabilities using the proposed method.

In other words, it is very important that operation mode disordering be considered in designing the control system, which can greatly reduce its impact.

Second, consider the case of uncertainty in the transition probability matrix and apply the proposed control method of Theorem 2. The weighting matrices are Q_h = diag{0.5, 0.5}, R_h = 6, h ∈ {1, 2, 3, 4}. The initial values are x0 = [0.2  0.6]^T, γ1 = 2, γ2 = 1, γ3 = 10 and γ4 = 1. Additionally, ξ_hl = 0.6, where h, l ∈ {1, 2, 3, 4}. The simulation results are shown in Figure 5.

Third, consider the case of an unknown transition probability matrix and apply the proposed control method of Theorem 3. The weighting matrices are chosen as Q_h = diag{0.5, 0.5}, R_h = 6, h ∈ {1, 2, 3, 4}. The initial values are x0 = [0.4  0.2]^T, γ1 = 2, γ2 = 1, γ3 = 10 and γ4 = 1. The simulation results are shown in Figure 6.

According to the simulation results shown in Figure 5 and Figure 6, when the transition probability matrix of the system with operation mode disordering is subject to uncertainty or incomplete information, the closed-loop system can still achieve asymptotic stability very quickly using the proposed control method. It is worth mentioning that, because the uncertain and unknown cases are considered, the conservativeness can be greatly reduced. All in all, the proposed control method achieves better control performance and also guarantees feasibility and strong robust stability.
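A simple way to cross-check such closed-loop behavior numerically is the standard second-moment (mean-square) stability test for MJLSs (see, e.g., [1]): the closed loop x(k+1) = Â_θ(k) x(k) is mean-square stable if and only if the spectral radius of the block matrix whose (j, i) block is π_ij (Â_i ⊗ Â_i) is less than one. The sketch below applies this test to the four augmented modes of the example, with the applied gain indexed by the controller mode η2 in the spirit of (3); the gains themselves are placeholders, since the paper reports the design only through the figures.

import numpy as np

def mean_square_stable(Acl, Pi):
    """Second-moment stability test: spectral radius of the matrix whose (j, i)
    block is Pi[i, j] * kron(Acl[i], Acl[i]) must be < 1 (see, e.g., [1])."""
    N, n = len(Acl), Acl[0].shape[0]
    L = np.zeros((N * n * n, N * n * n))
    for i in range(N):
        for j in range(N):
            L[j*n*n:(j+1)*n*n, i*n*n:(i+1)*n*n] = Pi[i, j] * np.kron(Acl[i], Acl[i])
    return np.abs(np.linalg.eigvals(L)).max() < 1.0

A = [np.array([[-0.2, 0.25], [-0.1, -0.16]]), np.array([[0.1, 0.15], [-0.2, -0.1]])]
B = [np.array([[0.1], [0.2]]), np.array([[0.17], [-0.1]])]
K = [np.array([[0.4, -0.5]]), np.array([[-0.2, 0.3]])]        # placeholder gains
Pi4 = np.array([[0.1, 0.2, 0.3, 0.4],
                [0.3, 0.1, 0.1, 0.5],
                [0.1, 0.4, 0.2, 0.3],
                [0.2, 0.3, 0.2, 0.3]])
# augmented modes 1..4: plant matrices from eta1, applied gain from eta2
Acl = [A[eta1] + B[eta1] @ K[eta2] for eta2 in range(2) for eta1 in range(2)]
print(mean_square_stable(Acl, Pi4))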


VI. CONCLUSION
In this paper, an RMPC method has been studied for a class of discrete-time MJLSs subject to operation mode disordering. To deal with the operation mode disordering, a bijective mapping scheme between the original random process and a new random process has been introduced. At each sampling time, the complex ''min-max'' optimization problem is transformed into a convex optimization problem with LMIs, greatly reducing the complexity of solving the optimization problem. Furthermore, the conservativeness of the design has been reduced because probability information is included in designing the predictive controller. Moreover, the cases of uncertain and unknown transition probabilities have also been considered. In all of these cases, the stochastic stability of the closed-loop system is guaranteed. The simulation results illustrate that the proposed method is both feasible and effective. In future work, the robustness and stability of Markovian jump linear systems subject to actuator and sensor faults will be investigated.

REFERENCES
[1] O. L. V. Costa, M. D. Fragoso, and R. P. Marques, Discrete-Time Markov Jump Linear Systems. London, U.K.: Springer-Verlag, 2005.
[2] Z. Li, T. Zhang, C. Ma, H. Li, and X. Li, "Robust passivity control for 2-D uncertain Markovian jump linear discrete-time systems," IEEE Access, vol. 5, pp. 12176–12184, 2017.
[3] A. Cifter, "Forecasting electricity price volatility with the Markov-switching GARCH model: Evidence from the Nordic electric power market," Electr. Power Syst. Res., vol. 102, no. 9, pp. 61–67, 2013.
[4] J. Wang, M. S. Chen, H. Shen, J. H. Park, and Z. G. Wu, "A Markov jump model approach to reliable event-triggered retarded dynamic output feedback H∞ control for networked systems," Nonlinear Anal., Hybrid Syst., vol. 26, pp. 137–150, Nov. 2017.
[5] M. S. Ali, K. Meenakshi, and N. Gunasekaran, "Finite time H∞ boundedness of discrete-time Markovian jump neural networks with time-varying delays," Int. J. Control, Autom. Syst., vol. 16, no. 1, pp. 181–188, 2018.
[6] G. Wang, Z. Li, Q. Zhang, and C. Yang, "Robust finite-time stability and stabilization of uncertain Markovian jump systems with time-varying delay," Appl. Math. Comput., vol. 293, pp. 377–393, Jan. 2017.
[7] J. R. Chávez-Fuentes, J. E. Mayta, E. F. Costa, and M. H. Terra, "Stochastic and exponential stability of discrete-time Markov jump linear singular systems," Syst. Control Lett., vol. 107, pp. 92–99, Sep. 2017.
[8] S. Cong, "A result on almost sure stability of linear continuous-time Markovian switching systems," IEEE Trans. Autom. Control, vol. 63, no. 7, pp. 2226–2233, Jul. 2018.
[9] G. W. Gabriel and J. C. Geromel, "Performance evaluation of sampled-data control of Markov jump linear systems," Automatica, vol. 86, pp. 212–215, Dec. 2017.
[10] G. W. Gabriel, T. R. Gonçalves, and J. C. Geromel, "Optimal and robust sampled-data control of Markov jump linear systems: A differential LMI approach," IEEE Trans. Autom. Control, vol. 63, no. 9, pp. 3054–3060, Sep. 2018.
[11] W. Liu, P. Shi, and J.-S. Pan, "State estimation for discrete-time Markov jump linear systems with time-correlated and mode-dependent measurement noise," Automatica, vol. 85, pp. 9–21, Nov. 2017.
[12] Z. Wang, J. Yuan, Y. Pan, and D. Che, "Adaptive neural control for high order Markovian jump nonlinear systems with unmodeled dynamics and dead zone inputs," Neurocomputing, vol. 247, pp. 62–72, Jul. 2017.
[13] S. Xing and F. Deng, "Delay-dependent H∞ filtering for discrete singular Markov jump systems with Wiener process and partly unknown transition probabilities," J. Franklin Inst., vol. 355, pp. 6062–6086, Sep. 2018.
[14] J. Liu, C. Wu, Z. Wang, and L. Wu, "Reliable filter design for sensor networks using type-2 fuzzy framework," IEEE Trans. Ind. Informat., vol. 13, no. 4, pp. 1742–1752, Aug. 2017.
[15] I. Ghous, Z. Xiang, and H. R. Karimi, "H∞ control of 2-D continuous Markovian jump delayed systems with partially unknown transition probabilities," Inf. Sci., vols. 382–383, pp. 274–291, Mar. 2017.
[16] F. Li, P. Shi, C.-C. Lim, and L. Wu, "Fault detection filtering for nonhomogeneous Markovian jump systems via a fuzzy approach," IEEE Trans. Fuzzy Syst., vol. 26, no. 1, pp. 131–141, Feb. 2018.
[17] Z. Li and G. Wang, "Stabilization of discrete-time systems via a partially disabled controller experiencing forced dwell times," IEEE Access, vol. 6, pp. 27001–27009, 2018.
[18] G. Sun, L. Wu, Z. Kuang, Z. Ma, and J. Liu, "Practical tracking control of linear motor via fractional-order sliding mode," Automatica, vol. 94, pp. 221–235, Aug. 2018.
[19] L. Wu, Y. Gao, J. Liu, and H. Li, "Event-triggered sliding mode control of stochastic systems via output feedback," Automatica, vol. 82, pp. 79–92, Aug. 2017.
[20] F. Li, C. Du, C. Yang, and W. Gui, "Passivity-based asynchronous sliding mode control for delayed singular Markovian jump systems," IEEE Trans. Autom. Control, vol. 63, no. 8, pp. 2715–2721, Aug. 2018.
[21] J. Song, Y. Niu, and Y. Zou, "Asynchronous sliding mode control of Markovian jump systems with time-varying delays and partly accessible mode detection probabilities," Automatica, vol. 93, pp. 33–41, Jul. 2018.
[22] H. Li, P. Shi, and D. Yao, "Adaptive sliding-mode control of Markov jump nonlinear systems with actuator faults," IEEE Trans. Autom. Control, vol. 62, no. 4, pp. 1933–1939, Apr. 2017.
[23] C.-C. Tsai, S.-C. Lin, T.-Y. Wang, and F.-J. Teng, "Stochastic model reference predictive temperature control with integral action for an industrial oil-cooling process," Control Eng. Pract., vol. 17, no. 2, pp. 302–310, 2009.
[24] Y. Tang, C. Peng, S. Yin, J. Qiu, H. Gao, and O. Kaynak, "Robust model predictive control under saturations and packet dropouts with application to networked flotation processes," IEEE Trans. Autom. Sci. Eng., vol. 11, no. 4, pp. 1056–1064, Oct. 2014.
[25] F. Gagnon, A. Desbiens, É. Poulin, P.-P. Lapointe-Garant, and J.-S. Simard, "Nonlinear model predictive control of a batch fluidized bed dryer for pharmaceutical particles," Control Eng. Pract., vol. 64, pp. 88–101, Jul. 2017.
[26] Y. Cao, D. Acevedo, Z. K. Nagy, and C. D. Laird, "Real-time feasible multi-objective optimization based nonlinear model predictive control of particle size and shape in a batch crystallization process," Control Eng. Pract., vol. 69, pp. 1–8, Dec. 2017.
[27] G. Franzè, F. Tedesco, and D. Famularo, "Model predictive control for constrained networked systems subject to data losses," Automatica, vol. 54, pp. 272–278, Apr. 2015.
[28] W. Yang, G. Feng, and T. Zhang, "Robust model predictive control for discrete-time Takagi–Sugeno fuzzy systems with structured uncertainties and persistent disturbances," IEEE Trans. Fuzzy Syst., vol. 22, no. 5, pp. 1213–1228, Oct. 2014.
[29] J. Liu, Y. Gao, S. Geng, and L. Wu, "Nonlinear control of variable speed wind turbines via fuzzy techniques," IEEE Access, vol. 5, pp. 27–34, 2017.
[30] L. Teng, Y. Y. Wang, W. J. Cai, and H. Li, "Robust model predictive control of discrete nonlinear systems with time delays and disturbances via T-S fuzzy approach," J. Process Control, vol. 53, pp. 70–79, May 2017.
[31] B. Ding and X. Ping, "Dynamic output feedback model predictive control for nonlinear systems represented by Hammerstein–Wiener model," J. Process Control, vol. 22, pp. 1773–1784, Oct. 2012.
[32] M. Ławryńczuk, "Nonlinear predictive control for Hammerstein–Wiener systems," ISA Trans., vol. 55, pp. 49–62, Mar. 2015.
[33] F. Khani and M. Haeri, "Robust model predictive control of nonlinear processes represented by Wiener or Hammerstein models," Chem. Eng. Sci., vol. 129, pp. 223–231, Jun. 2015.
[34] B. Vatankhah and M. Farrokhi, "Nonlinear model-predictive control with disturbance rejection property using adaptive neural networks," J. Franklin Inst., vol. 354, no. 13, pp. 5201–5220, 2017.
[35] G. Lou, W. Gu, W. Sheng, X. Song, and F. Gao, "Distributed model predictive secondary voltage control of islanded microgrids with feedback linearization," IEEE Access, vol. 6, pp. 50169–50178, 2018.
[36] H. Zheng, T. Zou, J. Hu, and H. Yu, "A framework for adaptive predictive control system based on zone control," IEEE Access, vol. 6, pp. 49513–49522, 2018.
[37] K. Hashimoto, S. Adachi, and D. V. Dimarogonas, "Event-triggered intermittent sampling for nonlinear model predictive control," Automatica, vol. 81, pp. 148–155, Jul. 2017.
[38] T. A. N. Heirung, B. E. Ydstie, and B. Foss, "Dual adaptive model predictive control," Automatica, vol. 80, pp. 340–348, Jun. 2017.


[39] H. C. La, A. Potschka, and H. G. Bock, "Partial stability for nonlinear model predictive control," Automatica, vol. 78, pp. 14–19, Apr. 2017.
[40] A. Garg, F. P. C. Gomes, P. Mhaskar, and M. R. Thompson, "Model predictive control of uni-axial rotational molding process," Comput. Chem. Eng., vol. 121, pp. 306–316, Feb. 2019.
[41] A. J. Gallego, G. M. Merello, M. Berenguel, and E. F. Camacho, "Gain-scheduling model predictive control of a Fresnel collector field," Control Eng. Pract., vol. 82, pp. 1–13, Jan. 2019.
[42] P. Sopasakis, D. Herceg, A. Bemporad, and P. Patrinos, "Risk-averse model predictive control," Automatica, vol. 100, pp. 281–288, Feb. 2019.
[43] M. V. Kothare, V. Balakrishnan, and M. Morari, "Robust constrained model predictive control using linear matrix inequalities," Automatica, vol. 32, no. 10, pp. 1361–1379, 1996.
[44] F. A. Cuzzola, J. C. Geromel, and M. Morari, "An improved approach for constrained robust model predictive control," Automatica, vol. 38, no. 7, pp. 1183–1189, 2002.
[45] P. Bumroongsri and S. Kheawhom, "Robust model predictive control with time-varying tubes," Int. J. Control, Autom. Syst., vol. 15, no. 4, pp. 1479–1484, 2017.
[46] I. Nodozi and M. Rahmani, "LMI-based model predictive control for switched nonlinear systems," J. Process Control, vol. 59, pp. 49–58, Nov. 2017.
[47] Y. Kim, T. H. Oh, T. Park, and J. M. Lee, "Backstepping control integrated with Lyapunov-based model predictive control," J. Process Control, vol. 73, pp. 137–146, Jan. 2019.
[48] B.-G. Park and W. H. Kwon, "Robust one-step receding horizon control of discrete-time Markovian jump uncertain systems," Automatica, vol. 38, no. 7, pp. 1229–1235, 2002.
[49] J. Lu, D. Li, and Y. Xi, "Constrained model predictive control synthesis for uncertain discrete-time Markovian jump linear systems," IET Control Theory Appl., vol. 7, no. 5, pp. 707–719, 2013.
[50] T. Yang and H. R. Karimi, "LMI-based model predictive control for a class of constrained uncertain fuzzy Markov jump systems," Math. Problems Eng., vol. 2013, pp. 1–13, Sep. 2013.
[51] Y. Q. Liu and F. Liu, "N-step off-line MPC design of nonhomogeneous Markov jump systems: A suboptimal case," J. Franklin Inst., vol. 351, no. 1, pp. 174–186, 2014.
[52] P. Patrinos, P. Sopasakis, H. Sarmiveis, and A. Bemporad, "Stochastic model predictive control for constrained discrete-time Markovian switching systems," Automatica, vol. 50, pp. 2504–2514, Oct. 2014.
[53] S. Chitraganti, S. Aberkane, C. Aubrun, G. Valencia-Palomo, and V. Dragan, "On control of discrete-time state-dependent jump linear systems with probabilistic constraints: A receding horizon approach," Syst. Control Lett., vol. 74, pp. 81–89, Dec. 2014.
[54] J. Cheng and F. Liu, "Feedback predictive control based on periodic invariant set for Markov jump systems," Circuits, Syst., Signal Process., vol. 34, no. 8, pp. 2681–2693, 2015.
[55] Y. Liu, Y. Yin, F. Liu, and K. L. Teo, "Constrained MPC design of nonlinear Markov jump system with nonhomogeneous process," Nonlinear Anal., Hybrid Syst., vol. 17, pp. 1–9, Aug. 2015.
[56] Y. Song, S. Liu, and G. Wei, "Constrained robust distributed model predictive control for uncertain discrete-time Markovian jump linear system," J. Franklin Inst., vol. 352, no. 1, pp. 73–92, 2015.
[57] A. Sala, M. Hernández-Mejías, and C. Ariño, "Stable receding-horizon scenario predictive control for Markov-jump linear systems," Automatica, vol. 86, pp. 121–128, Dec. 2017.
[58] J. Lu, Y. Xi, D. Li, Y. Xu, and Z. Gan, "Model predictive control synthesis for constrained Markovian jump linear systems with bounded disturbance," IET Control Theory Appl., vol. 11, no. 18, pp. 3288–3296, 2017.
[59] V. Dombrovskii, T. Obyedko, and M. Samorodova, "Model predictive control of constrained Markovian jump nonlinear stochastic systems and portfolio optimization under market frictions," Automatica, vol. 87, pp. 61–68, Jan. 2018.
[60] L. Zhang, W. Xie, and J. Liu, "Robust control of saturating systems with Markovian packet dropouts under distributed MPC," ISA Trans., to be published, doi: 10.1016/j.isatra.2018.08.027.
[61] G. Wang, "H∞ control of singular Markovian jump systems with operation modes disordering in controllers," Neurocomputing, vol. 142, pp. 275–281, Apr. 2014.
[62] Z.-G. Wu, P. Shi, H. Su, and J. Chu, "Asynchronous l2–l∞ filtering for discrete-time stochastic Markov jump systems with randomly occurred sensor nonlinearities," Automatica, vol. 50, no. 1, pp. 180–186, Jan. 2014.
[63] J. Wang and B. Ding, "Two-step output feedback predictive control for Hammerstein systems with networked-induced time delays," Int. J. Syst. Sci., vol. 49, no. 13, pp. 2753–2762, 2018.

HONGBIN CAI received the M.S. degree in control theory and control engineering from Liaoning Shihua University. He is currently pursuing the Ph.D. degree in control science and engineering at Northwestern Polytechnical University. His research interests include industrial process control and model predictive control.

PING LI received the Ph.D. degree in control science and engineering from Zhejiang University, in 1995. His research interests include industrial process control and optimization, model predictive control, and adaptive control.

CHENGLI SU received the Ph.D. degree in control science and engineering from Zhejiang University, in 2006. His research interests include industrial process control and optimization, and model predictive control.

JIANGTAO CAO received the Ph.D. degree in control science and engineering from the University of Portsmouth, in 2009. His research interests include industrial process control and optimization, model predictive control, and fuzzy control systems.
