Robust Model Predictive Control For A Class of Discrete-Time Markovian Jump Linear Systems With Operation Mode Disordering
ABSTRACT For a class of discrete-time Markovian jump linear systems subject to operation mode
disordering, a robust model predictive control method is proposed. A bijective
mapping scheme between the original random process and a new random process is studied to cope with
the problem of operation mode disordering. At each sampling time, the original ‘‘min–max’’ optimization
problem is transformed into a convex optimization problem with linear matrix inequalities, so that the
complexity of solving the optimization problem is greatly reduced. A sufficient stability condition
for the Markovian jump linear systems is derived using Lyapunov stability theory. Moreover,
a state feedback control law is obtained that minimizes an infinite prediction horizon performance cost.
Furthermore, the cases of uncertain and unknown transition probabilities are also considered in this paper.
The simulation results show that the proposed method guarantees both the optimal control performance and
the stability of the Markovian jump linear systems.
INDEX TERMS Robust model predictive control, Markovian jump linear systems (MJLSs), operation mode
disordering, linear matrix inequalities (LMIs).
IEEE Access, vol. 7, 2019. 2169-3536 © 2019 IEEE.
transition probabilities were assumed to be convex sets was proposed. For discrete-time MJLSs with polytopic uncertainties, a novel multi-step mode-dependent MPC method was proposed, and the mean-square stability of three cases was guaranteed [49]. Yang and Karimi [50] studied a novel MPC method for a class of uncertain fuzzy MJLSs with partially unknown transition probabilities. Moreover, for discrete-time non-homogeneous MJLSs with time-varying transition probability matrices, an N-step off-line suboptimal MPC was proposed [51]. Reference [52] investigated the stochastic model predictive control (SMPC) of nonlinear MJLSs, where terminal conditions of invariance and stability were used to fulfill the robustness constraint and guarantee mean-square stability. Chitraganti et al. [53] studied a one-step receding horizon control method for discrete-time state-dependent MJLSs subject to probabilistic state constraints and unbounded disturbances. On the other hand, for MJLSs subject to input/state constraints, an MPC method based on a periodic invariant set was designed [54]. Furthermore, an MPC method was developed for a class of discrete-time nonlinear Markovian jump systems with non-homogeneous transition probabilities [55]. Reference [56] studied a robust distributed model predictive control (DMPC) strategy for MJLSs with polytopic uncertainties in both the system matrices and the transition probability matrices. At the same time, stable receding-horizon scenario predictive control of constrained discrete-time MJLSs was also studied [57]. For Markovian jump linear systems with bounded disturbances, Lu et al. [58] studied a constrained model predictive control method to achieve disturbance rejection. For a class of constrained discrete-time Markovian nonlinear stochastic switching systems, Dombrovskii et al. [59] proposed an MPC method and applied it to dynamic investment portfolio selection in the presence of market frictions. Zhang et al. [60] proposed a distributed model predictive control strategy for saturating systems with packet dropouts.

We note that in all of the above studies the MPC method depends on the operation modes of the system: it is assumed that the system modes and the operation modes of the controllers are synchronous and arrive in the right sequence. In practice, however, operation mode disordering is common. For example, in networked control systems, data packets can travel along multiple paths and thus experience different time delays, so packets launched earlier may reach the target point later than packets sent later; in other words, the data packets arrive in the incorrect order. If the control signals of MJLSs are transmitted over unreliable networks, disordered operation modes can easily occur, which can in turn destabilize the control system and degrade control performance. It is therefore very important to deal with MJLSs that exhibit operation mode disordering [61]. Until now, to the best of our knowledge, discrete-time MJLSs with operation mode disordering have not been sufficiently studied, which motivates the study in this paper.

The main contributions of this paper can be highlighted as follows. (1) For a class of discrete-time MJLSs subject to operation mode disordering, a novel RMPC method is proposed that guarantees the stability and the optimal performance of the system. (2) To better address operation mode disordering in MJLSs, transition probabilities with uncertainty and with incomplete information are also studied; the problems discussed here are more general than most of those in the existing literature. (3) From a technical perspective, considering transition probabilities with uncertainty and incomplete information greatly increases the complexity of the optimization problem, which constitutes both the challenge and the innovation of this study.

The rest of the paper is arranged as follows. The problem description is given in Section 2. Section 3 proposes the RMPC method for MJLSs with operation mode disordering. Section 4 contains the main results. Section 5 provides numerical examples. Section 6 gives a conclusion and outlines future work.

Notations: R^n represents the n-dimensional Euclidean space. x(k|k) denotes the measured state at sampling time k. A^T denotes the transpose of matrix A. u(k+i|k) and x(k+i|k) represent the control input and the predicted state at step k, respectively. ||x(k)||_2 represents the Euclidean norm of the state vector x(k). Ω refers to the sample space, F refers to the σ-algebra of subsets of the sample space, and P refers to the probability measure on F. E refers to the mathematical expectation. The symbol ‘‘∗’’ denotes the symmetric parts of symmetric matrices. I represents the identity matrix of compatible dimensions, and (G)^⋆ = G + G^T.

II. PROBLEM FORMULATION
Consider a class of discrete-time MJLSs defined on a complete probability space (Ω, F, P):

$$x(k+1) = A_{\theta_1(k)}\,x(k) + B_{\theta_1(k)}\,u(k) \tag{1}$$

where x(k) ∈ R^{n_x} represents the system state, u(k) ∈ R^{n_u} represents the control input, and θ₁(k) represents the operation mode of the system. A_{θ₁(k)} and B_{θ₁(k)} represent the system matrices of compatible dimensions. The original operation mode process {θ₁(k), k ∈ Z} is a Markov chain that takes values in the discrete finite set M₁ = {1, 2, …, N₁}. Therefore, the mode-dependent state feedback controller can be written as

$$u(k) = K_{\theta_1(k)}\,x(k) \tag{2}$$

However, θ₁(k) is transmitted through multiple channels in a networked control system and may suffer from operation mode disordering. Then, the corresponding controller can be described as follows:

$$u(k) = K_{\theta_2(k)}\,x(k) \tag{3}$$
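The mismatch between (2) and (3) can be sketched in a short simulation. The matrices, the gains, and the stale-mode model of disordering below are illustrative assumptions, not the paper's design; the sketch only shows how a controller driven by a disordered mode θ₂(k) differs from one driven by the true mode θ₁(k).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-mode MJLS (illustrative matrices, not the paper's):
# system (1): x(k+1) = A[m] x(k) + B[m] u(k), with mode m = theta1(k).
A = [np.array([[0.9, 0.1], [0.0, 0.8]]), np.array([[0.7, 0.2], [0.1, 0.9]])]
B = [np.array([[1.0], [0.5]]), np.array([[0.5], [1.0]])]
K = [np.array([[-0.4, -0.1]]), np.array([[-0.3, -0.5]])]  # assumed gains

P = np.array([[0.2, 0.8], [0.4, 0.6]])  # transition probability matrix

def simulate(steps=50, disorder_prob=0.3):
    """Simulate (1) under controller (3), u(k) = K[theta2(k)] x(k).

    theta2 equals theta1 except that, with probability `disorder_prob`,
    the controller receives a stale (previous) mode -- a crude model of
    operation mode disordering caused by out-of-order packets."""
    x = np.array([[0.2], [0.13]])
    theta1, prev = 0, 0
    traj = [x]
    for _ in range(steps):
        theta2 = prev if rng.random() < disorder_prob else theta1
        u = K[theta2] @ x                    # controller uses theta2, not theta1
        x = A[theta1] @ x + B[theta1] @ u    # plant evolves with true mode theta1
        traj.append(x)
        prev = theta1
        theta1 = rng.choice(2, p=P[theta1])  # Markov jump of the true mode
    return traj

traj = simulate()
```

The point of the sketch is only structural: the plant jumps according to θ₁(k) while the feedback gain is indexed by θ₂(k), which is exactly the asynchrony the bijective mapping scheme is designed to handle.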
Taking the expectation of both sides of (19) and summing from τ = 0 to τ = ∞, one obtains

$$\sum_{\tau=0}^{\infty} E\big[V(x(k+\tau+1|k)) - V(x(k+\tau|k))\big] \le -\sum_{\tau=0}^{\infty} E\big[x^T(k+\tau|k)\,Q_{\theta(k+\tau)}\,x(k+\tau|k) + u^T(k+\tau|k)\,R_{\theta(k+\tau)}\,u(k+\tau|k)\big] \tag{20}$$

It is assumed that the closed-loop system is asymptotically stable. Since x(∞|k) = 0, it can be concluded that V(x(∞|k)) = 0. Then,

$$E\Big\{\sum_{\tau=0}^{\infty}\big[V(x(k+\tau+1|k)) - V(x(k+\tau|k))\big]\Big\} \le -J_{\infty}(k) \tag{21}$$

The upper bound of the performance index can be derived as follows:

$$J_{\infty}(k) \le E\{V[x(k|k)]\} \le \gamma_1 \tag{22}$$

where γ₁ is a given positive scalar. The main results are presented in the following theorems.

Theorem 1 states that the closed-loop system (12) is stochastically stable and the performance index is minimized if there exist matrices X_h > 0, G > 0, Y_h > 0, Y_{h1} > 0 and W_h > 0 satisfying

$$\begin{bmatrix}
-(G)^{\star} + X_h & \sqrt{2(\gamma_2+\gamma_3)}\,G^T & G^T & \sqrt{2}\,Y_h^T & \sqrt{2\pi_{h1}}\,\bar{A}_h^T & \cdots & \sqrt{2\pi_{hN}}\,\bar{A}_h^T \\
* & -\bar{W}_h & 0 & 0 & 0 & \cdots & 0 \\
* & * & -\bar{Q}_h & 0 & 0 & \cdots & 0 \\
* & * & * & -\bar{R}_h & 0 & \cdots & 0 \\
* & * & * & * & -X_1 & \cdots & 0 \\
* & * & * & * & * & \ddots & \vdots \\
* & * & * & * & * & \cdots & -X_N
\end{bmatrix} \le 0 \tag{24}$$

$$\begin{bmatrix}
-\gamma_2 I & \sqrt{\pi_{h1}}\,B_h^T & \cdots & \sqrt{\pi_{hN}}\,B_h^T \\
* & -X_1 & \cdots & 0 \\
* & * & \ddots & \vdots \\
* & * & \cdots & -X_N
\end{bmatrix} \le 0 \tag{25}$$

$$\begin{bmatrix}
-(G)^{\star} + \bar{W}_h & (Y_{h1} - Y_h)^T \\
* & -I
\end{bmatrix} \le 0 \tag{26}$$

$$\begin{bmatrix}
-\gamma_1 I & x^T(k|k) \\
* & -X_{\eta}
\end{bmatrix} \le 0 \tag{27}$$

and

$$R_h \le \gamma_3 I \tag{28}$$

where

$$\bar{A}_h = A_h G + B_h Y_h,\qquad \bar{W}_h = W_h^{-1},\qquad \bar{Q}_h = Q_h^{-1},\qquad \bar{R}_h = R_h^{-1}.$$

Then, the gain K_{h1} of controller (2) and the gain K_h of controller (7) can be obtained as K_{h1} = Y_{h1} G^{-1} and K_h = Y_h G^{-1}.
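The passage from a quadratic condition such as (34) to an LMI such as (25) relies on the Schur complement; its standard form, stated for reference, is:

```latex
% Schur complement lemma (standard form), for symmetric blocks:
\begin{bmatrix} \mathcal{A} & \mathcal{B} \\ \mathcal{B}^T & -\mathcal{C} \end{bmatrix} \le 0
\quad\Longleftrightarrow\quad
\mathcal{C} > 0 \ \text{ and } \ \mathcal{A} + \mathcal{B}\,\mathcal{C}^{-1}\mathcal{B}^T \le 0 .
% With \mathcal{A} = -\gamma_2 I,
% \mathcal{B} = [\sqrt{\pi_{h1}}\,B_h^T \ \cdots \ \sqrt{\pi_{hN}}\,B_h^T],
% \mathcal{C} = \mathrm{diag}(X_1,\dots,X_N) and X_l = P_l^{-1},
% this recovers B_h^T \sum_l \pi_{hl} P_l B_h \le \gamma_2 I, i.e. (34), from (25).
```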
Furthermore, it can also be derived that

$$\sum_{l=1}^{N}(A_h+B_hK_h)^T\pi_{hl}P_l\,(A_h+B_hK_h) + 2(A_h+B_hK_h)^T\sum_{l=1}^{N}\pi_{hl}P_l\,B_h\Delta K_h + (B_h\Delta K_h)^T\sum_{l=1}^{N}\pi_{hl}P_l\,B_h\Delta K_h - P_h + Q_h + K_h^T R_h K_h + 2K_h^T R_h\Delta K_h + \Delta K_h^T R_h\Delta K_h \le 0 \tag{30}$$

Moreover, it is concluded that

$$2(A_h+B_hK_h)^T\sum_{l=1}^{N}\pi_{hl}P_l\,B_h\Delta K_h \le (A_h+B_hK_h)^T\sum_{l=1}^{N}\pi_{hl}P_l\,(A_h+B_hK_h) + (B_h\Delta K_h)^T\sum_{l=1}^{N}\pi_{hl}P_l\,B_h\Delta K_h \tag{31}$$

and

$$2K_h^T R_h\Delta K_h \le \Delta K_h^T R_h\Delta K_h + K_h^T R_h K_h \tag{32}$$

By applying conditions (31) and (32) to inequality (30), (30) can be rewritten as

$$2(A_h+B_hK_h)^T\sum_{l=1}^{N}\pi_{hl}P_l\,(A_h+B_hK_h) + 2(B_h\Delta K_h)^T\sum_{l=1}^{N}\pi_{hl}P_l\,B_h\Delta K_h - P_h + Q_h + 2K_h^T R_h K_h + 2\Delta K_h^T R_h\Delta K_h \le 0 \tag{33}$$

We can see that

$$B_h^T\sum_{l=1}^{N}\pi_{hl}P_l\,B_h \le \gamma_2 I \tag{34}$$

$$R_h \le \gamma_3 I \tag{35}$$

and

$$\Delta K_h^T\Delta K_h \le W_h \tag{36}$$

Therefore, (33) can be reformulated as

$$2(A_h+B_hK_h)^T\sum_{l=1}^{N}\pi_{hl}P_l\,(A_h+B_hK_h) + 2(\gamma_2+\gamma_3)W_h - P_h + Q_h + 2K_h^T R_h K_h \le 0 \tag{37}$$

Multiplying (37) on the right by G > 0 and on the left by its transpose, and defining P_h^{-1} = X_h, Y_h = K_h G and Y_{h1} = K_{h1} G, (37) is equivalent to

$$2(A_hG+B_hY_h)^T\sum_{l=1}^{N}\pi_{hl}P_l\,(A_hG+B_hY_h) + 2(\gamma_2+\gamma_3)G^T W_h G - G^T P_h G + G^T Q_h G + 2Y_h^T R_h Y_h \le 0 \tag{38}$$

By applying the Schur complement lemma, (38) can be transformed into the following inequality:

$$\begin{bmatrix}
-G^T P_h G & \sqrt{2(\gamma_2+\gamma_3)}\,G^T & G^T & \sqrt{2}\,Y_h^T & \sqrt{2\pi_{h1}}\,\bar{A}_h^T & \cdots & \sqrt{2\pi_{hN}}\,\bar{A}_h^T \\
* & -W_h^{-1} & 0 & 0 & 0 & \cdots & 0 \\
* & * & -Q_h^{-1} & 0 & 0 & \cdots & 0 \\
* & * & * & -R_h^{-1} & 0 & \cdots & 0 \\
* & * & * & * & -X_1 & \cdots & 0 \\
* & * & * & * & * & \ddots & \vdots \\
* & * & * & * & * & \cdots & -X_N
\end{bmatrix} \le 0 \tag{39}$$

As for the nonlinear term −G^T P_h G, it can be shown that

$$-G^T P_h G \le -(G)^{\star} + X_h \tag{40}$$

Therefore, (24) can be directly obtained. Similarly, taking into account (34), (25) can also be guaranteed. On the other hand, by substituting (9) into (36) and multiplying the right and left sides by G and its transpose, respectively, we have

$$-G^T W_h G + (Y_{h1}-Y_h)^T(Y_{h1}-Y_h) \le 0 \tag{41}$$

As for the nonlinear term −G^T W_h G, we use the fact that

$$-G^T W_h G \le -(G)^{\star} + W_h^{-1} \tag{42}$$

Finally, it is easy to deduce inequality (26). Therefore, from (19), when τ = 0, one can get

$$\Delta V(x(k),\theta(k),k) = E[V(x(k+1))] - V(x(k)) = x^T(k)\Lambda x(k) \le -\lambda_{\min}(-\Lambda)\,x^T(k)x(k) \le -\rho\,x^T(k)x(k) \tag{43}$$

where

$$\Lambda = [A_\eta + B_\eta(K_\eta+\Delta K_\eta)]^T\sum_{\mu=1}^{N}\pi_{\eta\mu}P_\mu\,[A_\eta + B_\eta(K_\eta+\Delta K_\eta)] - P_\eta$$

and λ_min(−Λ) denotes the minimal eigenvalue of (−Λ), with ρ = inf{λ_min(−Λ)} for any η, μ ∈ M. Taking the expectation of both sides of (43) and summing from k = 0 to k = ∞,

$$E\Big\{\sum_{k=0}^{\infty}\Delta V(x(k),\theta(k),k)\Big\} = E[V(x(\infty))] - V(x(0)) \le -\rho\,E\Big\{\sum_{k=0}^{\infty}x^T(k)x(k)\Big\} \tag{44}$$
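The bounds (40) and (42) used above follow from a standard completion-of-squares argument, stated here for any P_h > 0 and any G of compatible dimensions:

```latex
% Completion of squares behind (40) (and, with W_h in place of P_h, (42)):
(G - P_h^{-1})^T P_h\,(G - P_h^{-1}) \ge 0
\;\Longrightarrow\;
G^T P_h G \ge G + G^T - P_h^{-1},
% i.e., with X_h = P_h^{-1} and (G)^{\star} = G + G^T:
-\,G^T P_h G \le -(G)^{\star} + X_h .
```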
then the following inequality holds:

$$E\Big\{\sum_{k=0}^{\infty}x^T(k)x(k)\Big\} \le \frac{1}{\rho}\big\{V(x(0)) - E[V(x(\infty))]\big\} \le \frac{1}{\rho}V(x(0)) \tag{45}$$

which implies

$$E\Big\{\sum_{k=0}^{\infty}x^T(k)x(k)\,\Big|\,x_0,\theta_0\Big\} \le \frac{1}{\rho}V(x(0)) < \infty \tag{46}$$

From Definition 1, it follows that the system is stochastically stable. This completes the proof.

From the above results, we can see that π_{hl} plays an important role in designing the RMPC, guaranteeing the robust stability and the optimal control performance of the closed-loop system. However, in practice, π_{hl} cannot be exactly obtained. Therefore, it is essential that the RMPC take into account cases in which the transition probabilities cannot be determined with certainty. It follows that

$$\pi_{hl} = \tilde{\pi}_{hl} + \Delta\tilde{\pi}_{hl},\qquad \tilde{\pi}_{hl}\in[0,1] \tag{47}$$

where π̃_{hl} denotes the estimate of π_{hl} in accordance with (10), and Δπ̃_{hl} ∈ [−ξ_{hl}, ξ_{hl}], where ξ_{hl} ∈ [0, 1] denotes the admissible uncertainty. Therefore, the following theorem can be obtained.

Theorem 2: Consider the system (1), and let x(k|k) = x(k) be the measured system state at each sampling time k. For given symmetric matrices Q_h > 0 and R_h > 0 and scalars γ₁ > 0, γ₂ > 0, γ₃ > 0, γ₄ > 0 and ξ_{hl} > 0, there exists a non-fragile state feedback controller (7) such that the performance index is minimized. Then the closed-loop system (12) is stochastically stable if there exist matrices X_h > 0, G > 0, Y_h > 0, Y_{h1} > 0, M_h > 0 and W_h > 0 satisfying

$$\begin{bmatrix}
\Xi & \sqrt{2(\gamma_3-\gamma_2)}\,G^T & G^T & \sqrt{2}\,Y_h^T & \sqrt{2\tilde{\pi}_{h1}}\,\bar{A}_h^T & \cdots & \sqrt{2\tilde{\pi}_{hN}}\,\bar{A}_h^T & \sqrt{2\xi_{h1}}\,\bar{A}_h^T & \cdots & \sqrt{2\xi_{hN}}\,\bar{A}_h^T \\
* & -\bar{W}_h & 0 & 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\
* & * & -\bar{Q}_h & 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\
* & * & * & -\bar{R}_h & 0 & \cdots & 0 & 0 & \cdots & 0 \\
* & * & * & * & -X_1 & \cdots & 0 & 0 & \cdots & 0 \\
* & * & * & * & * & \ddots & \vdots & \vdots & & \vdots \\
* & * & * & * & * & \cdots & -X_N & 0 & \cdots & 0 \\
* & * & * & * & * & \cdots & * & -\bar{M}_h & \cdots & 0 \\
* & * & * & * & * & \cdots & * & * & \ddots & \vdots \\
* & * & * & * & * & \cdots & * & * & \cdots & -\bar{M}_h
\end{bmatrix} \le 0 \tag{48}$$

$$\begin{bmatrix}
-\gamma_3 I + R_h & \sqrt{\tilde{\pi}_{h1}}\,B_h^T & \cdots & \sqrt{\tilde{\pi}_{hN}}\,B_h^T & \sqrt{\xi_{h1}}\,B_h^T & \cdots & \sqrt{\xi_{hN}}\,B_h^T \\
* & -X_1 & \cdots & 0 & 0 & \cdots & 0 \\
* & * & \ddots & \vdots & \vdots & & \vdots \\
* & * & \cdots & -X_N & 0 & \cdots & 0 \\
* & * & \cdots & * & -\bar{M}_h & \cdots & 0 \\
* & * & \cdots & * & * & \ddots & \vdots \\
* & * & \cdots & * & * & \cdots & -\bar{M}_h
\end{bmatrix} \le 0 \tag{49}$$

$$\begin{bmatrix}
-\gamma_2 I & \sqrt{\xi_{h1}}\,B_h^T & \cdots & \sqrt{\xi_{hN}}\,B_h^T \\
* & -X_1 & \cdots & 0 \\
* & * & \ddots & \vdots \\
* & * & \cdots & -X_N
\end{bmatrix} \le 0 \tag{50}$$

$$\begin{bmatrix}
-\gamma_4 I & \sqrt{\xi_{h1}}\,\bar{A}_h^T & \cdots & \sqrt{\xi_{hN}}\,\bar{A}_h^T \\
* & -X_1 & \cdots & 0 \\
* & * & \ddots & \vdots \\
* & * & \cdots & -X_N
\end{bmatrix} \le 0 \tag{51}$$

and

$$\begin{bmatrix}
-(I)^{\star} + \bar{M}_h & I^T \\
* & -X_l
\end{bmatrix} < 0 \tag{52}$$

where

$$\bar{A}_h = A_h G + B_h Y_h,\quad \bar{W}_h = W_h^{-1},\quad \bar{M}_h = M_h^{-1},\quad \Xi = -(G)^{\star} + X_h - 2\gamma_4 I,\quad \bar{Q}_h = Q_h^{-1},\quad \bar{R}_h = R_h^{-1}.$$

Proof: Taking into account the proof of Theorem 1, it is obvious that condition (29) is affected by the uncertainty (47), so that

$$\sum_{l=1}^{N}[A_h+B_h(K_h+\Delta K_h)]^T(\tilde{\pi}_{hl}+\Delta\tilde{\pi}_{hl})P_l\,[A_h+B_h(K_h+\Delta K_h)] - P_h + Q_h + (K_h+\Delta K_h)^T R_h(K_h+\Delta K_h) \le 0 \tag{53}$$
To address the uncertainty, (53) can be changed such that

$$[A_h+B_h(K_h+\Delta K_h)]^T\Big[\sum_{l=1}^{N}(\tilde{\pi}_{hl}+\Delta\tilde{\pi}_{hl})P_l + \sum_{l=1}^{N}\xi_{hl}P_l - \sum_{l=1}^{N}\xi_{hl}P_l - \sum_{l=1}^{N}(\Delta\tilde{\pi}_{hl}+\xi_{hl})M_h + \sum_{l=1}^{N}\xi_{hl}M_h\Big][A_h+B_h(K_h+\Delta K_h)] - P_h + Q_h + (K_h+\Delta K_h)^T R_h(K_h+\Delta K_h) \le 0 \tag{54}$$

Expanding and regrouping the uncertain terms yields, among others, the terms

$$-2(A_h+B_hK_h)^T\sum_{l=1}^{N}\xi_{hl}P_l\,(A_h+B_hK_h) - 2\Delta K_h^T B_h^T\sum_{l=1}^{N}\xi_{hl}P_l\,B_h\Delta K_h + 2\Delta K_h^T\Big(R_h + B_h^T\sum_{l=1}^{N}\tilde{\pi}_{hl}P_l\,B_h + B_h^T\sum_{l=1}^{N}\xi_{hl}M_h B_h\Big)\Delta K_h$$

Similarly, (51) can be derived straightforwardly. Then, (64) can also be rewritten as

$$2(A_hG+B_hY_h)^T\sum_{l=1}^{N}\tilde{\pi}_{hl}P_l\,(A_hG+B_hY_h) + 2(A_hG+B_hY_h)^T\sum_{l=1}^{N}\xi_{hl}M_h\,(A_hG+B_hY_h) - 2\gamma_4 I + 2(\gamma_3-\gamma_2)G^T W_h G - G^T P_h G + G^T Q_h G + 2Y_h^T R_h Y_h \le 0 \tag{66}$$

By using the Schur complement, inequality (66) can be converted to inequality (48). As for (57), it holds if and only if the following inequality is satisfied:

$$P_l - M_h < 0 \tag{67}$$

Similarly, (67) can be written as follows:

$$\begin{bmatrix}
-M_h & I^T \\
* & -X_l
\end{bmatrix} < 0 \tag{68}$$

where the following inequality should also be considered:

$$-I^T M_h I < -(I)^{\star} + \bar{M}_h \tag{69}$$

For the case of partly unknown transition probabilities, the corresponding result states that, provided the performance index is minimized, the closed-loop system (12) is stochastically stable if there exist matrices X_h > 0, G > 0, Y_h > 0, Y_{h1} > 0, W_h > 0 and V_h > 0 satisfying

$$\begin{bmatrix}
-(G)^{\star}+X_h-2\gamma_4 I & \sqrt{2(\gamma_3-\gamma_2)}\,G^T & G^T & \sqrt{2}\,Y_h^T & \sqrt{2\pi_{h1}}\,\bar{A}_h^T & \cdots & \sqrt{2\pi_{hm}}\,\bar{A}_h^T & \sqrt{2}\,\bar{A}_h^T \\
* & -\bar{W}_h & 0 & 0 & 0 & \cdots & 0 & 0 \\
* & * & -\bar{Q}_h & 0 & 0 & \cdots & 0 & 0 \\
* & * & * & -\bar{R}_h & 0 & \cdots & 0 & 0 \\
* & * & * & * & -X_1 & \cdots & 0 & 0 \\
* & * & * & * & * & \ddots & \vdots & \vdots \\
* & * & * & * & * & \cdots & -X_m & 0 \\
* & * & * & * & * & \cdots & * & -\bar{V}_h
\end{bmatrix} \le 0 \tag{73}$$

$$\begin{bmatrix}
-\gamma_3 I + R_h & \sqrt{\pi_{h1}}\,B_h^T & \cdots & \sqrt{\pi_{hm}}\,B_h^T & B_h^T \\
* & -X_1 & \cdots & 0 & 0 \\
* & * & \ddots & \vdots & \vdots \\
* & * & \cdots & -X_m & 0 \\
* & * & \cdots & * & -\bar{V}_h
\end{bmatrix} \le 0 \tag{74}$$
which is also equivalent to

$$[A_h+B_h(K_h+\Delta K_h)]^T\Big[\sum_{l\in\mathcal{M}_k^h}\pi_{hl}P_l + \sum_{l\in\bar{\mathcal{M}}_k^h}\pi_{hl}(P_l-V_h) + \bar{\alpha}_h V_h\Big][A_h+B_h(K_h+\Delta K_h)] - P_h + Q_h + (K_h+\Delta K_h)^T R_h(K_h+\Delta K_h) \le 0 \tag{80}$$

where

$$\bar{\alpha}_h = \sum_{l\in\bar{\mathcal{M}}_k^h}\pi_{hl} = 1 - \sum_{l\in\mathcal{M}_k^h}\pi_{hl}$$

If inequality (80) is to be satisfied, we need to have

$$[A_h+B_h(K_h+\Delta K_h)]^T\Big[\sum_{l\in\mathcal{M}_k^h}\pi_{hl}P_l + \bar{\alpha}_h V_h\Big][A_h+B_h(K_h+\Delta K_h)] - P_h + Q_h + (K_h+\Delta K_h)^T R_h(K_h+\Delta K_h) \le 0 \tag{81}$$

and

$$\sum_{l\in\bar{\mathcal{M}}_k^h}\pi_{hl}(P_l - V_h) < 0 \tag{82}$$

As for (81), it can be shown to be equivalent to

$$[(A_h+B_hK_h)+B_h\Delta K_h]^T\Big[\sum_{l\in\mathcal{M}_k^h}\pi_{hl}P_l + V_h - \sum_{l\in\mathcal{M}_k^h}\pi_{hl}V_h\Big][(A_h+B_hK_h)+B_h\Delta K_h] - P_h + Q_h + (K_h+\Delta K_h)^T R_h(K_h+\Delta K_h) \le 0 \tag{83}$$

Furthermore, (83) can be expanded as

$$(A_h+B_hK_h)^T\Big(\sum_{l\in\mathcal{M}_k^h}\pi_{hl}P_l + V_h - \sum_{l\in\mathcal{M}_k^h}\pi_{hl}V_h\Big)(A_h+B_hK_h) + 2(A_h+B_hK_h)^T\Big(\sum_{l\in\mathcal{M}_k^h}\pi_{hl}P_l + V_h - \sum_{l\in\mathcal{M}_k^h}\pi_{hl}V_h\Big)B_h\Delta K_h + (B_h\Delta K_h)^T\Big(\sum_{l\in\mathcal{M}_k^h}\pi_{hl}P_l + V_h - \sum_{l\in\mathcal{M}_k^h}\pi_{hl}V_h\Big)B_h\Delta K_h - P_h + Q_h + K_h^T R_h K_h + 2K_h^T R_h\Delta K_h + \Delta K_h^T R_h\Delta K_h \le 0 \tag{84}$$

Similar to (31) and (32), it can be rewritten as follows:

$$2(A_h+B_hK_h)^T\sum_{l\in\mathcal{M}_k^h}\pi_{hl}P_l\,(A_h+B_hK_h) - 2(A_h+B_hK_h)^T\sum_{l\in\mathcal{M}_k^h}\pi_{hl}V_h\,(A_h+B_hK_h) + \cdots - P_h + Q_h + 2K_h^T R_h K_h \le 0 \tag{85}$$

To satisfy the inequality, the following conditions should be guaranteed:

$$B_h^T\sum_{l\in\mathcal{M}_k^h}\pi_{hl}V_h\,B_h \le \gamma_2 I \tag{86}$$

and

$$R_h + B_h^T\sum_{l\in\mathcal{M}_k^h}\pi_{hl}P_l\,B_h + B_h^T V_h B_h \le \gamma_3 I \tag{87}$$

Applying the Schur complement lemma, (74) and (75) can be obtained. Then, (85) can be written as

$$2(A_h+B_hK_h)^T\sum_{l\in\mathcal{M}_k^h}\pi_{hl}P_l\,(A_h+B_hK_h) + 2(A_h+B_hK_h)^T V_h\,(A_h+B_hK_h) - 2(A_h+B_hK_h)^T\sum_{l\in\mathcal{M}_k^h}\pi_{hl}V_h\,(A_h+B_hK_h) + 2(\gamma_3-\gamma_2)W_h - P_h + Q_h + 2K_h^T R_h K_h \le 0 \tag{88}$$

Multiplying the right and left sides by G and its transpose, respectively, we have

$$2(A_hG+B_hY_h)^T\sum_{l\in\mathcal{M}_k^h}\pi_{hl}P_l\,(A_hG+B_hY_h) + 2(A_hG+B_hY_h)^T V_h\,(A_hG+B_hY_h) - 2(A_hG+B_hY_h)^T\sum_{l\in\mathcal{M}_k^h}\pi_{hl}V_h\,(A_hG+B_hY_h) + 2(\gamma_3-\gamma_2)G^T W_h G - G^T P_h G + G^T Q_h G + 2Y_h^T R_h Y_h \le 0 \tag{89}$$

where

$$(A_hG+B_hY_h)^T\sum_{l\in\mathcal{M}_k^h}\pi_{hl}V_h\,(A_hG+B_hY_h) \le \gamma_4 I \tag{90}$$

Inequality (90) can be converted to (76). Finally,

$$2(A_hG+B_hY_h)^T\sum_{l\in\mathcal{M}_k^h}\pi_{hl}P_l\,(A_hG+B_hY_h) + 2(A_hG+B_hY_h)^T V_h\,(A_hG+B_hY_h) - 2\gamma_4 I + 2(\gamma_3-\gamma_2)G^T W_h G - G^T P_h G + G^T Q_h G + 2Y_h^T R_h Y_h \le 0 \tag{91}$$

Moreover, (73) can be obtained from (91). As for (82), it holds if and only if the following inequality is satisfied:

$$P_l - V_h < 0 \tag{92}$$
This is equivalent to

$$\begin{bmatrix}
-V_h & I^T \\
* & -X_l
\end{bmatrix} < 0 \tag{93}$$

Similarly, (77) can also be derived, and the proof is complete.

Lemma 3: The feasible solutions of the optimization problem at time k are also feasible solutions for all time instants t > k. Therefore, if the optimization problem is feasible at time k, it is feasible for all t > k.

Proof: Assume that the optimization problem has a feasible solution at time k. Note that (27) is the only constraint that depends on the states of the system; for all future system states x(k+τ), τ ≥ 1, it therefore suffices to guarantee that (27) is feasible. At time k, when the optimization problem is feasible, x^T(k+1|k) X_η^{-1} x(k+1|k) < γ₁ holds. Moreover, at time k+1, for the measured state x(k+1|k+1) = x(k+1), it follows easily that x^T(k+1|k+1) X_η^{-1} x(k+1|k+1) < γ₁ holds. In other words, feasibility is guaranteed by the above analysis. Similarly, this argument can be continued for times k+2, k+3, ….

Theorem 4: Consider a class of discrete-time MJLSs subject to operation mode disordering. According to the proof of Lemma 1, the closed-loop system (12) is asymptotically stable with the state feedback gain matrices K_{h1} = Y_{h1} G^{-1} and K_h = Y_h G^{-1}.

Proof: Based on the proof of feasibility, at time k+1 we can construct the same feasible solution as at time k. At time k, assume that the optimal solution is expressed as S_k^* = {X_h, G, Y_h, Y_{h1}, W_h, γ₁^*(k)}. Then, at time k+1, the feasible solution S_{k+1} = {X_h, G, Y_h, Y_{h1}, W_h, γ₁(k+1)} can be constructed, which is the optimal solution at time k. Moreover, it is easy to see that S_{k+1} satisfies the optimization problem. Therefore, by optimality, γ₁^*(k+1) ≤ γ₁(k+1) = γ₁^*(k).

On the other hand, the Lyapunov function V(x(k|k), θ(k), k) = x^T(k|k) P_{θ(k)} x(k|k) needs to be established, where P_{θ(k)} is the optimal solution of the optimization problem at time k. Furthermore, based on the proof of feasibility, one has x^T(k+1|k+1) P_{θ(k+1)} x(k+1|k+1) ≤ x^T(k+1|k+1) P_{θ(k)} x(k+1|k+1), because P_{θ(k+1)} is optimal while P_{θ(k)} is only feasible at time k+1. According to (19), when τ = 0, one can get x^T(k+1|k) P_{θ(k)} x(k+1|k) ≤ x^T(k|k) P_{θ(k)} x(k|k). Meanwhile, x^T(k+1|k+1) P_{θ(k)} x(k+1|k+1) ≤ x^T(k+1|k) P_{θ(k)} x(k+1|k). Combining these inequalities, x^T(k+1|k+1) P_{θ(k+1)} x(k+1|k+1) ≤ x^T(k|k) P_{θ(k)} x(k|k), so the Lyapunov function is strictly decreasing along the closed-loop trajectories. Therefore, the closed-loop system is asymptotically stable. The proof is completed.

V. NUMERICAL EXAMPLE
The following numerical example illustrates the effectiveness of the proposed control method. Consider a discrete-time MJLS with two modes.

Mode 1:

$$A_1 = \begin{bmatrix} -0.2 & 0.25 \\ -0.1 & -0.16 \end{bmatrix},\qquad B_1 = \begin{bmatrix} 0.1 \\ 0.2 \end{bmatrix}$$

Mode 2:

$$A_2 = \begin{bmatrix} 0.1 & 0.15 \\ -0.2 & -0.1 \end{bmatrix},\qquad B_2 = \begin{bmatrix} 0.17 \\ -0.1 \end{bmatrix}$$

Without loss of generality, the transition probability matrix of the Markov chain is given by

$$\Pr = \begin{bmatrix} 0.2 & 0.8 \\ 0.4 & 0.6 \end{bmatrix}$$

The weighting matrices are chosen as Q_h = diag{0.5, 0.5}, R_h = 6, and h ∈ {1, 2, 3, 4}. The initial values are x₀ = [0.2  0.13]^T, γ₁ = 2, γ₂ = 1 and γ₃ = 0.2. In this example, to demonstrate the effectiveness of the proposed method, the conventional RMPC method of [43] is used for comparison. Three cases are considered. First, considering mode disordering and applying the conventional RMPC method in [43], the simulation results are as follows.

In this example, the operation modes θ₁(k) take values in M₁ = {1, 2}. When θ₁(k) is subject to mode disordering, the process is described by another Markov process {θ₂(k), M₂ = {1, 2}}. Therefore, according to the proposed method, a bijective mapping is defined as Ψ(η) = η₁ + 2(η₂ − 1). The system corresponding to the operation modes after bijective mapping is shown in Table 1.

TABLE 1. The bijective mapping relations between (η₁, η₂) and η.

The following transition probability matrix can be obtained:

$$\Pr = \begin{bmatrix}
0.1 & 0.2 & 0.3 & 0.4 \\
0.3 & 0.1 & 0.1 & 0.5 \\
0.1 & 0.4 & 0.2 & 0.3 \\
0.2 & 0.3 & 0.2 & 0.3
\end{bmatrix}$$

The simulation results are shown in Figures 1-4. From Figure 2, we can see that the closed-loop system with operation mode disordering cannot guarantee good control performance under the conventional RMPC method in [43]. Although asymptotic stability of the closed-loop system can still be achieved by the conventional RMPC method, the system needs a long time to reach it. Furthermore, from Figure 4, it is evident that asymptotic stability of the closed-loop system with operation mode disordering is achieved approximately four times faster with the proposed control method than with the conventional RMPC method.
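The bijective mapping of the example can be checked directly; the mapping Ψ and the 4×4 transition probability matrix below are taken from the example above, and the only additions are the bookkeeping around them.

```python
import numpy as np

# The example's bijective mapping: Psi(eta) = eta1 + 2*(eta2 - 1) sends each
# pair (eta1, eta2) in {1,2} x {1,2} to a single mode eta in {1,...,4},
# reproducing Table 1.
def psi(eta1, eta2):
    return eta1 + 2 * (eta2 - 1)

pairs = [(e1, e2) for e2 in (1, 2) for e1 in (1, 2)]
table = {pair: psi(*pair) for pair in pairs}
assert sorted(table.values()) == [1, 2, 3, 4]  # Psi is a bijection

# The mapped 4x4 transition probability matrix of the example:
# each row must sum to 1, as required of a Markov chain.
Pr = np.array([[0.1, 0.2, 0.3, 0.4],
               [0.3, 0.1, 0.1, 0.5],
               [0.1, 0.4, 0.2, 0.3],
               [0.2, 0.3, 0.2, 0.3]])
assert np.allclose(Pr.sum(axis=1), 1.0)
```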
FIGURE 1. The original operation modes, θ₁, and the disordered operation modes, θ₂, of the system.
FIGURE 4. The state response of the closed-loop system with mode disordering using the proposed method.
VI. CONCLUSION [15] I. Ghous, Z. Xiang, and H. R. Karimi, ‘‘H∞ control of 2-D continuous
In this paper, for a class of discrete-time MJLSs subject Markovian jump delayed systems with partially unknown transition prob-
abilities,’’ Inf. Sci., vols. 382–383, pp. 274–291, Mar. 2017.
to operation mode disordering, a RMPC method has been [16] F. Li, P. Shi, C.-C. Lim, and L. Wu, ‘‘Fault detection filtering for nonho-
studied. To deal with operation mode disordering, a bijective mogeneous Markovian jump systems via a fuzzy approach,’’ IEEE Trans.
mapping scheme between the original random process and Fuzzy Syst., vol. 26, no. 1, pp. 131–141, Feb. 2018.
[17] Z. Li and G. Wang, ‘‘Stabilization of discrete-time systems via a partially
a new random process has been introduced. At each sam- disabled controller experiencing forced dwell times,’’ IEEE Access, vol. 6,
pling time, the complex ‘‘min-max’’ optimization problem is pp. 27001–27009, 2018.
transformed into a convex optimization problem with LMIs, [18] G. Sun, L. Wu, Z. Kuang, Z. Ma, and J. Liu, ‘‘Practical tracking control
of linear motor via fractional-order sliding mode,’’ Automatica, vol. 94,
greatly reducing the complexity of solving the optimization pp. 221–235, Aug. 2018.
problem. Furthermore, the conservativeness of the system [19] L. Wu, Y. Gao, J. Liu, and H. Li, ‘‘Event-triggered sliding mode con-
has been reduced because probability information has been trol of stochastic systems via output feedback,’’ Automatica, vol. 82,
pp. 79–92, Aug. 2017.
included in designing the predictive controller. Moreover, [20] F. Li, C. Du, C. Yang, and W. Gui, ‘‘Passivity-based asynchronous sliding
the cases of uncertain and unknown transition probabilities mode control for delayed singular Markovian jump systems,’’ IEEE Trans.
have also been considered. In all of these cases, the stochas- Autom. Control, vol. 63, no. 8, pp. 2715–2721, Aug. 2018.
[21] J. Song, Y. Niu, and Y. Zou, ‘‘Asynchronous sliding mode control of
tic stability of the closed-loop system has been guaranteed. Markovian jump systems with time-varying delays and partly accessible
The simulation results illustrate that the proposed method mode detection probabilities,’’ Automatica, vol. 93, pp. 33–41, Jul. 2018.
is both feasible and very effective. The future work is that [22] H. Li, P. Shi, and D. Yao, ‘‘Adaptive sliding-mode control of Markov
jump nonlinear systems with actuator faults,’’ IEEE Trans. Autom. Control,
when Markovian jump linear systems subject to actuator and vol. 62, no. 4, pp. 1933–1939, Apr. 2017.
sensor faults, the robustness and stability of system will be [23] C.-C. Tsai, S.-C. Lin, T.-Y. Wang, and F.-J. Teng, ‘‘Stochastic model refer-
researched. ence predictive temperature control with integral action for an industrial
oil-cooling process,’’ Control Eng. Pract., vol. 17, no. 2, pp. 302–310,
2009.
REFERENCES [24] Y. Tang, C. Peng, S. Yin, J. Qiu, H. Gao, and O. Kaynak, ‘‘Robust model
predictive control under saturations and packet dropouts with application
[1] O. L. V. Costa, M. D. Fragoso, and R. P. Marques, Discrete-Time Markov to networked flotation processes,’’ IEEE Trans. Autom. Sci. Eng., vol. 11,
Jump Linear Systems. London, U.K.: Springer-Verlag, 2005. no. 4, pp. 1056–1064, Oct. 2014.
[2] Z. Li, T. Zhang, C. Ma, H. Li, and X. Li, ‘‘Robust passivity control for [25] F. Gagnon, A. Desbiens, É. Poulin, P.-P. Lapointe-Garant, and J.-S. Simard,
2-D uncertain Markovian jump linear discrete-time systems,’’ IEEE ‘‘Nonlinear model predictive control of a batch fluidized bed dryer
Access, vol. 5, pp. 12176–12184, 2017. for pharmaceutical particles,’’ Control Eng. Pract., vol. 64, pp. 88–101,
[3] A. Cifter, ‘‘Forecasting electricity price volatility with the Markov- Jul. 2017.
switching GARCH model: Evidence from the Nordic electric power mar- [26] Y. Cao, D. Acevedo, Z. K. Nagy, and C. D. Laird, ‘‘Real-time feasible
ket,’’ Electr. Power Syst. Res., vol. 102, no. 9, pp. 61–67, 2013. multi-objective optimization based nonlinear model predictive control of
[4] J. Wang, M. S. Chen, H. Shen, J. H. Park, and Z. G. Wu, ‘‘A Markov particle size and shape in a batch crystallization process,’’ Control Eng.
jump model approach to reliable event-triggered retarded dynamic output Pract., vol. 69, pp. 1–8, Dec. 2017.
feedback H∞ control for networked systems,’’ Nonlinear Anal., Hybrid [27] G. Franzè, F. Tedesco, and D. Famularo, ‘‘Model predictive control for con-
Syst., vol. 26, pp. 137–150, Nov. 2017. strained networked systems subject to data losses,’’ Automatica, vol. 54,
[5] M. S. Ali, K. Meenakshi, and N. Gunasekaran, ‘‘Finite time H∞ bounded- pp. 272–278, Apr. 2015.
ness of discrete-time Markovian jump neural networks with time-varying [28] W. Yang, G. Feng, and T. Zhang, ‘‘Robust model predictive control for
delays,’’ Int. J. Control, Autom. Syst., vol. 16, no. 1, pp. 181–188, 2018. discrete-time Takagi–Sugeno fuzzy systems with structured uncertainties
[6] G. Wang, Z. Li, Q. Zhang, and C. Yang, ‘‘Robust finite-time stability and persistent disturbances,’’ IEEE Trans. Fuzzy Syst., vol. 22, no. 5,
and stabilization of uncertain Markovian jump systems with time-varying pp. 1213–1228, Oct. 2014.
delay,’’ Appl. Math. Comput., vol. 293, pp. 377–393, Jan. 2017. [29] J. Liu, Y. Gao, S. Geng, and L. Wu, ‘‘Nonlinear control of variable speed
[7] J. R. Chávez-Fuentes, J. E. Mayta, E. F. Costa, and M. H. Terra, ‘‘Stochastic wind turbines via fuzzy techniques,’’ IEEE Access, vol. 5, pp. 27–34, 2017.
and exponential stability of discrete-time Markov jump linear singular [30] L. Teng, Y. Y. Wang, W. J. Cai, and H. Li, ‘‘Robust model predictive control
systems,’’ Syst. Control Lett., vol. 107, pp. 92–99, Sep. 2017. of discrete nonlinear systems with time delays and disturbances via T-S
[8] S. Cong, ‘‘A result on almost sure stability of linear continuous-time fuzzy approach,’’ J. Process Control, vol. 53, pp. 70–79, May 2017.
Markovian switching systems,’’ IEEE Trans. Autom. Control, vol. 63, no. 7, [31] B. Ding and X. Ping, ‘‘Dynamic output feedback model predictive control
pp. 2226–2233, Jul. 2018. for nonlinear systems represented by Hammerstein–Wiener model,’’ J.
Process Control, vol. 22, pp. 1773–1784, Oct. 2012.
[9] G. W. Gabriel and J. C. Geromel, ‘‘Performance evaluation of sampled-data
[32] M. Ławryńczuk, ‘‘Nonlinear predictive control for Hammerstein–Wiener
control of Markov jump linear systems,’’ Automatica, vol. 86, pp. 212–215,
systems,’’ ISA Trans., vol. 55, pp. 49–62, Mar. 2015.
Dec. 2017.
[33] F. Khani and M. Haeri, ‘‘Robust model predictive control of nonlinear
[10] G. W. Gabriel, T. R. Gonçalves, and J. C. Geromel, ‘‘Optimal and robust processes represented by Wiener or Hammerstein models,’’ Chem. Eng.
sampled-data control of Markov jump linear systems: A differential LMI Sci., vol. 129, pp. 223–231, Jun. 2015.
approach,’’ IEEE Trans. Autom. Control, vol. 63, no. 9, pp. 3054–3060,
[34] B. Vatankhah and M. Farrokhi, ‘‘Nonlinear model-predictive control with
Sep. 2018.
disturbance rejection property using adaptive neural networks,’’ J. Franklin
[11] W. Liu, P. Shi, and J.-S. Pan, ‘‘State estimation for discrete-time Markov Inst., vol. 354, no. 13, pp. 5201–5220, 2017.
jump linear systems with time-correlated and mode-dependent measure- [35] G. Lou, W. Gu, W. Sheng, X. Song, and F. Gao, ‘‘Distributed model
ment noise,’’ Automatica, vol. 85, pp. 9–21, Nov. 2017. predictive secondary voltage control of islanded microgrids with feedback
[12] Z. Wang, J. Yuan, Y. Pan, and D. Che, ‘‘Adaptive neural control for high linearization,’’ IEEE Access, vol. 6, pp. 50169–50178, 2018.
HONGBIN CAI received the M.S. degree in control theory and control engineering from Liaoning Shihua University. He is currently pursuing the Ph.D. degree in control science and engineering at Northwestern Polytechnical University. His research interests include industrial process control and model predictive control.
PING LI received the Ph.D. degree in control science and engineering from Zhejiang University, in 1995. His research interests include industrial process control and optimization, model predictive control, and adaptive control.
CHENGLI SU received the Ph.D. degree in control science and engineering from Zhejiang University, in 2006. His research interests include industrial process control and optimization, and model predictive control.
JIANGTAO CAO received the Ph.D. degree in control science and engineering from the University of Portsmouth, in 2009. His research interests include industrial process control and optimization, model predictive control, and fuzzy control systems.