A System Level Approach to Tube-Based Model Predictive Control
Abstract— Robust tube-based model predictive control (MPC) methods address constraint satisfaction by leveraging an a priori determined tube controller in the prediction to tighten the constraints. This paper presents a system level tube-MPC (SLTMPC) method derived from the system level parameterization (SLP), which allows the tube controller to be optimized online when solving the MPC problem and can thereby significantly reduce conservativeness. We derive the SLTMPC method by establishing an equivalence relation between a class of robust MPC methods and the SLP. Finally, we show that the SLTMPC formulation naturally arises from an extended SLP formulation and demonstrate its merits in a numerical example.

Index Terms— Robust control, optimal control, predictive control for linear systems.

*This work was supported by the European Space Agency (ESA) under NPI 621-2018 and the Swiss Space Center (SSC).
1 J. Sieber and M. N. Zeilinger are members of the Institute for Dynamic Systems and Control (IDSC), ETH Zurich, 8092 Zurich, Switzerland {jsieber,mzeilinger}@ethz.ch
2 S. Bennani is a member of ESA-ESTEC, Noordwijk 2201 AZ, The Netherlands [email protected]

I. INTRODUCTION

The availability of powerful computational hardware for embedded systems together with advances in optimization software has established model predictive control (MPC) as the principal control method for constrained dynamical systems. MPC relies on a sufficiently accurate system model to predict the system behavior over a finite horizon. In order to address constraint satisfaction in the presence of bounded uncertainties, robust MPC methods [1] typically optimize over a feedback policy and tighten constraints in the prediction problem. The two most common robust MPC methodologies are disturbance feedback MPC (df-MPC) [2], [3] and tube-based MPC [4], [5], [6], which mainly differ in their parameterization of the feedback policy. Disturbance feedback parametrizes the input in terms of the disturbance, while tube-based methods split the system dynamics into nominal and error dynamics and parametrize the input in terms of those two quantities. In general, df-MPC is computationally more demanding, yet less conservative than tube-based methods. In this paper, we present a tube-based MPC method that lies at the intersection of these two extremes, offering a trade-off between computational complexity and conservativeness.

We derive the proposed method by leveraging the system level parameterization (SLP), which has recently received increased attention. It was introduced as part of the system level synthesis (SLS) framework [7], in particular in the context of distributed optimal control [8]. The SLP offers the advantage that convexity is preserved under any additional convex constraint imposed on the parameterization and enables optimization over closed-loop trajectories instead of controller gains [7]. We relate the SLP to df-MPC and exploit this relation in order to formulate a system level tube-MPC (SLTMPC) method.

Related Work: A tube-based MPC method reducing conservativeness by using multiple pre-computed tube controllers was introduced in [9]. In contrast, the proposed SLTMPC method directly optimizes over the controller gains online. At the intersection of SLP and MPC, the first SLP-based MPC formulation was introduced in [7] and then formalized as SLS-MPC in [10], where both additive disturbances and parametric model uncertainties were integrated into the robust MPC problem. A more conservative variant of this approach was already introduced in the context of the linear quadratic regulator (LQR) in [11]. In a distributed setting, an SLP-based MPC formulation was proposed in [12], which builds on the principles of distributed SLS presented in [8] and was later extended to distributed explicit SLS-MPC [13] and layered SLS-MPC [14].

Contributions: We consider linear time-invariant dynamical systems with additive uncertainties. In this setting, we first show the equivalence of df-MPC and the SLP, before using this relation to analyze the inherent tube structure present in the SLP. Based on this analysis, we propose an SLTMPC formulation, which allows optimization over the tube controller in the online optimization problem and thus reduces conservativeness compared to other tube-based methods. Additionally, we show that the SLTMPC can be derived from an extended version of the SLP and outline extensions to distributed and explicit SLTMPC formulations. Finally, we show the effectiveness of our formulation on a numerical example.

The remainder of the paper is organized as follows: Section II introduces the notation, the problem formulation, and relevant concepts for this paper. We present the equivalence relation between the SLP and df-MPC in Section III, before deriving the SLTMPC formulation in Section IV. Finally, Section V presents a numerical application of SLTMPC and Section VI concludes the paper.

II. PRELIMINARIES

A. Notation

In the context of matrices and vectors, the superscript r,c denotes the element indexed by the r-th row and c-th column. Additionally, the superscripts :r and c: refer to all rows from 1 to r − 1 and to all columns from c to the end, respectively. The notation S^i refers to the i-th Cartesian product of the set S, i.e. S^i = S × · · · × S = {(s_0, . . . , s_{i−1}) | s_j ∈ S, ∀j = 0, . . . , i − 1}.
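As a brief illustration of the slicing notation (this example is ours, not the paper's; the symbols Φ_u, δ, x_0, and w are only introduced in Sections II-C and III, where this notation is used, e.g., in (15)): partitioning a block matrix Φ_u by the block columns acting on δ = [x_0, w]^T gives

\Phi_u = \begin{bmatrix} \Phi_u^{:,0} & \Phi_u^{:,1:} \end{bmatrix}, \qquad \Phi_u \delta = \Phi_u^{:,0} x_0 + \Phi_u^{:,1:} w,

i.e. Φ_u^{:,0} is the first block column and Φ_u^{:,1:} collects all remaining block columns.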
B. Problem Formulation

We consider linear time-invariant (LTI) dynamical systems with additive disturbances of the form

x_{k+1} = A x_k + B u_k + w_k,    (1)

with w_k ∈ W, where W is a compact set. Here, we assume time-invariant polytopic sets W = {w ∈ R^n | Sw ≤ s}, but the results can easily be generalized to other compact convex sets. The system (1) is subject to polytopic state and input constraints containing the origin in their interior

X = {x ∈ R^n | H_x x ≤ h_x},    U = {u ∈ R^m | H_u u ≤ h_u}.    (2)

This paper presents a system level tube-MPC formulation, i.e. a tube-based MPC formulation derived via a system level parameterization, which enables online optimization of the tube control law. To formulate this method, we first show the equivalence of the system level parameterization and disturbance feedback MPC, both of which are introduced in the following sections.

C. System Level Parameterization (SLP)

We consider the finite horizon version of the SLP, as proposed in [7]. The main idea of the SLP is to parameterize the controller synthesis by the closed-loop system behavior, which restates optimal control as an optimization over closed-loop trajectories and renders the synthesis problem convex under convex constraints on the control law, allowing, e.g., the imposition of a distributed computation structure [8]. We define x, u, and w as the concatenated states, inputs, and disturbances over the horizon N, respectively, and δ as the initial state x_0 concatenated with the disturbance sequence w: x = [x_0, x_1, . . . , x_N]^T, u = [u_0, u_1, . . . , u_N]^T, δ = [x_0, w]^T. The dynamics can then be compactly defined as trajectories over the horizon N,

x = ZAx + ZBu + δ,    (3)

where Z is the down-shift operator and ZA, ZB are the corresponding dynamic matrices

ZA = \begin{bmatrix} 0 & \cdots & \cdots & 0 \\ A & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & A & 0 \end{bmatrix}, \qquad ZB = \begin{bmatrix} 0 & \cdots & \cdots & 0 \\ B & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & B & 0 \end{bmatrix}.

In order to formulate the closed-loop dynamics, we define an LTV state feedback controller u = Kx, with

K = \begin{bmatrix} K^{0,0} & 0 & \cdots & 0 \\ K^{1,1} & K^{1,0} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ K^{N,N} & K^{N,N-1} & \cdots & K^{N,0} \end{bmatrix},

resulting in the closed-loop trajectory x = (ZA + ZBK)x + δ = (I − ZA − ZBK)^{−1} δ. The system response is then defined by the closed-loop map Φ : δ → (x, u) as

\begin{bmatrix} x \\ u \end{bmatrix} = \begin{bmatrix} (I - ZA - ZBK)^{-1} \\ K (I - ZA - ZBK)^{-1} \end{bmatrix} \delta = \begin{bmatrix} \Phi_x \\ \Phi_u \end{bmatrix} \delta = \Phi \delta.    (4)

The maps Φ_x, Φ_u have a block-lower-triangular structure and completely define the behavior of the closed-loop system with feedback controller K. Using these maps as the parametrization of the system behavior allows us to recast the optimal control problem in terms of Φ_x, Φ_u instead of the state feedback gain K by exploiting the following theorem.

Theorem 1 (Theorem 2.1 in [7]): Over the horizon N, for the system dynamics (1) with block-lower-triangular state feedback law K defining the control action as u = Kx, the following are true:
1) the affine subspace defined by

\begin{bmatrix} I - ZA & -ZB \end{bmatrix} \begin{bmatrix} \Phi_x \\ \Phi_u \end{bmatrix} = I    (5)

parametrizes all possible system responses (4),
2) for any block-lower-triangular matrices Φ_x, Φ_u satisfying (5), the controller K = Φ_u Φ_x^{−1} achieves the desired system response.

D. Robust Model Predictive Control (MPC)

We define the optimal controller for system (1) subject to (2) via a robust MPC formulation [15], where we consider the quadratic cost Σ_{k=0}^{N−1} ‖x_k‖²_{Q_k} + ‖u_k‖²_{R_k}, with Q_k, R_k the state and input weights at time step k, respectively, and N the prediction horizon. For simplicity, we focus on quadratic costs; however, the results can be extended to other cost functions fulfilling the standard MPC stability assumptions [15]. Using the compact system definition in (3), the robust MPC problem is then defined as

min_π  x^T Q x + u^T R u    (6a)
s.t.   x = ZAx + ZBu + δ,    (6b)
       x^{:N} ∈ X^N, x^N ∈ X_f, u ∈ U^N, ∀w ∈ W^{N−1},    (6c)
       u = π(x, u), x^0 = x_k,    (6d)

where

Q = diag(Q_0, . . . , Q_{N−1}, P),    R = diag(R_0, . . . , R_{N−1}, 0),

with P a suitable terminal cost matrix, X_f a suitable terminal set, and π = [π^0, . . . , π^N]^T a vector of parametrized feedback policies. Available robust MPC methods can be classified according to their policy parameterization and constraint handling approach. Two of the most common policies are the tube policy [6] and the disturbance feedback policy [2], defined as

π^tube(x, u) = K(x − z) + v,    (7)
π^df(x, u) = Mw + v,    (8)

where z = [z_0, . . . , z_N]^T, v = [v_0, . . . , v_N]^T, and

K = \begin{bmatrix} K & & 0 \\ & \ddots & \\ 0 & & K \end{bmatrix}, \qquad M = \begin{bmatrix} 0 & \cdots & \cdots & 0 \\ M^{1,0} & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ M^{N,0} & \cdots & M^{N,N-1} & 0 \end{bmatrix}.    (9)
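To make the parameterization above concrete, the following self-contained sketch (a toy example of our own, with assumed values for A, B, the horizon N, and a random causal gain K; none of these numbers come from the paper) builds ZA and ZB as in (3), forms the system responses (4), and verifies the affine constraint (5) numerically:

```python
# Minimal numerical sketch (assumed 2-state/1-input example, not the paper's system)
# illustrating the system responses (4) and the affine constraint (5).
import numpy as np

np.random.seed(0)
n, m, N = 2, 1, 4                      # state/input dimensions and horizon
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])

# Block-downshifted dynamics matrices ZA, ZB as below (3).
ZA = np.zeros(((N + 1) * n, (N + 1) * n))
ZB = np.zeros(((N + 1) * n, (N + 1) * m))
for k in range(N):
    ZA[(k + 1) * n:(k + 2) * n, k * n:(k + 1) * n] = A
    ZB[(k + 1) * n:(k + 2) * n, k * m:(k + 1) * m] = B

# A causal (block-lower-triangular) LTV feedback u = K x, filled with random gains.
K = np.zeros(((N + 1) * m, (N + 1) * n))
for i in range(N + 1):
    K[i * m:(i + 1) * m, :(i + 1) * n] = np.random.randn(m, (i + 1) * n)

# System responses (4): Phi_x = (I - ZA - ZB K)^{-1}, Phi_u = K Phi_x.
I = np.eye((N + 1) * n)
Phi_x = np.linalg.inv(I - ZA - ZB @ K)
Phi_u = K @ Phi_x

# Check the affine subspace constraint (5): (I - ZA) Phi_x - ZB Phi_u = I.
print(np.allclose((I - ZA) @ Phi_x - ZB @ Phi_u, I))   # True
```

Any block-lower-triangular K passes this check, which is exactly the parameterization statement of Theorem 1.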
The tube policy parameterization (7) is based on splitting the system dynamics (3) into nominal dynamics and error dynamics, i.e.

z = ZAz + ZBv,    (10)
x − z = e = (ZA + ZBK)e + δ.    (11)

This allows for recasting (6) in the nominal variables z, v instead of the system variables x, u, while imposing tightened constraints on the nominal variables. One way to perform the constraint tightening, which we will focus on in this paper, is via reachable sets for the error dynamics. In this paper, we will refer to this approach as tube-MPC, which was first introduced in [4]. For an overview of other tube-based methods or robust MPC in general, see e.g. [15].

III. A SYSTEM LEVEL APPROACH TO TUBE-BASED MODEL PREDICTIVE CONTROL

In this section, we present a new perspective on disturbance feedback MPC (df-MPC) by utilizing the SLP. In particular, we show the equivalence of SLP and df-MPC, which includes the tube policy (7) as a subclass [16], and discuss the implications that arise.

Consider the disturbance feedback policy (8). We define the convex set of admissible (M, v) as

Π^df_N(x_0) = { (M, v) | M structured as in (9), x^{:N} ∈ X^N, u^df ∈ U^N, x^N ∈ X_f, ∀w ∈ W^{N−1} },

and the set of initial states x_0 for which an admissible control policy of the form (8) exists is given by X^df_N = {x_0 | Π^df_N(x_0) ≠ ∅}. Similarly, we define the convex set of admissible (Φ_x, Φ_u) as

Π^SLP_N(x_0) = { (Φ_x, Φ_u) | Φ_x, Φ_u satisfy (5) and are block-lower-triangular, Φ_x^{:N,:} δ ∈ X^N, Φ_u δ ∈ U^N, Φ_x^{N,:} δ ∈ X_f, ∀w ∈ W^{N−1} },

and the set of initial states x_0 for which an admissible control policy u^SLP = Φ_u Φ_x^{−1} x exists, as X^SLP_N = {x_0 | Π^SLP_N(x_0) ≠ ∅}. To show the equivalence between MPC and SLP trajectories, we will rely on Lemma 1 and we formalize the equivalence in Theorem 2.

Lemma 1: Consider the dynamics (3) as a function of the initial state x_0,

x = 𝒜x_0 + ℬu + ℰw,    (12)

with 𝒜, ℬ, and ℰ defined as in Appendix I. Then, the following relation holds for any u, any w, and any admissible (Φ_x, Φ_u):

Φ_x = [𝒜  ℰ] + ℬΦ_u.    (13)

Proof: See Appendix I.

Theorem 2: Given any initial state x_0 ∈ X^df_N, (M, v) ∈ Π^df_N(x_0), and some disturbance sequence w ∈ W^{N−1}, there exist (Φ_x, Φ_u) ∈ Π^SLP_N(x_0) such that x^df = x^SLP and u^df = u^SLP. The same statement holds in the opposite direction for any x_0 ∈ X^SLP_N, (Φ_x, Φ_u) ∈ Π^SLP_N(x_0), and disturbance w ∈ W^{N−1}; therefore X^df_N = X^SLP_N.

Proof: We prove X^df_N = X^SLP_N by proving the two set inclusions X^df_N ⊆ X^SLP_N and X^SLP_N ⊆ X^df_N separately. The detailed steps of the two parts are stated in Table I.

a) X^df_N ⊆ X^SLP_N: Given x_0 ∈ X^df_N, an admissible (M, v) ∈ Π^df_N(x_0) exists by definition. Using (12), we write the input and state trajectories for a given w ∈ W^{N−1} as u^df = Mw + v and x^df = 𝒜x_0 + ℬu^df + ℰw, respectively. In order to show the existence of a corresponding (Φ_x, Φ_u), we choose (Φ_x, Φ_u) according to (14) in Table I and Φ_v such that Φ_v x_0 = v holds. Then, the derivations in Table I show that the input and state trajectories of df-MPC and SLP are equivalent. Therefore, x_0 ∈ X^df_N ⇒ x_0 ∈ X^SLP_N.

b) X^SLP_N ⊆ X^df_N: Given x_0 ∈ X^SLP_N, an admissible (Φ_x, Φ_u) exists by definition. Using (4) and a given w ∈ W^{N−1}, we write the state and input trajectories as x^SLP = Φ_x δ and u^SLP = Φ_u δ, respectively. If (M, v) is chosen according to (15) in Table I, then these choices are admissible and the input and state trajectories for both df-MPC and SLP are equivalent. Therefore, x_0 ∈ X^SLP_N ⇒ x_0 ∈ X^df_N.

TABLE I
DERIVATION OF THE EQUIVALENCE BETWEEN SLP AND DF-MPC.

X^df_N ⊆ X^SLP_N, with the choice (14): Φ_u = [Φ_v  M], Φ_x = [𝒜  ℰ] + ℬΦ_u:
    u^SLP = Φ_u δ = [Φ_v  M] δ = Φ_v x_0 + Mw = v + Mw = u^df
    x^SLP = Φ_x δ = [𝒜  ℰ] δ + ℬΦ_u δ = 𝒜x_0 + ℰw + ℬu^df = x^df

X^SLP_N ⊆ X^df_N, with the choice (15): v = Φ_u^{:,0} x_0, M = Φ_u^{:,1:}:
    u^df = Mw + v = Φ_u^{:,1:} w + Φ_u^{:,0} x_0 = Φ_u δ = u^SLP
    x^df = 𝒜x_0 + ℰw + ℬ(Mw + v) = [𝒜  ℰ] δ + ℬΦ_u δ = Φ_x δ = x^SLP

Remark 1: Theorem 2 is also valid for linear time-varying (LTV) systems without any modifications to the proof, by adapting the definitions of ZA and ZB accordingly.

Leveraging Theorem 2, the following insights can be derived: Compared with df-MPC, the SLP formulation offers a direct handle on the state trajectory, which allows for directly imposing constraints on the state and simplifies the numerical implementation. Additionally, computational efficiency is increased by recently developed solvers for SLP problems based on dynamic programming [18] and ready-to-use computational frameworks like SLSpy [17]. The SLP formulation can thereby address one of the biggest limitations of df-MPC, i.e. the computational effort required. SLP problems also facilitate the computation of a distributed controller due to the parameterization by the system responses Φ_x, Φ_u, on which a distributed structure can be imposed through their support [8]. This offers a direct method to also distribute df-MPC problems. We will leverage Theorem 2 to derive an improved tube-MPC formulation as the main result of this paper in the following section.
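The construction in Table I can also be reproduced numerically. The following sketch (a toy instance of our own: A, B, N, M, v, x_0, and w are random or assumed values, and Φ_v = v x_0^T/‖x_0‖² is merely one admissible choice satisfying Φ_v x_0 = v) applies the choice (14) and confirms that the df-MPC and SLP trajectories coincide, as claimed in Theorem 2:

```python
# Numerical check (toy example, not from the paper) of the mapping (14): given a
# disturbance feedback pair (M, v), the choice Phi_u = [Phi_v, M],
# Phi_x = [calA, calE] + calB Phi_u reproduces the df-MPC trajectories (12).
import numpy as np

np.random.seed(1)
n, m, N = 2, 1, 3
A = np.random.randn(n, n)
B = np.random.randn(n, m)

# ZA, ZB as below (3); calA, calE as in (32); calB = [calA, calE] ZB.
ZA = np.zeros(((N + 1) * n, (N + 1) * n))
ZB = np.zeros(((N + 1) * n, (N + 1) * m))
for k in range(N):
    ZA[(k + 1) * n:(k + 2) * n, k * n:(k + 1) * n] = A
    ZB[(k + 1) * n:(k + 2) * n, k * m:(k + 1) * m] = B
calAE = np.linalg.inv(np.eye((N + 1) * n) - ZA)     # equals [calA, calE]
calA, calE = calAE[:, :n], calAE[:, n:]
calB = calAE @ ZB

# A strictly block-lower-triangular M and an offset v, i.e. a disturbance feedback policy (8).
M = np.zeros(((N + 1) * m, N * n))
for i in range(1, N + 1):
    M[i * m:(i + 1) * m, :i * n] = np.random.randn(m, i * n)
v = np.random.randn((N + 1) * m)

x0 = np.random.randn(n)
w = np.random.randn(N * n)
delta = np.concatenate([x0, w])

# df-MPC trajectories via (12).
u_df = M @ w + v
x_df = calA @ x0 + calB @ u_df + calE @ w

# SLP trajectories via the choice (14), with one admissible Phi_v such that Phi_v x0 = v.
Phi_v = np.outer(v, x0) / (x0 @ x0)
Phi_u = np.hstack([Phi_v, M])
Phi_x = calAE + calB @ Phi_u
print(np.allclose(Phi_u @ delta, u_df), np.allclose(Phi_x @ delta, x_df))  # True True
```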
IV. SYSTEM LEVEL TUBE-MPC (SLTMPC)

We first analyze the effect of imposing additional structure on the SLP, before deriving the SLTMPC formulation. Finally, we show that the SLTMPC formulation emerges naturally from an extended SLP formulation.

A. Diagonally Restricted System Responses

Following the analysis in [16], it can be shown that disturbance feedback MPC is equivalent to a time-varying version of tube-MPC. The same insight can be derived from the SLP by defining the nominal state and input over the horizon N. The associated system responses are computed using (4), resulting in

\bar\Phi_x^{\bar K} = \begin{bmatrix} I & \cdots & \cdots & 0 \\ A_{\bar K} & I & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ A_{\bar K}^N & \cdots & A_{\bar K} & I \end{bmatrix}, \qquad \bar\Phi_u^{\bar K} = \begin{bmatrix} \bar K & \cdots & \cdots & 0 \\ \bar K A_{\bar K} & \bar K & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ \bar K A_{\bar K}^N & \cdots & \bar K A_{\bar K} & \bar K \end{bmatrix},

where A_K̄ = A + BK̄ and A, B as in (1). Given these system responses, the states and inputs over the horizon N are computed as x = Φ̄^K̄_x δ and u = Φ̄^K̄_u δ, respectively.

Fig. 1. RoA of SLTMPC and tube-MPC for θ = {0.05, 0.1, 0.12} (A, B, C) and approximate RoA coverage with respect to the state constraints in percent, as a function of the parameter θ (D). [Panels A–C: RoA in the (x1, x2) plane; panel D: coverage [%] versus θ.]
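For a fixed tube gain, the block-Toeplitz structure above can be checked directly. The sketch below (toy values of our own for A, B, K̄, and N; not the paper's example system, and the gain is only assumed to be stabilizing) builds Φ̄^K̄_x, Φ̄^K̄_u from their closed form and confirms that they coincide with the system responses (4) for the block-diagonal controller diag(K̄, . . . , K̄) and satisfy (5):

```python
# Sketch of the "diagonally restricted" system responses: for a single fixed gain
# Kbar applied at every step, Phi_x and Phi_u reduce to the block-Toeplitz
# matrices shown above, i.e. the system responses of a fixed tube controller.
import numpy as np

n, m, N = 2, 1, 4
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Kbar = np.array([[-1.0, -1.5]])          # assumed stabilizing tube gain
AK = A + B @ Kbar

# Closed-form block-Toeplitz responses.
Phi_x = np.zeros(((N + 1) * n, (N + 1) * n))
Phi_u = np.zeros(((N + 1) * m, (N + 1) * n))
for i in range(N + 1):
    for j in range(i + 1):
        blk = np.linalg.matrix_power(AK, i - j)
        Phi_x[i * n:(i + 1) * n, j * n:(j + 1) * n] = blk
        Phi_u[i * m:(i + 1) * m, j * n:(j + 1) * n] = Kbar @ blk

# They coincide with (4) for the block-diagonal controller K = diag(Kbar, ..., Kbar)
# and hence satisfy the affine constraint (5).
ZA = np.zeros(((N + 1) * n, (N + 1) * n))
ZB = np.zeros(((N + 1) * n, (N + 1) * m))
for k in range(N):
    ZA[(k + 1) * n:(k + 2) * n, k * n:(k + 1) * n] = A
    ZB[(k + 1) * n:(k + 2) * n, k * m:(k + 1) * m] = B
K = np.kron(np.eye(N + 1), Kbar)
I = np.eye((N + 1) * n)
print(np.allclose(Phi_x, np.linalg.inv(I - ZA - ZB @ K)))   # True
print(np.allclose((I - ZA) @ Phi_x - ZB @ Phi_u, I))        # True
```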
to the one of df-MPC. Figure 1 (D) further highlights this by approximating the coverage of the RoA, i.e. the area

0.15, and θ^df = 0.16, respectively. Figure 2 shows the open-loop nominal trajectory and trajectories for 10'000 randomly sampled noise realizations for all three methods. Due to the optimized tube controller and less conservative constraint tightening, df-MPC and SLTMPC compute trajectories that

Table III states the computation times and costs of the methods for the noisy trajectories shown in Figure 2, highlighting the trade-off in computation time and performance offered by SLTMPC.

Fig. 2. Nominal state trajectory and 10'000 noise realizations from initial state x_0 = [−0.9, 0.0]^T and parameter θ = 0.05 for df-MPC, SLTMPC, and tube-MPC. [Three panels in the (x1, x2) plane, one per method; legend: state constraints, terminal nominal state, nominal states.]

VI. CONCLUSIONS

This paper has proposed a tube-MPC method for linear time-varying systems with additive noise based on the system level parameterization (SLP). The formulation was derived by first establishing the equivalence between disturbance feedback MPC (df-MPC) and the SLP, offering a new perspective on a subclass of robust MPC methods from the angle of SLP. Subsequently, the standard SLP was extended in order to formulate the proposed system level tube-MPC (SLTMPC) method. Finally, we showed the effectiveness of the proposed method by comparing it against df-MPC and tube-MPC on a numerical example.
TABLE III
CLOSED-LOOP COSTS AND COMPUTATION TIMES FOR 10'000 NOISY TRAJECTORIES STARTING IN x_0 = [−0.9, 0.0]^T.

              Cost [-]                Computation Time [ms]
              Mean      Std. Dev.     Mean      Std. Dev.
df-MPC        24.61     1.76          53.21     6.66
SLTMPC        26.38     1.87          33.49     6.35
tube-MPC      30.44     1.98           6.9      1.43

ACKNOWLEDGMENT

We would like to thank Carlo Alberto Pascucci from Embotech for the insightful discussions on the topic of this work.
APPENDIX I
PROOF OF LEMMA 1

Proof: The matrices in (12) are defined as

\mathcal{A} = \begin{bmatrix} I \\ A \\ A^2 \\ \vdots \\ A^N \end{bmatrix}, \qquad \mathcal{E} = \begin{bmatrix} 0 & \cdots & \cdots & 0 \\ I & 0 & \cdots & 0 \\ A & I & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ A^{N-1} & \cdots & A & I \end{bmatrix},    (32)

and ℬ = [𝒜  ℰ] ZB. These quantities can then be used to rewrite (5) as

\Phi_x = (I - ZA)^{-1} + (I - ZA)^{-1} ZB \Phi_u \overset{(i)}{=} [\mathcal{A} \;\; \mathcal{E}] + [\mathcal{A} \;\; \mathcal{E}] ZB \Phi_u = [\mathcal{A} \;\; \mathcal{E}] + \mathcal{B} \Phi_u,    (33)

where relation (i) follows from the fact that (I − ZA)^{−1} can be represented as a finite Neumann series since ZA is nilpotent with index N, which leads to

(I - ZA)^{-1} = \sum_{k=0}^{\infty} (ZA)^k = \sum_{k=0}^{N} (ZA)^k = [\mathcal{A} \;\; \mathcal{E}].

Then, (33) proves Lemma 1.

APPENDIX II
PROOF OF COROLLARY 1

Proof: Given an admissible (Φ̃_x, Φ̃_u) with structure (21), the affine state trajectory is

\begin{bmatrix} 1 \\ x \end{bmatrix} = Z\tilde{A} \begin{bmatrix} 1 \\ x \end{bmatrix} + Z\tilde{B}\, \tilde{\Phi}_u \tilde{\Phi}_x^{-1} \begin{bmatrix} 1 \\ x \end{bmatrix} + \tilde{\delta},    (34)

where

\tilde{\Phi}_u \tilde{\Phi}_x^{-1} = \begin{bmatrix} \phi_v & \Phi_k \end{bmatrix} \begin{bmatrix} 1 & 0 \\ -\Phi_e^{-1}\phi_z & \Phi_e^{-1} \end{bmatrix} = \begin{bmatrix} \phi_v - \Phi_k \Phi_e^{-1} \phi_z & \Phi_k \Phi_e^{-1} \end{bmatrix},    (35)

then plugging (35) into (34) yields

x = (ZA + ZB\Phi_k \Phi_e^{-1}) x + ZB\phi_v + \delta - ZB\Phi_k \Phi_e^{-1} \phi_z.    (36)

Adding −(I − ZA)φ_z on both sides of (36) results in

(I - ZA - ZB\Phi_k \Phi_e^{-1})(x - \phi_z) = -(I - ZA)\phi_z + ZB\phi_v + \delta.    (37)

Using (27), we conclude that −(I − ZA)φ_z + ZBφ_v = 0, hence (37) becomes (I − ZA − ZBΦ_kΦ_e^{−1})e = δ, and thus

e = (I - ZA - ZB\Phi_k \Phi_e^{-1})^{-1} \delta = \Phi_e \delta,    (38)

which describes the closed-loop behavior of a system with state e = x − φ_z under the control law u_e = Φ_k Φ_e^{−1} e. The dynamics (29) are directly given by (27). Therefore, (28) and (29) define decoupled dynamical systems, with the inputs u_e = Φ_k δ and u_z = φ_v.
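The Neumann-series identity used in relation (i) of Appendix I is easy to confirm numerically. The following sketch (a random toy instance of our own; A and N are arbitrary) checks that (I − ZA)^{−1} equals the finite sum of powers of ZA and that its block columns are exactly [𝒜  ℰ] from (32):

```python
# Quick numerical confirmation (toy instance) of the identity used in Appendix I:
# since (ZA)^{N+1} = 0 on the horizon, the inverse (I - ZA)^{-1} equals the finite
# Neumann sum, whose block columns are exactly [calA, calE] from (32).
import numpy as np

np.random.seed(2)
n, N = 3, 4
A = np.random.randn(n, n)

ZA = np.zeros(((N + 1) * n, (N + 1) * n))
for k in range(N):
    ZA[(k + 1) * n:(k + 2) * n, k * n:(k + 1) * n] = A

I = np.eye((N + 1) * n)
neumann = sum(np.linalg.matrix_power(ZA, k) for k in range(N + 1))

# calA stacks A^0, ..., A^N; calE is the shifted lower-triangular Toeplitz block of (32).
calA = np.vstack([np.linalg.matrix_power(A, k) for k in range(N + 1)])
calE = np.zeros(((N + 1) * n, N * n))
for i in range(1, N + 1):
    for j in range(i):
        calE[i * n:(i + 1) * n, j * n:(j + 1) * n] = np.linalg.matrix_power(A, i - 1 - j)

print(np.allclose(np.linalg.inv(I - ZA), neumann))        # True
print(np.allclose(neumann, np.hstack([calA, calE])))      # True
```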
REFERENCES

[1] A. Bemporad and M. Morari, "Robust model predictive control: A survey," in Robustness in identification and control. Springer, 1999, pp. 207–226.
[2] J. Löfberg, "Approximations of closed-loop minimax MPC," in Proc. 42nd IEEE Conf. Decis. Control, vol. 2, 2003, pp. 1438–1442.
[3] P. J. Goulart, E. C. Kerrigan, and J. M. Maciejowski, "Optimization over state feedback policies for robust control with constraints," Automatica, vol. 42, no. 4, pp. 523–533, 2006.
[4] L. Chisci, J. A. Rossiter, and G. Zappa, "Systems with persistent disturbances: predictive control with restricted constraints," Automatica, vol. 37, no. 7, pp. 1019–1028, 2001.
[5] W. Langson, I. Chryssochoos, S. Raković, and D. Q. Mayne, "Robust model predictive control using tubes," Automatica, vol. 40, no. 1, pp. 125–133, 2004.
[6] D. Q. Mayne, M. M. Seron, and S. Raković, "Robust model predictive control of constrained linear systems with bounded disturbances," Automatica, vol. 41, no. 2, pp. 219–224, 2005.
[7] J. Anderson, J. C. Doyle, S. H. Low, and N. Matni, "System level synthesis," Annu. Rev. Control, vol. 47, pp. 364–393, 2019.
[8] Y. S. Wang, N. Matni, and J. C. Doyle, "Separable and Localized System-Level Synthesis for Large-Scale Systems," IEEE Trans. Automat. Contr., vol. 63, no. 12, pp. 4234–4249, 2018.
[9] M. Kögel and R. Findeisen, "Robust MPC with Reduced Conservatism Blending Multiple Tubes," in Proc. of American Control Conference (ACC), 2020, pp. 1949–1954.
[10] S. Chen, H. Wang, M. Morari, V. M. Preciado, and N. Matni, "Robust Closed-loop Model Predictive Control via System Level Synthesis," in Proc. 59th IEEE Conf. Decis. Control, 2020, pp. 2152–2159.
[11] S. Dean, S. Tu, N. Matni, and B. Recht, "Safely learning to control the constrained linear quadratic regulator," in 2019 American Control Conference (ACC), 2019, pp. 5582–5588.
[12] C. A. Alonso and N. Matni, "Distributed and Localized Closed Loop Model Predictive Control via System Level Synthesis," in Proc. 59th IEEE Conf. Decis. Control, 2020, pp. 5598–5605.
[13] C. A. Alonso, N. Matni, and J. Anderson, "Explicit Distributed and Localized Model Predictive Control via System Level Synthesis," in Proc. 59th IEEE Conf. Decis. Control, 2020, pp. 5606–5613.
[14] J. S. Li, C. A. Alonso, and J. C. Doyle, "Frontiers in Scalable Distributed Control: SLS, MPC, and Beyond," 2020. [Online]. Available: https://arxiv.org/abs/2010.01292
[15] J. B. Rawlings, D. Q. Mayne, and M. Diehl, Model Predictive Control: Theory, Computation, and Design. Nob Hill Publishing, 2017.
[16] S. V. Rakovic, B. Kouvaritakis, M. Cannon, C. Panos, and R. Findeisen, "Parameterized Tube Model Predictive Control," IEEE Trans. Automat. Contr., vol. 57, no. 11, pp. 2746–2761, 2012.
[17] S.-H. Tseng and J. S. Li, "SLSpy: Python-Based System-Level Controller Synthesis Framework," 2020. [Online]. Available: https://arxiv.org/abs/2004.12565
[18] S. H. Tseng, C. A. Alonso, and S. Han, "System Level Synthesis via Dynamic Programming," in Proc. 59th IEEE Conf. Decis. Control, 2020, pp. 1718–1725.
[19] G. W. Stewart, Matrix Algorithms: Volume 1: Basic Decompositions. SIAM, 1998.
[20] I. Kolmanovsky and E. G. Gilbert, "Theory and computation of disturbance invariant sets for discrete-time linear systems," Mathematical Problems in Engineering, vol. 4, 1998.
[21] L. Furieri, Y. Zheng, A. Papachristodoulou, and M. Kamgarpour, "Sparsity Invariance for Convex Design of Distributed Controllers," IEEE Transactions on Control of Network Systems, vol. 7, no. 4, pp. 1836–1847, 2020.
[22] A. Bemporad, M. Morari, V. Dua, and E. N. Pistikopoulos, "The explicit solution of model predictive control via multiparametric quadratic programming," in 2000 American Control Conference (ACC), vol. 2. IEEE, 2000, pp. 872–876.
[23] A. Carron, J. Sieber, and M. N. Zeilinger, "Distributed Safe Learning using an Invariance-based Safety Framework," 2020. [Online]. Available: https://arxiv.org/abs/2007.00681
[24] S. Diamond and S. Boyd, "CVXPY: A Python-Embedded Modeling Language for Convex Optimization," Journal of Machine Learning Research, vol. 17, no. 83, pp. 1–5, 2016.
[25] MOSEK ApS, MOSEK Optimizer API for Python. Version 9.1., 2019. [Online]. Available: https://docs.mosek.com/9.1/pythonapi/index.html
[26] D. Limón, I. Alvarado, T. Alamo, and E. F. Camacho, "Robust tube-based MPC for tracking of constrained linear systems with additive disturbances," Journal of Process Control, vol. 20, no. 3, pp. 248–260, 2010.