
New software applications for system identification

2017 21st International Conference on System Theory, Control and Computing (ICSTCC)

https://doi.org/10.1109/ICSTCC.2017.8107019

New software applications for linear multivariable system identification are presented. The incorporated algorithms use subspace-based techniques (MOESP, N4SID, or their combination) to find a standard discrete-time state-space description, and optionally the covariance matrices and Kalman predictor gain, using input and output (I/O) trajectories. For flexibility, separate applications are offered for obtaining the processed triangular factor of the structured, block-Hankel-block matrix of I/O data (using fast or standard QR factorization algorithms), for computing the system and predictor matrices, for estimating the initial state of the system, and for simulating and evaluating the model. The applications are encapsulated in Docker containers, which are managed by the Kubernetes platform on a Linux machine. This ensures greater flexibility, enhanced security, and fast execution. The services to be implemented are part of a cloud-based open platform for process control applications.

Vasile Sima, Alexandru Stanciu and Florin Hartescu
National Institute for Research & Development in Informatics, 011455 Bucharest, Romania
Email: {vsima,alex,flory}@ici.ro

Keywords—identification algorithms, linear multivariable systems, numerical algorithms, parameter estimation, subspace methods, singular value decomposition, software.

I. INTRODUCTION

System identification aims to find a mathematical description of a dynamic process from measurements of its variables [1]. Such a description can be used for system analysis, fault detection, simulation, prediction, control, etc. For example, designing model predictive, optimal, or H∞ controllers needs state-space models of the processes to be controlled.

We have developed four software tools for the identification of linear multivariable discrete-time systems, which can be used to find estimates of the system order, system matrices, and initial state, and to simulate and evaluate the model. These tools are based on state-space subspace algorithms: MOESP — Multivariable Output Error state-SPace [2], [3], and N4SID — Numerical algorithm for Subspace State Space System IDentification; see [4], [5] and their references.

Consider the following state-space model,

    x_{k+1} = A x_k + B u_k + w_k ,   x_1 = x_0 ,
    y_k     = C x_k + D u_k + v_k ,                                        (1)

describing a linear time-invariant (LTI) discrete-time multivariable system, where x_k ∈ R^n, u_k ∈ R^m, and y_k ∈ R^p are the state, input, and output vectors at time k, respectively, A, B, C, and D are real matrices, and {w_k}, {v_k} are state and output noise or disturbance sequences. Standard assumptions are (see, e.g., [4]) that the matrix pairs (A, C) and (A, [B  Q^{1/2}]) are observable and controllable, respectively, and {w_k}, {v_k} are zero-mean, stationary, ergodic sequences, uncorrelated with {u_k} and x_0, whose covariances satisfy

    \mathrm{E}\left\{ \begin{bmatrix} w_q \\ v_q \end{bmatrix} \begin{bmatrix} w_r^T & v_r^T \end{bmatrix} \right\} = \begin{bmatrix} Q & S \\ S^T & R_y \end{bmatrix} \delta_{qr} \ge 0 ,        (2)

where E denotes the expected value operator and δ_{qr} is 1 if q = r, and 0 otherwise. A model in innovation form is

    x_{k+1} = A x_k + B u_k + K e_k ,
    y_k     = C x_k + D u_k + e_k ,                                        (3)

where K is the predictor matrix, or Kalman gain matrix, and {e_k} is a white noise sequence; K defines the noise model.
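To fix notation, the short Python sketch below simulates model (1) for a given input trajectory, producing I/O data of the kind the identification tools described here consume. It is an illustration only, not part of the described software; the example matrices and the noise level are arbitrary assumptions.

```python
import numpy as np

def simulate_lti(A, B, C, D, u, x0, noise_std=0.0, rng=None):
    """Simulate model (1): x_{k+1} = A x_k + B u_k + w_k, y_k = C x_k + D u_k + v_k,
    for an input trajectory u (t x m). Returns the output trajectory y (t x p)."""
    rng = np.random.default_rng() if rng is None else rng
    t = u.shape[0]
    n, p = A.shape[0], C.shape[0]
    x = np.asarray(x0, dtype=float).copy()
    y = np.empty((t, p))
    for k in range(t):
        w = noise_std * rng.standard_normal(n)   # state noise w_k
        v = noise_std * rng.standard_normal(p)   # output noise v_k
        y[k] = C @ x + D @ u[k] + v
        x = A @ x + B @ u[k] + w
    return y

# Example: a stable second-order system with one input and one output (arbitrary data).
A = np.array([[0.8, 0.1], [0.0, 0.7]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
u = np.random.default_rng(0).standard_normal((1000, 1))
y = simulate_lti(A, B, C, D, u, x0=np.zeros(2), noise_std=0.01)
```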
For such models, the main objective of system identification is to find the system order, n, and the system matrices (A, B, C, D) (modulo a system similarity transformation), using the input and output data sequences, u := {u_k} and y := {y_k}, k = 1 : t (i.e., for k = 1, 2, ..., t). Note that each row of u or y contains a possibly multidimensional observation, while each column contains t values of a certain (process) variable. Another objective is to determine the noise covariance matrices in (2) and the predictor matrix K in (3). Since, in theory, the results hold asymptotically, for t → ∞, it is essential to be able to efficiently process large data sets.

Subspace identification techniques use a concatenation H of block-Hankel matrices defined in terms of input and output (I/O) data. For the N4SID method, the matrix H is defined by

    H = \begin{bmatrix} U_{1,2s,N}^T & Y_{1,2s,N}^T \end{bmatrix} ,        (4)

where N = t - 2s + 1, s is traditionally called the "number of block rows" (although that number is 2s), and usually s ≥ n,

    U_{1,2s,N} = \begin{bmatrix} u_1 & u_2 & u_3 & \cdots & u_N \\ u_2 & u_3 & u_4 & \cdots & u_{N+1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ u_{2s} & u_{2s+1} & u_{2s+2} & \cdots & u_{N+2s-1} \end{bmatrix} ,        (5)

and Y_{1,2s,N} is defined similarly. The MOESP method uses two U-type blocks, for the so-called future and past inputs, respectively. The matrix H in (4) has N rows and 2(m + p)s columns; usually, N ≫ 2(m + p)s. The MOESP and N4SID techniques use the upper triangular factor R of a QR factorization of H, H = QR, where R is upper triangular and Q has orthogonal columns. (The matrix Q itself is not needed.) The standard QR factorization algorithm has a complexity of order N(2(m + p)s)^2, hence the computational effort increases linearly with the number of measurements used for system identification. Fast algorithms have been proposed, see, e.g., [6], [7], which exploit the displacement structure of H and find R with a computational effort practically independent of N. (A small constructive sketch of building H and computing R is given below.)

These algorithms have been implemented in the SLICOT Library — Subroutine Library In COntrol Theory [8] — and in the associated SLICOT-based MATLAB toolbox [9], [10]. The SLICOT Library (www.slicot.org) is built on the state-of-the-art linear algebra package LAPACK and the BLAS collections, and so it benefits from advances in computer architectures. Performance results have been reported, e.g., in [7], [10]–[15].
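To make the data layout in (4)–(5) concrete, the following Python sketch (assuming only NumPy; it is not taken from the described software and does not reproduce the fast, structure-exploiting algorithms of Section II) builds U_{1,2s,N} and Y_{1,2s,N}, stacks them into H, and obtains the triangular factor R by the standard QR factorization.

```python
import numpy as np

def block_hankel(z, s):
    """Build the block-Hankel matrix Z_{1,2s,N} of (5) from a data matrix z
    with t rows (samples) and m (or p) columns (variables)."""
    t, m = z.shape
    N = t - 2 * s + 1
    # Block row i (i = 1:2s) contains the samples z_i, z_{i+1}, ..., z_{i+N-1}.
    rows = [z[i:i + N, :].T for i in range(2 * s)]    # each block row is m x N
    return np.vstack(rows)                            # shape (2*s*m, N)

def data_matrix_and_R(u, y, s):
    """Form H = [U^T  Y^T] as in (4) and return the upper triangular factor R
    of the QR factorization H = QR (standard algorithm, cost ~ N(2(m+p)s)^2)."""
    U = block_hankel(u, s)
    Y = block_hankel(y, s)
    H = np.hstack([U.T, Y.T])                         # N rows, 2(m+p)s columns
    R = np.linalg.qr(H, mode='r')                     # Q itself is not needed
    return R

# Example with random data: t = 1000 samples, m = 2 inputs, p = 1 output, s = 10.
rng = np.random.default_rng(0)
u = rng.standard_normal((1000, 2))
y = rng.standard_normal((1000, 1))
R = data_matrix_and_R(u, y, s=10)
print(R.shape)   # (2*(2+1)*10, 2*(2+1)*10) = (60, 60)
```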
Based on this previous experience, system identification applications have been developed under the Windows and Linux (Ubuntu) operating systems, for 32- or 64-bit architectures. It is possible to use either the MOESP or the N4SID approach, and also a combined approach: the matrices A and C are estimated using the MOESP approach, and the matrices B and D using the N4SID approach. Multiple I/O data sets can be processed sequentially, and there are several computing options. The applications are encapsulated in Docker containers (https://docker.io), which are managed by the Kubernetes platform (https://kubernetes.io). This solution ensures greater flexibility, enhanced security (through encapsulation/isolation), and fast execution. The services to be implemented are part of a cloud-based open platform for process control applications. Such a platform can offer powerful and standardized algorithms, not readily available in, or expensive for, industrial applications.

The paper is structured as follows. Section II briefly describes the underlying algorithms, which are not new. Section III summarises the newly developed applications. Section IV presents some results. Section V concludes the paper.

II. UNDERLYING ALGORITHMS FOR SUBSPACE-BASED SYSTEM IDENTIFICATION

Since the matrix H is highly structured, it can be efficiently processed by exploiting its structure. There are two main such techniques, based on fast Cholesky and fast QR factorization algorithms.

The fast Cholesky factorization algorithm factors the inter-correlation matrix, W = H^T H, which is usually positive definite, due to the noise contribution. For the N4SID approach, define H = [H_u  H_y], with H_u = U_{1,2s,N}^T and H_y = Y_{1,2s,N}^T. If there are multiple I/O data sets, then W = \widetilde{W} + H^T H, where \widetilde{W} corresponds to the already processed sets. Clearly, \widetilde{W} = 0 if the first or the only data set is processed. Structure-exploiting, economical formulas exist for computing the block-symmetric matrix W [7], [11]. Indeed, define W_{uu} = \widetilde{W}_{uu} + H_u^T H_u, and, similarly, W_{uy} and W_{yy}, each such matrix consisting of 2s × 2s submatrices of size m × m, m × p, and p × p, respectively. Denoting by W_{uu}^{i,j} the (i, j) submatrix of W_{uu}, for i, j = 1 : 2s, and similarly for W_{uy}^{i,j} and W_{yy}^{i,j}, the first block row of W_{uu} is given by

    W_{uu}^{1,j} = \widetilde{W}_{uu}^{1,j} + u_1 u_j^T + u_2 u_{j+1}^T + \cdots + u_N u_{j+N-1}^T , \quad j = 1 : 2s ,        (6)

and the next block rows are found recursively, for j = 1 : 2s - 1, i = 1 : j, using

    W_{uu}^{i+1,j+1} = \widetilde{W}_{uu}^{i+1,j+1} - \widetilde{W}_{uu}^{i,j} + W_{uu}^{i,j} + u_{i+N} u_{j+N}^T - u_i u_j^T .        (7)

The blocks W_{yy}^{i,j} are computed similarly, by replacing u by y in (6) and (7), while for W_{uy}^{i,j}, dyads of the form u_· y_·^T are used. Note that updating formulas like (7) are much cheaper than (6) for large N, and are used for all but 8s submatrices (a small numerical illustration of this update is given below).

Consider now the MOESP approach, and let

    W = \begin{bmatrix} W_{11} & W_{12} & W_{13} \\ W_{12}^T & W_{22} & W_{23} \\ W_{13}^T & W_{23}^T & W_{yy} \end{bmatrix}        (8)

be the matrix computed as described above, where W_{11}, W_{12}, W_{22} ∈ R^{ms×ms} and W_{13}, W_{23} ∈ R^{ms×2ps}. Then, the matrix corresponding to the MOESP approach is defined by

    W_M = \begin{bmatrix} W_{22} & W_{12}^T & W_{23} \\ W_{12} & W_{11} & W_{13} \\ W_{23}^T & W_{13}^T & W_{yy} \end{bmatrix} .        (9)

Finally, the W or W_M matrix is processed by the standard Cholesky algorithm. If this fails, due to a (numerically) indefinite matrix, the standard QR algorithm is applied to the matrix H, possibly recursively if there are several data sets. But such failures are rare events.
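As a minimal numerical illustration of the updating idea in (6)–(7), the Python sketch below forms the first block row of W_uu directly and obtains the remaining blocks recursively, then checks the result against the direct product H_u^T H_u for a single data set (so the tilde terms are zero). It follows the formulas above under the assumption that NumPy is available; it is not the SLICOT implementation.

```python
import numpy as np

def wuu_fast(u, s):
    """Compute the 2s x 2s block matrix W_uu (blocks m x m) of W = H^T H,
    using (6) for the first block row and the update (7) for the rest.
    A single data set is assumed, i.e., the 'tilde' terms are zero."""
    t, m = u.shape
    N = t - 2 * s + 1
    W = np.zeros((2 * s, 2 * s, m, m))
    # First block row, formula (6): W[0, j] = sum_k u_k u_{j+k}^T (0-based).
    for j in range(2 * s):
        W[0, j] = u[0:N].T @ u[j:j + N]
    # Remaining upper-triangle blocks, formula (7): cheap rank-two updates.
    for j in range(2 * s - 1):
        for i in range(j + 1):
            W[i + 1, j + 1] = (W[i, j]
                               + np.outer(u[i + N], u[j + N])
                               - np.outer(u[i], u[j]))
    # Fill the lower block triangle by symmetry: W[j, i] = W[i, j]^T.
    for i in range(2 * s):
        for j in range(i):
            W[i, j] = W[j, i].T
    # Assemble the full 2sm x 2sm matrix from the block array.
    return W.transpose(0, 2, 1, 3).reshape(2 * s * m, 2 * s * m)

# Check against the direct computation H_u^T H_u on random data.
rng = np.random.default_rng(1)
u = rng.standard_normal((500, 2))
s = 5
N = 500 - 2 * s + 1
Hu = np.vstack([u[i:i + N].T for i in range(2 * s)]).T   # H_u = U_{1,2s,N}^T, N x 2ms
assert np.allclose(wuu_fast(u, s), Hu.T @ Hu)
```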
Consider now the fast QR factorization algorithm for the N4SID approach. Define the displacement matrix Z = diag(Z_u, Z_y), where Z_u and Z_y are 2s × 2s block matrices with blocks of dimension m × m and p × p, respectively, all zero except for identity blocks on the superdiagonal, i.e.,

    Z_u = \begin{bmatrix} 0_m & I_m & 0_m & \cdots & 0_m \\ 0_m & 0_m & I_m & \cdots & 0_m \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0_m & 0_m & \cdots & 0_m & I_m \\ 0_m & 0_m & \cdots & 0_m & 0_m \end{bmatrix} ,

and similarly for Z_y, with m replaced by p. It can be shown [7] that the symmetric matrix ∇W = W - Z^T W Z, called the displacement of W, has a low-rank factorization, namely ∇W = G^T Σ G, with Σ = diag(I_q, -I_q) and q = m + p + 1, hence ∇W has rank at most 2(m + p + 1). The matrix factor G ∈ R^{2(m+p+1)×2(m+p)s} is the generator of W. Efficient techniques are available to build the generator. The generalized Schur algorithm, employing Householder transformations and hyperbolic rotations, is used to find all rows of the Cholesky factor of W = \widetilde{W} + H^T H. Submatrices of R are then used to find the system matrices and, optionally, the covariance matrices and the predictor matrix. Several system orders, n, can be tried using the same matrix R.

The A and C matrices can be obtained from an estimate of an extended observability matrix, Γ_s, computed in turn from the factors U and Σ of the singular value decomposition of a matrix built from R (see, e.g., [7] and the references therein). The shift-invariance property of Γ_s is exploited. (The N4SID approach uses an oblique projection O_s of the so-called "future" outputs onto the "past" inputs and outputs. The system order is the rank of O_s.) An efficient, structure-exploiting QR algorithm, together with an algorithm based on Kronecker products, is then used to compute the matrices B and D [7]. The covariance matrices are found from the residual of a least squares problem. Then, the predictor matrix K is computed using the solution of an optimal filtering problem. The initial state, x_0, is computed by solving another least squares problem built from the system matrices and the I/O data trajectories. Details are given, e.g., in [7], [11], [16], [17] and the references therein.

III. DEVELOPED APPLICATIONS

Four applications for the identification of LTI discrete-time multivariable systems have been developed.

The first application, called corder, computes the upper triangular factor R of the structured matrix H, defined by the I/O data, using either the standard QR algorithm or one of the fast algorithms summarized in Section II. The I/O data can optionally be processed sequentially, batch by batch, with dependent or independent batches. Depending on the specified options (including the method to use: MOESP, N4SID, or a combined method), the application corder processes the factor R as needed by the next application. An absolute tolerance, TOL2 =: τ, is used for determining an estimate of the system order, n. If τ ≥ 0, the estimate is indicated by the index of the last singular value greater than or equal to τ (singular values less than τ are considered to be zero; note that the singular values are ordered decreasingly). When τ = 0, a default value, τ = s ε_M σ_1, is used, where ε_M is the relative machine precision and σ_1 is the maximal singular value. When τ < 0, the estimate is indicated by the index of the singular value that has the largest logarithmic gap to its successor.
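The tolerance handling just described can be summarized by the following small Python sketch; it is an interpretation of the rule above for illustration, not the corder source code, and it assumes the (positive, decreasingly ordered) singular values are already available from the SVD step of Section II.

```python
import numpy as np

def estimate_order(sv, tol, s):
    """Estimate the system order n from the decreasingly ordered singular values sv.
      tol > 0 : index of the last singular value >= tol;
      tol = 0 : same rule with the default tol = s * eps * sv[0];
      tol < 0 : index of the singular value with the largest logarithmic gap
                to its successor (assumes strictly positive singular values)."""
    sv = np.asarray(sv, dtype=float)
    if tol < 0:
        gaps = np.log(sv[:-1]) - np.log(sv[1:])    # logarithmic gaps
        return int(np.argmax(gaps)) + 1            # 1-based order estimate
    if tol == 0:
        tol = s * np.finfo(float).eps * sv[0]      # default tolerance s*eps_M*sigma_1
    return int(np.sum(sv >= tol))                  # index of last value >= tol

# Example: a clear gap after the third singular value.
sv = [12.0, 8.5, 4.1, 1e-4, 5e-5, 1e-5]
print(estimate_order(sv, tol=-1, s=3))     # 3 (largest logarithmic gap)
print(estimate_order(sv, tol=0.01, s=3))   # 3 (last value >= 0.01 is 4.1)
```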
The second application, csident, uses the processed factor R to compute the system matrices. The main options allow selecting either N4SID or the combined method: MOESP for computing the matrices A and C, and N4SID for computing the matrices B and D, via a structure-exploiting QR factorization algorithm. Moreover, the covariance matrices, Q, R_y, and S, and the predictor matrix, K, can optionally be computed. Any order n < s may be specified for the same factor R, and this feature allows multiple experiments to be performed quickly.

The third application, cfindx0, finds an estimate, x̂_0, of the initial state, x_0, by solving a possibly large least squares problem, defined in terms of the system matrices and the I/O data. (It is, of course, possible to use only part of the I/O data.)

The fourth application, csimul, computes the trajectory of the estimated output, ŷ := {ŷ_k}, with or without a predictor, for the given input trajectory and an optionally given initial state. To assess the quality of the estimated model, the application also evaluates some metrics, such as the relative error,

    err = \| y - ŷ \|_1 / \max(1, \| y \|_1) ,

or the so-called Variance Accounted For (VAF), for each output variable. Specifically, if z is a column vector of length t and ẑ is its estimate, then

    VAF(z, ẑ) = 100 \left( 1 - (e - ē)^T (e - ē) / \left( (z - z̄)^T (z - z̄) \right) \right) ,

where e := z - ẑ, and the overbar denotes the mean.
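A direct transcription of these two metrics for one output variable, as an illustrative Python sketch (assuming NumPy; not the csimul implementation), is:

```python
import numpy as np

def relative_error(y, y_hat):
    """err = ||y - y_hat||_1 / max(1, ||y||_1), for one output variable."""
    return np.sum(np.abs(y - y_hat)) / max(1.0, np.sum(np.abs(y)))

def vaf(z, z_hat):
    """Variance Accounted For (in percent) of one output variable:
    VAF(z, z_hat) = 100 * (1 - (e - mean(e))^T (e - mean(e)) / ((z - mean(z))^T (z - mean(z))))."""
    e = z - z_hat
    e = e - e.mean()
    zc = z - z.mean()
    return 100.0 * (1.0 - (e @ e) / (zc @ zc))

# Example: a noisy estimate of a sine wave.
t = np.arange(1000)
z = np.sin(0.01 * t)
z_hat = z + 0.05 * np.random.default_rng(0).standard_normal(t.size)
print(relative_error(z, z_hat), vaf(z, z_hat))
```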
Each implemented application reads the options and other parameters needed for execution, including the names of the files to be read or written, from a template file with a predefined name and structure. These template files are automatically generated using a Web interface, by filling in the needed information in appropriate fields. An example is given in Fig. 1 for the application corder. The application reads the input/output trajectories from the file specified in the antepenultimate field of the associated template. The results are written either to a terminal or to a file whose name is given in the first field of the template. The printing format is specified in the last field. The value 1 of the printing option determines printing both the given data and the computed results. The values 2 or 3 imply printing only the results, with headings, matrix names, and element indices for the value 2, or with just the numerical values for the value 3. In the first case, the results can be viewed and understood by a user, and easily transferred to another environment or application, e.g., to MATLAB. In the second case, the results can be directly used by the next application in the workflow.

Fig. 1. Template for specifying the options and parameters for the corder application.

The application implementation using cloud services is designed based on the Kubernetes platform, which is used to orchestrate the execution of Docker containers. The applications presented above are encapsulated in containers which are grouped in a Pod — a Kubernetes concept — so that these containers can communicate directly with each other. A REST (Representational State Transfer) interface is used for input and output data transfer, in order to create a service managed by Kubernetes for the whole workflow of the applications, as represented in Fig. 2. The main inputs and outputs of each application are shown there. It is possible to skip an application, e.g., to use a given x_0 instead of running cfindx0. Moreover, csident and csimul may be called repeatedly for the same R and x_0, but different orders n.

Fig. 2. The workflow for the system identification applications.

The system identification applications are executed via a wrapper, whose role is to manage the input and output data transfers. These wrapper programs have to be connected to a message broker — Redis (https://redis.io) — in order to be able to send or receive the needed data. The address and the channel to be used for the connection, as well as all other information required for execution, are generated by a program designed to configure and execute the Docker containers under the Kubernetes platform. This program uses a system description file, which lists all components, their connections, and the values of the required parameters. The architecture is illustrated in Fig. 3.

Fig. 3. Connecting the system identification applications via a message broker and running them in the framework of a Kubernetes Pod.
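The wrapper mechanism just described can be sketched as follows; this is a hedged illustration only, written in Python with the redis-py client, and the channel names and message layout are assumptions made for the example, not a description of the actual wrappers.

```python
import json
import redis  # redis-py client, assumed available inside the wrapper container

def run_wrapper(app_name, process_fn, host="redis", port=6379):
    """Minimal wrapper sketch: listen on '<app>-in' for a request, run the
    application-specific processing, and publish the result on '<app>-out'.
    Channel names and message layout are illustrative assumptions."""
    r = redis.Redis(host=host, port=port)
    sub = r.pubsub(ignore_subscribe_messages=True)
    sub.subscribe(f"{app_name}-in")
    for message in sub.listen():
        request = json.loads(message["data"])        # e.g., options and file names
        result = process_fn(request)                 # run corder/csident/... here
        r.publish(f"{app_name}-out", json.dumps(result))

# Hypothetical use: a wrapper around corder that simply echoes the received options.
# run_wrapper("corder", lambda req: {"status": "done", "options": req})
```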
IV. NUMERICAL RESULTS

This section presents a selection of the results obtained using the identification applications. Several examples from the Database for Identification of Systems (DAISY) collection [18], available at http://home.esat.kuleuven.be/~smc/daisy/, have been used. This collection contains over 20 data sets: industrial processes, mechanical systems, biomedical systems, environmental systems, thermal systems, etc. Several results obtained for the glass furnace and cutaneous potential data examples are shown in the sequel. The computation times for these two examples have been over 40 and 200 times shorter, respectively, than those obtained with the MATLAB R2017a function n4sid on the same computer. Many other results and comparisons of the identification algorithms based on subspace techniques have appeared, e.g., in [7], [10], [11], [12], [14] and in the references therein.

Example 1. Glass furnace: This example has six output and three input variables, and 1247 samples. The output variables are the temperatures measured by six sensors located in a furnace section. Two of the input variables refer to heating, while the third refers to cooling. All four applications described above have been used. The number of block rows and the system order have been chosen as s = 10 and n = 5 (the order proposed by corder). The fast Cholesky algorithm was selected. Figure 4 shows the measured output trajectories, {y}, and the estimated output trajectories without predictor, {ŷ} (denoted Ye_i in the figure, for each output variable i, i = 1 : 6). Figure 5 similarly shows the measured and estimated output trajectories with predictor (denoted YeK_i). The measured output trajectories are plotted with red lines, while the estimated trajectories are plotted with blue lines. The relative error for each estimated output variable is shown above each subplot. Clearly, the use of the predictor improves the modelling accuracy.

Fig. 4. Output trajectories, {y}, and estimated output trajectories without predictor, {ŷ} (denoted Ye_i, i = 1 : 6), for the glass furnace example.

Fig. 5. Output trajectories, {y}, and estimated output trajectories with predictor, {ŷ} (denoted YeK_i, i = 1 : 6), for the glass furnace example.

Example 2. Cutaneous potential data: This example uses the cutaneous potential recordings of a pregnant woman. There are no inputs, but eight outputs: five abdominal measurements and three thoracic measurements (the last ones). The number of data samples is 2500. The lack of inputs and the strong variability of the output signals require the use of a predictor for this example. Several values for s and n have been tried. For s = 21, the smallest global relative error, 0.274, has been obtained for n = 12, and the mean of the eight VAF values was 81.86. Similarly, for s = 41 and s = 61, the smallest relative errors were 0.249, for n = 29, and 0.245, for n = 45, respectively. The corresponding means of the VAF values were 94.42 and 94.84. Clearly, there is a trade-off between the model complexity and the accuracy, but all the results above may be acceptable. Figure 6 shows the measured output trajectories, {y}, and the estimated output trajectories with predictor, {ŷ} (denoted YeK_i in the figure, i = 1 : 8), for s = 61 and n = 45. Figure 7 details the initial part of the trajectories for i = 1 : 2. Figure 8 plots the VAF values for the eight estimated outputs and all orders between 4 and 60. Two of the outputs (with indices 1 and 4) are more difficult for the estimated outputs to follow. For lower values of s the initial transient phase is faster, but some of the final VAF values are smaller.

Fig. 6. Output and estimated output trajectories with predictor, for the cutaneous potential data example, s = 61, n = 45.

Fig. 7. Initial part of the first two output and estimated output trajectories with predictor, for the cutaneous potential data example, s = 61, n = 45.

Fig. 8. VAF trajectories for the cutaneous potential data example, s = 61, n = 4 : 60, with predictor.

V. CONCLUSION

Subspace-based algorithms used for implementing new applications for finding mathematical models of dynamical systems have been briefly presented. The considered models are discrete-time, multivariable, linear time-invariant, and in a state-space representation. The data needed are trajectories of input and output (or only output) variables. Various options allow the choice of standard or fast algorithms and of several parameters. The options are currently specified via a Web interface. The applications are encapsulated in Docker containers and can be executed locally or in a cloud. The results obtained for the identification of a glass furnace and of the cutaneous potential data show the capabilities of the developed applications.

ACKNOWLEDGMENT

Partial support from the Romanian National Research Programme PNII, project no. 257/2014, is acknowledged.

REFERENCES

[1] L. Ljung, System Identification: Theory for the User, 2nd ed. Upper Saddle River, NJ: Prentice-Hall PTR, 1999.
[2] M. Verhaegen and P. Dewilde, "Subspace model identification. Part 1: The output-error state-space model identification class of algorithms," Int. J. Control, vol. 56, no. 5, pp. 1187–1210, 1992.
[3] M. Verhaegen, "Subspace model identification. Part 3: Analysis of the ordinary output-error state-space model identification algorithm," Int. J. Control, vol. 58, no. 3, pp. 555–586, 1993.
[4] P. Van Overschee and B. De Moor, "N4SID: Two subspace algorithms for the identification of combined deterministic-stochastic systems," Automatica, vol. 30, no. 1, pp. 75–93, 1994.
[5] P. Van Overschee and B. De Moor, Subspace Identification for Linear Systems: Theory – Implementation – Applications. Boston/London/Dordrecht: Kluwer Academic Publishers, 1996.
[6] N. Mastronardi, D. Kressner, V. Sima, P. Van Dooren, and S. Van Huffel, "A fast algorithm for subspace state-space system identification via exploitation of the displacement structure," J. Comput. Appl. Math., vol. 132, no. 1, pp. 71–81, 2001.
[7] V. Sima, D. M. Sima, and S. Van Huffel, "High-performance numerical algorithms and software for subspace-based linear multivariable system identification," J. Comput. Appl. Math., vol. 170, no. 2, pp. 371–397, 2004.
[8] P. Benner, V. Mehrmann, V. Sima, S. Van Huffel, and A. Varga, "SLICOT — A subroutine library in systems and control theory," in Applied and Computational Control, Signals, and Circuits, B. N. Datta, Ed. Boston, MA: Birkhäuser, 1999, vol. 1, chapter 10, pp. 499–539.
[9] V. Sima and S. Van Huffel, "Efficient numerical algorithms and software for subspace-based system identification," in Proc. 2000 IEEE International Conference on Control Applications and IEEE International Symposium on Computer-Aided Control Systems Design, Sep. 25–27, 2000, Anchorage, AK, U.S.A. Webster, NY: Omnipress, 2000, pp. 1–6.
[10] V. Sima and S. Van Huffel, "Performance investigation of SLICOT system identification toolbox," in Proc. European Control Conference (ECC 2001), Sep. 4–7, 2001, Porto, Portugal, 2001, pp. 3586–3591.
[11] V. Sima and D. M. Sima, "High-performance algorithms for linear multivariable system identification," Romanian Journal of Information Science and Technology, vol. 6, no. 3–4, pp. 345–375, 2003.
[12] V. Sima, "Efficient data processing for subspace-based multivariable system identification," in Proc. IFAC Workshop on Adaptation and Learning in Control and Signal Processing (ALCOSP 2004) and IFAC Workshop on Periodic Control Systems (PSYCO 2004), Aug. 30 – Sep. 1, 2004, Yokohama, Japan, 2004, pp. 871–876.
[13] V. Sima, "Efficient algorithms for system identification," in Contributions to ICT Progress, D. Banciu, Ed. Bucureşti: Editura Tehnică, 2007, pp. 141–174.
[14] V. Sima and P. Benner, "Fast system identification and model reduction solvers," in Proc. 9th IFAC Workshop "Adaptation and Learning in Control and Signal Processing" (ALCOSP 2007), Aug. 29–31, 2007, Saint Petersburg, Russia. Curran Associates, Inc., 2007.
[15] P. Benner, D. Kressner, V. Sima, and A. Varga, "Die SLICOT-Toolboxen für Matlab," at—Automatisierungstechnik, vol. 58, no. 1, pp. 15–25, 2010.
[16] V. Sima, "Algorithms and LAPACK-based software for subspace identification," in Proc. 1996 IEEE International Symposium on Computer-Aided Control System Design, Sep. 15–18, 1996, Dearborn, MI, U.S.A., 1996, pp. 182–187.
[17] V. Sima, "Subspace-based algorithms for multivariable system identification," Studies in Informatics and Control, vol. 5, no. 4, pp. 335–344, 1996.
[18] B. De Moor, P. De Gersem, B. De Schutter, and W. Favoreel, "DAISY: A database for identification of systems," Journal A, Special Issue on CACSD (Computer Aided Control Systems Design), vol. 38, pp. 4–5, 1997.