Multilevel Modelling in SPSS: Review
Alastair H Leyland
MRC Social and Public Health Sciences Unit
University of Glasgow
4 Lilybank Gardens
Glasgow G12 8RZ
August 2004
1.1 Background
This review is based on SPSS version 12.0. The SPSS commands of interest for
multilevel modelling, MIXED and VARCOMP, are both contained in the Advanced Models
module. (An additional procedure, GLM, fits repeated measures models; however, random
effects cannot be included in repeated measures designs in version 12.0.) The Advanced
Models module adds capability to the popular SPSS Base system, permitting a range of
additional analyses including generalised linear models and Cox regression. A major
statistical package, SPSS is available in several languages. Most commands are available
either through the graphical user interface or through the use of command syntax.
The data can be saved in a similar variety of formats by choosing Save As from the File
menu. Alternatively, command syntax can be written using the SAVE, EXPORT and SAVE
TRANSLATE commands.
The VARCOMP command requires the higher level units to be specified as a factor in the main
command line, with these units then specified as random effects (random factors) using the
RANDOM subcommand. Further factors and covariates can be included in the main command.
The default estimation method is the minimum norm quadratic unbiased estimator with unit
prior weights. Alternative estimation methods – specified using the METHOD subcommand –
are maximum likelihood (ML), restricted maximum likelihood (REML), or ANOVA method
based on type I or type III sum of squares (SSTYPE(1) or SSTYPE(3)). The INTERCEPT
subcommand determines whether or not the intercept is to be included in the model, and the
MISSING subcommand determines the treatment of missing values. The REGWGT
subcommand is used to specify regression weights in a weighted least squares regression
model. CRITERIA is used to specify the convergence criterion in terms of the relative change
in the objective function between iterations (CONVERGE), the tolerance for checking for
singularity (EPS) and the maximum number of iterations (ITERATE). The PRINT
subcommand is used to request output in terms of the objective function and variance
components estimates at every n iterations (HISTORY(n), available only for maximum
likelihood or restricted maximum likelihood estimation), the expected mean squares and the
sums of squares (EMS and SS respectively, both available only for ANOVA estimation).
OUTFILE is used to save the results of the estimation; VAREST will save the variance
components estimates and COVB and CORB the covariance and correlation matrices (for ML
and REML estimation only). The DESIGN subcommand is used to specify the effects
(including interactions) included in a model, drawing from variables specified in the main
command. The default is to include the intercept, all covariates on the variable list, the main
factorial effects and all orders of factor-by-factor interaction. Note that the VARCOMP
procedure therefore provides only estimates of the variance components, not estimates of the
regression coefficients. For this reason the rest of the review concentrates on the more general
MIXED command.
The MIXED procedure can be used to fit a variety of mixed linear models including multilevel
models. The command line is used to identify the dependent variable together with any
factors and covariates to be included in the analysis. Note that, unlike the VARCOMP
command, the MIXED command line does not require the specification of higher level units as
factors. The CRITERIA subcommand is used to control the algorithm used for estimation and
associated tolerance. Convergence can be determined by reference to the Hessian
(HCONVERGE), the log-likelihood function (LCONVERGE) or the parameter estimates
(PCONVERGE). The EMMEANS subcommand is used to provide the estimated marginal means
for specific factors (or an overall mean if TABLES(OVERALL) is specified). The subcommand
FIXED is used to specify which of the factors and covariates are to be included as fixed
effects. Interactions can be included by using the BY keyword or, alternatively, an asterisk (*).
An intercept is included unless the NOINT keyword is used. The METHOD subcommand is used
to specify whether estimation is maximum likelihood or restricted maximum likelihood (the
default), and the MISSING subcommand determines the treatment of missing values. The
PRINT subcommand dictates the output of the MIXED analysis; options include printing the
correlation and covariance matrices of the fixed parameter estimates (CORB and COVB),
summary statistics of the dependent variable and any covariates for all combinations of
factors including the higher level units specified using the RANDOM subcommand, the
covariance matrix of the random effects (G), fixed and random parameter estimates
(SOLUTION) and standard errors and Wald tests for the covariance parameters (TESTCOV).
The RANDOM subcommand is used to specify the random part of the model; it specifies which
factors or covariates are to be treated as random effects and at which level. To include a
random intercept the keyword INTERCEPT must be specified as the first random effect in the
RANDOM subcommand (the default is to exclude the intercept). There are a number of ways of
specifying some models; for example, a random intercept model with higher level units
defined by “L2_units” can be specified either by declaring “L2_units” to be a factor on the
command line and entering “L2_units” as a random factor:
MIXED yvar BY L2_units
/RANDOM = L2_units .
or by entering a random intercept and using the SUBJECT keyword of the RANDOM
subcommand to identify the higher level units:
MIXED yvar
/RANDOM = INTERCEPT | SUBJECT(L2_units) .
The RANDOM subcommand can be called repeatedly to configure complex random structures.
The COVTYPE keyword specifies which of a list of pre-defined covariance structures is to be
used. Many of the covariance structures allowed will be of interest for fitting growth curve or
repeated measures models (e.g. first order ante-dependence AD1, first order autoregressive
AR1, and diagonal or heterogeneous variances DIAG). For random effect models the common
choice will be an unstructured covariance matrix (UN) which will fit all variances and
covariances between random effects. The REGWGT subcommand can be used to apply
regression weights to the analysis. The REPEATED subcommand can be used to specify the
covariance structure at level 1 in much the same way as the RANDOM subcommand. The
SUBJECT keyword must be used to identify the hierarchical structure and must contain all of
the variables specified as the subject (using the SUBJECT keyword) in any RANDOM
subcommands. The SAVE subcommand can be used to save various case-specific statistics
depending on the keywords used; the options are to save any of the predicted values based on
the fixed part of the model (FIXPRED) or the fixed and higher level random parts (PRED),
together with their standard errors (SEFIXP and SEPRED) and Satterthwaite degrees of
freedom (DFFIXP and DFPRED) and the (composite level 1) residuals (RESID). Finally, the
TEST subcommand allows the specification of null hypotheses as linear combination of
parameters for both the fixed and random parts of the model. This subcommand conducts the
F-test proposed by Fai and Cornelius (1996).
AIC = −2l + 2d

the finite sample corrected AIC, or AICC (Hurvich and Tsai, 1989):

AICC = −2l + 2dn / (n − d − 1)

the consistent AIC (CAIC; Bozdogan, 1987):

CAIC = −2l + d(ln n + 1)
When using maximum likelihood estimation, n is taken to be the total number of level 1
units and d the number of fixed parameters plus the number of random parameters. For
REML estimation, n is taken to be the total number of level 1 units minus the number of
fixed parameters and d the number of random parameters.
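These criteria are simple functions of the maximised log-likelihood; the formulae above can be sketched as follows (Python is used here purely for illustration, and the BIC of Schwarz, 1978, which MIXED also reports, is included for completeness):

```python
import math

def info_criteria(loglik, d, n):
    """Information criteria from a maximised log-likelihood loglik,
    parameter count d and effective sample size n (see the text for
    the ML/REML conventions determining d and n)."""
    aic = -2 * loglik + 2 * d
    aicc = -2 * loglik + 2 * d * n / (n - d - 1)
    caic = -2 * loglik + d * (math.log(n) + 1)
    bic = -2 * loglik + d * math.log(n)
    return {"AIC": aic, "AICC": aicc, "CAIC": caic, "BIC": bic}
```

Other things being equal, a model with a smaller value of any of these criteria is preferred.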
The parameters that we will estimate are the 5 fixed parameters β0, …, β4 and the two
variances σ²u0 and σ²e0.
SPSS fits categorical variables as factors through the use of the BY keyword of the VARCOMP
and MIXED commands. This creates the necessary dummy variables, using the last category as
the comparison group. The reference category can therefore be changed using the RECODE
command. In equation (1) x2ij has been coded as a dummy variable indicating the mean
effect of girls relative to boys (so for SPSS we have x2ij = 1 indicating a girl and x2ij = 2
indicating a boy), and x3j and x4j indicate whether the school was a boys' school or a girls'
school respectively. (Note that, in general, such factors can be numeric or string variables.)
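The effect of declaring a factor with BY can be mimicked with a small sketch (a hypothetical helper written in Python for illustration, not SPSS's own computation): with sex coded 1 for a girl and 2 for a boy, only a dummy for girls is created, the last category acting as the comparison group.

```python
def dummy_code(factor):
    """Dummy-code a categorical variable, treating the last (highest)
    category as the reference group, as SPSS does for BY factors."""
    levels = sorted(set(factor))
    reference = levels[-1]  # last category = comparison group
    return {level: [1 if value == level else 0 for value in factor]
            for level in levels if level != reference}

# sex: 1 = girl, 2 = boy; only a 'girl' dummy is produced
girl_dummy = dummy_code([1, 2, 1, 2])
```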
We can fit model (1) using the code
MIXED normexam BY sex schlsex WITH standlrt
/EMMEANS TABLES(sex*schlsex) WITH(standlrt=0)
/FIXED = standlrt sex schlsex
/RANDOM = INTERCEPT | SUBJECT(school)
/METHOD = ML
/PRINT = COVB G HISTORY SOLUTION TESTCOV
/SAVE = FIXPRED (fix_pred) PRED (tot_pred) RESID (resid) .
EXECUTE .
The EMMEANS and PRINT subcommands are not required for this analysis – the code above is
intended to illustrate their use. The SAVE subcommand requests the predicted values from the
fixed part of the model (saved in fix_pred), the predicted values from the fixed and random
parts of the model (tot_pred) and the level 1 residuals (resid); this could again be omitted.
The estimates for this model (using both maximum likelihood, ML, and restricted maximum
likelihood, REML) are given in Table 1.
The next model includes an interaction between the two student level variables, the London
Reading Test score and gender, in the fixed part of the model.
In equation (2) the parameter β5 fits the difference in the slope with the London
Reading Test score for girls compared with boys. This model can be fitted by including an
interaction term in the FIXED subcommand
/FIXED = standlrt sex standlrt*sex schlsex
e2ij ~ N(0, σ²e2)
e6ij ~ N(0, σ²e6)

with zero covariance between the two, where x6ij is an indicator variable taking the value 1
for boys and 0 for girls, i.e. x6ij = 1 − x2ij.
Such a model can be fitted in other standard software packages such as MLwiN (Rasbash et
al., 2000) and SAS (SAS Institute Inc., 1999).
The dataset used to illustrate the 3-level normal response model is that previously analysed by
Fielding et al. (2003) and refers to A/AS level examinations. The results for a Chemistry
exam, in terms of the point score (0, 2, 4, … 10) are given for 31,022 individuals from 2280
schools in 131 Local Education Authorities in England. The covariate we use for student i
from school j in Education Authority k is an intake score (average GCSE score, x1ijk ). The
model we consider is a variance components model:
yijk = β0 + β1 x1ijk + v0k + u0jk + e0ijk
v0k ~ N(0, σ²v0)
u0jk ~ N(0, σ²u0)     (5)
e0ijk ~ N(0, σ²e0)
The addition of further levels to a model can be accomplished by using multiple RANDOM
subcommands:
MIXED chem WITH gcse
/FIXED = gcse
/RANDOM = INTERCEPT | SUBJECT(lea) COVTYPE(ID)
/RANDOM = INTERCEPT | SUBJECT(school*lea) COVTYPE(ID)
/METHOD = ML
/PRINT = COVB G HISTORY SOLUTION TESTCOV
/SAVE = FIXPRED (fix_pred) PRED (tot_pred) RESID (resid) .
EXECUTE .
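Though not computed in the text, the variance components from a model such as (5) translate directly into variance partition coefficients; a minimal sketch (Python for illustration) using the ML estimates reported in this review's tables:

```python
# ML variance components estimates for model (5), as reported in this review
sigma2_v0 = 0.0136   # between-LEA variance
sigma2_u0 = 1.1662   # between-school, within-LEA variance
sigma2_e0 = 5.1541   # between-student (level 1) variance

total = sigma2_v0 + sigma2_u0 + sigma2_e0
vpc_lea = sigma2_v0 / total      # proportion of variance between LEAs
vpc_school = sigma2_u0 / total   # proportion between schools within LEAs
```

Roughly 18% of the variation in Chemistry point scores lies between schools, and very little (about 0.2%) between Local Education Authorities.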
Repeated measures models can be fitted using the MIXED command to balanced or
unbalanced datasets, with or without time variant covariates. The REPEATED subcommand is
used to specify the observations and the hierarchy (in addition to the RANDOM subcommand)
as well as the covariance structure.
The data used to illustrate the repeated measures models are those analysed by Goldstein et al.
(1994) and refer to the heights of 26 boys aged 11 to 13 measured on 9 occasions
approximately 3 months apart. The data are balanced i.e. there are exactly 9 measurements
made on each boy with no missing values.
We can first consider fitting a quartic polynomial to the height (cm), yij, of the jth boy
measured on occasion i (at age tij, centred around 12 years), with the coefficients of the
intercept, linear and quadratic terms varying at random across the boys:
We can now extend model (6) to fit first order autoregressive AR(1) errors at level 1.
Goldstein et al. (1994) found evidence of seasonal effects on height; to allow for this we
include the sine and cosine of a seasonal (calendar year) time component Tij in the fixed part
of the model. Our model then becomes:
yij = β0 + Σ(h=1 to 4) βh tij^h + β5 sin(Tij) + β6 cos(Tij) + u0j + u1j tij + u2j tij^2 + e0ij
Fitting this model requires, in addition to the declaration of the additional fixed parameters, a
REPEATED subcommand specifying the measurement occasion i (coded 1 to 9 for each
subject and called ‘occasion’) and an AR(1) covariance matrix at level 1.
COMPUTE sinT = sin(season) .
COMPUTE cosT = cos(season) .
MIXED height WITH age sinT cosT
/CRITERIA = MXSTEP(25)
/FIXED = age age*age age*age*age age*age*age*age sinT cosT
/RANDOM = INTERCEPT age age*age | SUBJECT(id) COVTYPE(UN)
/REPEATED = occasion | SUBJECT(id) COVTYPE(AR1)
/METHOD = ML
/PRINT = COVB G HISTORY SOLUTION TESTCOV
/SAVE = FIXPRED (fix_pred) PRED (tot_pred) RESID (resid) .
EXECUTE .
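For reference, COVTYPE(AR1) imposes a level 1 covariance matrix whose (i, j) element is σ²ρ^|i−j|; a small illustrative sketch (Python, not SPSS output):

```python
def ar1_cov(n, sigma2, rho):
    """n x n covariance matrix implied by an AR(1) structure:
    element (i, j) is sigma2 * rho**|i - j|."""
    return [[sigma2 * rho ** abs(i - j) for j in range(n)]
            for i in range(n)]
```

For the 9 measurement occasions here, ar1_cov(9, sigma2, rho) gives the 9 x 9 level 1 covariance matrix, with correlations decaying geometrically as occasions become further apart.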
If the data are unbalanced – if the number of observations differs between individuals –
SPSS is still able to fit the above repeated measures models.
4. Model specifications – more complex models
where sid and pid are the identifying codes for secondary school and primary school
respectively. Note that the order of the RANDOM subcommands is not important. (The order of
the subcommands is not important for any of these models fitted using the MIXED command.)
For this model SPSS v12.0 would not let me save predicted values or residuals.
A model with the gender effect varying randomly across primary school:
where x2ijk = 1 − x1ijk is a dummy variable taking the value 1 for boys and 0 for girls, can be
fitted by changing the relevant RANDOM subcommand:
/RANDOM = sex | SUBJECT(pid) COVTYPE(UN)
Since sex has been declared as a factor on the MIXED command, the above RANDOM
subcommand will allow both levels of the factor (boys and girls) to vary at random across
primary schools and so the INTERCEPT should not be included. The covariance type needs to be
specified as unstructured (UN) to estimate the covariance term σu01 as described in section
3.1.
4.2 Multivariate normal response models
The multiple response model can be thought of as an extension of a repeated measures model
– instead of a number of measurements of the same item made at different points in time we
have measurements of a number of different items. We can use fixed effects to control for
differences in the means between responses and random effects to model the different
variances, but the real advantage of fitting multivariate response models is the ability to
model the correlation between responses.
The data used to illustrate this model are examination scores for 1,905 16-year-old students
from 73 schools in England, where results are available both for a written paper yWjk and for
coursework yCjk for pupil j in school k. The fitted model is then
where x1jk is a dummy variable taking the value 1 for boys, 0 for girls. The trick to fitting
such a model is to stack the responses into a single column yijk and introduce an indicator
variable Iijk taking the value 1 for the written exam (i = W), 0 otherwise. Then we can write
If the data have multiple responses per record they need to be transformed from a format such
as:
school  student  sex  writnexm  courswk
     2       37    2        33     47.2
     2       38    1        64        .
Note that the data may contain missing values; in the above example there is no score for
coursework for student 38 in school 2. As mentioned in section 3.3 SPSS can analyse
unbalanced repeated measures data, and since we use the REPEATED subcommand here this
extends to missing multivariate responses.
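The restructuring itself can be sketched as follows (an illustrative Python helper with hypothetical field names, not part of SPSS; within SPSS the VARSTOCASES restructuring command could be used instead):

```python
def stack_responses(records):
    """Stack written and coursework scores into one response column y,
    adding an indicator (1 = written paper, 0 = coursework) and
    dropping missing scores. Field names are hypothetical."""
    stacked = []
    for rec in records:
        for index, key in ((1, "writnexm"), (0, "courswk")):
            if rec[key] is not None:        # drop missing responses
                stacked.append({"school": rec["school"],
                                "student": rec["student"],
                                "index": index,
                                "y": rec[key]})
    return stacked

rows = [{"school": 2, "student": 37, "sex": 2, "writnexm": 33, "courswk": 47.2},
        {"school": 2, "student": 38, "sex": 1, "writnexm": 64, "courswk": None}]
long_rows = stack_responses(rows)   # 3 stacked records: the missing
                                    # coursework score is simply omitted
```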
To fit the model in SPSS we declare a 3-level model, with schools at the highest level and
repeated measures on students at levels 1 and 2. There is, however, no modelling of the
variance at the student level (there is no RANDOM subcommand with the keyword
SUBJECT(student)).
MIXED y BY index sex
/FIXED = index index*sex | NOINT
/RANDOM = index | SUBJECT(school) COVTYPE(UN)
/REPEATED = index | SUBJECT(school*student) COVTYPE(UN)
/METHOD = ML
/PRINT = COVB G HISTORY SOLUTION TESTCOV
/SAVE = FIXPRED (fix_pred) PRED (tot_pred) RESID (resid) .
EXECUTE .
SPSS 12.0 does not provide the higher level residuals directly, presumably because these are
seen as some kind of nuisance terms. However, in many cases there will be substantive
interest in the residuals and the SAVE subcommand can be used to save the fixed part
predictions (FIXPRED) as well as the predictions from the fixed and random parts of the
model (PRED). For the general linear multilevel model, written in matrix form,
Y = Xβ + Zγ + e (12)
where γ is a stacked vector of all residuals (slopes and intercepts) at all levels and Z is the
corresponding design matrix, the predictions from the fixed part correspond to
Ŷ* = Xβ̂     (13)

and the predictions from the fixed and random parts are given by

Ŷ = Xβ̂ + Zγ̂     (14)

It follows from (13) and (14) that the predicted residuals γ̂ are given by

γ̂ = (ZᵀZ)⁻¹ Zᵀ (Ŷ − Ŷ*)     (15)
For the trivial example of a 2-level variance components model given by (1) or (2), the
following code uses the MATRIX command (and, in particular, the SOLVE function) in SPSS to
obtain estimates of the school-level residuals.
AUTORECODE VARIABLES = school
/INTO l2id .
SORT CASES BY l2id .
* get composite residuals .
COMPUTE comp_res = tot_pred - fix_pred .
* make sure MXLOOP is greater than the number of schools .
SET MXLOOP = 100 .
MATRIX .
GET l2id
/FILE = *
/VARIABLES = l2id .
GET school
/FILE = *
/VARIABLES = school .
GET comp_res
/FILE = *
/VARIABLES = comp_res .
COMPUTE temp_mat = (l2id = 1) .
COMPUTE zmat = {temp_mat} .
LOOP i = 2 TO l2id(NROW(l2id)) .
COMPUTE temp_mat = (l2id = i) .
COMPUTE zmat = {zmat, temp_mat} .
END LOOP .
COMPUTE zTz = T(zmat)*zmat .
COMPUTE zTy = T(zmat)*comp_res .
COMPUTE res_2 = SOLVE(zTz,zTy) .
COMPUTE zTy = T(zmat)*school .
COMPUTE schl_2 = SOLVE(zTz,zTy) .
SAVE {schl_2,res_2}
/OUTFILE = *
/VARIABLES = school res_2_1 .
END MATRIX .
EXECUTE .
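As a cross-check on the MATRIX code above: for a variance components model Z is simply an indicator matrix of school membership, so ZᵀZ is a diagonal matrix of group counts and equation (15) reduces to group means of the composite residuals. A minimal sketch with toy values (assumed, not the exam dataset; Python used purely for illustration):

```python
# Toy composite residuals comp_res = tot_pred - fix_pred (assumed values)
school = [1, 1, 1, 2, 2]
comp_res = [0.2, 0.4, 0.3, -0.1, -0.3]

sums, counts = {}, {}
for s, r in zip(school, comp_res):
    sums[s] = sums.get(s, 0.0) + r    # Z'y: per-school sums
    counts[s] = counts.get(s, 0) + 1  # diagonal of Z'Z: per-school counts

# (Z'Z)^-1 Z'y: one intercept residual per school
residuals = {s: sums[s] / counts[s] for s in sums}
```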
The MATRIX command of this code can be modified to estimate, for example, residuals for the
2-level random slopes model given by (3):
MATRIX .
GET l2id
/FILE = *
/VARIABLES = l2id .
GET school
/FILE = *
/VARIABLES = school .
GET comp_res
/FILE = *
/VARIABLES = comp_res .
GET standlrt
/FILE = *
/VARIABLES = standlrt .
COMPUTE temp_mat = (l2id = 1) .
COMPUTE zmat = {temp_mat} .
LOOP i = 2 TO l2id(NROW(l2id)) .
COMPUTE temp_mat = (l2id = i) .
COMPUTE zmat = {zmat, temp_mat} .
END LOOP .
COMPUTE zTz = T(zmat)*zmat .
COMPUTE zTy = T(zmat)*school .
COMPUTE schl_2 = SOLVE(zTz,zTy) .
LOOP i = 1 TO l2id(NROW(l2id)) .
COMPUTE temp_mat = (l2id = i)&*standlrt .
COMPUTE zmat = {zmat, temp_mat} .
END LOOP .
COMPUTE zTz = T(zmat)*zmat .
COMPUTE zTy = T(zmat)*comp_res .
COMPUTE res_2 = SOLVE(zTz,zTy) .
COMPUTE temp_mat = IDENT(l2id(NROW(l2id)),2*l2id(NROW(l2id))) .
COMPUTE res_2_1 = temp_mat*res_2 .
COMPUTE temp_mat = {0*IDENT(l2id(NROW(l2id))),
IDENT(l2id(NROW(l2id)))} .
COMPUTE res_2_2 = temp_mat*res_2 .
SAVE {schl_2,res_2_1,res_2_2}
/OUTFILE = *
/VARIABLES = school res_2_1 res_2_2 .
END MATRIX .
EXECUTE .
Of course the residuals are of little use in themselves without their corresponding standard
errors. The dispersion matrix of the residuals can be estimated using formulae given by e.g.
Goldstein (2003).
6. Conclusions
Multilevel modelling in SPSS has definite limitations; in particular, the restriction to normal
response models means that several classes of model cannot be fitted. These include such
common models as multilevel logistic regression and multilevel Poisson regression models
and, through these, developments such as multilevel categorical responses or multilevel Cox
regression.
The major limitation to the normal response models is the restricted ability to specify the
covariance matrix at the lowest level. In particular, this means that SPSS is not able to fit
models with heterogeneous variances as in equation (4). This may seem like a minor
limitation but in effect it means that the user must hypothesise that the lowest level variance is
the same for all subgroups (and that it is independent of the value of any covariate) without
being able to test these hypotheses. This becomes particularly important when testing for
random slopes at higher levels, since the inability to model the variance at the lowest level
may affect the outcome of such tests. Moreover, although it is possible to obtain higher level
residuals from the models that SPSS fits, it is unduly cumbersome at present.
However, there are some strengths to the SPSS MIXED command. The alteration or addition
of RANDOM subcommands makes it easy to change the random specification of a model (at the
higher levels) or to add further levels, and it is as straightforward to fit cross-classified models
as it is to fit hierarchical models. The REPEATED subcommand provides a wide range of
correlation functions, and the use of these makes it simple to fit normal multivariate response
models. There is no requirement for datasets to be balanced or complete, the information
criteria provided are fairly comprehensive and the algorithm used is fast. The MIXED
command is also available through the Windows interface (as opposed to through the use of
the command syntax); a description of the use of the MIXED command through the Windows
interface can be found elsewhere (Landau and Everitt, 2004).
The widespread use of SPSS means that, if it is to be taken seriously as a statistical package, it
is important that multilevel data analysis should be available. The MIXED command already
covers most of the multilevel analyses that most users will require for (normally distributed)
continuous outcomes. However, in many disciplines continuous measures will be the
exception rather than the rule and SPSS will remain limited until it introduces commands to
fit generalised discrete response multilevel models. Put it like this: unless all of your
(multilevel) data have normally distributed responses you are going to need to use a package
other than SPSS to analyse them. In which case, is it worth taking the time to learn how to use
the MIXED command in SPSS when you are also going to have to learn to use other software?
References
Akaike H. (1973) Information theory and an extension of the maximum likelihood principle.
In: Petrov BN, Csaki F, eds. 2nd International Symposium on Information Theory. Budapest:
Akademiai Kiado, 267-281.
Bozdogan H. (1987) Model selection and Akaike's information criterion (AIC): the general
theory and its analytical extensions. Psychometrika 52, 345-370.
Fai A. H. T., Cornelius P. L. (1996) Approximate F-tests of multiple degree of freedom
hypotheses in generalized linear least squares analyses of unbalanced split-plot experiments.
Journal of Statistical Computation and Simulation 54, 363-378.
Fielding A., Yang M., Goldstein H. (2003) Multilevel ordinal models for examination grades.
Statistical Modelling 3.
Goldstein H. (2003) Multilevel statistical models. London: Arnold.
Goldstein H., Healy M. J. R., Rasbash J. (1994) Multilevel time series models with
applications to repeated measures data. Statistics in Medicine 13, 1643-1655.
Hurvich C. M., Tsai C.-L. (1989) Regression and time series model selection in small
samples. Biometrika 76, 297-307.
Landau S., Everitt B. S. (2004) A Handbook of Statistical Analyses using SPSS. Boca Raton:
Chapman & Hall.
Rasbash J., Browne W., Goldstein H., Yang M., Plewis I., Healy M., Woodhouse G., Draper
D., Langford I., Lewis T. (2000) A User's Guide to MLwiN. London: Multilevel Models
Project, Institute of Education, University of London.
SAS Institute Inc. (1999) SAS/STAT User's Guide, Version 7-1. Cary, NC: SAS Institute Inc.
Schwarz G. (1978) Estimating the dimension of a model. Annals of Statistics 6, 461-464.
Wolfinger R., Tobias R., Sall J. (1994) Computing Gaussian likelihoods and their derivatives
for general linear mixed models. SIAM Journal on Scientific Computing 15, 1294-1310.
Table 1: parameter estimates for 2-level models

                        ML                        REML
Model  Parameter  Estimate      SE  Time    Estimate      SE  Time
(1)    β0          -0.0091  0.0763    1s     -0.0094  0.0779    1s
       β1           0.5600  0.0124            0.5598  0.0125
       β2           0.1672  0.0341            0.1674  0.0341
       β3          -0.1590  0.0873           -0.1590  0.0894
       β4           0.0187  0.1232            0.0187  0.1261
       σ²u0         0.0811  0.0165            0.0858  0.0178
Table 2: parameter estimates for 3-level models

                        ML                        REML
Model  Parameter  Estimate      SE  Time    Estimate      SE  Time
(5)    β0          -9.9067  0.1089  112s     -9.9063  0.1090  120s
       β1           2.4726  0.0169            2.4726  0.0169
       σ²v0         0.0136  0.0135            0.0148  0.0139
       σ²u0         1.1662  0.0555            1.1662  0.0555
       σ²e0         5.1541  0.0431            5.1542  0.0555
       -2 log like 141685.6                 141728.0
Table 3: parameter estimates for repeated measures models

                        ML                        REML
Model  Parameter  Estimate      SE  Time    Estimate      SE  Time
(6)    β0         148.9753  1.5396    1s    148.9753  1.5701    1s
       β1           6.1659  0.3510            6.1658  0.3574
       β2           1.0906  0.3490            1.0905  0.3525
       β3           0.4678  0.1625            0.4678  0.1635
       β4          -0.3404  0.3002           -0.3404  0.3021
       σ²u0        61.5486 17.0858           64.0120 18.1211
                        ML                        REML
Model  Parameter  Estimate      SE  Time    Estimate      SE  Time
(8)    β0           5.2574  0.1807   12s      5.2552  0.1843   12s
       β1           0.4986  0.0982            0.4985  0.0983
       σ²v0         0.3457  0.1609            0.3697  0.1733
       σ²u0         1.1043  0.2023            1.1096  0.2036
       σ²e0         8.0534  0.1990            8.0551  0.1991
       -2 log like 17123.5                  17127.9
       β0           5.2605  0.1783 1m30s      5.2580  0.1821 1m37s
       β1           0.4939  0.1072            0.4940  0.1078
       σ²v0         0.3409  0.1602            0.3652  0.1727
                        ML                        REML
Model  Parameter  Estimate      SE  Time    Estimate      SE  Time
(8)    βW0         49.0084  0.9318    3s     49.0096  0.9380    3s
       βC0         69.6230  1.1719           69.6211  1.1795
       βW1         -2.4930  0.5603           -2.4913  0.5605
       βC1          6.7567  0.6706            6.7574  0.6709
       σ²Wv        46.5648  9.3531           47.3794  9.5623
       σWCv        24.9371  8.9916           25.3663  9.1903
       σ²Cv        75.1936 14.6729           76.4476 14.9919
       σ²Wu       124.4335  4.3363          124.5024  4.3400
       σWCu        72.7489  4.1521           72.7841  4.1555
       σ²Cu       180.0697  6.2499          180.1729  6.2553
       -2 log like 26799.5                  26794.6