Journal of Statistical Software: FactoMineR: An R Package for Multivariate Analysis
Abstract
In this article, we present FactoMineR, an R package dedicated to multivariate data
analysis. The main features of this package are the possibility to take into account different
types of variables (quantitative or categorical), different types of structure on the data (a
partition on the variables, a hierarchy on the variables, a partition on the individuals) and
finally supplementary information (supplementary individuals and variables). Moreover,
the dimensions resulting from the different factorial analyses can be automatically described
by quantitative and/or categorical variables. Numerous graphics are also available, with
various options. Finally, a graphical user interface is implemented within the Rcmdr
environment in order to provide a user-friendly package.
1. Introduction
In this paper we present the FactoMineR package (Husson, Lê, and Mazet 2007), a package
for multivariate data analysis with R (R Development Core Team 2006). One of the main reasons for developing
this package is that we felt a need for a multivariate approach closer to our practice via:
• the use of a more geometrical point of view than the one usually adopted by most
Anglo-American practitioners.
Another reason is that it obviously represents a convenient way to implement new methodologies
(or methodologies dedicated to the advanced practitioner), such as the ones presented
below, which take into account different structures on the data.
Finally, we wanted to provide a user-friendly package oriented towards the practitioner,
which is what led us to implement our package within the Rcmdr environment. Needless to say,
the practitioner has the possibility to use the package both ways, i.e. with or without
the GUI.
We will first present the most commonly used factorial analyses implemented in the package,
then some methodologies dedicated to data endowed with some structure, setting out our
practice at the same time; lastly, we will show an example of the GUI.
• Correspondence Analysis (CA) when individuals are described by two categorical variables,
which leads to a contingency table;
Let X be the data table of interest. In order to reduce the dimensionality, X is transformed to
a new coordinate system by an orthogonal linear transformation. Let Fs (resp. Gs) denote
the vector of the coordinates of the rows (resp. columns) on the axis of rank s. These two
vectors are related by the so-called "transition formulae". In the case of PCA, they can be
written:
F_s(i) = \frac{1}{\sqrt{\lambda_s}} \sum_k x_{ik}\, m_k\, G_s(k),   (1)

G_s(k) = \frac{1}{\sqrt{\lambda_s}} \sum_i x_{ik}\, p_i\, F_s(i),   (2)
where Fs(i) denotes the coordinate of the individual i on the axis s, Gs(k) the coordinate
of the variable k on the axis s, λs the eigenvalue associated with the axis s, mk the weight
associated with the variable k, pi the weight associated with the individual i, and xik the general term
of the data table (row i, column k).
The transition formulae lay the foundation of our point of view and consequently set the
graphical outputs at the roots of our practice. From these formulae it is crucial to analyse the
scatter plots of the individuals and of the variables conjointly: an individual is on the same
side as the variables for which it takes high values, and on the opposite side of the variables
for which it takes low values.
The coordinate of a supplementary individual i' on the axis of rank s is obtained directly from the first transition formula:

F_s(i') = \frac{1}{\sqrt{\lambda_s}} \sum_k x_{i'k}\, m_k\, G_s(k)   (3)
In the same manner, it is also easy to calculate the coordinate of a supplementary variable
when the latter is quantitative; in this case the supplementary variable lies in the scatter plot
of the variables. When the variable is categorical, its modalities are represented by way of
a "mean individual" per modality. For each modality, the values associated with this "mean
individual" are the means of each variable over the individuals endowed with this modality;
in this case the supplementary variable lies in the scatter plot of the individuals.
Notice that the supplementary information does not intervene in any way in the calculation of
the vectors Fs and Gs, but represents a real support when interpreting the axes, as illustrated
further.
The dimensions of a factorial analysis can be described by the variables at hand (quantitative
and/or qualitative). These variables may or may not have participated in the construction of the factorial
axes (they can be active or supplementary).
For a quantitative variable, we calculate the correlation coefficient between the variable and
the coordinates of the individuals on the axis (Fs(i)); we only use the data concerning the
active individuals. The correlation coefficients are calculated for all the variables, dimension
by dimension. Then, we can test the significance of each correlation coefficient and sort the
variables from the most correlated to the least correlated. Each dimension is then described
by the variables (by default, we only keep significant variables). These aids are particularly
useful for the interpretation of the dimensions when there are many variables.
For a qualitative variable, we perform a one-way analysis of variance with the coordinates
of the individuals on the axis explained by the qualitative variable. Then, for each category
of the qualitative variable, a Student t-test is used to compare the average of the category
with the general average (using the constraint \sum_i \alpha_i = 0, we test \alpha_i = 0). The p-value
associated with this test is then transformed to a Normal quantile in order to take into account the
information that the mean of the category is less or greater than 0 (we use the sign of the
difference between the mean of the category and the overall mean). This transformation is
named the v-test by Lebart, Morineau, and Piron (1997).
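As an illustration of this transformation, here is a minimal sketch in R; the numerical values are hypothetical and the snippet is not taken from the package's code:

## Hedged sketch: the two-sided p-value of the test is turned into a signed
## Normal quantile (v-test). Values are illustrative only.
p.value  <- 0.03   # p-value of the test comparing the category mean to the overall mean
cat.mean <- 1.2    # mean coordinate of the category on the axis (hypothetical)
overall  <- 0      # overall mean of the coordinates (0 by construction in PCA)
v.test   <- sign(cat.mean - overall) * qnorm(1 - p.value / 2)
v.test             # |v.test| > 1.96 roughly corresponds to a 5% significance level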
2.5. Examples
> data(decathlon)
> res.pca <- PCA(decathlon, quanti.sup=11:12, quali.sup = 13)
By default, the PCA function gives two graphs, one for the variables and one for the indi-
viduals. Figure 1 shows the variables graph: active variables (variables used to perform the
PCA) are colored in black and supplementary quantitative variables are colored in blue.
The individuals can be colored according to a qualitative variable in the individual graph. To
do so, the following code is used:
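The call itself is not reproduced in this extract; assuming the plot method for PCA objects with its choix and habillage arguments, it was presumably of the form:

> plot(res.pca, choix = "ind", habillage = 13)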
The habillage = 13 indicates that individuals are colored according to the 13th variable.
Thus, the athletes are colored according to the athletics meeting (Fig. 2). The athletes who
participated in the Olympic Games are colored in red and the athletes who participated in
the Decastar are colored in black.
Figure 1: Variables factor map (Decathlon data): the active variables are in black, the supplementary quantitative variables in blue; Dimension 1 (32.72%), Dimension 2 (17.37%).
The percentage of variability explained by each dimension is given: 32.72% for the first axis
and 17.37% for the second one.
We can draw a bar plot with the eigenvalues (Fig. 3) with the following code:
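The command is missing from this extract; a plausible sketch, assuming the eigenvalues are stored in the first column of res.pca$eig (as listed in Table 1):

> barplot(res.pca$eig[, 1], main = "Eigenvalues", names.arg = rownames(res.pca$eig))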
This graph helps to detect the number of dimensions that are interesting for the interpretation. The
third and fourth dimensions may be interesting, so we can plot the graph for these two dimensions.
For the variables (Fig. 4), we will use the code:
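The call is not reproduced in this extract; judging from the parameters described just below, it was presumably:

> plot(res.pca, choix = "var", axes = c(3, 4), lim.cos2.var = 0)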
The parameter choix = "var" indicates that we plot the graph of the variables, the parameter
axes = c(3,4) indicates that the graph is drawn for dimensions 3 and 4, and the parameter
lim.cos2.var = 0 indicates that all the variables are drawn (more precisely, all the variables
having a quality of projection greater than 0; this option is useful to keep only the
variables that are well projected).
The results are provided as a list of objects, which are described by the print function:
> print(res.pca)
Results (Table 1) are given for the individuals, the active variables, the quantitative and
qualitative supplementary variables.
Figure 2: Individuals factor map (Decathlon data): individuals are colored according to the athletics meeting (Decastar in black, Olympic Games in red); Dimension 1 (32.72%), Dimension 2 (17.37%).
Figure 3: Bar plot of the eigenvalues (Dim 1 to Dim 10).
Figure 4: Variables factor map (Decathlon data) for dimensions 3 and 4; Dimension 3 (14.05%), Dimension 4 (10.57%).
As mentioned above, we can describe each principal component using the dimdesc function:
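The call itself is missing from this extract; given the proba = 0.2 threshold mentioned below, it presumably reads:

> dimdesc(res.pca, proba = 0.2)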
Table 2 gives the description of the first dimension of the PCA performed on the Decathlon data.
The variables are kept if the p-value is less than 0.20 (proba = 0.2). The variable which
best describes the first dimension is the Points variable (it was a supplementary variable);
then comes the X100m variable, which is negatively correlated with the dimension (the
individuals who have a high coordinate on the first axis have a low X100m time). The first
dimension is then described by the qualitative variable Competition. The Olympic Games
category has a coordinate significantly greater than 0, showing that the athletes of this competition
have coordinates greater than 0 on the first axis. Since the variable Points is highly
correlated with this axis (the correlation is positive), the athletes of this competition made
better performances.
Table 1: objects returned by the PCA function.
name description
1 "$eig" "eigenvalues"
2 "$var" "results for the variables"
3 "$var$coord" "coordinates of the variables"
4 "$var$cor" "correlations variables - dimensions"
5 "$var$cos2" "cos2 for the variables"
6 "$var$contrib" "contributions of the variables"
7 "$ind" "results for the individuals"
8 "$ind$coord" "coord. for the individuals"
9 "$ind$cos2" "cos2 for the individuals"
10 "$ind$contrib" "contributions of the individuals"
11 "$quanti.sup" "results for the supplementary quantitative variables"
12 "$quanti.sup$coord" "coord. of the supplementary quantitative variables"
13 "$quanti.sup$cor" "correlations supp. quantitative variables - dimensions"
14 "$quali.sup" "results for the supplementary qualitative variables"
15 "$quali.sup$coord" "coord. of the supplementary categories"
16 "$quali.sup$vtest" "v-test of the supplementary categories"
17 "$call" "summary statistics"
18 "$call$centre" "mean for the variables"
19 "$call$ecart.type" "standard error for the variables"
20 "$call$row.w" "weights for the individuals"
21 "$call$col.w" "weights for the variables"
> data(children)
> res.ca <- CA (children, col.sup = 6:8, row.sup = 15:18)
Columns 6 to 8 are supplementary (they concern the age groups of the respondents),
and rows 15 to 18 are also supplementary. By default, the CA function gives one
graphical output (Fig. 5).
If we just want to visualize the active elements (Fig. 6), we use the following code:
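The call is not reproduced in this extract; a sketch, assuming the invisible argument of the plot method for CA objects, which hides the listed elements:

> plot(res.ca, invisible = c("row.sup", "col.sup"))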
Table 2: description of the first dimension of the PCA (Decathlon data), as returned by dimdesc.
$Dim.1
$Dim.1$quanti
Dim.1
Points 0.9561543
Long.jump 0.7418997
Shot.put 0.6225026
High.jump 0.5719453
Discus 0.5524665
Rank -0.6705104
X400m -0.6796099
X110m.hurdle -0.7462453
X100m -0.7747198
$Dim.1$quali
Dim.1
OlympicG 1.429753
Decastar -1.429753
Figure 5: Correspondence Analysis factor map (children data): the active rows are colored in blue, the active columns in red, the supplementary rows in dark blue, the supplementary columns in dark red; Dim 1 (57.04%), Dim 2 (21.13%).
Figure 6: Correspondence Analysis factor map with only the active elements; Dim 1 (57.04%), Dim 2 (21.13%).
Figure 7: Data structure for MFA: the individuals are described by J sets of variables, the set j containing Kj variables; xik denotes the value taken by the individual i for the variable k.
In MFA, each variable of the group j is weighted by the inverse of the first eigenvalue
of the PCA on the group j. Thus, the maximum axial inertia of each group of variables is
equal to 1. The influence of the groups of variables in the global analysis is balanced and the
structure of each group is respected. This weighting has a simple direct interpretation. It
also has invaluable indirect properties; in particular it allows one to consider MFA as a particular
generalized canonical analysis in the sense of Carroll (1968).
With each group of variables one can associate a cloud of individuals: the cloud
considered in the PCA of the group j alone (after the above-mentioned standardization
by the first eigenvalue). MFA provides a superimposed representation of these clouds, in
the manner of a Procrustes analysis. This representation is usually introduced in two ways: as a
projection of a cloud of points and as a canonical variable. Here, a third way is chosen, based
on a very useful property.
When taking into account the structure of the variables in J groups and using the weighting
of MFA (m_k = 1/\lambda_1^j if the variable k belongs to the group j), the transition formula (1) becomes:

F_s(i) = \frac{1}{\sqrt{\lambda_s}} \sum_{j=1}^{J} \frac{1}{\lambda_1^j} \sum_{k=1}^{K_j} x_{ik}\, G_s(k)
This equation retains the general interpretation of PCA, but restricted to the variables of
the group j only. The partial individual i^j is on the side of the variables of the group j for which
it takes high values, and on the opposite side of the variables of the group j for which it takes
low values. This property expresses a direct relation between the positions of the partial
individuals and the representation of the variables. It is so natural that many users of MFA
use it ... without knowing it. It has no equivalent in Procrustes analyses.
On the graphs, it is convenient to see the point i at the exact barycentre of the points {i^j, j =
1, ..., J}. In practice, the coordinates F_s(i^j) are multiplied by J. Thus, without modifying
the relative positions of the partial points, the required property is obtained:

F_s(i) = \frac{1}{J} \sum_{j=1}^{J} F_s(i^j)
It may also be interesting to represent the groups of variables as points in a scatter plot in order to
visualize their common structure. With each group of variables j, one can associate the scalar
product matrix between individuals. This matrix, of dimension I × I (I is the number of
individuals), is denoted W_j and can be regarded as a point in the Euclidean space of dimension
I², denoted R^{I²}. In this space, the cosine of the angle formed at the origin by the two points
W_j and W_l is the RV coefficient between the two groups j and l. The representation of the
groups provided by MFA is obtained by projection upon vectors of R^{I²} induced by the MFA
factors: one factor may be considered as a set consisting of a single variable; it is then possible
to associate this set with a scalar product matrix and thus with a vector of R^{I²}.
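As an aside, a minimal sketch (illustrative data, not from the package) of the RV coefficient between two groups of centred variables, computed as the cosine between their scalar product matrices W_j:

## Hedged sketch: RV coefficient between two groups of variables, seen as the
## cosine of the angle between their scalar product matrices in R^(I^2).
set.seed(1)
X1 <- scale(matrix(rnorm(50), nrow = 10))  # group 1: 10 individuals, 5 centred variables
X2 <- scale(matrix(rnorm(30), nrow = 10))  # group 2: 10 individuals, 3 centred variables
W1 <- X1 %*% t(X1)                         # scalar product matrix of group 1 (10 x 10)
W2 <- X2 %*% t(X2)                         # scalar product matrix of group 2 (10 x 10)
rv <- sum(W1 * W2) / sqrt(sum(W1 * W1) * sum(W2 * W2))  # <W1,W2> / (||W1|| ||W2||)
rv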
MFA makes it possible to analyse several groups of variables which can be quantitative and/or qualitative,
whereas GPA allows one to analyse only groups of quantitative variables.
As in PCA, the practitioner has the possibility to add supplementary information (individuals,
quantitative and qualitative variables); in the case of MFA, the user can also add supplementary
groups of variables, for instance.
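By way of illustration, a sketch of a call to the MFA function on the wine data set shipped with the package; the group sizes, types and names below follow the package's usual example and should be checked against the documentation:

> data(wine)
> res.mfa <- MFA(wine, group = c(2, 5, 3, 10, 9, 2), type = c("n", rep("s", 5)),
+                name.group = c("origin", "odour", "visual", "odour.after.shaking",
+                               "taste", "overall"),
+                num.group.sup = c(1, 6))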
Figure 8: Example of a hierarchy on the variables, with two levels: the first level contains L groups, each group l contains Jl subgroups, and each subgroup contains Kj variables; xik denotes the value taken by the individual i for the variable k.
Taking such a structure on the variables into account in a global analysis involves
balancing the groups of variables within every node of the hierarchy.
Hierarchical Multiple Factor Analysis (HMFA, LeDien and Pagès 2003a and LeDien and
Pagès 2003b) is an extension of MFA to the case where variables are structured according to
a hierarchy.
In HMFA, a succession of MFA is applied to each node of the hierarchy in order to balance
the groups of variables within every node, going through the hierarchical tree from the
bottom up. Not only does HMFA provide a graphical display of the individuals according to the
whole set of (weighted) variables, it also displays the individuals as described by each
group of variables: as mentioned above, an individual described by just one group
of variables is called a "partial individual". An interesting feature of the analysis is that the
partial representation of each individual at each node is at the centre of gravity of the partial
representations of this individual associated with the various subsets of variables nested within
this node.
Moreover, HMFA provides a representation of the nodes involved in the hierarchy; the prin-
ciple of this representation is similar to that of MFA.
Description of categories
For this first method, we consider two cases depending on the type of the variable describing
the groups, whether it is numerical or categorical.
If a variable is quantitative, the mean of one group for this variable is calculated and compared
to the overall mean. More precisely, Lebart et al. (1997) proposed to calculate the following
quantity:
u = \frac{\bar{x}_q - \bar{x}}{\sqrt{\dfrac{s^2}{n_q}\,\dfrac{N - n_q}{N - 1}}}
where \bar{x}_q denotes the mean of the variable for the group q, \bar{x} the overall mean, n_q the number of individuals
in the group q, N the total number of individuals, and s the standard deviation over all the individuals.
The quantity u can then be compared to the appropriate quantile of the Normal distribution.
If this quantity is more extreme than the quantile, then the variable
is deemed interesting to describe the group of individuals. The interesting variables are then sorted
from the most to the least interesting.
If a variable is qualitative, then the frequency N_{qj}, corresponding to the number of individuals
of the group q who take the category j (for the qualitative variable), is distributed as a
hypergeometric distribution with parameters N, n_j and n_q/N (where n_j denotes the number
of individuals that have taken the category j). A p-value is then calculated by category (and
by qualitative variable). The categories are sorted from the highest to the lowest p-value.
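These descriptions are provided in the package by the catdes function; a minimal sketch on the decathlon data, assuming that the 13th column is the categorical variable defining the groups (as in the PCA example above):

> data(decathlon)
> catdes(decathlon, num.var = 13)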
4. Graphical User Interface
The graphical interface of the package, embedded in the Rcmdr environment, can be installed by running the following line of code:
> source("http://factominer.free.fr/install-facto.r")
This interface is user-friendly and allows one to draw graphs and to save results in a file very
easily, as explained below.
As an example, we show the interface for the PCA function (Fig. 10).
The main window allows one to choose the active variables (by default all the variables are active
and the PCA can be performed). Several buttons allow one to choose the supplementary quantitative
or qualitative variables, the supplementary individuals, the outputs to be displayed or
the graphs to be plotted.
The graphical options concern the two main graphs: the scatter plots of the individuals and
of the variables. Concerning the individuals graph, it is possible to represent the active
individuals, the supplementary individuals and the categories of the supplementary categorical
variables; it is also possible to choose which of these elements to draw. The individuals
can be colored according to a categorical variable (the categorical variables available are
proposed in a list).
Concerning the variables graph, active and/or illustrative variables can be drawn. If there are
many variables, one can represent only the variables that are well projected on the plane
(by default the variables are drawn if their quality of representation is greater than 10%).
Several outputs are also available (Fig. 12). The dialog box gives access to all the results
of the PCA function, e.g. the eigenvalues and the results for the individuals and the variables
(active or supplementary). One can also get an automatic description of the dimensions of
the factorial analysis. All these results can be written in a file (a *.csv file which can be opened
with Excel).
Figure 11: Window with the graphical options available for the PCA function
Figure 12: Window with the outputs available for the PCA function
5. Conclusion
The main features of the R package FactoMineR have been explained and illustrated in this
paper, using the data set decathlon that is available in the package.
The website http://factominer.free.fr/ gives some examples for the different methods available
in the package; you can also find our latest references related to the methods developed in
our team at the following address http://agrocampus-rennes.fr/math/.
References
Hotelling H (1936). “Relations between two sets of variables.” Biometrika, 28, 321–377.
Husson F, Lê S, Mazet J (2007). FactoMineR: Factor Analysis and Data Mining with R. R package version 1.04, URL http://factominer.free.fr, http://www.agrocampus-rennes.fr/math/.
Lê S, Pagès J (2007). “DMFA: Dual Multiple Factor Analysis.” 12th International Conference
on Applied Stochastic Models and Data Analysis.
LeDien S, Pagès J (2003b). “Hierarchical Multiple Factor Analysis: application to the com-
parison of sensory profiles.” Food Quality and Preference, 14, 397–403.
R Development Core Team (2006). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org/.
Affiliation:
Sébastien Lê
Agrocampus Rennes
UMR CNRS 6625
65 rue de Saint-Brieuc
35042 Rennes Cedex
E-mail: [email protected]
URL: http://www.agrocampus-rennes.fr/math/le