ALIZE/LIA_RAL
This reference system is based on the ALIZE software platform and uses tools developed by LIA. This paper describes the system and explains how to use it in practice to run an automatic speaker recognition experiment. The paper is divided into four parts. The first part presents how to install the system. The second part describes the databases used and the protocol that has been defined. The third part explains the method. Finally, the fourth part presents the scripts used to launch a NIST experiment and the results obtained in authentication.
I. Installation
1. Software Requirements
The full system has been tested under Linux (Red Hat 3.3.3-7). For the compilation
step, we have used g++ 3.3.3.
2. Downloading
The source code of each package can be downloaded from:
ALIZE:
http://www.lia.univ-avignon.fr/heberges/ALIZE/
LIA_RAL:
http://www.lia.univ-avignon.fr/heberges/ALIZE/LIA_RAL/index.html
SPro:
http://www.irisa.fr/metiss/guig/spro/download.html
SPHERE:
http://www.nist.gov/speech/tools/sphere_26atarZ.htm
Download these archives into your home directory /home. Then decompress the four tar files as follows:
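Assuming the archive names below (they may differ slightly depending on the versions actually downloaded), the extraction boils down to:
>cd /home
>tar xf alize-1.04.tar.gz
>tar xf LIA_RAL_v1.2.tar.gz
>tar xf spro-4.0.tar.gz
>tar xf sphere_2.6a.tar.Z
On older tar versions, the compression flag may have to be given explicitly (z for gzip archives, Z for the compress .Z archive).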
At the end of this step, you have four folders (and the tar files) in your home directory: alize-1.04, LIA_RAL_v1.2, spro-4.0 and nist.
3. Compilation
a. ALIZE
In a bash shell, follow these steps to generate the ALIZE library:
>cd alize-1.04
>touch ltmain.sh
>./autogen.sh
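If autogen.sh stops after generating and running configure, the library is then built with the usual make step (this extra step is an assumption; it is not needed if autogen.sh already runs make):
>make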
b. LIA_RAL
To install the LIA_RAL package, follow these steps:
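A plausible sketch of these steps, assuming the package ships a top-level Makefile that finds the ALIZE library in the sibling alize-1.04 directory:
>cd LIA_RAL_v1.2
>make all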
At the end of these steps, the executables and the library libliatools.a are respectively
located at /home/alize-1.04/bin/ and /home/alize-1.04/lib/.
c. SPHERE
The installation of the SPHERE library can be accomplished with the following commands:
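Assuming the archive unpacked into /home/nist, a typical sequence is (the install script path below is the one used by the SPHERE 2.6a distribution and should be treated as an assumption):
>cd nist
>sh src/scripts/install.sh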
Before installation, the install script needs information about the computing environment on which the package is being compiled. The script has definitions for several operating system environments on which the package may be installed; the list is given below:
1: Sun OS-4.1.[12]
2: Sun Solaris
3: Next OS
4: Dec OSF/1 (with gcc)
5: Dec OSF/1 (with cc)
6: SGI IRIX: cc -ansi
7: HP Unix (with gcc)
8: HP Unix (with cc)
9: IBM AIX
10: Custom
As our computing environment (Linux) is not listed, we choose option 10 (custom). The custom installation then prompts for the necessary information.
If the compilation step is successful, you will find the two libraries libsp.a and libutil.a
in /home/nist/lib/.
N.B: The success of the compilation step depends on the OS environment and the compiler version. If you encounter a compilation error, you will have to change the source code. For example, in our case, we had to change the code of exit.c (we added the line #include <errno.h> and removed the declarations of the variables errno and sys_errlist).
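Concretely, the patch amounts to something like the following (the exact declaration forms in exit.c may differ):
In exit.c:
  removed:  extern int errno;
  removed:  extern char *sys_errlist[];
  added:    #include <errno.h>
With a modern glibc, errno and sys_errlist are already provided by the system headers, so the local declarations clash with them.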
d. SPro
To install the SPro toolkit, follow the traditional steps:
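These are the usual autoconf steps; the --prefix value below is an assumption, chosen so that the files end up where the next sentence expects them:
>cd spro-4.0
>./configure --prefix=/home/spro-4.0
>make
>make install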
At the end of these steps, the executables and the library libspro.a are respectively
located at /home/spro-4.0/bin/ and /home/spro-4.0/lib/.
II. Databases and protocol
1. Databases
Experiments were based on NIST speaker verification data from the 2003 to 2005 evaluations.
2. Protocol
The following data sets have been defined:
- a world set: a subset of NIST03 and NIST04 data used to train the gender-dependent background models.
- a development set: a subset of NIST04 used to train the fusion and the decision threshold.
- a normalisation set: a subset of NIST04 pseudo-impostors (77 male, 113 female) used for T-normalization.
- an evaluation set: the primary task data (1conv-1conv) of the NIST05 speaker recognition evaluation campaign.
N.B: In the following, we suppose that the audio data have been installed under /home/sph.
III. Method
1. Features extraction
b. Practice
Feature extraction is carried out by the filter-bank based cepstral analysis tool sfbcep from SPro. The executable usage is given in the following:
The sfbcep executable takes a waveform (.sph) as input and outputs filter-bank derived cepstral features (.tmp.prm). The options that we have used are described below (see tab. (2)):
Option  Description
-F      Specify the input waveform file format
-l      Specify the analysis frame length
-d      Specify the interval between two consecutive frames
-w      Specify the waveform weighting window
-p      Specify the number of output cepstral coefficients
-e      Add log-energy to the feature vector
-D      Add first order derivatives to the feature vector
-k      Specify the pre-emphasis coefficient
-i      Specify the lower frequency bound
-u      Specify the upper frequency bound
Tab. 2 Description of the main options used for feature extraction
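Putting these options together, an invocation has the following shape (the numeric values here are illustrative placeholders, not necessarily the exact settings of the reference system):
>sfbcep -F sphere -l 20 -d 10 -w hamming -p 16 -e -D -k 0.95 -i 300 -u 3400 input.sph input.tmp.prm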
At the end of this step, the features are vectors of size 34 with the following layout:
- Linear Frequency Cepstral Coefficients (LFCC) (size 16)
- log-energy (size 1)
- first order derivatives of the LFCC (size 16)
- delta energy (size 1)
A full description of the SPro tools is available at:
http://www.irisa.fr/metiss/guig/spro/spro-4.0.1/spro-4.0.1.pdf
a. Theory
Once all the feature vectors have been computed, a very important step is to decide which vectors are useful and which are not. One way of looking at the problem is to separate the frames corresponding to speech portions of the signal from those corresponding to silence. To select the frames corresponding to speech portions, we use an approach based on the log-energy distribution of each speech segment:
1. The energy coefficients are first normalized using zero mean and unit variance normalization.
2. The normalized energies are then used to train a three-component GMM.
3. Finally, the N% most energetic frames are selected through the GMM, while the lowest-energy frames are discarded.
b. Practice
To normalize the energy coefficients, the executable NormFeat.exe from LIA_SpkDet has been used. The executable usage is given in the following:
NormFeat.exe aims at processing input speech features by applying any kind of normalization. inputfile is the name of the feature file to work with; it can be a list with a .lst extension. The main options that we have used are described below (see tab. (3)):
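A schematic invocation for energy normalization (the option names below follow later LIA_RAL releases and are assumptions here; tab. (3) gives the exact ones used):
>NormFeat.exe --config normEnergy.cfg --inputFeatureFilename data.lst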
A sample of an output label file is given in figure (2). The first and second columns contain the beginning and the end (in seconds) of the selected frames. The third column contains the label assigned to the selected frames.
a. Theory
The parameter vectors are normalized to fit a zero mean and unit variance distribution. The mean and variance used for the normalization are computed file by file over all the frames kept after the frame removal processing.
b. Practice
The executable used for feature normalization is the same as the one used for energy normalization. The only difference is that NormFeat.exe now loads the label files, so that only the frames with the highest energy are used for the feature normalization. We mention below the options which have changed in comparison with those used for energy normalization (see tab. (5)):
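Compared with the energy normalization call, essentially only the label selection changes (again, the option names are assumptions based on later LIA_RAL releases):
>NormFeat.exe --config normFeat.cfg --inputFeatureFilename data.lst --labelSelectedFrames speech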
a. Theory
The goal of this step is to create a model which represents the entire space of possible alternatives to the hypothesized speaker. The selected approach pools speech from several speakers and trains a single model, called a world model (or universal background model). This Gaussian Mixture Model is estimated using the EM (Expectation-Maximization) algorithm; 2048 mixture components are used. The log-energy coefficient is discarded when building the model.
b. Practice
To train the world model, the executable TrainWorld.exe from LIA_SpkDet has been used. The executable usage is given in the following:
TrainWorld.exe aims at learning a GMM via the EM algorithm. inputfile is the name of the normalized features to work with; it can be a list with a .lst extension. worldfile is the name of the resulting model file. The main options that we have used are described below (see tab. (6)):
Tab. 6 Description of the main options used for world model training
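A schematic invocation (the option names are assumptions based on later LIA_RAL releases; tab. (6) gives the exact ones used):
>TrainWorld.exe --config world.cfg --inputFeatureFilename world.lst --outputWorldFilename world.gmm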
a. Theory
In order to train the target models, we use the following approach: the speaker model is derived by adapting the parameters of the background model to the speaker's training speech using maximum a posteriori (MAP) estimation. 2048 mixture components are used for the GMM. As previously, the log-energy coefficient is discarded when building the model.
b. Practice
To train the target models, the executable TrainTarget.exe from LIA_SpkDet has been used. The executable usage is given in the following:
TrainTarget.exe aims at training target speakers by adapting a world model via a MAP method. worldfile is the name of the world model used for adaptation. inputfile is the input list which associates a model name with each feature filename. A sample of such a list is given in figure (3).
Fig. 3 A sample of input list (model name in the first column and input feature filenames in the second)
The main options that we have used are described below (see tab. (7)):
Tab. 7 Description of the main options used for target model training
The MAP method used is the MAPOccDep approach: each parameter to estimate is computed as a linear combination of its value in the world model and the value obtained by an EM algorithm on the adaptation data. This method takes into account the occupation n of each Gaussian, i.e. the a posteriori probability mass accumulated by that component over the adaptation frames. The weights of this combination are controlled by the option MAPRegFactor r: the EM estimate on the client data is weighted by n/(n+r) and the world model value by 1 - n/(n+r) = r/(n+r).
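A schematic TrainTarget invocation (the option names are assumptions based on later LIA_RAL releases; tab. (7) gives the exact ones used):
>TrainTarget.exe --config target.cfg --targetIdList targets.lst --inputWorldFilename world.gmm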
5. Testing
a. Theory
The goal of this step is simply to compute a score for each test feature vector given a target model and a background model (the score is an estimate of the probability that the test segment contains speech from the target speaker). To compute this score, we consider only the top ten Gaussian components of the models (N.B: for this step, we have also discarded the log-energy coefficient of each test feature vector).
b. Practice
To calculate the scores, the executable ComputeTest.exe from LIA_SpkDet has been used. The executable usage is given in the following:
ComputeTest.exe aims at giving a score related to a test segment and a target model. inputfile is the file which contains the list of experiments. A sample of such a file is given in figure (4). worldfile is the name of the background model. outputfile is the resulting score file.
xaiv 2017 2172 2237 2345 2441 2514 2768 3426 3487 3497 3583 3757 3937 3944 3990 4015 4373 4641 4702 4865 4910 5009 5010 5012 5081
xazu 2017 2258 2434 2489 2500 2930 2943 2989 3097 3213 3487 3494 3540 3545 3998 4326 4538 4679 4702 4769 4849 4910 5009 5322
xaiu 2017 2073 2077 2345 2495 2591 2728 3071 3142 3259 3540 3811 3885 3927 3990 4101 4702 4849 4870 4937 4948 5010 5012 5081 5284
...
Fig. 4 A sample of input file (test file in the first column, models to test against in the others)
The main options that we have used are described below (see tab. (8)):
A sample of an output score file is given in figure (5). The columns contain, in order: the gender of the target speaker, the target model identifier, a binary number (0 or 1) which indicates the sign of the score, the test file identifier and the score.
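A schematic invocation (the option names are assumptions based on later LIA_RAL releases; tab. (8) gives the exact ones used):
>ComputeTest.exe --config test.cfg --ndxFilename trials.ndx --inputWorldFilename world.gmm --outputFilename scores.res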
6. Score normalization
a. Theory
The last step in speaker verification is decision making. This process consists in comparing the likelihood resulting from the comparison between the claimed speaker model and the incoming speech signal with a decision threshold. If the likelihood is higher than the threshold, the claimed speaker is accepted, otherwise rejected.
Tuning the decision threshold is very troublesome in speaker verification. Indeed, its reliability cannot be ensured while the system is running. This uncertainty is mainly due to the score variability between trials, a fact well known in the domain. Score normalization has therefore been introduced explicitly to cope with score variability and to make speaker-independent decision threshold tuning easier. The normalization technique used here is T-norm, which uses impostor scores to normalize client scores: each client score s is normalized as (s - mu)/sigma, where mu and sigma are the mean and standard deviation of the scores obtained by scoring the test segment against a set of impostor models.
b. Practice
The score normalization is performed by the executable ComputeNorm.exe. A sample of a normalized NIST score file is given in figure (6). The columns are the same as those described above (but the information in the third column is not used here: the binary value is always set to zero).
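A schematic T-norm invocation (the option names are assumptions based on later LIA_RAL releases):
>ComputeNorm.exe --config norm.cfg --normType tnorm --tnormNistFile impostors.res --testNistFile scores.res --outputFileBaseName scores_tnorm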
A full description of all LIA_SpkDet tools and options is available at the following address:
7. Decision making
a. Theory
The goal of this step is simply to take the accept or reject decision according to the normalized score and a decision threshold. If the score is higher than the threshold, the claimed speaker is accepted, otherwise rejected.
b. Practice
To make the decision, the executable Scoring.exe from LIA_Utils has been used. The executable usage is given in the following:
Scoring.exe aims at taking the accept or reject decision according to the confidence score found in the score file. inputfile is the normalized NIST score file (produced by ComputeNorm.exe). outputfile is the output decision file. The main options that we have used are described below (see tab. (10)):
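A schematic invocation (the option names are illustrative assumptions; tab. (10) gives the exact ones used):
>Scoring.exe --config scoring.cfg --inputFile scores_tnorm.res --outputFile decisions.res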
A sample of an output decision file is given in figure (7). This file contains eight fields, separated by white space, in the following order:
A full description of all LIA_Utils tools and options is available at the following address:
IV. Experiment
1. Scripts
A set of scripts is available to easily run an experiment on the common protocol with ALIZE/LIA_RAL. It requires a minimal amount of setup (specifying where the sound samples are stored) and runs a full NIST evaluation with a standard state-of-the-art GMM-based system. These scripts are described below:
1. For NIST 2005 data, the stereo files have to be split into two files, adding an extension .A or .B to the original name (xxxx.sph is split into xxxx.A.sph and xxxx.B.sph). To do this, we use the script extract1channel.pl as follows:
>./extract1channel.pl inputlist inputfile ext exe output
inputlist is the list containing the audio files which have to be split. inputfile is the location where the sound samples are stored. ext is the extension of the audio files. exe is the location of the executable w_edit (for us, /home/nist/bin). output is the location where the two files (.A and .B) are saved (a concrete example invocation is given after this list).
2. Decompress the tar file uws_alz_scripts.tar.gz in your home directory.
3. Then, edit the script setupdirs.sh to specify where the NIST sphere files are located (only lines 11, 12, 13 and 14 have to be changed). After that, run setupdirs.sh.
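As announced in step 1, a concrete invocation of extract1channel.pl might look as follows, with the /home/sph location assumed earlier and purely illustrative list and output names:
>./extract1channel.pl nist05.lst /home/sph sph /home/nist/bin /home/sph/split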
2. Experimental results
We sum up here the main options used to run a speaker recognition experiment:
3. Comparative results
To test the influence of the number of Gaussians, we have repeated the previous experiment using only 512 Gaussians to create the different models. The corresponding DET curve is displayed in fig. (9) (for comparison, the DET curve obtained using 2048 Gaussians is also displayed, in blue, on the same figure). Using 2048 mixture components instead of 512 for speaker modelling slightly improves the performance. Given that GMMs with 2048 components require more computation, a model with 512 components is a good compromise.
Conclusion
The ALIZE/LIA_RAL reference system has been described and tested on the NIST database. This reference system is made of replaceable modules, which is of great interest: from now on, a researcher can show the improvement brought by a new module (feature extraction, silence removal, score normalization, ...) simply by replacing the corresponding module in the reference system.
Acknowledgments
I express my sincere gratitude to Asmaa El Hannani for her availability, collaboration and help.