VOICE CONVERSION MATLAB TOOLBOX

David Sündermann

Siemens Corporate Technology, Munich, Germany


Technical University of Catalonia, Barcelona, Spain
[email protected]

ABSTRACT

This paper documents function and properties of the Voice Conversion Matlab Toolbox (version 2007-02-18). It contains information on system requirements and an overview of the modules included, shows examples of applying the toolbox to voice conversion based on vocal tract length normalization (VTLN) and linear transformation in a step-by-step manner, and gives details about the parameter settings.

Index Terms— voice conversion, Matlab, toolbox

1. INTRODUCTION

Voice conversion is the transformation of the voice characteristics of a source towards a target voice [1].

The Voice Conversion Matlab Toolbox described in this paper is a collection of Matlab scripts that enables the user to rapidly design, modify, and test voice conversion algorithms based on

• VTLN in the frequency domain [2] or time domain [3],
• linear transformation [4].

It contains several signal processing tools, which are the fundamentals of the aforementioned voice conversion techniques, such as

• a pitch tracker,
• a voicing detector,
• a dynamic programming module,
• dynamic time and frequency warping algorithms,
• a program for monitoring and manually modifying pitch marks,
• feature conversion tools for linear predictive coefficients, line spectral frequencies, (mel frequency) cepstral coefficients, residual coefficients according to Ye and Young [5] and Sündermann [6], and sinusoidal coefficients,
• interpolation tools for linear, cubic spline, mel-scale, and two-dimensional interpolation,
• Gaussian mixture modeling,
• hashing,
• k-means clustering,
• least squares fitting,
• linear transformation,
• VTLN,
• objective error measures such as line spectral distortion, signal-to-noise ratio, residual distance [6], and Mahalanobis distance,
• pitch-synchronous overlap and add,
• residual prediction,
• unit selection,
• vector smoothing based on normal distributions (also two-dimensional),
• text-independent speech alignment.

This paper does not intend to discuss all tools contained in this toolbox (there are more than 200), since they are documented in the header of the respective source files, which can be easily accessed by typing

help tool

at the Matlab prompt, where tool has to be replaced by the respective tool's name such as vtln.

2. SYSTEM REQUIREMENTS

The toolbox was tested on Matlab version 6.5 (Release 13) on a Windows XP platform and on Matlab version 6.1 (Release 12.1) on a Linux platform¹. Both systems had the signal
processing as well as the statistics toolbox installed, but the author tried to avoid using them to make the toolbox applicable to a wider range of systems.

¹ Most of the delivered scripts are in DOS-formatted ASCII code. If used under Linux, they should be converted using the dos2unix command to avoid data format errors.
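If it is unclear whether the signal processing and statistics toolboxes are present on a given installation, this can be checked directly at the Matlab prompt. The following is a minimal sketch; ver is a standard Matlab command, and the license feature strings Signal_Toolbox and Statistics_Toolbox are the customary keys but should be treated as assumptions that may vary between Matlab releases.

% list the installed toolboxes and their versions
ver

% programmatic check; the feature names are assumptions
license('test', 'Signal_Toolbox')
license('test', 'Statistics_Toolbox')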
In addition to the Matlab functionality of the toolbox, there are a few algorithms based on other environments or programs such as

• Perl version 5.8,
• the author's Language Resources Generation Toolbox [7],
• Praat [8] version 4.4,
• Cygwin version 2.510.

3. VTLN-BASED VOICE CONVERSION

3.1. Pitch Tracking and Voicing Detection

The speech processing of this toolbox is mainly based on the pitch-synchronous paradigm, which requires the speech data to be pitch-tracked. The pitch tracker described in [9] comes along with the toolbox; however, according to the author's experience and a recent pitch tracker evaluation [10], it achieves a rather poor performance leading to a considerable number of artifacts. The Praat program produces much more reliable pitch marks and is used as the standard pitch tracker in the following.

In the directory of the Matlab toolbox (in the following referred to as toolbox directory), where we expect all Matlab and other commands to be executed (unless otherwise specified), is a folder data that contains a number of example speech files: f.01.wav to f.10.wav of a female voice and m.01.wav to m.10.wav of a male voice. We move to this folder by typing at the Cygwin prompt

cd data

Now, the list of all wav files in the current folder is taken, a Praat script is generated and finally executed. Here, scripts from the Language Resources Generation Toolbox and Praat's command line version praatcon² are used³.

ls *.wav | replaceStr.pl ^ 'bash ...
..\/wav2pp.bash ' > wav2pp.full.bash
bash wav2pp.full.bash > wav2pp.praat
praatcon wav2pp.praat

² Under Linux, Praat's command line version is praat.
³ The character sequence '...' indicates that the line is continued.

The results are pp files, i.e. PointProcess files, the Praat format for pitch marks, one for each available wav file.

So far, the pitch marks are only determined in voiced signal parts. Since they are required for the following speech processing, (pseudo) pitch marks are also generated for the unvoiced signal portions by applying a pitch mark interpolation according to [11]. In doing so, the voicing information is to be preserved, as it will play an important role for the voice conversion. The internal pitch mark format is that according to [9], which stores a vector of the pitch period lengths in numbers of samples, i.e. integer numbers, into a pm file. Furthermore, the respective voicing information is stored into a parallel v (voicing) file. At first, the headers of the pp files are removed, resulting in files of the type pit

ls *.pp | replaceStr.pl '.pp$' > stem.txt

cat stem.txt | replaceStr.pl ^ ...
'tail +7 ' | paste.pl '.pp > ' '' | ...
paste.pl stem.txt '' | ...
paste.pl '.pit' '' > pp2pit.bash
bash pp2pit.bash

Now, a Matlab batch processing script is generated that transforms the pit files to the pm type and also produces the v files⁴

cat stem.txt | ...
replaceStr.pl ^ "pit2pm('data\/" | ...
paste.pl ".pit', 'data/" '' | ...
paste.pl stem.txt '' | ...
paste.pl ".wav', 'data/" '' | ...
paste.pl stem.txt '' | ...
paste.pl ".pm', 'data/" '' | ...
paste.pl stem.txt '' | ...
paste.pl ".v');" '' > ../tmp_pit2pm.m

⁴ Usually, all working files, data, temporary files, etc. should be located in the data directory. Exceptions are the temporary files generated by several Matlab tools (cf. getRandFile and clrTemp). All other files that for some reason have to be (temporarily) located in the toolbox directory should be preceded by the prefix tmp to make them easily detectable.

Now, we type at the Matlab prompt (in the toolbox directory)

tmp_pit2pm

producing the desired pitch mark and voicing files.
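For a single file, the same conversion can also be invoked by hand; the following call simply mirrors one line of the generated batch script (the file names are examples from the data folder):

% convert one pit file to the internal pm format and
% derive the corresponding voicing (v) file
pit2pm('data/f.01.pit', 'data/f.01.wav', ...
'data/f.01.pm', 'data/f.01.v');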
3.2. Estimating the Warping Factor and Fundamental Frequency Ratio

According to the definition of frequency domain as well as time domain VTLN as given by [12], above all, two parameters are required for the VTLN-based voice conversion: the warping factor alpha and the fundamental frequency ratio rho, cf. [13]. This section shows how these parameters are estimated on the training data.

To begin with, so-called file list files (cf. fileListFile2file) are generated that contain the names of the wav, pm, and v files by typing at the Cygwin prompt (in the toolbox directory)
ls data/f*.wav > data/f.wav.l
ls data/f*.pm > data/f.pm.l
ls data/f*.v > data/f.v.l
ls data/m*.wav > data/m.wav.l
ls data/m*.pm > data/m.pm.l
ls data/m*.v > data/m.v.l

Now, the aforementioned parameters are trained on the training data, for instance for female-to-male conversion, by typing at the Matlab prompt

[alpha, rho] = getWarpingFactor( ...
'data/f.wav.l', 'data/f.pm.l', ...
'data/m.wav.l', 'data/m.pm.l')
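For the opposite direction (male-to-female), the source and target file lists are simply swapped; a sketch using the same function:

% male-to-female training: source and target file lists swapped
[alpha, rho] = getWarpingFactor( ...
'data/m.wav.l', 'data/m.pm.l', ...
'data/f.wav.l', 'data/f.pm.l')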
3.3. Frequency Domain and Time Domain VTLN

Now, the objective is to convert a source speech file towards the target voice by means of the estimated warping parameters. This is done by means of the Matlab command

vtlnBasedVc('data/f.01.wav', ...
'data/f.01.pm', 'data/f.01.v', ...
'data/outputVtln.wav', alpha, rho, 'fdvtln')

storing the converted speech as data/outputVtln.wav. At this point, and also in the following, we apply the trained parameters to files that also served as training material. This is only for reasons of convenience. If the reader has doubts about the applicability of the trained parameters to unseen data, he is invited to split the data into training and test sets himself.

The above command line is the frequency domain VTLN version. For the time domain version, replace the last argument by 'tdvtln', as shown below.
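Spelled out, and with an arbitrary output file name chosen for this example, the time domain call reads:

% time domain VTLN version of the above command
vtlnBasedVc('data/f.01.wav', ...
'data/f.01.pm', 'data/f.01.v', ...
'data/outputTdvtln.wav', alpha, rho, 'tdvtln')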
It shall be mentioned that the parameter estimation discussed in Section 3.2 via getWarpingFactor is based on frequency domain VTLN. The particular warping function can be defined as an additional argument (consult help getWarpingFactor); the default is the symmetric piece-wise function according to [14]. Since the warping factor strongly depends on the warping function used, this function must also be specified as an additional argument to vtlnBasedVc if different from the default. When time domain VTLN is applied (which usually produces a superior speech quality), there is no choice of warping function, since it corresponds to the symmetric piece-wise function by definition; for a proof, see [3].

In a recent study [12], the author showed that by altering the mentioned parameters, alpha and rho, he is able to produce several (at least five) well-distinguishable voices based on one source voice. The user of this toolbox is thus invited to manually play with these parameters to achieve the desired effect. It shall also be mentioned that, in terms of the objective criterion used for training (cf. [2]), the automatically determined parameter settings might be optimal; perceptually, however, this may not be the case.
ferent from the default. When time domain VTLN is applied
Again, all necessary settings have to be applied to the con-
(which usually produces a superior speech quality), there is
fig file.
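As a small illustration of the config file mechanism, a single entry can be read at the Matlab prompt; the call below reuses the getParameter syntax that also appears in Section 4.1, with vcParamFile being one of the parameters of Table 1:

% read one entry of the config file, here the name of the file
% that will hold the trained conversion parameters
getParameter('vcParamFile', 'data/config.txt')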
4.1. Parameter Training

The training is performed by means of the command

trainVc('data/config.txt')

All trained parameters are stored to the vcParamFile, cf. Table 1. One can always access the contents of this file by using file2x, e.g. by typing

vcParam = file2x(getParameter( ...
'vcParamFile', 'data/config.txt'))

In particular, the field vcParam.general is of interest, since it contains all parameters specified in the config file and other ones derived in the training.
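Assuming vcParam has been loaded as above and that vcParam.general is an ordinary Matlab struct (an assumption, not documented here), its contents can be inspected quickly:

% list the parameter names collected in vcParam.general ...
fieldnames(vcParam.general)

% ... and display their values
disp(vcParam.general)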
4.2. Conversion

The conversion is performed using the command

vc('data/config.txt')

Again, all necessary settings have to be applied to the config file.
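In summary, a complete linear-transformation experiment with the delivered example data thus boils down to two calls, with all behavior controlled by the entries of data/config.txt:

% train the conversion parameters specified in the config file ...
trainVc('data/config.txt')

% ... and convert the input file(s) given there
vc('data/config.txt')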
5. COPYRIGHT

All scripts of the Voice Conversion Matlab Toolbox were written by the author in the years 2003 to 2007. Table 2 contains the exceptions.
6. REFERENCES

[1] D. Sündermann, "Voice Conversion: State-of-the-Art and Future Work," in Proc. of the DAGA, Munich, Germany, 2005.

[2] D. Sündermann, H. Ney, and H. Höge, "VTLN-Based Cross-Language Voice Conversion," in Proc. of the ASRU, Virgin Islands, USA, 2003.

[3] D. Sündermann, A. Bonafonte, H. Ney, and H. Höge, "Time Domain Vocal Tract Length Normalization," in Proc. of the ISSPIT, Rome, Italy, 2004.

[4] Y. Stylianou, O. Cappé, and E. Moulines, "Statistical Methods for Voice Quality Transformation," in Proc. of the Eurospeech, Madrid, Spain, 1995.

[5] H. Ye and S. Young, "High Quality Voice Morphing," in Proc. of the ICASSP, Montreal, Canada, 2004.

[6] D. Sündermann, A. Bonafonte, H. Ney, and H. Höge, "A Study on Residual Prediction Techniques for Voice Conversion," in Proc. of the ICASSP, Philadelphia, USA, 2005.

[7] D. Sündermann, "A Language Resources Generation Toolbox for Speech Synthesis," in Proc. of the AST, Maribor, Slovenia, 2004.

[8] P. Boersma, "Praat, a System for Doing Phonetics by Computer," Glot International, vol. 5, no. 9/10, 2001.

[9] V. Goncharoff and P. Gries, "An Algorithm for Accurately Marking Pitch Pulses in Speech Signals," in Proc. of the SIP, Las Vegas, USA, 1998.

[10] B. Kotnik, H. Höge, and Z. Kacic, "Evaluation of Pitch Detection Algorithms in Adverse Conditions," in Proc. of the Speech Prosody, Dresden, Germany, 2006.

[11] A. Black and K. Lenzo, Building Synthetic Voices, Carnegie Mellon University, Pittsburgh, USA, 2003.

[12] D. Sündermann, G. Strecha, A. Bonafonte, H. Höge, and H. Ney, "Evaluation of VTLN-Based Voice Conversion for Embedded Speech Synthesis," in Proc. of the Interspeech, Lisbon, Portugal, 2005.

[13] D. Sündermann, Text-Independent Voice Conversion (Draft), Ph.D. thesis, Bundeswehr University Munich, Munich, Germany, 2006.

[14] L. Uebel and P. Woodland, "An Investigation into Vocal Tract Length Normalization," in Proc. of the Eurospeech, Budapest, Hungary, 1999.

[15] A. Kain and M. Macon, "Spectral Voice Conversion for Text-to-Speech Synthesis," in Proc. of the ICASSP, Seattle, USA, 1998.

[16] D. Sündermann, H. Höge, A. Bonafonte, H. Ney, A. Black, and S. Narayanan, "Text-Independent Voice Conversion Based on Unit Selection," in Proc. of the ICASSP, Toulouse, France, 2006.

[17] D. Sündermann, H. Höge, A. Bonafonte, H. Ney, and J. Hirschberg, "Text-Independent Cross-Language Voice Conversion," in Proc. of the Interspeech, Pittsburgh, USA, 2006.

[18] A. Kain, High Resolution Voice Transformation, Ph.D. thesis, Oregon Health and Science University, Portland, USA, 2001.

[19] D. Sündermann, H. Höge, A. Bonafonte, H. Ney, and A. Black, "Residual Prediction Based on Unit Selection," in Proc. of the ASRU, San Juan, Puerto Rico, 2005.

[20] D. Sündermann, H. Höge, and T. Fingscheidt, "Breaking a Paradox: Applying VTLN to Residuals," in Proc. of the ITG, Kiel, Germany, 2006.

[21] J. Chen and A. Gersho, "Real-Time Vector APC Speech Coding at 4800 bps with Adaptive Postfiltering," in Proc. of the ICASSP, Dallas, USA, 1987.
Table 1. Parameters of the config file, grouped by category; each entry gives the parameter name, its description, and an example value.

general
vcParamFile: contains all parameters derived in training (example: data/vc.mat)
vcParamNarrow: whether the memory-consuming parts of vcParamFile are to be removed; this is recommendable if residualPrediction is switched off (example: 0)

training
normPitch: fn in Hz, cf. [13] (example: 100.0)
sourceWavFileListFile, sourcePmFileListFile, targetWavFileListFile, targetPmFileListFile: wav and pm training file list files of source and target (examples: data/f.wav.l, data/f.pm.l, data/m.wav.l, data/m.pm.l)

linear transformation
linearTransformationMode: the type of linear transformation, either according to [4] (stylianou) or to [15] (kain) (example: stylianou)
covarianceType: type of covariance matrices; for the possible types, cf. trainGmm (example: diag)
featNum: dimensionality of features (D according to [13]) (example: 32)
mixNum: number of Gaussian mixture densities (example: 4)

speech alignment; for details cf. [16]
twType: speech alignment type; dtw for dynamic time warping or tw for text-independent alignment based on unit selection (example: dtw)
twDp†: whether dynamic programming (DP) is to be applied or a context-free minimization (i.e., w = 1 in the unit selection formula in [13]), which is much faster (example: 1)
twFeatNum†: dimensionality of features used for unit-selection-based alignment (example: 16)
twAlpha†: w according to [13] (example: 0.3)
twDeltaM†: cf. twUnitSelectionCost (example: 1)
twNBest†: cf. fullDp (example: 3)

parallelization; since text-independent speech alignment can be very time-consuming (see [17]), it should be parallelized
twParallelFileNum: the index of the file to be processed in the training file list file; the resulting frame index sequence is written to twParallelIndexFile, and then the program stops. Once the index files of all parallel computations have been generated, they have to be concatenated in their correct order as given by the file list files (e.g. by means of the Cygwin command cat) and written to an index file containing the alignment information of the whole corpus. Then, the config file entry twParallelIndexFile has to be adapted accordingly, and the parameter twParallelFileNum has to be set to -1, which means that the alignment is read from this file instead of being computed. If no parallelization is to be carried out, set twParallelFileNum to 0. (example: 0)
twParallelIndexFile: see description of twParallelFileNum (example: data/index.txt)

conversion
lsfSmoothingOrder: the linearly transformed line spectral frequencies (LSFs) are smoothed using this smoothing order; for a justification, see [18] (example: 3.0)
inputWavFile, inputPmFile, inputVoicingFile, outputWavFile: input wav, pm, v and output wav files or file list files (examples: data/f.01.wav, data/f.01.pm, data/f.01.v, data/output.wav)

residual prediction
residualPrediction: whether residual prediction is to be applied (example: 0)
linearTransformation: whether linear transformation is to be applied; if not, VTLN-based voice conversion is performed (example: 1)
nResidual‡: the number of residuals to be considered; a smaller number can significantly accelerate the voice conversion; put Inf to consider all residuals seen in training (example: Inf)
useInputPhase‡: whether the source phase spectra are to be copied to the target (example: 0)
alpha1‡: w1 according to [19] (w2 = 1 − w1, w3 = 0) (example: 0.0)
suendermannSmoothOrder‡: residual smoothing strength, σ0 according to [13] (example: 2.0)

VTLN
alphaVoiced⋆: VTLN warping factor for the residuals of voiced signal portions (referred to as alpha in Section 3.2), cf. [20] (example: 1.6)
alphaUnvoiced: VTLN warping factor for unvoiced signal portions; should be around 1.0 and only be varied if the unvoiced sounds of both involved speakers strongly differ (example: 1.0)

synthesis
unvoicedAmp: amplification factor of unvoiced signal portions; this can be of interest to adapt the gains of voiced and unvoiced signal portions, which are handled separately in the conversion process and also behave differently from speaker to speaker (example: 1.0)
lengthRatio⋆: fundamental frequency ratio (referred to as rho in Section 3.2) (example: 0.6)
perceptual: whether the perceptual filter according to [21] is to be used (example: 1)

⋆ These parameters are estimated in training. Consequently, they should only be explicitly given in the config file if the estimated values are to be ignored and replaced.
† Only applicable if twType is tw.
‡ Only applicable if residualPrediction is 1.
Table 2. Copyright notes; each entry gives the file, its author, a reference, and the date.

consist.m, dist2.m: I. Nabney, http://www.ncrg.aston.ac.uk/netlab/, 1996–2001
distchck.m: The MathWorks, Inc., http://www.mathworks.com/, 1993–2002
dp.m⋆, dtw.m⋆: D. Ellis, [email protected], 2003-03-15
dumpmemmex.dll: The MathWorks, Inc., http://www.mathworks.com/, 1993–2006
find_pmarks.m⋆: W. Goncharoff, [email protected], 1997-12-06
frq2mel.m: M. Brookes, [email protected], 1998-04-03
getMfccfilterParams.m⋆: Interval Research Corp., http://www.interval.com/, 1998
gmmactiv.m⋆, gmm.m, gmmactive.m⋆, gmmem.m⋆, gmminit.m⋆, gmmpost.m, gmmprob.m: I. Nabney, http://www.ncrg.aston.ac.uk/netlab/, 1996–2001
hashadd.m, hashinit.m, hashval.m: D. Mochihashi, [email protected], 2004
inpoly.m: D. Doolin, [email protected], 1999-03-26
kmeans2.m⋆: I. Nabney, http://www.ncrg.aston.ac.uk/netlab/, 1996–2001
mel2frq.m: M. Brookes, [email protected], 1998-04-03
mfcc2spec.m⋆: Interval Research Corp., http://www.interval.com/, 1998
mvnpdf.m, normpdf.m: The MathWorks, Inc., http://www.mathworks.com/, 1993–2002
simmx.m: D. Ellis, [email protected], 2003-03-15
spec2mfcc.m⋆: Interval Research Corp., http://www.interval.com/, 1998

⋆ modified by the author of this paper
