Efficient Neural Speech Synthesis For Low-Resource Languages Through Multilingual Modeling


Marcel de Korte, Jaebok Kim, Esther Klabbers

ReadSpeaker
Huis ter Heide, the Netherlands
{marcel.korte,jaebok.kim,esther.judd}@readspeaker.com

arXiv:2008.09659v1 [eess.AS] 20 Aug 2020

Abstract

Recent advances in neural TTS have led to models that can produce high-quality synthetic speech. However, these models typically require large amounts of training data, which can make it costly to produce a new voice with the desired quality. Although multi-speaker modeling can reduce the data requirements necessary for a new voice, this approach is usually not viable for many low-resource languages for which abundant multi-speaker data is not available. In this paper, we therefore investigated to what extent multilingual multi-speaker modeling can be an alternative to monolingual multi-speaker modeling, and explored how data from foreign languages may best be combined with low-resource language data. We found that multilingual modeling can increase the naturalness of low-resource language speech, showed that multilingual models can produce speech with a naturalness comparable to monolingual multi-speaker models, and saw that the target language naturalness was affected by the strategy used to add foreign language data.

Index Terms: neural TTS, sequence-to-sequence models, multilingual synthesis, multi-speaker models, data reduction

1. Introduction

Over the past few years, developments in sequence-to-sequence (S2S) neural text-to-speech (TTS) research have led to synthetic speech that sounds almost indistinguishable from human speech (e.g. [1, 2, 3]). However, large amounts of high-quality recordings are typically required from a professional voice talent to train models of such quality, which can make them prohibitively expensive to produce. To counter this issue, investigations into how S2S models can facilitate multi-speaker data have become a popular topic of research [4, 5, 6]. A study by [7], for example, showed that multi-speaker models can perform as well as or even better than single-speaker models when large amounts of target speaker data are not available, and that single-speaker models only perform better when substantial amounts of data are used. Their research also showed that the amount of data necessary for an additional speaker can be as little as 1250 or 2500 sentences without significantly reducing naturalness. With regards to parametric synthesis, [8] investigated the effect of several multi-speaker modeling strategies for class-imbalanced data. Their research found that for limited amounts of speech, multi-speaker modeling and oversampling could improve speech naturalness compared to single speaker models, while undersampling was found to generally have a harmful effect. They also showed that ensemble methods can further improve naturalness, but this strategy comes with a considerable computational cost that is usually not feasible for S2S modeling.

Although the above research shows that multi-speaker modeling can be an effective strategy to reduce data requirements, it is not a suitable solution for many languages for which large quantities of high-quality multi-speaker data are not available. Multilingual multi-speaker synthesis aims to address this issue by training a multilingual model on the data of multiple languages. Among the first to propose a neural approach to multilingual modeling was [9]. Instead of modeling languages separately, they modeled language variation through cluster adaptive training, where a mean tower as well as language basis towers were trained. They found that multilingual modeling did not harm naturalness for high-resource languages, while low-resource languages benefited from multilingual modeling. Another study by [10] scaled up the number of unseen low-resource languages to twelve, and similarly found that multilingual models tend to outperform single speaker models.

More recently, multilingual modeling was also adopted in S2S architectures [11, 12, 13, 14, 15, 16], although mostly for the purposes of code-mixing and cross-lingual synthesis. Language information was typically represented either with a language embedding [12, 15] or with a separate encoder for each language [11], while [13] applied both approaches to code-mixing and accent conversion. With regards to multilingual modeling, [12] showed that multilingual models can attain a naturalness and speaker similarity comparable to that of a single speaker model for high-resource target languages, while research from [16] obtained promising results with a cross-lingual transfer learning approach.

While research into S2S multilingual modeling is clearly vibrant, there appears to be little systematic research into how S2S multilingual models could be used to increase speech naturalness for low-resource languages. To fill this void, this paper investigated to what extent results found in S2S monolingual multi-speaker modeling are transferable to multilingual multi-speaker modeling, and whether it is possible to attain higher naturalness on low-resource languages with multilingual models than with single speaker models. Because multilingual modeling can benefit from the inclusion of large amounts of non-target language data, we also experimented with several data addition strategies and evaluated to what extent these strategies are effective at improving naturalness for low-resource languages. As this research primarily addresses the viability of different approaches for low-resource languages, our focus is not so much on maximizing naturalness but rather on gaining a better understanding of how different strategies work and would potentially scale up using larger amounts of data.

The rest of this paper is organized as follows. In Section 2, we describe the architecture used to conduct our experiments. In Section 3, we describe the experimental design and give details about training and evaluation. In Section 4, we provide the experimental results. Finally, in Section 5, we discuss conclusions and directions for future research.
2. System architecture

2.1. S2S Acoustic model

The architecture used in this paper for acoustic modeling is based on VoiceLoop [17]. This architecture is appealing for several reasons: it is relatively small, which makes it more suitable for training with smaller amounts of data; it takes relatively little time to train; and it is capable of disentangling speaker information well for seen speakers [18]. To make the architecture suitable for multilingual modeling and to increase its naturalness and robustness, we made several changes to the architecture. First, we incorporated a separate encoder for each language to disentangle language information, similar to [11]. We empirically found that representing language information this way was more effective than using a language embedding. This language encoder is used to convert phonemes from a language-dependent phone set into 256-dimensional embeddings. Second, we added a 3-layer convolutional prenet N_pr in the style of [1] to better model phonetic context. Third, we added a two-layer LSTM recurrency N_r with 512 nodes to the decoder to better retain long-term information. The model was trained to produce 80-dimensional mel-spectrogram features in a way similar to [1]. The resulting architecture is visualized in Figure 1.

Figure 1: Overview of the acoustic model architecture used in this paper. (Diagram: per-language phoneme sequences feed language encoders 1...N; a prenet N_pr, attention network N_a, update network N_u, recurrent network N_r, and output network N_o, together with a speaker embedding, produce the 80-dimensional mel-spectrogram output.)

2.1.1. Class weighted loss

Training a multilingual model on a mixture of high-resource and low-resource languages can lead to class imbalances between languages, which can negatively affect naturalness for minority classes. Although it is possible to address this issue through over- and undersampling as explored in [8], we instead decided to change the weighting of classes through our loss function, following [19]. The purpose of the reweighting is to increase the importance of minority class samples, while reducing the impact of majority classes. The advantage of this approach is that the re-weighting operation has a low computational cost, and is therefore more efficient than oversampling or ensemble-based methods. The class weights were computed as follows:

    α_i = √( c / (c_i × N) )    (1)

where α_i denotes the class weight computed for class i, N refers to the number of classes, c is the total number of samples, and c_i is the number of samples for class i. It was suggested by [19] that the model might become less robust if the variation in the class weights becomes too large. To counter this effect, we applied a square root operation to the weights and found that this led to better naturalness compared to both the unbalanced and the balanced weights. The weights were then normalized to correct for the square root operation, where j is the index that iterates over the number of classes:

    α̂_i = α_i × c / ( Σ_j^N c_j × α_j )    (2)

3. Experimental setup

In this paper, we aimed to answer the following research questions:

1. To what extent does adding data from non-target language speakers increase the naturalness for various amounts of data from a low-resource language?
2. How does replacing monolingual multi-speaker models with multilingual multi-speaker models affect speech naturalness?
3. In what way can additional non-target language data best be added to improve the naturalness of low-resource target language speech?

Two listening experiments were designed to answer these research questions.

3.1. Experimental design

The first experiment was designed to compare the naturalness of single speaker models with that of multilingual models for different amounts of data from the target speaker. For this purpose, we trained three single speaker models using 2000, 4000, and 8000 sentences (referred to as SING-2k, SING-4k, and SING-8k respectively) as target language data. We also trained three multilingual models with the same amounts of data for the target language, and added an additional 16000 sentences from a foreign language speaker (referred to as MULT-2k+16k, MULT-4k+16k, and MULT-8k+16k). We hypothesize that the multilingual multi-speaker models will perform better than the single speaker models when the data set of the target speaker is limited, as we expect that the addition of foreign language data will improve the robustness of the model. We also hypothesize that the effect will become smaller when more target language data is available. We used the data of an American English speaker for the target language, and the data of a Dutch speaker as auxiliary language data. Other language pairs were tried internally to ensure that findings were reproducible. However, for the purposes of the listening test, American English was chosen as the target language to make subjective evaluation more straightforward, while Dutch was chosen to informally evaluate potential adverse effects in the auxiliary language.

The second experiment was designed to compare monolingual multi-speaker models to multilingual multi-speaker models, with similar as well as larger amounts of non-target language data. To evaluate how the models would behave when given similar amounts of data, we created a monolingual model (MONO-2k+16k) with 2000 sentences of our target speaker and 16000 sentences from another American English speaker. This model was compared to the MULT-2k+16k model that was also used in the previous experiment. We hypothesize that because of the effort to separate languages with language encoders, the multilingual model should attain a naturalness close or similar to the naturalness of the monolingual model.
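The class weighting described in Section 2.1.1 can be sketched in a few lines of Python. This is a minimal illustration of equations (1) and (2), not the implementation used in the paper; the 16000/2000 split below is only a hypothetical class imbalance.

```python
import math

def class_weights(counts):
    """Square-root class weights (Eq. 1) with renormalization (Eq. 2).

    counts: number of samples per class (e.g. per language or speaker).
    Returns one weight per class; minority classes get larger weights.
    """
    c = sum(counts)  # total number of samples
    n = len(counts)  # number of classes
    # Eq. (1): the square root dampens the variation between class weights.
    alpha = [math.sqrt(c / (ci * n)) for ci in counts]
    # Eq. (2): renormalize so the weighted sample count sums back to c.
    z = sum(cj * aj for cj, aj in zip(counts, alpha))
    return [ai * c / z for ai in alpha]

# Hypothetical imbalance: 16000 auxiliary vs 2000 target-language sentences.
w = class_weights([16000, 2000])
```

After normalization, the weighted sample count equals the original total c, so the overall loss scale is preserved while minority-class samples contribute more per sample.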
Although there is more overlap in terms of pronunciation and prosody for monolingual speakers than for multilingual speakers, we expect that its effect on the naturalness of the target speaker should be limited because the rest of the model is trained jointly.

Because multilingual modeling makes it more straightforward to include language data from non-target languages, we also used this experiment to analyze whether adding more foreign language data could improve naturalness, and in which way additional data can best be added. We designed three additional models to evaluate this question. The first model, MULT-2k+2x16k, was trained on the same data as the MULT-2k+16k model, but with an additional 16000 sentences from a second Dutch speaker. If naturalness increases as a result of this additional data, this could indicate that it is beneficial for the model to have data from multiple speakers in the data set, for example to better separate speaker-specific prosody and pronunciation patterns. The second model, MULT-2k+16k+16k, was trained on the same data as the MULT-2k+16k model, but with an additional 16000 sentences from a third language, in this case French. If naturalness increases significantly as a result of this strategy, it could be an indication that the model benefits from the ability to distinguish between large amounts of data, for example to better handle differences in prosody or pronunciation. The third model, MULT-2k+16x2k, was again trained on 2000 sentences from the target speaker, and an additional 2000 sentences from each of 16 speakers of 14 languages (13 European languages as well as Arabic). If this approach increases naturalness significantly, this could be an indication that the model benefits from language variety or from a lack of class imbalances.

To train the models, we used a proprietary text-to-speech data set. The speech consisted of recordings from professional voice talents who were asked to read aloud texts in a studio environment. After recording, all speech was processed and downsampled to 22 kHz. Foreign language recordings, for example English recordings for non-English languages, were excluded to ensure that the results of this experiment were not impacted by such sentences.

For both experiments, we used a MUltiple Stimuli with Hidden Reference and Anchor (MUSHRA) test to evaluate naturalness [20]. Speaker similarity was not subjectively evaluated, because we found that the speaker characteristics of the target speaker were not harmed by the addition of data from other speakers. For the test, we recruited 30 participants with a good command of English. For both experiments, we created three separate test sets, each containing 10 stimulus panels with audio from unseen sentences. A participant was assigned one out of three test sets for both of the experiments, hence every participant evaluated 20 panels. This way, the time to complete the test was reduced whilst ensuring that results were not significantly impacted by a particular sentence. Following the MUSHRA guidelines, we included a resynthesized sample on each stimulus panel, both as a reference and as a hidden anchor.

For the design of the listening tests, we used the publicly available WebAudioEvaluationTool [21]. Both the panels as well as the samples within a panel were randomized. In addition, the initial value of each slider in a panel was randomized to nudge participants to use the whole spectrum from 0 (completely unnatural) to 100 (completely natural). Participants had to listen to every sample and change the value of every slider before being allowed to proceed to the next panel. The experiments were then analyzed with a Wilcoxon signed-rank test, where a Holm-Bonferroni correction [22] was used to reduce the chance of Type 1 errors.

3.2. Training procedure

The training of all acoustic models was done in two stages. Each model was first pretrained on sentences of up to 800 frames (≈ 9.3 seconds), which were split into separate parts of up to 200 frames, similar to [17], to aid learning. For the pretraining, stochastic gradient descent was used, with a batch size of 32, a learning rate of 0.1, and momentum of 0.75. After pretraining, the model was finetuned using the ADAM optimizer, with a batch size of 64, a learning rate of 0.0001, and betas of 0.9 and 0.98. For the monolingual multi-speaker models, class weighting was applied to the loss function to correct for imbalances in the speaker distribution. For the multilingual multi-speaker models, class weighting was applied to counter both speaker and language imbalances. The input to the models consisted of phonemes from a separate phoneset per language, which were then converted into integers, while on the output side the models were trained to produce unnormalized 80-dimensional mel-spectrogram features. The mel-spectrogram features were then decoded by a WaveGlow vocoder [23] that was trained in universal fashion [24] on a proprietary data set consisting of 5000 sentences each from 3 female and 2 male speakers.

4. Results

4.1. Experiment 1: Single-speaker modeling vs multilingual modeling

For the first experiment, 30 participants were invited to evaluate 10 stimulus panels with 7 audio samples per panel. Of the 300 resulting data points, we discarded 15 data points where one single sample was rated considerably higher than the resynthesized sample. If multiple samples were rated higher than the resynthesized sample, we did not consider them anomalies and did not remove them. The rationale behind this approach is that if just a single sample was rated higher, it was more likely to be an outlier, and would also have a larger impact in the Wilcoxon rank testing than if multiple samples were rated higher.

Figure 2: Boxplot showing the naturalness of single speaker models and multilingual models used in experiment 1. Red lines show median values, green lines show mean values.

The MUSHRA scores of the first experiment are displayed in Figure 2. The results showed that the naturalness significantly increased when more target language data was available, both for single speaker and multilingual models. More interestingly, adding foreign language data to the target language data generally had a positive effect on the naturalness of the target speaker. When comparing the models with 2000 sentences of target language data, we found that the MULT-2k+16k model outperformed the SING-2k model significantly, and was on par with the SING-4k model (p ≈ 0.172).
Although the SING-2k model generally produced stable attention, the naturalness ratings for this model were negatively impacted by occasional mispronunciations that almost never occurred in the speech of other models. For the models that were trained on 4000 sentences from the low-resource language, the MULT-4k+16k model still produced significantly more natural speech than the SING-4k model. When comparing the models for which 8000 target language sentences were available, the difference in naturalness between the single speaker and the multilingual model was no longer significant (p ≈ 0.506). All other system combinations were significantly different, and the resynthesized speech was rated significantly higher than speech from all other systems.

The results obtained for multilingual modeling followed similar patterns to the results in the monolingual multi-speaker settings in [7, 8]. Similar to [7], the addition of non-target language data helped to improve the robustness and naturalness of the model when data quantities for the target language were limited, and similar to [8], the difference became insignificant when more target language data was available. The fact that the same effects could be replicated in a multilingual setting as in a monolingual setting suggests that the model does not suffer from being trained on different language inputs. We suspect that the effect is minimal because language information is well separated by the language encoders, thus limiting pronunciation overlap, while benefiting from shared training in the decoder.

4.2. Experiment 2: Monolingual vs multilingual multi-speaker modeling

Our second experiment was designed to better understand how various monolingual and multilingual modeling strategies may affect naturalness. We again asked 30 participants to evaluate ten different stimulus panels from one out of three test sets. Each panel consisted of a resynthesized sample as the reference and hidden anchor, and a sample from each of the five models. A similar procedure as in the first experiment was applied to remove anomalies, discarding 11 out of 300 data points.

Table 1: Subjective MUSHRA naturalness scores for systems in Experiment 2

System identifier    Mean    Median    Average rank
MONO-2k+16k          42.58   45        4.24
MULT-2k+16k          45.41   47        3.96
MULT-2k+2x16k        44.48   47        3.98
MULT-2k+16k+16k      45.51   48        3.91
MULT-2k+16x2k        47.24   50        3.72
Resynthesis          88.00   92        1.20

The results of the second experiment are displayed in Table 1. When comparing the monolingual and the multilingual model that have similar amounts of data, we found that the multilingual MULT-2k+16k model performed on par with the monolingual MONO-2k+16k model (p ≈ 0.054). A significant difference between the monolingual model and the multilingual models was found for some of the multilingual models with additional data, with a significant difference between the MONO-2k+16k and the MULT-2k+16x2k model (p ≈ 0.0003), while the difference between the MONO-2k+16k and the MULT-2k+16k+16k model was marginally significant after Holm-Bonferroni correction (p ≈ 0.007). Similar to the first experiment, the resynthesized speech was rated significantly better than the speech of all other systems. For the remaining system combinations, the differences were not significant.

The results of this experiment showed that no significant difference in naturalness was found between the model with auxiliary target language data and the model with auxiliary non-target language data. We suspect that the difference is limited because the task of mel-spectrogram prediction is relatively language-independent. In fact, given that languages are separately modeled in the encoder, it might in some cases be beneficial to have auxiliary non-target language data instead of target language data, because the architecture allows for better disentanglement of speaker-specific prosodic and pronunciation information.

When analyzing the multilingual models with additional data, we found that the naturalness of a multilingual model could even surpass that of a monolingual model, but that this was dependent on the sort of data added. While the MULT-2k+16x2k and the MULT-2k+16k+16k approaches affected the naturalness of the target language positively, the MULT-2k+2x16k approach did not lead to a significant naturalness increase. These results thus suggest that when adding more data, a multilingual model benefits most from language variation and a reduction of class imbalances.

5. Conclusions and Future Research

This paper aimed to investigate the effectiveness of multilingual modeling to improve the speech naturalness of low-resource language neural speech synthesis. Our results showed that the addition of auxiliary non-target language data can positively impact the naturalness of low-resource language speech, and can be a viable alternative to auxiliary target language data when such data is not readily available. We furthermore found that when more target language data was available, the inclusion of the auxiliary non-target language data did not negatively affect naturalness. Although we did not compare multilingual models with single speaker models for even larger amounts of target language data in this research, we expect that results from multilingual modeling will largely mimic the effects observed in studies of monolingual multi-speaker modeling [7]. Finally, we explored several strategies for including additional non-target language data. We showed that not all data addition strategies are equally effective, and reported that language diversity and minimizing class imbalances appear to be the most important variables to consider when adding data.

Based on our conclusions, we identify several directions for future research. First of all, the current research did not consider the effect of language proximity on multilingual modeling. Although languages are modeled separately in the encoders, language proximity may positively affect naturalness. Additionally, this research evaluated low-resource language speech naturalness at a general level, while it may be more interesting to focus on the naturalness of language-specific characteristics such as language-specific phonemes or stress patterns. We furthermore note that the amount of auxiliary data used was relatively limited in our experiments. Further analysis could be done to find out whether our findings hold when scaled up with more data. Finally, we found that the MULT-2k+16x2k model was most effective at improving the naturalness of target language speech, but this result does not clarify whether this effect can be attributed to the large variation in languages and speakers, or to the minimization of class imbalances. It would be interesting to disentangle these variables by comparing this model to a monolingual multi-speaker model with similar amounts of data per speaker.
6. References

[1] J. Shen, R. Pang, R. J. Weiss, M. Schuster, N. Jaitly, Z. Yang, Z. Chen, Y. Zhang, Y. Wang, R. Skerrv-Ryan et al., "Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions," in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018, pp. 4779–4783.
[2] Z. Kons, S. Shechtman, A. Sorin, C. Rabinovitz, and R. Hoory, "High quality, lightweight and adaptable TTS using LPCNet," arXiv preprint arXiv:1905.00590, 2019.
[3] Y. Ren, Y. Ruan, X. Tan, T. Qin, S. Zhao, Z. Zhao, and T.-Y. Liu, "FastSpeech: Fast, robust and controllable text to speech," in Advances in Neural Information Processing Systems, 2019, pp. 3165–3174.
[4] Y. Jia, Y. Zhang, R. Weiss, Q. Wang, J. Shen, F. Ren, P. Nguyen, R. Pang, I. L. Moreno, Y. Wu et al., "Transfer learning from speaker verification to multispeaker text-to-speech synthesis," in Advances in Neural Information Processing Systems, 2018, pp. 4480–4490.
[5] A. Gibiansky, S. Arik, G. Diamos, J. Miller, K. Peng, W. Ping, J. Raiman, and Y. Zhou, "Deep Voice 2: Multi-speaker neural text-to-speech," in Advances in Neural Information Processing Systems, 2017, pp. 2962–2970.
[6] Y. Deng, L. He, and F. Soong, "Modeling multi-speaker latent space to improve neural TTS: Quick enrolling new speaker and enhancing premium voice," arXiv preprint arXiv:1812.05253, 2018.
[7] J. Latorre, J. Lachowicz, J. Lorenzo-Trueba, T. Merritt, T. Drugman, S. Ronanki, and V. Klimkov, "Effect of data reduction on sequence-to-sequence neural TTS," in 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019, pp. 7075–7079.
[8] H.-T. Luong, X. Wang, J. Yamagishi, and N. Nishizawa, "Training multi-speaker neural text-to-speech systems using speaker-imbalanced speech corpora," arXiv preprint arXiv:1904.00771, 2019.
[9] B. Li and H. Zen, "Multi-language multi-speaker acoustic modeling for LSTM-RNN based statistical parametric speech synthesis," 2016.
[10] A. Gutkin, "Uniform multilingual multi-speaker acoustic model for statistical parametric speech synthesis of low-resourced languages," 2017.
[11] E. Nachmani and L. Wolf, "Unsupervised polyglot text-to-speech," in 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019, pp. 7055–7059.
[12] Y. Zhang, R. J. Weiss, H. Zen, Y. Wu, Z. Chen, R. Skerry-Ryan, Y. Jia, A. Rosenberg, and B. Ramabhadran, "Learning to speak fluently in a foreign language: Multilingual speech synthesis and cross-language voice cloning," arXiv preprint arXiv:1907.04448, 2019.
[13] Y. Cao, X. Wu, S. Liu, J. Yu, X. Li, Z. Wu, X. Liu, and H. Meng, "End-to-end code-switched TTS with mix of monolingual recordings," in 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019, pp. 6935–6939.
[14] L. Xue, W. Song, G. Xu, L. Xie, and Z. Wu, "Building a mixed-lingual neural TTS system with only monolingual data," arXiv preprint arXiv:1904.06063, 2019.
[15] Z. Liu and B. Mak, "Cross-lingual multi-speaker text-to-speech synthesis for voice cloning without using parallel corpus for unseen speakers," arXiv preprint arXiv:1911.11601, 2019.
[16] T. Tu, Y.-J. Chen, C.-c. Yeh, and H.-y. Lee, "End-to-end text-to-speech for low-resource languages by cross-lingual transfer learning," arXiv preprint arXiv:1904.06508, 2019.
[17] Y. Taigman, L. Wolf, A. Polyak, and E. Nachmani, "VoiceLoop: Voice fitting and synthesis via a phonological loop," arXiv preprint arXiv:1707.06588, 2017.
[18] E. Nachmani, A. Polyak, Y. Taigman, and L. Wolf, "Fitting new speakers based on a short untranscribed sample," arXiv preprint arXiv:1802.06984, 2018.
[19] T. Alumäe, S. Tsakalidis, and R. M. Schwartz, "Improved multilingual training of stacked neural network acoustic models for low resource languages," in Interspeech, 2016, pp. 3883–3887.
[20] ITU-R, "Recommendation BS.1534-1: Method for the subjective assessment of intermediate sound quality (MUSHRA)," International Telecommunication Union, Geneva, Switzerland, 2001.
[21] N. Jillings, B. Man, D. Moffat, J. D. Reiss et al., "Web Audio Evaluation Tool: A browser-based listening test environment," 2015.
[22] S. Holm, "A simple sequentially rejective multiple test procedure," Scandinavian Journal of Statistics, pp. 65–70, 1979.
[23] R. Prenger, R. Valle, and B. Catanzaro, "WaveGlow: A flow-based generative network for speech synthesis," in 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019, pp. 3617–3621.
[24] R. Valle, J. Li, R. Prenger, and B. Catanzaro, "Mellotron: Multispeaker expressive voice synthesis by conditioning on rhythm, pitch and global style tokens," arXiv preprint arXiv:1910.11997, 2019.
