Article overview
AdaVITS: Tiny VITS for Low Computing Resource Speaker Adaptation

Authors: Kun Song; Heyang Xue; Xinsheng Wang; Jian Cong; Yongmao Zhang; Lei Xie; Bing Yang; Xiong Zhang; Dan Su

Date: 1 Jun 2022
Abstract:

Speaker adaptation in text-to-speech synthesis (TTS) fine-tunes a pre-trained TTS model to new target speakers with limited data. Although much effort has been devoted to this task, little work has addressed low-computational-resource scenarios, which are challenging because they require a lightweight model with low computational complexity. In this paper, a tiny VITS-based TTS model for low-computing-resource speaker adaptation, named AdaVITS, is proposed. To effectively reduce the parameters and computational complexity of VITS, an iSTFT-based waveform-construction decoder is proposed to replace the upsampling-based decoder, which is resource-consuming in the original VITS. Besides, NanoFlow is introduced to share the density estimator across flow blocks, reducing the parameters of the prior encoder. Furthermore, to reduce the computational complexity of the text encoder, scaled-dot-product attention is replaced with linear attention. To deal with the instability caused by the simplified model, a phonetic posteriorgram (PPG), produced by a text-to-PPG module, is used as the linguistic feature and fed to the encoder in place of the original text-encoder input. Experiments show that AdaVITS generates stable and natural speech in speaker adaptation with 8.97M model parameters and 0.72 GFLOPs of computational complexity.

Source: arXiv, 2206.00208
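The abstract gives no implementation details of the iSTFT-based decoder, so the following is a minimal PyTorch sketch of the general idea: a light projection predicts magnitude and phase spectra, and torch.istft reconstructs the waveform directly, avoiding the stack of upsampling convolutions in the original VITS decoder. The class name ISTFTHead, the hidden size, and the FFT parameters are illustrative assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class ISTFTHead(nn.Module):
    """Sketch of an iSTFT-based waveform decoder (sizes are assumed,
    not AdaVITS's actual configuration)."""
    def __init__(self, hidden=192, n_fft=16, hop=4):
        super().__init__()
        self.n_fft, self.hop = n_fft, hop
        bins = n_fft // 2 + 1
        # One cheap projection predicts both magnitude and phase per frame.
        self.proj = nn.Conv1d(hidden, 2 * bins, kernel_size=3, padding=1)

    def forward(self, x):                       # x: (B, hidden, frames)
        mag, phase = self.proj(x).chunk(2, dim=1)
        mag = torch.exp(mag)                    # keep magnitudes positive
        spec = torch.polar(mag, phase)          # complex: mag * exp(i * phase)
        win = torch.hann_window(self.n_fft, device=x.device)
        # Inverse STFT turns the predicted spectrogram into a waveform.
        return torch.istft(spec, self.n_fft, hop_length=self.hop,
                           win_length=self.n_fft, window=win)

Compared with transposed-convolution upsampling, the only learned computation here runs at frame rate; the expansion to sample rate is done by the fixed iSTFT, which is where the parameter and FLOP savings come from.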
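NanoFlow's central idea is to reuse one density estimator across all flow steps instead of giving each step its own network, distinguishing steps only by a small per-step context. A rough sketch under that assumption follows; the affine-coupling layout and sizes are illustrative, and log-determinant bookkeeping is omitted, so this is not the paper's prior-encoder flow.

import torch
import torch.nn as nn

class SharedCouplingFlow(nn.Module):
    """NanoFlow-style parameter sharing: all coupling steps call ONE
    shared estimator, plus a cheap per-step embedding (sizes assumed)."""
    def __init__(self, channels=96, steps=4):
        super().__init__()
        self.steps = steps
        self.shared = nn.Sequential(            # single shared estimator
            nn.Conv1d(channels // 2 + 8, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv1d(channels, channels, 3, padding=1))
        self.step_emb = nn.Embedding(steps, 8)  # per-step context vector

    def forward(self, x):                        # x: (B, channels, T)
        for i in range(self.steps):
            xa, xb = x.chunk(2, dim=1)
            e = self.step_emb.weight[i][None, :, None]
            e = e.expand(x.size(0), -1, x.size(2))
            m, logs = self.shared(torch.cat([xa, e], dim=1)).chunk(2, dim=1)
            xb = xb * torch.exp(logs) + m        # affine-transform one half
            x = torch.cat([xb, xa], dim=1)       # swap halves between steps
        return x

With K steps, this keeps roughly 1/K of the estimator parameters a conventional flow would need, which matches the abstract's motivation of shrinking the prior encoder.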
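Replacing scaled-dot-product attention with linear attention drops the cost in sequence length from O(T^2) to O(T) by applying a kernel feature map to queries and keys and reassociating the matrix products. Below is a minimal sketch using the elu(x)+1 feature map of Katharopoulos et al. (2020); the abstract does not say which linear-attention variant AdaVITS actually uses.

import torch

def linear_attention(q, k, v, eps=1e-6):
    # q, k: (B, H, T, D); v: (B, H, T, E)
    # Kernel feature map phi(x) = elu(x) + 1 keeps scores positive.
    q = torch.nn.functional.elu(q) + 1.0
    k = torch.nn.functional.elu(k) + 1.0
    # Associativity: compute K^T V once, O(T*D*E) instead of O(T^2).
    kv = torch.einsum('bhtd,bhte->bhde', k, v)             # (B, H, D, E)
    # Normalizer: phi(Q) applied to the summed keys.
    z = 1.0 / (torch.einsum('bhtd,bhd->bht', q, k.sum(dim=2)) + eps)
    return torch.einsum('bhtd,bhde,bht->bhte', q, kv, z)   # (B, H, T, E)

Because kv is a fixed-size (D, E) summary regardless of T, memory and compute grow linearly with the phoneme-sequence length, which is what makes this substitution attractive for a lightweight text encoder.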