Stats (22 March 2025): Members: 3,669 | Articles: 2,599,751 | Articles rated: 2,609
Article overview
IQDUBBING: Prosody modeling based on discrete self-supervised speech representation for expressive voice conversion

Authors: Wendong Gan; Bolong Wen; Ying Yan; Haitao Chen; Zhichao Wang; Hongqiang Du; Lei Xie; Kaixuan Guo; Hai Li
Date: 2 Jan 2022

Abstract: Prosody modeling is important but still challenging in expressive voice conversion: prosody is difficult to model, and other factors entangled with it in speech, e.g., speaker, environment, and content, must be removed during prosody modeling. In this paper, we present IQDubbing to solve this problem for expressive voice conversion. To model prosody, we leverage recent advances in discrete self-supervised speech representation (DSSR). Specifically, a prosody vector is first extracted from a pre-trained VQ-Wav2Vec model; this vector embeds rich prosody information, while most speaker and environment information is removed effectively by quantization. To further filter out redundant information other than prosody, such as content and partial speaker information, we propose two kinds of prosody filters that sample prosody from the prosody vector. Experiments show that IQDubbing is superior to baseline and comparison systems in terms of speech quality while maintaining prosody consistency and speaker similarity.

Source: arXiv, 2201.00269
Services: Forum | Review | PDF | Favorites
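The abstract's two core ideas — vector quantization discarding fine-grained (speaker/environment) detail, and a "prosody filter" that samples the resulting code sequence — can be illustrated with a toy numpy sketch. This is not the paper's implementation (which uses a pre-trained VQ-Wav2Vec model); the codebook, frame features, and the simple strided filter below are all hypothetical stand-ins for illustration.

```python
import numpy as np

def quantize(frames, codebook):
    """Nearest-neighbour vector quantization: map each feature frame to
    its closest codebook entry. Quantizing to a small discrete codebook
    discards fine-grained detail (the abstract's intuition for why most
    speaker/environment information is removed)."""
    # Squared distances between every frame and every code: shape (T, K).
    d = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    ids = d.argmin(axis=1)          # discrete code index per frame
    return ids, codebook[ids]       # indices and their quantized vectors

def prosody_filter(ids, stride=4):
    """Hypothetical 'prosody filter': keep one code every `stride` frames,
    thinning out fast-varying (content-rate) information while retaining
    the slower-varying prosodic contour. The paper proposes two filter
    designs; this strided sampling is only a placeholder."""
    return ids[::stride]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 16))   # K=8 codes, 16-dim (toy values)
frames = rng.normal(size=(100, 16))   # T=100 speech feature frames (toy)
ids, quantized = quantize(frames, codebook)
sampled = prosody_filter(ids, stride=4)
print(ids.shape, quantized.shape, sampled.shape)  # (100,) (100, 16) (25,)
```

The key design point the sketch mirrors: after quantization the sequence carries only coarse, discrete information, so a downstream sampler can select a sparse prosody representation from it.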
No review found.
Note: answers to reviews or questions about the article must be posted in the forum section.
Authors are not allowed to review their own article. They can use the forum section.