Voice conversion (VC) can be achieved by first extracting source content information and target speaker information, and then reconstructing a waveform from them. However, current approaches either extract content information that is contaminated by leaked speaker information, or require a large amount of annotated data for training. In addition, the quality of the reconstructed waveform can be degraded by the mismatch between the conversion model and the vocoder. In this paper, we adopt the end-to-end framework of VITS for high-quality waveform reconstruction and propose strategies for extracting clean content information without text annotation. We disentangle content information by imposing an information bottleneck on WavLM features, and we propose spectrogram-resize (SR) based data augmentation to improve the purity of the extracted content information. Experimental results show that the proposed method outperforms recent VC models trained with annotated data and exhibits greater robustness.
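As a rough illustration of these two ideas, the sketch below (PyTorch) shows a linear bottleneck applied to WavLM features and a vertical spectrogram-resize operation. The dimensions, padding scheme, and function names are illustrative assumptions, not the exact FreeVC implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical information bottleneck: project high-dimensional WavLM
# features (e.g., 1024-d) into a small latent space so that speaker
# information is squeezed out. Dimensions are illustrative only.
bottleneck = nn.Conv1d(in_channels=1024, out_channels=192, kernel_size=1)
# usage: content = bottleneck(wavlm_features)  # wavlm_features: (batch, 1024, frames)

def spectrogram_resize(mel: torch.Tensor, ratio: float) -> torch.Tensor:
    """Vertically resize a mel-spectrogram by `ratio` and restore the original
    number of mel bins, distorting speaker timbre (formant positions) while
    preserving content. A sketch of SR-style augmentation; the padding scheme
    used in FreeVC may differ.

    mel: tensor of shape (n_mels, n_frames)
    """
    n_mels, n_frames = mel.shape
    # Interpolate along the frequency axis to roughly n_mels * ratio bins.
    resized = F.interpolate(
        mel[None, None],                          # (1, 1, n_mels, n_frames)
        size=(max(1, int(n_mels * ratio)), n_frames),
        mode="bilinear",
        align_corners=False,
    )[0, 0]
    if resized.size(0) >= n_mels:
        # ratio >= 1: keep only the lowest n_mels bins.
        return resized[:n_mels]
    # ratio < 1: pad the missing high-frequency bins with the floor value.
    pad_rows = n_mels - resized.size(0)
    return F.pad(resized, (0, 0, 0, pad_rows), value=resized.min().item())
```

In practice, such a resized spectrogram would typically be converted back into a waveform before WavLM feature extraction, so that the content extractor learns to ignore the distorted speaker characteristics.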
Source (VCTK) | Target (VCTK) | Conversion (baselines) | Conversion (proposed)
---|---|---|---
F_p335_302 | F_p253_219 | VQMIVC / BNE-PPG-VC / YourTTS | FreeVC / FreeVC (w/o SR) / FreeVC-s
F_p362_103 | M_p237_001 | VQMIVC / BNE-PPG-VC / YourTTS | FreeVC / FreeVC (w/o SR) / FreeVC-s
M_p237_001 | F_p335_302 | VQMIVC / BNE-PPG-VC / YourTTS | FreeVC / FreeVC (w/o SR) / FreeVC-s
M_p272_219 | M_p259_464 | VQMIVC / BNE-PPG-VC / YourTTS | FreeVC / FreeVC (w/o SR) / FreeVC-s
Source (LibriTTS) | Target (VCTK) | Conversion (baselines) | Conversion (proposed)
---|---|---|---
F_5142_36377_000004_000012 | F_p253_219 | VQMIVC / BNE-PPG-VC / YourTTS | FreeVC / FreeVC (w/o SR) / FreeVC-s
F_8463_287645_000023_000001 | M_p237_001 | VQMIVC / BNE-PPG-VC / YourTTS | FreeVC / FreeVC (w/o SR) / FreeVC-s
M_1320_122617_000013_000001 | F_p335_302 | VQMIVC / BNE-PPG-VC / YourTTS | FreeVC / FreeVC (w/o SR) / FreeVC-s
M_5105_28233_000016_000001 | M_p259_464 | VQMIVC / BNE-PPG-VC / YourTTS | FreeVC / FreeVC (w/o SR) / FreeVC-s
Source (LibriTTS) | Target (LibriTTS) | Conversion (baselines) | Conversion (proposed)
---|---|---|---
F_3575_170457_000032_000001 | M_5105_28233_000016_000001 | VQMIVC / BNE-PPG-VC / YourTTS | FreeVC / FreeVC (w/o SR) / FreeVC-s
F_8463_287645_000023_000001 | F_3575_170457_000032_000001 | VQMIVC / BNE-PPG-VC / YourTTS | FreeVC / FreeVC (w/o SR) / FreeVC-s
M_5105_28233_000016_000001 | M_1320_122617_000013_000001 | VQMIVC / BNE-PPG-VC / YourTTS | FreeVC / FreeVC (w/o SR) / FreeVC-s
M_908_31957_000018_000000 | F_5142_36377_000004_000012 | VQMIVC / BNE-PPG-VC / YourTTS | FreeVC / FreeVC (w/o SR) / FreeVC-s
We also conduct objective evaluations on models that are not listed in the paper. We randomly select 400 utterances (200 from VCTK and 200 from LibriTTS) as source speech and 12 speakers (6 seen and 6 unseen) as target speakers; a sketch of this selection is given below, followed by the results.
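A minimal sketch of how such an evaluation set could be assembled; the inventories, naming, and seed are hypothetical placeholders rather than the actual file lists used in the experiments.

```python
import random

random.seed(0)  # illustrative seed

# Hypothetical inventories; in practice these come from VCTK/LibriTTS metadata.
vctk_utts = [f"vctk_utt_{i:04d}" for i in range(1000)]
libritts_utts = [f"libritts_utt_{i:04d}" for i in range(1000)]
seen_speakers = [f"seen_spk_{i}" for i in range(20)]
unseen_speakers = [f"unseen_spk_{i}" for i in range(20)]

# 400 source utterances: 200 from VCTK and 200 from LibriTTS.
source_utts = random.sample(vctk_utts, 200) + random.sample(libritts_utts, 200)

# 12 target speakers: 6 seen and 6 unseen.
target_speakers = random.sample(seen_speakers, 6) + random.sample(unseen_speakers, 6)
```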