PyTorch implementation of FastDiff (IJCAI'22): a conditional diffusion probabilistic model capable of generating high-fidelity speech efficiently.
We provide our implementation and pretrained models as open source in this repository.
Visit our demo page for audio samples.
- April 22, 2022: FastDiff was accepted by IJCAI 2022. The full version of the code (including pre-trained models, more datasets, and more neural vocoders) is expected to be released by the IJCAI 2022 conference (before July 2022). Please star us and stay tuned!
- June 21, 2022: The LJSpeech checkpoint and demo code are provided.
We provide an example of how to generate high-fidelity samples using FastDiff.

To try it on your own dataset, simply clone this repo onto a local machine with an NVIDIA GPU and CUDA/cuDNN installed, and follow the instructions below.

You can also use the pretrained models we provide. Details of each model are as follows:
| Dataset | Config | Pretrained Model |
|---|---|---|
| LJSpeech | `modules/FastDiff/config/FastDiff.yaml` | OneDrive |
| LibriTTS | `modules/FastDiff/config/FastDiff_libritts.yaml` | Coming Soon |
| VCTK | `modules/FastDiff/config/FastDiff_vctk.yaml` | Coming Soon |
More supported datasets are coming soon.
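The configs above are plain YAML files. If you want to inspect the hyperparameters before training, a quick peek is enough (a minimal sketch assuming PyYAML is installed; note that these configs may inherit settings from a base config, in which case the top-level file will not contain every key):

```python
# Sketch: inspect a FastDiff config file (assumes PyYAML is installed).
import yaml

with open("modules/FastDiff/config/FastDiff.yaml") as f:
    hparams = yaml.safe_load(f)

# Top-level keys only; settings inherited from any base config won't appear here.
print(sorted(hparams.keys()))
```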
Put the checkpoints in `checkpoints/$your_experiment_name/model_ckpt_steps_*.ckpt`.
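If several checkpoints accumulate, a small helper can pick the newest one by its step count (a hypothetical sketch; `latest_checkpoint` is not part of this repo):

```python
# Hypothetical helper: find the newest checkpoint for an experiment.
import glob
import os
import re

import torch

def latest_checkpoint(exp_name, root="checkpoints"):
    paths = glob.glob(os.path.join(root, exp_name, "model_ckpt_steps_*.ckpt"))
    if not paths:
        raise FileNotFoundError(f"no checkpoints under {root}/{exp_name}")
    # The step count is embedded in the file name; pick the largest.
    return max(paths, key=lambda p: int(re.search(r"steps_(\d+)", p).group(1)))

ckpt = torch.load(latest_checkpoint("FastDiff"), map_location="cpu")
```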
See requirements in `requirement.txt`.

By default, this implementation uses as many GPUs in parallel as returned by `torch.cuda.device_count()`. You can specify which GPUs to use by setting the `CUDA_VISIBLE_DEVICES` environment variable before running the training module.
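For example, you can verify which GPUs PyTorch will see (a quick stand-alone check, not specific to this repo; the variable must be set before CUDA is initialized):

```python
# Check how CUDA_VISIBLE_DEVICES limits the GPUs visible to PyTorch.
# Set the variable before importing torch, since it is read when CUDA
# is initialized.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # expose only the first two GPUs

import torch
print(torch.cuda.device_count())  # -> 2 on a machine with at least two GPUs
```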
- Download the LJSpeech checkpoint and put it in `checkpoints/FastDiff/model_ckpt_steps_*.ckpt`.
- Specify the input `$text` and an int-type index `$model_index` to choose the TTS model: `0` (PortaSpeech, Ren et al.), `1` (FastSpeech 2, Ren et al.), or `2` (DiffSpeech, Liu et al.).
- Set `N` for reverse sampling, which is a trade-off between quality and speed (see the sweep sketch after this list).
- Run the following command:

```bash
CUDA_VISIBLE_DEVICES=$GPU python egs/demo_tts.py --N $N --text $text --model $model_index
```

Generated wav files are saved in `checkpoints/FastDiff/` by default.
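To get a feel for the quality/speed trade-off, you can sweep over several values of `N` (a sketch built on the command above; the `N` values are illustrative, not recommendations from the authors):

```python
# Sketch: run the TTS demo for several sampling-step counts N.
import subprocess

TEXT = "The quick brown fox jumps over the lazy dog."  # any input text
for n in [4, 16, 1000]:  # illustrative values: larger N is slower but cleaner
    subprocess.run(
        ["python", "egs/demo_tts.py",
         "--N", str(n), "--text", TEXT, "--model", "0"],
        check=True,
    )
```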
- Make a `wavs` directory and copy your wav files into it (a copy sketch follows this list).
- Set `N` for reverse sampling, which is a trade-off between quality and speed.
- Run the following command:

```bash
CUDA_VISIBLE_DEVICES=$GPU python tasks/run.py --config $path/to/config --exp_name $your_experiment_name --infer --hparams="test_input_dir=wavs,N=$N"
```

Generated wav files are saved in `checkpoints/$your_experiment_name/` by default.
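Collecting the input files can be scripted as well (a minimal sketch; the source directory is a placeholder for wherever your audio lives):

```python
# Sketch: copy wav files into the `wavs/` directory expected by the command above.
import shutil
from pathlib import Path

src = Path("/path/to/your/audio")  # placeholder: your own wav directory
dst = Path("wavs")
dst.mkdir(exist_ok=True)
for wav in src.glob("*.wav"):
    shutil.copy2(wav, dst / wav.name)
```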
- Make a `mels` directory and copy your generated mel-spectrogram files into it. You can generate mel-spectrograms using Tacotron2, Glow-TTS, and so forth.
- Set `N` for reverse sampling, which is a trade-off between quality and speed.
- Run the following command:

```bash
CUDA_VISIBLE_DEVICES=$GPU python tasks/run.py --config $path/to/config --exp_name $your_experiment_name --infer --hparams="test_mel_dir=mels,use_wav=False,N=$N"
```

Generated wav files are saved in `checkpoints/$your_experiment_name/` by default.
Note: if the output wav sounds noisy, the cause is most likely a mel-preprocessing mismatch between the acoustic model and the vocoder. Either align the two preprocessing pipelines or use an acoustic model we have validated; currently that is PortaSpeech.
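As a starting point for generating compatible mels, the sketch below extracts a log-mel spectrogram with librosa. Every parameter (sample rate, FFT size, hop size, number of mel bins) and the `.npy` layout are assumptions that must be matched against your FastDiff config and the repo's data loader, precisely because of the mismatch issue noted above:

```python
# Sketch: extract a log-mel spectrogram for the `mels/` directory.
# All parameters below are assumptions -- align them with your FastDiff
# config, or the vocoder output will be noisy (see the note above).
import numpy as np
import librosa

wav, sr = librosa.load("sample.wav", sr=22050)           # assumed sample rate
mel = librosa.feature.melspectrogram(
    y=wav, sr=sr, n_fft=1024, hop_length=256, n_mels=80  # assumed settings
)
log_mel = np.log(np.clip(mel, 1e-5, None))               # log compression
np.save("mels/sample.npy", log_mel.T)                    # assumed [frames, n_mels] layout
```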
- Set `raw_data_dir`, `processed_data_dir`, and `binary_data_dir` in the config file.
- Download the dataset to `raw_data_dir`. Note: the dataset structure needs to follow `egs/datasets/audio/*/pre_align.py`, or you can rewrite `pre_align.py` to match your dataset.
- Preprocess the dataset:
```bash
# Preprocessing step: unify the file structure.
python data_gen/tts/bin/pre_align.py --config $path/to/config

# Binarization step: binarize the data for fast IO.
CUDA_VISIBLE_DEVICES=$GPU python data_gen/tts/bin/binarize.py --config $path/to/config
```
Then train FastDiff:

```bash
CUDA_VISIBLE_DEVICES=$GPU python tasks/run.py --config $path/to/config --exp_name $your_experiment_name --reset
```
Training the noise schedule predictor is coming soon; in the meantime, you can use our pre-derived noise schedule.
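For intuition, here is what ancestral sampling with a fixed noise schedule looks like in generic DDPM form (an illustrative sketch only, not the repo's sampler; `eps_model` stands in for a trained conditional noise-prediction network):

```python
# Illustrative DDPM-style reverse sampling with a fixed noise schedule.
# `betas` is a pre-derived schedule of length N; a shorter schedule
# samples faster at some cost in quality.
import torch

@torch.no_grad()
def reverse_sample(eps_model, mel, betas, wav_len):
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(1, 1, wav_len)  # start from pure Gaussian noise
    for t in reversed(range(len(betas))):
        t_idx = torch.full((1,), t, dtype=torch.long)
        eps = eps_model(x, mel, t_idx)  # predicted noise at step t
        # Posterior mean of x_{t-1} given x_t and the predicted noise.
        mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) \
               / torch.sqrt(alphas[t])
        x = mean if t == 0 else mean + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x
```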
After training, run inference:

```bash
CUDA_VISIBLE_DEVICES=$GPU python tasks/run.py --config $path/to/config --exp_name $your_experiment_name --infer
```
This implementation uses parts of the code from the following GitHub repos: NATSpeech, Tacotron2, and DiffWave-Vocoder, as described in our code.
If you find this code useful in your research, please consider citing:
```
@article{huang2022fastdiff,
  title={FastDiff: A Fast Conditional Diffusion Model for High-Quality Speech Synthesis},
  author={Huang, Rongjie and Lam, Max WY and Wang, Jun and Su, Dan and Yu, Dong and Ren, Yi and Zhao, Zhou},
  journal={arXiv preprint arXiv:2204.09934},
  year={2022}
}
```
This is not an officially supported Tencent product.