
SC-WaveRNN

Speaker Conditional WaveRNN: Towards Universal Neural Vocoder for Unseen Speaker and Recording Conditions

Dipjyoti Paul (a), Yannis Pantazis (b) and Yannis Stylianou (a)

(a) Computer Science Department, University of Crete

(b) Inst. of Applied and Computational Mathematics, Foundation for Research and Technology - Hellas

Abstract:

Recent advancements in deep learning have led to human-level performance in single-speaker speech synthesis. However, there are still limitations in terms of speech quality when generalizing those systems into multiple-speaker models, especially for unseen speakers and unseen recording qualities. For instance, conventional neural vocoders are adjusted to the training speaker and have poor generalization capabilities to unseen speakers. In this work, we propose a variant of WaveRNN, referred to as speaker conditional WaveRNN (SC-WaveRNN). We target the development of an efficient universal vocoder even for unseen speakers and recording conditions. In contrast to standard WaveRNN, SC-WaveRNN exploits additional information given in the form of speaker embeddings. Using publicly available data for training, SC-WaveRNN achieves significantly better performance than baseline WaveRNN on both subjective and objective metrics. In MOS, SC-WaveRNN achieves an improvement of about 23% for seen speakers and seen recording conditions and up to 95% for unseen speakers and unseen conditions. Finally, we extend our work by implementing multi-speaker text-to-speech (TTS) synthesis similar to zero-shot speaker adaptation. In terms of performance, our system is preferred over the baseline TTS system by 60% vs. 15.5% and by 60.9% vs. 32.6%, for seen and unseen speakers, respectively.

Audio Samples:

Audio samples can be found here.

Tacotron + WaveRNN Diagram:

Tacotron with SC-WaveRNN diagrams

WaveRNN Diagram:

SC-WaveRNN diagrams

PyTorch implementation of the Tacotron and SC-WaveRNN models.
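For intuition only, here is a minimal sketch of where the speaker conditioning enters a WaveRNN-style vocoder: the speaker embedding is broadcast over time and concatenated with the mel conditioning features at the input of the autoregressive RNN. This is not the repository's actual model (it omits the upsampling network and the dual-softmax output, and all names below are hypothetical); it only illustrates the conditioning idea.

import torch
import torch.nn as nn

class ToySpeakerConditionalRNN(nn.Module):
    # Hypothetical, simplified stand-in for SC-WaveRNN: previous sample,
    # mel frame and speaker embedding are concatenated and fed to a GRU.
    def __init__(self, mel_dims=80, spk_dims=256, rnn_dims=512, n_classes=256):
        super().__init__()
        self.rnn = nn.GRU(1 + mel_dims + spk_dims, rnn_dims, batch_first=True)
        self.fc = nn.Linear(rnn_dims, n_classes)

    def forward(self, prev_samples, mels, spk_embed):
        # prev_samples: (B, T, 1), mels: (B, T, mel_dims), spk_embed: (B, spk_dims)
        spk = spk_embed.unsqueeze(1).expand(-1, mels.size(1), -1)  # repeat per frame
        x = torch.cat([prev_samples, mels, spk], dim=-1)
        h, _ = self.rnn(x)
        return self.fc(h)  # logits over quantised sample values

model = ToySpeakerConditionalRNN()
logits = model(torch.zeros(2, 100, 1), torch.randn(2, 100, 80), torch.randn(2, 256))
print(logits.shape)  # torch.Size([2, 100, 256])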

Installation

Ensure you have Python and PyTorch installed (with CUDA if you plan to train on a GPU), then install the remaining dependencies with pip:

pip install -r requirements.txt

Preprocessing

Download your Dataset.

  • VCTK Corpus

Edit hparams.py, point wav_path to your dataset and run:

python preprocess.py

Alternatively, use preprocess.py --path to point directly at the dataset.
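For reference, the relevant setting in hparams.py might look like this (the path is just a placeholder for wherever you unpacked the corpus):

# hparams.py (excerpt) -- placeholder path, adjust to your machine
wav_path = '/data/VCTK-Corpus/wav48/'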


Speaker encoder

Follow the speaker_embeddings_GE2E repository to extract speaker embeddings for your dataset.
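A typical workflow (sketch only; the function and attribute names below are assumptions, not the actual API of that repository) is to load the pretrained GE2E encoder once and save one d-vector per reference utterance, so the vocoder can pick them up next to the mel features:

import os
import numpy as np

def save_speaker_embeddings(encoder, wav_paths, out_dir):
    # 'encoder.embed_utterance' is a hypothetical call standing in for the
    # GE2E encoder's inference function; one fixed-size d-vector per wav.
    os.makedirs(out_dir, exist_ok=True)
    for wav_path in wav_paths:
        embed = np.asarray(encoder.embed_utterance(wav_path))
        name = os.path.splitext(os.path.basename(wav_path))[0]
        np.save(os.path.join(out_dir, name + '.npy'), embed)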

Train Tacotron & WaveRNN

Here's my recommendation on what order to run things:

1 - Train Tacotron with:

python train_tacotron.py

2 - You can let that finish training, or at any point you can use:

python train_tacotron.py --force_gta

This will force Tacotron to create a GTA (ground-truth aligned) dataset even if it hasn't finished training.

3 - Train WaveRNN with:

python train_wavernn.py --gta

NB: You can always just run train_wavernn.py without --gta if you're not interested in TTS.

4 - Generate Sentences with WaveRNN model:

python gen_wavernn.py --file <...> --weights <...> --output <...>

The reference speech path, used to extract the speaker embedding, should be provided via --file <...>.
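A concrete invocation could look like this (the file names are placeholders):

python gen_wavernn.py --file ref_speaker.wav --weights checkpoints/sc_wavernn.pyt --output model_outputs/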

5 - Generate sentences with both models using:

python gen_tacotron.py --file <...> --weights_path <...> --weights_voc <...> --output <...> --input_text <...>

The reference speech path, used to extract the speaker embedding, should be provided via --file <...>.
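A concrete invocation could look like this (the file names and input text are placeholders):

python gen_tacotron.py --file ref_speaker.wav --weights_path checkpoints/tacotron.pyt --weights_voc checkpoints/sc_wavernn.pyt --output model_outputs/ --input_text "This is a test sentence."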

And finally, you can always use --help on any of those scripts to see what options are available :)




sc-wavernn's Issues

need speaker_embedding in gen_wavernn.py

Hi, I have some questions about this work.
In the function gen_from_file of gen_wavernn.py, we need to input a speaker_embedding extracted from a wav. But in a TTS system the vocoder is usually given only a mel-spectrogram to generate waveforms. Under such circumstances, how can we get the speaker_embedding? Thank you.

Minimum hours of data required for fine-tuning for a single unseen speaker

Thank you for your amazing work!!
For the TTS task, assuming that the synthesizer (Tacotron2) + vocoder has already been trained on a significant number of speakers, what would be the minimum amount of data required to fine-tune the vocoder to a new unseen speaker? Would 5-10 hours be sufficient? It would be helpful to have an approximate amount. To add more details, this is for TTS in Hindi, and I plan to train Tacotron2 + SC-WaveRNN on ~150 hours of Hindi data with several hundred speakers before fine-tuning on a new unseen speaker. Thanks!

CSV file missing when preprocessing the data

Hi,
Thanks for this open-source implementation!
It seems that the original VCTK dataset doesn't include the CSV file needed to generate the input for Tacotron training; could you please provide the one you are using?
Another thing is that I didn't find the "gen_tacotron_spk_embed.py" script you mentioned in the README.

Thanks a lot!

package error

My problem, my settings, and my PyTorch version are shown in the attached screenshots (images not reproduced here). Could someone help me?

ETA

Hi,

Looking forward to giving it a try. Any plans on releasing the code soon? Thanks!
