
voicefilter's Introduction

VoiceFilter

Note from Seung-won (2020.10.25)

Hi everyone! It's Seung-won from MINDs Lab, Inc. It's been a long time since I released this open-source project, and I didn't expect this repository to attract so much attention for such a long time. I would like to thank everyone for that attention, and also Mr. Quan Wang (the first author of the VoiceFilter paper) for referring to this project in his paper.

Actually, I did this project only 3 months after I started studying deep learning & speech separation, without a supervisor in the relevant field. Back then, I didn't know what power-law compression was, or the correct way to validate/test the models. Now that I've spent more time on deep learning & speech (I also wrote a paper published at Interspeech 2020 😊), I can see some obvious mistakes that I made. Those issues were kindly raised by GitHub users; please refer to the Issues and Pull Requests for details. That being said, this repository can be quite unreliable, and I would like to remind everyone to use this code at their own risk (as specified in the LICENSE).

Unfortunately, I can't afford extra time to revise this project or review the Issues / Pull Requests. Instead, I would like to offer some pointers to newer, more reliable resources:

  • VoiceFilter-Lite: This is a newer version of VoiceFilter presented at Interspeech 2020, also written by Mr. Quan Wang (and his colleagues at Google). I highly recommend checking this paper, since it focuses on a more realistic situation where VoiceFilter is needed.
  • List of VoiceFilter implementations available on GitHub: In March 2019, this repository was the only available open-source implementation of VoiceFilter. Since then, much better implementations that deserve more attention have become available on GitHub. Please check them, and choose the one that meets your needs.
  • PyTorch Lightning: Back in 2019, I could not find a great deep-learning project template, so my colleagues and I used this project as a template for other new projects. For people searching for such a template, I strongly recommend PyTorch Lightning. Even though I put a lot of effort into developing my own template during 2019 (VoiceFilter -> RandWireNN -> MelNet -> MelGAN), I found PyTorch Lightning much better than my own.

Thanks for reading, and I wish everyone good health during the global pandemic situation.

Best regards, Seung-won Park


Unofficial PyTorch implementation of Google AI's VoiceFilter: Targeted Voice Separation by Speaker-Conditioned Spectrogram Masking.

Result

  • Training took about 20 hours on an AWS p3.2xlarge (NVIDIA V100).

Audio Sample

Metric

Median SDR            Paper   Ours
before VoiceFilter     2.5     1.9
after VoiceFilter     12.6    10.2

  • SDR converged at around 10, which is slightly lower than the paper's.

Dependencies

  1. Python and packages

    This code was tested on Python 3.6 with PyTorch 1.0.1. Other packages can be installed by:

    pip install -r requirements.txt
  2. Miscellaneous

    ffmpeg-normalize is used for resampling and normalizing wav files. See README.md of ffmpeg-normalize for installation.

Prepare Dataset

  1. Download LibriSpeech dataset

    To replicate the VoiceFilter paper, get the LibriSpeech dataset at http://www.openslr.org/12/. train-clean-100.tar.gz (6.3 GB) contains speech from 252 speakers, and train-clean-360.tar.gz (23 GB) contains 922 speakers. You may use either, but the more speakers in the dataset, the better VoiceFilter will perform.

  2. Resample & Normalize wav files

    First, extract the tar.gz file to the desired folder:

    tar -xvzf train-clean-360.tar.gz

    Next, copy utils/normalize-resample.sh to the root directory of the extracted data folder. Then:

    vim normalize-resample.sh # set "N" as your CPU core number.
    chmod a+x normalize-resample.sh
    ./normalize-resample.sh # this may take long
  3. Edit config.yaml

    cd config
    cp default.yaml config.yaml
    vim config.yaml
  4. Preprocess wav files

    In order to boost training speed, perform the STFT for each file before training (a minimal sketch of this step appears after this list) by running:

    python generator.py -c [config yaml] -d [data directory] -o [output directory] -p [processes to run]

    This will create 100,000 (train) + 1,000 (test) examples (about 160 GB).
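For orientation, here is a rough, hedged sketch of what one preprocessed example amounts to. The helper name and file layout below are illustrative, not the actual code in generator.py (which also handles the normalized wavs and the d-vector reference paths); the STFT settings follow the default config.

import librosa
import numpy as np
import torch

def save_example(target_path, interferer_path, out_prefix, sr=16000, sec=3.0):
    # Hypothetical helper: mix a target and an interfering utterance, then
    # store STFT magnitudes so training never has to recompute them.
    n = int(sr * sec)
    target = librosa.load(target_path, sr=sr)[0][:n]
    interf = librosa.load(interferer_path, sr=sr)[0][:n]
    mixed = target + interf
    for name, wav in [("target", target), ("mixed", mixed)]:
        spec = librosa.stft(wav, n_fft=1200, hop_length=160, win_length=400)
        mag = torch.from_numpy(np.abs(spec).T).float()   # (frames, 601) magnitude
        torch.save(mag, f"{out_prefix}-{name}.pt")       # matches *-target.pt / *-mixed.pt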

Train VoiceFilter

  1. Get pretrained model for speaker recognition system

    VoiceFilter utilizes a speaker recognition system (d-vector embeddings). Here, we provide a pretrained model for obtaining d-vector embeddings (a sketch of how a d-vector can be computed with it appears after this list).

    This model was trained on the VoxCeleb2 dataset, with utterances randomly cropped to a length of [70, 90] frames. Tests were done with window 80 / hop 40 and showed an equal error rate of about 1%. Test data were selected from the first 8 speakers of the VoxCeleb1 test set, with 10 utterances randomly selected per speaker.

    Update: Evaluation on the VoxCeleb1 selected pairs showed 7.4% EER.

    The model can be downloaded at this GDrive link.

  2. Run

    After specifying train_dir and test_dir in config.yaml, run:

    python trainer.py -c [config yaml] -e [path of embedder pt file] -m [name]

    This will create chkpt/name and logs/name under the base directory (-b option, . by default).

  3. View tensorboardX

    tensorboard --logdir ./logs

  4. Resuming from checkpoint

    python trainer.py -c [config yaml] --checkpoint_path [chkpt/name/chkpt_{step}.pt] -e [path of embedder pt file] -m name
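As a rough, hedged illustration of how the pretrained embedder is used: the reference utterance is converted to a 40-bin log-mel spectrogram (matching the embedder: block of the config) and fed through the network loaded from embedder.pt to get a fixed-dimensional d-vector. The exact feature extraction and the embedder's interface may differ from this sketch.

import librosa
import numpy as np
import torch

def get_dvector(embedder, ref_wav_path, sr=16000, num_mels=40, n_fft=512, hop=160):
    # `embedder` stands for the network loaded from embedder.pt; the hop length
    # here is an assumption, not taken from the repo.
    wav, _ = librosa.load(ref_wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=wav, sr=sr, n_fft=n_fft,
                                         hop_length=hop, n_mels=num_mels)
    logmel = np.log10(np.clip(mel, 1e-5, None)).T          # (frames, num_mels)
    with torch.no_grad():
        dvec = embedder(torch.from_numpy(logmel).float())  # (emb_dim,) = (256,)
    return dvec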

Evaluate

python inference.py -c [config yaml] -e [path of embedder pt file] --checkpoint_path [path of chkpt pt file] -m [path of mixed wav file] -r [path of reference wav file] -o [output directory]
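Conceptually, the separation step boils down to the hedged sketch below: predict a soft mask from the mixture spectrogram and the d-vector, apply it to the mixture magnitude, and resynthesize with the mixture phase. The model call signature is an assumption, and the real script additionally handles power-law compression and normalization.

import librosa
import numpy as np
import torch

def separate(model, dvec, mixed_wav, n_fft=1200, hop=160, win=400):
    # Simplified view of inference; power-law compression is omitted here.
    spec = librosa.stft(mixed_wav, n_fft=n_fft, hop_length=hop, win_length=win)
    mag, phase = np.abs(spec), np.angle(spec)
    with torch.no_grad():
        mask = model(torch.from_numpy(mag.T).float().unsqueeze(0),
                     dvec.unsqueeze(0))                  # assumed signature: (mag, dvec)
    est_mag = mask.squeeze(0).numpy().T * mag            # mask the mixture magnitude
    est_spec = est_mag * np.exp(1j * phase)              # reuse the mixture phase
    return librosa.istft(est_spec, hop_length=hop, win_length=win)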

Possible improvements

  • Try power-law compressed reconstruction error as the loss function, instead of plain MSE (see #14); a minimal sketch follows below.
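A minimal sketch of such a loss, assuming non-negative magnitude spectrograms; the exponent 0.3 mirrors the power value in the default config.

import torch
import torch.nn.functional as F

def power_law_mse(est_mag, target_mag, power=0.3):
    # Compress magnitudes before the MSE so loud bins don't dominate the loss.
    return F.mse_loss(est_mag.clamp(min=0.0) ** power,
                      target_mag.clamp(min=0.0) ** power)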

Author

Seungwon Park at MINDsLab ([email protected], [email protected])

License

Apache License 2.0

This repository contains code adapted/copied from the following:


voicefilter's Issues

Question about start point of SDR

Dear @seungwonpark

First of all, I would like to thank you for this great open-source project.
I wanted to test your code, so I tried to train VoiceFilter.

But I have a problem with the SDR. In the SDR graph on the VoiceFilter GitHub page,
the SDR goes from 2 to 10 dB, but in my case it goes from -0.8 to 1.2.


I am trying to find the cause of the problem but cannot find it.

Can you help me find the cause?

I used the default yaml and generator.py (train-clean-100, train-clean-360, and dev-clean were used
for training).

Could you let me know what I can check?

Thank you!

Training setting problem

Hi,

Thank you for publishing your code!
I am encountering a training problem. As an initial step, I tried to train on only 1000 samples from the LibriSpeech train-clean-100 dataset. I am using the default configuration as published in your VoiceFilter repo. The only difference is that I used a batch size of 6 due to memory limitations. Is it possible that the problem is related to the small batch size?

Another question relates to the generation of the training and testing sets. I noticed that there is an option to use a VAD when generating the training set, but by default it is not used. What is the best practice: to use the VAD or not?

I appreciate your help!

Inference

Why, during inference, is the reference recording the same as the one in the mixed recording?
As I understand it, the whole point of VoiceFilter is that the reference recording is not the same as the one in the mixture; it only needs to contain the voice of the same person.

Real-time inference

Hi, I'd like to use this voice filtering in real time. Would it be possible to modify the inference code to run the model in real time on audio PCM data?

Out of memory when running inference on a single file

I tried the trained model on a single input and it gave an OOM error on GCP with 1 NVIDIA P100:
RuntimeError: CUDA out of memory. Tried to allocate 4.66 GiB (GPU 0; 15.90 GiB total capacity; 14.37 GiB already allocated; 889.81 MiB free; 19.21 MiB cached)
The mixed wav file (19 MB) was about 5 minutes long, and the reference file was 11 seconds.
I don't know why it shows 14.37 GiB allocated when not even training. I tried restarting the instance, but it did not help.
Can you please suggest a way to reduce the memory required during inference?
Thank you!
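One possible workaround (not part of this repo, just a hedged sketch): run inference on fixed-length chunks of the mixture, so the network never has to hold the activations for the whole 5-minute spectrogram in GPU memory at once. `separate` below is a hypothetical single-chunk inference helper.

import numpy as np

def separate_long(separate, dvec, mixed_wav, sr=16000, chunk_sec=10.0):
    # Process the long mixture chunk by chunk and concatenate the results.
    chunk = int(sr * chunk_sec)
    outs = [separate(dvec, mixed_wav[i:i + chunk])
            for i in range(0, len(mixed_wav), chunk)]
    return np.concatenate(outs)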

Question about utils/evaluation.py

Hello @seungwonpark, thank you greatly for your work!
I noticed that utils/evaluation.py has a "break" in the loop over the test dataloader.
That is, during evaluation, only the first item generated by the test dataloader is taken into account when computing the test loss and test SDR. Could this cause problems like #5 and #9?

Looking forward to your response.
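For reference, a hedged sketch of what the fix could look like: average SDR over every test item instead of breaking after the first one. The `estimate` helper and the batch layout are illustrative, not the repo's exact code.

import numpy as np
from mir_eval.separation import bss_eval_sources

def test_sdr(estimate, testloader):
    sdrs = []
    for batch in testloader:
        target_wav, est_wav = estimate(batch)            # hypothetical per-item inference
        sdr, _, _, _ = bss_eval_sources(target_wav[np.newaxis, :],
                                        est_wav[np.newaxis, :])
        sdrs.append(sdr[0])
    return float(np.mean(sdrs))                          # averaged over the whole test set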

Final model

Is it possible to provide the final checkpoint? Training takes too much time.

Question when training VoiceFilter

Hi, it's me again :)
Because of insufficient storage, I skipped the following step:


Preprocess wav files

In order to boost training speed, perform the STFT for each file before training by:

python generator.py -c [config yaml] -d [data directory] -o [output directory] -p [processes to run]

This will create 100,000 (train) + 1,000 (test) examples (about 160 GB).

Then I downloaded embedder.pt, train-clean-100.tar.gz and dev-clean.tar.gz. I extracted the tar.gz files and put the extracted folders in the root directory of voicefilter. I also specified train_dir and test_dir in config.yaml, like so:

  train_dir: '/home/../voicefilter/train/train-clean-100'
  test_dir: '/home/../voicefilter/dev/dev-clean'

After that, when I run this command:

python trainer.py -c [config yaml] -e [path of embedder pt file] -m [name]

An error pops up on the screen: AssertionError: no training file found

I want to know which step I got wrong, or what configuration is missing. Thanks! Really looking forward to your reply!

Try partial convolution padding scheme

The training loss of the initial implementation with nn.Conv2d converged at 6e-3.

Now, I'm trying a partial convolution padding scheme to replace the naive zero-padding. Work in progress on the pconv branch; a minimal sketch of the idea is shown below.
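For readers unfamiliar with the idea, here is a minimal, hedged reading of partial-convolution-based padding (Liu et al., 2018), written as a drop-in for zero-padded nn.Conv2d; this is not the code on the pconv branch.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConvPad2d(nn.Conv2d):
    # Rescale border outputs by how much of the kernel overlapped real input,
    # instead of letting zero padding dilute them.
    def forward(self, x):
        out = super().forward(x)
        with torch.no_grad():
            ones_in = torch.ones(1, 1, x.size(2), x.size(3), device=x.device)
            ones_w = torch.ones(1, 1, *self.kernel_size, device=x.device)
            coverage = F.conv2d(ones_in, ones_w, stride=self.stride,
                                padding=self.padding, dilation=self.dilation)
            ratio = ones_w.numel() / coverage.clamp(min=1.0)
        if self.bias is not None:
            b = self.bias.view(1, -1, 1, 1)
            return (out - b) * ratio + b
        return out * ratio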

Question about ffmpeg-normalize

Hi, I ran into a problem when running ./normalize-resample.sh: it seems that the wav files in /tmp do not exist. I tried to fix it but failed; does anyone know where the problem is? I also ran the command "ffmpeg-normalize 1.wav -o 1-norm.wav" to test the normalization tool and hit the same issue. How can I make ffmpeg-normalize work?

embedder.pt with new dataset

Hi, if I wanted to use another dataset of audio files for training and testing (not the one used here), how can I generate the embedder.pt that I have to pass when I run trainer.py, or which one should I use? Thank you.

VoiceFilter realization problem

Seungwon, hello.

My name is Vladimir. I am a researcher at Speech Technology Center, Russia, St. Petersburg. Your implementation of the VoiceFilter algorithm (https://github.com/mindslab-ai/voicefilter) is very interesting to me and my colleagues. Unfortunately, we could not reproduce SDR dynamics like yours using your code with the standard settings in the default.yaml file. The SDR converged to 4.5 dB after 200k iterations (see the figure below), not to 10 dB after 65k as in your results. Could you tell us your training settings, as well as the neural network architecture that you used to get your result?

[figure: voicefilter_train_dynamics — SDR training curve]

Our Python environment:

  1. tqdm (ver. 4.32.1);
  2. numpy (ver. 1.16.3);
  3. torch (ver. 1.1.0);
  4. pyyaml (ver. 5.1);
  5. librosa (ver. 0.6.3);
  6. mir_eval (ver. 0.5);
  7. matplotlib (ver. 3.1.0);
  8. tensorboardX (ver. 1.7);
  9. ffmpeg (ver. 4.1.3);
  10. ffmpeg_normalize (1.14.0);
  11. python (ver. 3.6).

We use four NVIDIA GeForce GTX 1080 Ti GPUs to train one VoiceFilter model. The train-clean-100, train-clean-360, and train-other-500 subsets of LibriSpeech are used for training, and dev-clean is used for testing. We use the pretrained d-vector model to encode the target speaker.

We used your default configuration file:

audio:
  n_fft: 1200
  num_freq: 601
  sample_rate: 16000
  hop_length: 160
  win_length: 400
  min_level_db: -100.0
  ref_level_db: 20.0
  preemphasis: 0.97
  power: 0.30

model:
  lstm_dim: 400
  fc1_dim: 600
  fc2_dim: 601

data:
  train_dir: 'path/to/train/data'
  test_dir: 'path/to/test/data'
  audio_len: 3.0

form:
  input: '*-norm.wav'
  dvec: '*-dvec.txt' 
  target:
    wav: '*-target.wav'
    mag: '*-target.pt'
  mixed:
    wav: '*-mixed.wav'
    mag: '*-mixed.pt'

train:
  batch_size: 8
  num_workers: 16
  optimizer: 'adam'
  adam: 0.001
  adabound:
    initial: 0.001
    final: 0.05
  summary_interval: 1
  checkpoint_interval: 1000

log:
  chkpt_dir: 'chkpt'
  log_dir: 'logs'

embedder:
  num_mels: 40
  n_fft: 512
  emb_dim: 256
  lstm_hidden: 768
  lstm_layers: 3
  window: 80
  stride: 40

The neural network architecture was standard and followed your implementation.

Question when preprocessing wav files

Hi all, I encountered a problem when I tried to preprocess the wav files.
When I run

python generator.py -c [config yaml] -d [data directory] -o [output directory] -p [processes to run]

the command line displays error messages like this:

Traceback (most recent call last):
  File "generator.py", line 98, in <module>
    os.makedirs(args.out_dir, exist_ok=False)
TypeError: makedirs() got an unexpected keyword argument 'exist_ok'

Traceback (most recent call last):
  File "generator.py", line 128, in <module>
    for spk in train_folders]
TypeError: glob() got an unexpected keyword argument 'recursive'
Traceback (most recent call last):
  File "generator.py", line 150, in <module>
    with Pool(cpu_num) as p:
AttributeError: __exit__

When I looked into these errors, I found that they are all due to Python version incompatibility. My Python version is 2.7. Is generator.py only suitable for a Python 3.5+ environment? Is there any way to run this code in Python 2.7?

Looking forward to your reply!

Question about normalize-resample.sh

Thank you for your great work! I have a question that came up when I tried to run the project.
I set 'N' to my CPU core count, then ran 'chmod a+x normalize-resample.sh'.
However, after I ran './normalize-resample.sh', there was no output on the command line. Is this normal?
Furthermore, what does this script do?

Next, copy utils/normalize-resample.sh to root directory of unzipped data folder. Then:

vim normalize-resample.sh # set "N" as your CPU core number.
chmod a+x normalize-resample.sh
./normalize-resample.sh # this may take long

Looking forward to your reply!

Cannot reproduce reported SDR & retrain the speaker embedding

Hello, I have two questions about the implementation.

  1. I cannot reproduce the results reported in the README.
    I have trained for more than 400k steps on the LibriSpeech 360h + 100h clean datasets, using the embedder provided in this repo.
    However, I can only obtain a maximum SDR of 5.5.

To obtain data from LibriSpeech 360h + 100h, I generate the mixed audios for 360h and 100h separately, then put them together in another folder. Is this the right way to use more data to train the VoiceFilter module?

  2. I got worse results when retraining the speaker embedding.
    I retrained the embedder using the following repo: Speaker verification on 3 datasets: Librispeech, VoxCeleb1, VoxCeleb2.

Theoretically, I expected the VoiceFilter module to benefit from an embedder trained on more data, but the results got even worse. Can you share how you trained this embedder?

Thank you in advance!

Model implementation comprehension

Hello, I'm a master's student at ITMO University in Saint Petersburg, Russia.

Could you please explain what exactly this model implementation does?
As I understand it (variant 1), it takes as input the mixed sound of person A's voice and person B's voice, plus the clean voice of A (the same utterance as in the mix), and tries to extract it from the mixture (which seems strange, because that would be useless).
In the paper (variant 2), it is said that the model should take the mixture and a clean utterance of the target person, but NOT the same utterance as in the mixture. And this is the point.

When I looked at the train/test data made by the generator, I found that for every ******-mixed.wav there is a ******-target.wav with another voice (but not another phrase of the target person, as I thought it should be).

Am I right? Or what's going on here?

Waiting for your answer,
thank you!
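For context, here is a hedged sketch of how a training triple is meant to be built according to the paper: the reference (d-vector) utterance comes from the same speaker as the target, but is a different utterance from the one inside the mixture. The names are illustrative, not generator.py's exact code.

import random
import librosa

def make_example(speaker_a_utts, speaker_b_utts, sr=16000, sec=3.0):
    # Two *different* utterances of speaker A: one goes into the mixture,
    # the other is only used to compute the d-vector.
    target_path, ref_path = random.sample(speaker_a_utts, 2)
    interferer_path = random.choice(speaker_b_utts)
    n = int(sr * sec)
    target = librosa.load(target_path, sr=sr)[0][:n]
    interferer = librosa.load(interferer_path, sr=sr)[0][:n]
    mixed = target + interferer                   # the input the model must clean up
    reference = librosa.load(ref_path, sr=sr)[0]  # conditions the model via its d-vector
    return mixed, target, reference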
