
zerospeech's People

Contributors

bhigy, bshall

zerospeech's Issues

Running on multiple GPUs

Hello, I am currently trying to run the model, but I could not install amp for some unknown reason, so my training is very slow. I am running with batch size 16 (because of GPU memory) and it looks like 500,000 steps will take roughly 9 days or more. I would therefore like to train on multiple GPUs for faster training, and I am wondering whether that is possible. I am also curious what makes training so slow and heavy, because when I ran a VQ-VAE with a deconvolutional decoder, training finished in less than a day and used less GPU memory.
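
For what it's worth, a minimal sketch of one way to try multi-GPU data parallelism with plain PyTorch is shown below. The encoder/decoder modules and shapes are placeholders, not the repo's actual train.py code, and whether the autoregressive decoder benefits from this is untested:

import torch
from torch import nn

# placeholder modules standing in for the repo's encoder/decoder
encoder = nn.Sequential(nn.Conv1d(80, 256, kernel_size=3, padding=1))
decoder = nn.Sequential(nn.Linear(256, 256))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if torch.cuda.device_count() > 1:
    # DataParallel splits each batch across the visible GPUs, so the per-GPU batch
    # shrinks; the global batch size (and possibly the learning rate) may need tuning
    encoder = nn.DataParallel(encoder)
    decoder = nn.DataParallel(decoder)
encoder.to(device)
decoder.to(device)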

KeyError when preprocessing data

I set the data directory to datasets/2019/english, but when I run preprocess.py it raises:
KeyError: 'Accessing unknown key in a struct: dataset.in_dir'
I can't figure out how to solve it.
Could you help me?
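
As an aside, that message is OmegaConf's struct mode complaining that in_dir was never defined in the config, so it has to be supplied, typically as a command-line override; the exact key path used below is an assumption, not the repo's documented interface. A small sketch of the behaviour:

from omegaconf import OmegaConf

# illustrative config: a dataset group that defines no in_dir entry
cfg = OmegaConf.create({"dataset": {"language": "english"}})
OmegaConf.set_struct(cfg, True)  # struct mode: unknown keys raise instead of returning None

try:
    _ = cfg.dataset.in_dir  # never defined, so OmegaConf reports an unknown key in a struct
except Exception as err:
    print(type(err).__name__, err)

# supplying the key makes the lookup succeed, e.g. via an override such as
# `python preprocess.py dataset.in_dir=/path/to/raw/audio` (the key path is a guess)
OmegaConf.set_struct(cfg, False)
cfg.dataset.in_dir = "/path/to/raw/audio"
print(cfg.dataset.in_dir)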

ImportError when trying to compute ABX score

Hi,

I am trying to compute the ABX score for the pretrained model, English version. Using the command from the README, I first got the following error:
ModuleNotFoundError: No module named 'numba.decorators'
This is apparently a solved bug in librosa, but the fix hasn't made it into a release yet. I solved it by pinning all packages in requirements.txt to exact versions (== instead of >=, which, by the way, is probably better for reproducibility anyway).

Now I get another error:
ImportError: cannot import name 'Config' from 'omegaconf'
I fear it might again be a version-specific issue. Could you provide the exact version you use for each package (pip list)?
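
To make comparing environments easier, here is a small sketch (Python 3.8+; the package names are my guess at the ones involved in the two errors above) that prints installed versions in requirements.txt style:

from importlib.metadata import PackageNotFoundError, version

# packages most likely involved in the numba/omegaconf errors (an assumption)
for pkg in ["librosa", "numba", "omegaconf", "hydra-core"]:
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")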

How to perform Acoustic Unit Discovery?

Many thanks for your great repo!

In the introduction, I see that you've also attempted the Acoustic Unit Discovery task, but I haven't found the corresponding instructions in the README. Could you please give me a hint?

Vector dimension does not match other files

Thanks for sharing your work.
I tried to reproduce it, but when I run zerospeech2020-evaluate on the auxiliary embedding, it reports an error:

Traceback (most recent call last):
  File "/home/speech/anaconda3/envs/zerospeech2020/bin/zerospeech2020-evaluate", line 33, in <module>
    sys.exit(load_entry_point('zerospeech2020==0.2', 'console_scripts', 'zerospeech2020-evaluate')())
  File "/home/speech/anaconda3/envs/zerospeech2020/lib/python3.8/site-packages/zerospeech2020-0.2-py3.8.egg/zerospeech2020/evaluation/main.py", line 197, in main
  File "/home/speech/anaconda3/envs/zerospeech2020/lib/python3.8/site-packages/zerospeech2020-0.2-py3.8.egg/zerospeech2020/evaluation/evaluation_2019.py", line 60, in evaluate
  File "/home/speech/anaconda3/envs/zerospeech2020/lib/python3.8/site-packages/zerospeech2020-0.2-py3.8.egg/zerospeech2020/evaluation/evaluation_2019.py", line 60, in <dictcomp>
  File "/home/speech/anaconda3/envs/zerospeech2020/lib/python3.8/site-packages/zerospeech2020-0.2-py3.8.egg/zerospeech2020/evaluation/evaluation_2019.py", line 102, in _evaluate_single
  File "/home/speech/anaconda3/envs/zerospeech2020/lib/python3.8/site-packages/zerospeech2020-0.2-py3.8.egg/zerospeech2020/evaluation/bitrate.py", line 81, in bitrate
  File "/home/speech/anaconda3/envs/zerospeech2020/lib/python3.8/site-packages/zerospeech2020-0.2-py3.8.egg/zerospeech2020/read_2019_features.py", line 93, in read_all
  File "/home/speech/anaconda3/envs/zerospeech2020/lib/python3.8/site-packages/zerospeech2020-0.2-py3.8.egg/zerospeech2020/read_2019_features.py", line 22, in log_or_raise
zerospeech2020.read_2019_features.ReadZrsc2019Exception: Vector dimension does not match other files: S011_2504494747.txt

I checked the output files in test and auxiliary_embedding1, and they do have different dimensions. But I'm not sure how the parameters of the auxiliary embedding should be set or how to change them.
I hope you can answer this question.
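
A quick way to see where the mismatch is, sketched below; the submission paths follow the 2019 layout and are an assumption:

import numpy as np
from pathlib import Path

# the evaluator expects a consistent vector dimension across the files it reads
for subdir in ["test", "auxiliary_embedding1"]:
    dims = set()
    for f in Path("submission/2019/english", subdir).glob("*.txt"):
        feats = np.atleast_2d(np.loadtxt(f))
        dims.add(feats.shape[1])  # columns = embedding dimension of that file
    print(subdir, dims)           # more than one value here means inconsistent files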

About sampling rate 8 kHz

Hi, I really appreciate your great work!

However, I'd like to train your VQ-VAE on my own dataset, whose sampling rate is 8 kHz. Apart from the FFT length, hop length, window length, and so on, which parameters should I change?

Thank you very much for any help!
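
Not an authoritative answer, but the parameters that usually have to move together when halving the sampling rate are sketched below with generic librosa arguments; the values are illustrative and are not the repo's actual config keys:

import numpy as np
import librosa

sr = 8000
wav = np.zeros(sr, dtype=np.float32)  # one second of silence as a stand-in signal

mel = librosa.feature.melspectrogram(
    y=wav,
    sr=sr,
    n_fft=512,        # next power of two above the window length
    hop_length=80,    # 10 ms hop: 160 samples at 16 kHz becomes 80 at 8 kHz
    win_length=200,   # 25 ms window: 400 samples at 16 kHz becomes 200 at 8 kHz
    n_mels=80,
    fmin=50,
    fmax=4000,        # the Nyquist frequency is now 4 kHz, so fmax must not exceed it
)
print(mel.shape)      # (n_mels, n_frames)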

Why is batch_size = 52 instead of 32 or 64?

In config/training/default.yaml the batch_size is set to 52. In general, batch sizes are powers of 2, e.g. 8, 16, 32, 64, 128, and so on. Why was 52 chosen?

training:
  batch_size: 52
  sample_frames: 32
  n_steps: 500000
  optimizer:
    lr: 4e-4
  scheduler:
    milestones:
      - 300000
      - 400000
    gamma: 0.5
  checkpoint_interval: 20000
  n_workers: 8

Use as Universal Vocoder

@bshall Thank you for this implementation. Can I use this repository as a universal vocoder? I want to train Tacotron with VQ-VAE features. Will this work?

The speaker ID

Hi, I want to know the speaker ID of the raw audio because I want to get the output without speaker conversion.

RuntimeError: Running cythonize failed!

I am interested in voice cloning from van den Oord et al. and found this repo. I tried to install it on Google Colab and got:

!pip install -r ZeroSpeech/requirements.txt

Collecting numpy==1.18.2 (from -r ZeroSpeech/requirements.txt (line 1))
  Using cached numpy-1.18.2.zip (5.4 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  error: subprocess-exited-with-error
  
  × Preparing metadata (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> See above for output.
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  Preparing metadata (pyproject.toml) ... error
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

I also started a virtual machine in the cloud, cloned the repo, and tried to install, getting a similar error:

$ pip install -r requirements.txt 
Collecting numpy==1.18.2 (from -r requirements.txt (line 1))
  Using cached numpy-1.18.2.zip (5.4 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... error
  error: subprocess-exited-with-error
  
  × Preparing metadata (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [63 lines of output]
      Running from numpy source directory.
      <string>:461: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates
      /tmp/pip-install-d1oosqgf/numpy_df17a3083fa845ed943b7f14bde0105d/tools/cythonize.py:75: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
        required_version = LooseVersion('0.29.14')
      /tmp/pip-install-d1oosqgf/numpy_df17a3083fa845ed943b7f14bde0105d/tools/cythonize.py:77: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
        if LooseVersion(cython_version) < required_version:
      performance hint: _generator.pyx:811:41: Exception check after calling '_shuffle_int' will always require the GIL to be acquired.
      Possible solutions:
          1. Declare '_shuffle_int' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
          2. Use an 'int' return type on '_shuffle_int' to allow an error code to be returned.
      performance hint: _generator.pyx:840:45: Exception check after calling '_shuffle_int' will always require the GIL to be acquired.
      Possible solutions:
          1. Declare '_shuffle_int' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
          2. Use an 'int' return type on '_shuffle_int' to allow an error code to be returned.
      
      Error compiling Cython file:
      ------------------------------------------------------------
      ...
          cdef sfc64_state rng_state
      
          def __init__(self, seed=None):
              BitGenerator.__init__(self, seed)
              self._bitgen.state = <void *>&self.rng_state
              self._bitgen.next_uint64 = &sfc64_uint64
                                         ^
      ------------------------------------------------------------
      
      _sfc64.pyx:90:35: Cannot assign type 'uint64_t (*)(void *) except? -1 nogil' to 'uint64_t (*)(void *) noexcept nogil'. Exception values are incompatible. Suggest adding 'noexcept' to type 'uint64_t (void *) except? -1 nogil'.
      Processing numpy/random/_bounded_integers.pxd.in
      Processing numpy/random/_generator.pyx
      Processing numpy/random/_sfc64.pyx
      Traceback (most recent call last):
        File "/tmp/pip-install-d1oosqgf/numpy_df17a3083fa845ed943b7f14bde0105d/tools/cythonize.py", line 238, in <module>
          main()
        File "/tmp/pip-install-d1oosqgf/numpy_df17a3083fa845ed943b7f14bde0105d/tools/cythonize.py", line 234, in main
          find_process_files(root_dir)
        File "/tmp/pip-install-d1oosqgf/numpy_df17a3083fa845ed943b7f14bde0105d/tools/cythonize.py", line 225, in find_process_files
          process(root_dir, fromfile, tofile, function, hash_db)
        File "/tmp/pip-install-d1oosqgf/numpy_df17a3083fa845ed943b7f14bde0105d/tools/cythonize.py", line 191, in process
          processor_function(fromfile, tofile)
        File "/tmp/pip-install-d1oosqgf/numpy_df17a3083fa845ed943b7f14bde0105d/tools/cythonize.py", line 80, in process_pyx
          subprocess.check_call(
        File "/opt/conda/envs/bshall/lib/python3.10/subprocess.py", line 369, in check_call
          raise CalledProcessError(retcode, cmd)
      subprocess.CalledProcessError: Command '['/opt/conda/envs/bshall/bin/python3.10', '-m', 'cython', '-3', '--fast-fail', '-o', '_sfc64.c', '_sfc64.pyx']' returned non-zero exit status 1.
      Cythonizing sources
      Traceback (most recent call last):
        File "/opt/conda/envs/bshall/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/opt/conda/envs/bshall/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
        File "/opt/conda/envs/bshall/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 149, in prepare_metadata_for_build_wheel
          return hook(metadata_directory, config_settings)
        File "/tmp/pip-build-env-ux6kv4on/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 366, in prepare_metadata_for_build_wheel
          self.run_setup()
        File "/tmp/pip-build-env-ux6kv4on/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 480, in run_setup
          super(_BuildMetaLegacyBackend, self).run_setup(setup_script=setup_script)
        File "/tmp/pip-build-env-ux6kv4on/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 311, in run_setup
          exec(code, locals())
        File "<string>", line 488, in <module>
        File "<string>", line 469, in setup_package
        File "<string>", line 275, in generate_cython
      RuntimeError: Running cythonize failed!
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

I found an issue describing a similar problem with numpy 1.18.5, where --no-build-isolation is suggested, but I still get the error:

$ python -m pip install numpy==1.18.2 --no-build-isolation
Collecting numpy==1.18.2
  Using cached numpy-1.18.2.zip (5.4 MB)
  Preparing metadata (pyproject.toml) ... error
  error: subprocess-exited-with-error
  
  × Preparing metadata (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [63 lines of output]
      Running from numpy source directory.
      <string>:461: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates
      /tmp/pip-install-bhazifhj/numpy_638e295665ae4b3e88c648f1b94929e4/tools/cythonize.py:75: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
        required_version = LooseVersion('0.29.14')
      /tmp/pip-install-bhazifhj/numpy_638e295665ae4b3e88c648f1b94929e4/tools/cythonize.py:77: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
        if LooseVersion(cython_version) < required_version:
      performance hint: _generator.pyx:811:41: Exception check after calling '_shuffle_int' will always require the GIL to be acquired.
      Possible solutions:
          1. Declare '_shuffle_int' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
          2. Use an 'int' return type on '_shuffle_int' to allow an error code to be returned.
      performance hint: _generator.pyx:840:45: Exception check after calling '_shuffle_int' will always require the GIL to be acquired.
      Possible solutions:
          1. Declare '_shuffle_int' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
          2. Use an 'int' return type on '_shuffle_int' to allow an error code to be returned.
      
      Error compiling Cython file:
      ------------------------------------------------------------
      ...
          cdef sfc64_state rng_state
      
          def __init__(self, seed=None):
              BitGenerator.__init__(self, seed)
              self._bitgen.state = <void *>&self.rng_state
              self._bitgen.next_uint64 = &sfc64_uint64
                                         ^
      ------------------------------------------------------------
      
      _sfc64.pyx:90:35: Cannot assign type 'uint64_t (*)(void *) except? -1 nogil' to 'uint64_t (*)(void *) noexcept nogil'. Exception values are incompatible. Suggest adding 'noexcept' to type 'uint64_t (void *) except? -1 nogil'.
      Processing numpy/random/_bounded_integers.pxd.in
      Processing numpy/random/_generator.pyx
      Processing numpy/random/_sfc64.pyx
      Traceback (most recent call last):
        File "/tmp/pip-install-bhazifhj/numpy_638e295665ae4b3e88c648f1b94929e4/tools/cythonize.py", line 238, in <module>
          main()
        File "/tmp/pip-install-bhazifhj/numpy_638e295665ae4b3e88c648f1b94929e4/tools/cythonize.py", line 234, in main
          find_process_files(root_dir)
        File "/tmp/pip-install-bhazifhj/numpy_638e295665ae4b3e88c648f1b94929e4/tools/cythonize.py", line 225, in find_process_files
          process(root_dir, fromfile, tofile, function, hash_db)
        File "/tmp/pip-install-bhazifhj/numpy_638e295665ae4b3e88c648f1b94929e4/tools/cythonize.py", line 191, in process
          processor_function(fromfile, tofile)
        File "/tmp/pip-install-bhazifhj/numpy_638e295665ae4b3e88c648f1b94929e4/tools/cythonize.py", line 80, in process_pyx
          subprocess.check_call(
        File "/opt/conda/envs/bshall/lib/python3.10/subprocess.py", line 369, in check_call
          raise CalledProcessError(retcode, cmd)
      subprocess.CalledProcessError: Command '['/opt/conda/envs/bshall/bin/python', '-m', 'cython', '-3', '--fast-fail', '-o', '_sfc64.c', '_sfc64.pyx']' returned non-zero exit status 1.
      Cythonizing sources
      Traceback (most recent call last):
        File "/opt/conda/envs/bshall/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/opt/conda/envs/bshall/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
        File "/opt/conda/envs/bshall/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 149, in prepare_metadata_for_build_wheel
          return hook(metadata_directory, config_settings)
        File "/opt/conda/envs/bshall/lib/python3.10/site-packages/setuptools/build_meta.py", line 366, in prepare_metadata_for_build_wheel
          self.run_setup()
        File "/opt/conda/envs/bshall/lib/python3.10/site-packages/setuptools/build_meta.py", line 480, in run_setup
          super(_BuildMetaLegacyBackend, self).run_setup(setup_script=setup_script)
        File "/opt/conda/envs/bshall/lib/python3.10/site-packages/setuptools/build_meta.py", line 311, in run_setup
          exec(code, locals())
        File "<string>", line 488, in <module>
        File "<string>", line 469, in setup_package
        File "<string>", line 275, in generate_cython
      RuntimeError: Running cythonize failed!
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.

One possible problem is that I'm not sure NVIDIA Apex is installed, so I followed this thread to install it on Colab. I get this error:

DEPRECATION: --build-option and --global-option are deprecated. pip 23.3 will enforce this behaviour change. A possible replacement is to use --config-settings. Discussion can be found at https://github.com/pypa/pip/issues/11859
WARNING: Implying --no-binary=:all: due to the presence of --build-option / --global-option. 
Processing /content/gdrive/MyDrive/apex
  Running command pip subprocess to install build dependencies
  Using pip 23.1.2 from /usr/local/lib/python3.10/dist-packages/pip (python 3.10)
  Collecting setuptools
    Using cached setuptools-69.0.3-py3-none-any.whl
  Collecting wheel
    Using cached wheel-0.42.0-py3-none-any.whl
  Installing collected packages: wheel, setuptools
    Creating /tmp/pip-build-env-f0jjytpy/overlay/local/bin
    changing mode of /tmp/pip-build-env-f0jjytpy/overlay/local/bin/wheel to 755
  ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
  ipython 7.34.0 requires jedi>=0.16, which is not installed.
  lida 0.0.10 requires fastapi, which is not installed.
  lida 0.0.10 requires kaleido, which is not installed.
  lida 0.0.10 requires python-multipart, which is not installed.
  lida 0.0.10 requires uvicorn, which is not installed.
  Successfully installed setuptools-69.0.3 wheel-0.42.0
  Installing build dependencies ... done
  Running command Getting requirements to build wheel
  Traceback (most recent call last):
    File "/usr/local/lib/python3.10/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
      main()
    File "/usr/local/lib/python3.10/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
      json_out['return_val'] = hook(**hook_input['kwargs'])
    File "/usr/local/lib/python3.10/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
      return hook(config_settings)
    File "/tmp/pip-build-env-f0jjytpy/overlay/local/lib/python3.10/dist-packages/setuptools/build_meta.py", line 325, in get_requires_for_build_wheel
      return self._get_build_requires(config_settings, requirements=['wheel'])
    File "/tmp/pip-build-env-f0jjytpy/overlay/local/lib/python3.10/dist-packages/setuptools/build_meta.py", line 295, in _get_build_requires
      self.run_setup()
    File "/tmp/pip-build-env-f0jjytpy/overlay/local/lib/python3.10/dist-packages/setuptools/build_meta.py", line 311, in run_setup
      exec(code, locals())
    File "<string>", line 5, in <module>
  ModuleNotFoundError: No module named 'packaging'
  error: subprocess-exited-with-error
  
  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> See above for output.
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  full command: /usr/bin/python3 /usr/local/lib/python3.10/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py get_requires_for_build_wheel /tmp/tmpp0zh_uyr
  cwd: /content/gdrive/MyDrive/apex
  Getting requirements to build wheel ... error
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

I installed packaging with !pip install packaging, but I get the same error.

How can I install the numpy requirement so that I can run this project?
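
For context, numpy 1.18.x only ships prebuilt wheels up to Python 3.8, so on Python 3.10 pip falls back to building it from source, and that build breaks with a modern Cython (the error above). A small guard you could run before installing; a sketch, not an official fix:

import sys

major, minor = sys.version_info[:2]
if (major, minor) > (3, 8):
    # numpy==1.18.2 has no wheel for this interpreter, so pip will attempt a source build
    print(f"Python {major}.{minor}: use a Python 3.7/3.8 environment "
          "or relax the numpy pin in requirements.txt.")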

How did you install Apex?

I am trying to run your code, but it fails with an error related to Apex.

I installed Apex with:

conda install -c conda-forge nvidia-apex

But I get the following error:

Traceback (most recent call last):
  File "train.py", line 123, in <module>
    train_model()
  File "/share/mini1/sw/std/python/anaconda3-2019.07/v3.7/envs/torch_1.4/lib/python3.6/site-packages/hydra/main.py", line 24, in decorated_main
    strict=strict,
  File "/share/mini1/sw/std/python/anaconda3-2019.07/v3.7/envs/torch_1.4/lib/python3.6/site-packages/hydra/_internal/utils.py", line 174, in run_hydra
    overrides=args.overrides,
  File "/share/mini1/sw/std/python/anaconda3-2019.07/v3.7/envs/torch_1.4/lib/python3.6/site-packages/hydra/_internal/hydra.py", line 86, in run
    job_subdir_key=None,
  File "/share/mini1/sw/std/python/anaconda3-2019.07/v3.7/envs/torch_1.4/lib/python3.6/site-packages/hydra/plugins/common/utils.py", line 109, in run_job
    ret.return_value = task_function(task_cfg)
  File "train.py", line 50, in train_model
    gamma=cfg.training.scheduler.gamma)
  File "/share/mini1/sw/std/python/anaconda3-2019.07/v3.7/envs/torch_1.4/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 382, in __init__
    super(MultiStepLR, self).__init__(optimizer, last_epoch)
  File "/share/mini1/sw/std/python/anaconda3-2019.07/v3.7/envs/torch_1.4/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 73, in __init__
    self.optimizer.step = with_counter(self.optimizer.step)
  File "/share/mini1/sw/std/python/anaconda3-2019.07/v3.7/envs/torch_1.4/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 55, in with_counter
    instance_ref = weakref.ref(method.__self__)
AttributeError: 'function' object has no attribute '__self__'

A similar bug was reported in NVIDIA/apex#552, but there is no fix yet.

Time per epoch

Hi! I tried to train your model on the VCTK dataset, but I think my training speed is slow: 15 minutes per epoch for 28k samples with a batch size of 25, training on a single 1080 Ti GPU.
Please tell me, is this a normal speed? Thank you!

How to train the model without using apex amp (automatic mixed precision)?

Hi,

Thanks for providing the repository and instructions!
I was wondering whether any specific problems might occur if we don't use amp for training. Should I expect different behaviour or results compared to what is reported in the paper? If so, could you please specify?
I am trying to train the code without amp, so I removed the lines that use amp for saving checkpoints and for wrapping the encoder, decoder and optimizer. Only one question remains.

In the code "train.py" line 99:
torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer), 1)

I replaced "amp.master_params(optimizer)" with all of the model parameters, as follows:

torch.nn.utils.clip_grad_norm_( chain(encoder.parameters(), decoder.parameters() ), 1)

Is this the correct way to skip amp for this line? Here we apply gradient clipping to all of the model parameters rather than the master parameters that amp maintains for the optimizer.
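
For reference, here is a minimal sketch of that training step without amp; the modules, shapes, and loss are placeholders rather than the repo's actual train.py:

import itertools
import torch

# placeholder encoder/decoder standing in for the repo's models
encoder = torch.nn.Linear(80, 64)
decoder = torch.nn.Linear(64, 256)
optimizer = torch.optim.Adam(
    itertools.chain(encoder.parameters(), decoder.parameters()), lr=4e-4
)

x = torch.randn(8, 80)
target = torch.randn(8, 256)

optimizer.zero_grad()
loss = torch.nn.functional.mse_loss(decoder(encoder(x)), target)
# without amp there is no loss scaling, so backpropagate the raw loss ...
loss.backward()
# ... and clip over the actual model parameters instead of amp.master_params(optimizer)
torch.nn.utils.clip_grad_norm_(
    itertools.chain(encoder.parameters(), decoder.parameters()), 1
)
optimizer.step()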

RuntimeWarning: invalid value encountered in log

Hi @bshall ,

I'm still trying to replicate your results. I now get some scores, not exactly the same as the ones you report but close enough. However, I also get warnings during the evaluation, and I am wondering whether they might explain the small difference. Do you get anything like this:

/home/bjrhigy/opt/miniconda3/envs/zerospeech2020/lib/python3.8/site-packages/ABXpy/distances/metrics/kullback_leibler.py:15: RuntimeWarning: invalid value encountered in log
  pq = np.dot(x, np.log(y.transpose()))
/home/bjrhigy/opt/miniconda3/envs/zerospeech2020/lib/python3.8/site-packages/ABXpy/distances/metrics/kullback_leibler.py:17: RuntimeWarning: invalid value encountered in log
  np.sum(x * np.log(x), axis=1).reshape(x.shape[0], 1), (1, y.shape[0]))
/home/bjrhigy/opt/miniconda3/envs/zerospeech2020/lib/python3.8/site-packages/ABXpy/score.py:113: RuntimeWarning: invalid value encountered in less
  scores = (np.int8(dis_AX < dis_BX) -
/home/bjrhigy/opt/miniconda3/envs/zerospeech2020/lib/python3.8/site-packages/ABXpy/score.py:114: RuntimeWarning: invalid value encountered in greater
  np.int8(dis_AX > dis_BX))

The exact command I ran:
zerospeech2020-evaluate 2019 submission/ -o eval.json -D ~/corpora/zerospeech2020
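
For context, those warnings come from np.log hitting non-positive values inside the KL-based ABX metric, which assumes each frame is a proper probability vector. A small sketch of the effect and a common mitigation (the epsilon floor is my own suggestion, not part of the official toolkit):

import numpy as np

x = np.array([[0.9, 0.2, -0.1]])  # not a valid distribution: one entry is negative
y = np.array([[0.5, 0.4, 0.1]])

pq = np.dot(x, np.log(y.transpose()))    # fine: y is strictly positive
entropy = np.sum(x * np.log(x), axis=1)  # RuntimeWarning: invalid value encountered in log
print(entropy)                           # [nan]

# one common fix: floor and renormalise the features before evaluation
eps = 1e-8
x_safe = np.clip(x, eps, None)
x_safe /= x_safe.sum(axis=1, keepdims=True)
print(np.sum(x_safe * np.log(x_safe), axis=1))  # finite now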
