
denoising_dihard18's Introduction

A quick-use package for speech enhancement based on our DIHARD18 system

Original founder: @staplesinLA

Major contributors: @nryant, @mmmaat (many thanks!)

This repository provides tools to reproduce the enhancement results of the speech pre-processing part of our DIHARD18 system [1]. The deep-learning based denoising model is trained on 400 hours of English and Mandarin audio; for full details see [1,2,3]. Currently the tools accept only 16 kHz, 16-bit, mono-channel WAV files; please convert your audio in advance.
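As a convenience, a file can be checked against these format requirements with Python's standard-library wave module before processing (a minimal sketch; the function name is our own):

```python
import wave

def check_wav_format(path):
    """Return True if `path` is a 16 kHz, 16-bit, mono WAV file."""
    with wave.open(path, "rb") as w:
        return (w.getframerate() == 16000
                and w.getsampwidth() == 2   # 16-bit = 2 bytes per sample
                and w.getnchannels() == 1)
```

The conversion itself can be done with common tools such as sox (e.g. `sox in.wav -r 16000 -b 16 -c 1 out.wav`) or ffmpeg.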

Additionally, this package integrates a voice activity detection (VAD) module based on py-webrtcvad, which provides a Python interface to the WebRTC VAD. The default parameters are tuned on the development set of DIHARD18.
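py-webrtcvad consumes fixed-length frames of 16-bit mono PCM (10, 20, or 30 ms). The sketch below shows how audio is sliced into the 30 ms frames implied by the default hop length; the commented lines show the webrtcvad call pattern, which requires the package to be installed:

```python
def frame_generator(pcm_bytes, sample_rate=16000, frame_ms=30):
    """Yield successive frames of `frame_ms` milliseconds of 16-bit mono PCM."""
    frame_bytes = int(sample_rate * frame_ms / 1000) * 2  # 2 bytes per sample
    for start in range(0, len(pcm_bytes) - frame_bytes + 1, frame_bytes):
        yield pcm_bytes[start:start + frame_bytes]

# With webrtcvad installed (matching the defaults described above):
# import webrtcvad
# vad = webrtcvad.Vad(3)  # MODE: aggressiveness, 0 (least) to 3 (most)
# flags = [vad.is_speech(f, 16000) for f in frame_generator(pcm_bytes)]
```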

[1] Sun, Lei, et al. "Speaker Diarization with Enhancing Speech for the First DIHARD Challenge." Proc. Interspeech 2018 (2018): 2793-2797.

[2] Gao, Tian, et al. "Densely Connected Progressive Learning for LSTM-based Speech Enhancement." 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018.

[3] Sun, Lei, et al. "Multiple-target Deep Learning for LSTM-RNN Based Speech Enhancement." 2017 Hands-free Speech Communications and Microphone Arrays (HSCMA). IEEE, 2017.

How to use it?

  1. Install all dependencies (Python and pip must already be installed on your system):

     sudo apt-get install openmpi-bin
     pip install numpy scipy librosa
     pip install cntk-gpu
     pip install webrtcvad
     pip install wurlitzer
     pip install joblib
    

    Verify that CNTK installed successfully by querying its version:

     python -c "import cntk; print(cntk.__version__)"
    
  2. Clone the speech enhancement repository:

     git clone https://github.com/staplesinLA/denoising_DIHARD18.git
    
  3. Install the pretrained model:

     cd denoising_DIHARD18
     ./install_model.sh
    
  4. Specify parameters in run_eval.sh:

    • For the speech enhancement tool:

        WAV_DIR=<path to original wavs>
        SE_WAV_DIR=<path to output dir>
        USE_GPU=<true|false, if false use CPU, default=true>
        GPU_DEVICE_ID=<GPU device id on your machine, default=0>
        TRUNCATE_MINUTES=<audio chunk length in minutes, default=10>
      

      We recommend using a GPU for decoding, as it is much faster than the CPU. If decoding fails with a "CUDA failure: out of memory" error, reduce the value of TRUNCATE_MINUTES.

    • For the VAD tool:

        VAD_DIR=<path to output dir>
        HOPLENGTH=<duration in milliseconds of VAD frame size, default=30>
        MODE=<WebRTC aggressiveness, default=3>
        NJOBS=<number of parallel processes, default=1>
      
  5. Execute run_eval.sh:

     ./run_eval.sh
    

Use within Docker

  1. Install Docker

  2. Install nvidia-docker, a plugin that lets Docker containers use your GPUs

  3. Build the image using the provided Dockerfile:

     docker build -t dihard18 .
    
  4. Run the evaluation script within docker with the following commands:

     docker run -it --rm --runtime=nvidia -v /abs/path/to/dihard/data:/data dihard18 /bin/bash
     # you are now in the docker machine
     ./run_eval.sh  # you can edit the script to modify the parameters before launching it
    
    • The option --runtime=nvidia enables the use of GPUs within Docker

    • The option -v /absolute/path/to/dihard/data:/data mounts the host folder containing your data at /data inside the container. The directory /absolute/path/to/dihard/data must contain a wav/ subdirectory. The results will be stored in the wav_pn_enhanced/ and vad/ subdirectories.

Details

  1. Speech enhancement model

    The scripts accept only 16 kHz, 16-bit, mono-channel WAV files; please convert your audio in advance. The input feature is the log-power spectrum (LPS), which makes waveform reconstruction straightforward. The model has two outputs, an ideal ratio mask ("IRM") and an enhanced "LPS"; the final system uses the IRM output, which applies a mask directly to the original speech. Compared with the LPS output, it yields better speech intelligibility and fewer distortions.
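    To illustrate how an IRM output is used, the sketch below applies a ratio mask to a noisy complex STFT: the mask scales magnitudes in [0, 1] while the noisy phase is kept, which is why IRM-based enhancement tends to introduce fewer artifacts than resynthesizing from a predicted LPS (illustrative numpy code; the actual STFT settings and model details live in the repository's decode scripts):

```python
import numpy as np

def apply_irm(noisy_stft, irm):
    """Apply an ideal-ratio-mask estimate to a noisy complex STFT.

    The mask attenuates each time-frequency bin's magnitude; the
    noisy phase is reused for reconstruction.
    """
    irm = np.clip(irm, 0.0, 1.0)          # a ratio mask lies in [0, 1]
    magnitude = np.abs(noisy_stft)
    phase = np.angle(noisy_stft)
    return irm * magnitude * np.exp(1j * phase)
```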

  2. VAD module

    The optional parameters of the WebRTC VAD are the aggressiveness mode (default=3) and the hop length (default=30 ms). The default settings are tuned on the development set of the First DIHARD challenge. On the development set, VAD metrics on original versus processed speech compare as follows:

    VAD (default)   Original_Dev   Processed_Dev
    Miss            11.85          7.21
    FA              6.12           6.17
    Total           17.97          13.38

    And the performance on the evaluation set:

    VAD (default)   Original_Eval   Processed_Eval
    Miss            17.49           8.89
    FA              6.36            6.40
    Total           23.85           15.29
  3. Effectiveness

    The contribution of a single sub-module to the final speaker diarization performance is difficult to isolate. However, the enhancement-based pre-processing is clearly beneficial to at least VAD performance. Users can also tune the default VAD parameters to obtain the desired trade-off between miss and false-alarm rates.

denoising_dihard18's Issues

RuntimeError: Failed to parse Dictionary from the input stream.

Hello,
I tried running the denoising program, but it seems to fail to load the model. Can you give me a hand? My environment: Ubuntu 16.04, Python 3.6, CNTK 2.7. The error log is shown below.

Processing file: /home/l/denoising_DIHARD18/data/DH_0002.wav, segment: 1/3.
ERROR: Problem encountered while processing file "/home/l/denoising_DIHARD18/data/DH_0002.wav". Skipping. Full error output:
Traceback (most recent call last):
File "main_denoising.py", line 95, in run
super(Process, self).run()
File "/home/l/anaconda3/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "main_denoising.py", line 194, in denoise_wav
gpu_id)
File "/home/l/denoising_DIHARD18/decode_model.py", line 68, in decode_model
model_dnn = load_model(MODELF)
File "/home/l/anaconda3/lib/python3.6/site-packages/cntk/internal/swig_helper.py", line 69, in wrapper
result = f(*args, **kwds)
File "/home/l/anaconda3/lib/python3.6/site-packages/cntk/ops/functions.py", line 1721, in load_model
return Function.load(model, device, format)
File "/home/l/anaconda3/lib/python3.6/site-packages/cntk/internal/swig_helper.py", line 69, in wrapper
result = f(*args, **kwds)
File "/home/l/anaconda3/lib/python3.6/site-packages/cntk/ops/functions.py", line 1635, in load
return cntk_py.Function.load(str(model), device, format.value)
RuntimeError: Failed to parse Dictionary from the input stream.

[CALL STACK]
[0x7ff44b946ac9] + 0x7c5ac9
[0x7ff44bb383d0] CNTK::operator>>(std::istream&, CNTK::Dictionary&) + 0xa0
[0x7ff44b94be83] CNTK::Function:: Load (std::__cxx11::basic_string<wchar_t,std::char_traits<wchar_t>,std::allocator<wchar_t>> const&, CNTK::DeviceDescriptor const&, CNTK::ModelFormat) + 0xa3
[0x7ff44c9188d6] + 0x1cb8d6
[0x5592001e6ad1] _PyCFunction_FastCallDict + 0x91
[0x55920027667c] + 0x19e67c
[0x559200298cba] _PyEval_EvalFrameDefault + 0x2fa
[0x559200271459] PyEval_EvalCodeEx + 0x329
[0x559200272376] + 0x19a376
[0x5592001e699e] PyObject_Call + 0x3e
[0x55920029a470] _PyEval_EvalFrameDefault + 0x1ab0
[0x55920026fc26] + 0x197c26
[0x559200270941] + 0x198941
[0x559200276755] + 0x19e755
[0x559200298cba] _PyEval_EvalFrameDefault + 0x2fa
[0x559200271459] PyEval_EvalCodeEx + 0x329
[0x559200272376] + 0x19a376
[0x5592001e699e] PyObject_Call + 0x3e
[0x55920029a470] _PyEval_EvalFrameDefault + 0x1ab0
[0x55920026fc26] + 0x197c26
[0x559200270941] + 0x198941
[0x559200276755] + 0x19e755
[0x559200298cba] _PyEval_EvalFrameDefault + 0x2fa
[0x55920026fa94] + 0x197a94
[0x559200270941] + 0x198941
[0x559200276755] + 0x19e755
[0x559200298cba] _PyEval_EvalFrameDefault + 0x2fa
[0x559200271459] PyEval_EvalCodeEx + 0x329
[0x559200272376] + 0x19a376
[0x5592001e699e] PyObject_Call + 0x3e
[0x55920029a470] _PyEval_EvalFrameDefault + 0x1ab0
[0x55920027070b] + 0x19870b
[0x559200276755] + 0x19e755
[0x559200298cba] _PyEval_EvalFrameDefault + 0x2fa
[0x55920026fc26] + 0x197c26
[0x559200270941] + 0x198941
[0x559200276755] + 0x19e755
[0x559200298cba] _PyEval_EvalFrameDefault + 0x2fa
[0x55920027070b] + 0x19870b
[0x559200276755] + 0x19e755
[0x559200298cba] _PyEval_EvalFrameDefault + 0x2fa
[0x55920027070b] + 0x19870b
[0x559200276755] + 0x19e755
[0x559200298cba] _PyEval_EvalFrameDefault + 0x2fa
[0x559200270d7b] _PyFunction_FastCallDict + 0x11b
[0x5592001e6f5f] _PyObject_FastCallDict + 0x26f
[0x5592001eba03] _PyObject_Call_Prepend + 0x63
[0x5592001e699e] PyObject_Call + 0x3e
[0x55920024302b] + 0x16b02b
[0x5592002769b7] + 0x19e9b7
[0x5592001e6d7b] _PyObject_FastCallDict + 0x8b
[0x5592002767ce] + 0x19e7ce
[0x559200298cba] _PyEval_EvalFrameDefault + 0x2fa
[0x55920027070b] + 0x19870b
[0x559200276755] + 0x19e755
[0x559200298cba] _PyEval_EvalFrameDefault + 0x2fa
[0x55920027070b] + 0x19870b
[0x559200276755] + 0x19e755
[0x559200298cba] _PyEval_EvalFrameDefault + 0x2fa
[0x55920027070b] + 0x19870b
[0x559200276755] + 0x19e755
[0x559200298cba] _PyEval_EvalFrameDefault + 0x2fa
[0x55920026fa94] + 0x197a94
[0x559200270941] + 0x198941
[0x559200276755] + 0x19e755
[0x559200299a7a] _PyEval_EvalFrameDefault + 0x10ba
[0x55920027070b] + 0x19870b
[0x559200276755] + 0x19e755
[0x559200298cba] _PyEval_EvalFrameDefault + 0x2fa
[0x559200271459] PyEval_EvalCodeEx + 0x329
[0x5592002721ec] PyEval_EvalCode + 0x1c
[0x5592002ec9a4] + 0x2149a4
[0x5592002ecda1] PyRun_FileExFlags + 0xa1
[0x5592002ecfa4] PyRun_SimpleFileExFlags + 0x1c4
[0x5592002f0a9e] Py_Main + 0x63e
[0x5592001b84be] main + 0xee
[0x7ff461639830] __libc_start_main + 0xf0
[0x55920029f773] + 0x1c7773

Error while processing file

Hi, I am getting an error while processing a file.

Processing file: /home/denoising_DIHARD18-master/data/inp1.wav, segment: 1/1.
ERROR: Problem encountered while processing file "/home/denoising_DIHARD18-master/data/inp1.wav". Skipping. Full error output:
Traceback (most recent call last):
File "main_denoising.py", line 95, in run
super(Process, self).run()
File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/denoising_DIHARD18-master/decode_model.py", line 68, in decode_model
model_dnn = load_model(MODELF)
File "/usr/lib/python3.5/contextlib.py", line 77, in exit
self.gen.throw(type, value, traceback)
File "/home/.local/share/virtualenvs/env/lib/python3.5/site-packages/wurlitzer.py", line 307, in pipes
yield stdout_r, stderr_r
File "/home/denoising_DIHARD18-master/decode_model.py", line 68, in decode_model
model_dnn = load_model(MODELF)
File "/home/.local/share/virtualenvs/env/lib/python3.5/site-packages/cntk/internal/swig_helper.py", line 69, in wrapper
result = f(*args, **kwds)
File "/home/.local/share/virtualenvs/env/lib/python3.5/site-packages/cntk/ops/functions.py", line 1721, in load_model
return Function.load(model, device, format)
File "/home/.local/share/virtualenvs/env/lib/python3.5/site-packages/cntk/internal/swig_helper.py", line 69, in wrapper
result = f(*args, **kwds)
File "/home/.local/share/virtualenvs/env/lib/python3.5/site-packages/cntk/ops/functions.py", line 1635, in load
return cntk_py.Function.load(str(model), device, format.value)
RuntimeError: Failed to parse Dictionary from the input stream.

error while installing librosa

Hi,

When building the Docker image, an error is raised while installing librosa, due to a failure to build the wheel for llvmlite 0.32.1.

I had to add pip install llvmlite==0.31.0 before pip install librosa webrtcvad to make it build correctly.

Memory Leak both on CPU and GPU

Hello,

I am experiencing a memory leak when running the evaluation (run_eval.sh script).
I have this issue both with and without the provided Docker image, and for both CPU and GPU evaluation.
My GPU goes OOM even when the file duration is only a couple of seconds. When using the CPU, the script ends up using all my RAM after processing a few hundred WAVs of 4-5 seconds each.

Below is my traceback for USE_GPU=true, running the run_eval script with the provided Docker image. Does anyone have any clue why this is happening? Is the script not suited to processing many short WAV files?

UPDATE: I experience the same issue even with long WAV files, and regardless of the TRUNCATE_MINUTES value.

`About to throw exception 'CUDA failure 2: out of memory ; GPU=0 ; hostname=fe609e0f3385 ; expr=cudaMalloc((void**) &deviceBufferPtr, sizeof(AllocatedElemType) * AsMultipleOf(numElements, 2))'
CUDA failure 2: out of memory ; GPU=0 ; hostname=fe609e0f3385 ; expr=cudaMalloc((void**) &deviceBufferPtr, sizeof(AllocatedElemType) * AsMultipleOf(numElements, 2))
Traceback (most recent call last):
File "main_denoising.py", line 121, in
args.gpu_id, args.truncate_minutes)
File "main_denoising.py", line 89, in main_denoising
decode_model(use_gpu=use_gpu, gpu_id=gpu_id)
File "/dihard18/decode_model.py", line 39, in decode_model
out_noisy_fea = output_nodes.eval(real_noisy_fea)
File "/root/anaconda3/envs/dihard18/lib/python3.5/site-packages/cntk/ops/functions.py", line 733, in eval
_, output_map = self.forward(arguments, outputs, device=device, as_numpy=as_numpy)
File "/root/anaconda3/envs/dihard18/lib/python3.5/site-packages/cntk/internal/swig_helper.py", line 69, in wrapper
result = f(*args, **kwds)
File "/root/anaconda3/envs/dihard18/lib/python3.5/site-packages/cntk/ops/functions.py", line 867, in forward
keep_for_backward)
File "/root/anaconda3/envs/dihard18/lib/python3.5/site-packages/cntk/cntk_py.py", line 1980, in _forward
return _cntk_py.Function__forward(self, args)
RuntimeError: CUDA failure 2: out of memory ; GPU=0 ; hostname=fe609e0f3385 ; expr=cudaMalloc((void**) &deviceBufferPtr, sizeof(AllocatedElemType) * AsMultipleOf(numElements, 2))

[CALL STACK]
[0x7f2ad32d7e89] + 0x732e89
[0x7f2acb69ef4f] + 0xec4f4f
[0x7f2acb6f2347] float* Microsoft::MSR::CNTK::TracingGPUMemoryAllocator:: Allocate (int, unsigned long, unsigned long) + 0x57
[0x7f2acb6f2676] Microsoft::MSR::CNTK::GPUMatrix:: Resize (unsigned long, unsigned long, bool) + 0xf6
[0x7f2acb5f5509] Microsoft::MSR::CNTK::Matrix:: Resize (unsigned long, unsigned long, unsigned long, bool, bool) + 0xc9
[0x7f2ad3778b09] Microsoft::MSR::CNTK::LearnableParameter:: InitShape (Microsoft::MSR::CNTK::TensorShape const&) + 0x309
[0x7f2ad38c7d72] std::shared_ptr<Microsoft::MSR::CNTK::ComputationNode> Microsoft::MSR::CNTK::ComputationNetworkBuilder:: TypedCreateLearnableParameter (std::__cxx11::basic_string<wchar_t,std::char_traits<wchar_t>,std::allocator<wchar_t>> const&, Microsoft::MSR::CNTK::TensorShape const&) + 0x1b2
[0x7f2ad3544f35] std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase CNTK::CompositeFunction:: CreateLearnableParameterFromVariable (CNTK::Variable const&, Microsoft::MSR::CNTK::ComputationNetworkBuilder&, CNTK::NDShape const&, std::__cxx11::basic_string<wchar_t,std::char_traits<wchar_t>,std::allocator<wchar_t>> const&) + 0x65
[0x7f2ad35ee328] std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase CNTK::CompositeFunction:: GetNode (CNTK::Variable const&, std::shared_ptrMicrosoft::MSR::CNTK::ComputationNetwork&, Microsoft::MSR::CNTK::ComputationNetworkBuilder&, std::unordered_map<CNTK::Variable,CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,CNTK::Variable>>> const&, std::unordered_map<CNTK::Variable,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase>>>&, std::unordered_map<CNTK::Variable,bool,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,bool>>>&, std::unordered_set<CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocatorCNTK::Variable> const&, bool) + 0x2e8
[0x7f2ad35f2e08] std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase CNTK::CompositeFunction:: GetOutputVariableNode (CNTK::Variable const&, std::shared_ptrMicrosoft::MSR::CNTK::ComputationNetwork&, Microsoft::MSR::CNTK::ComputationNetworkBuilder&, std::unordered_map<CNTK::Variable,CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,CNTK::Variable>>> const&, std::unordered_map<CNTK::Variable,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase>>>&, std::unordered_map<CNTK::Variable,bool,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,bool>>>&, std::unordered_set<CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocatorCNTK::Variable> const&, bool) + 0x238
[0x7f2ad35ee65e] std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase CNTK::CompositeFunction:: GetNode (CNTK::Variable const&, std::shared_ptrMicrosoft::MSR::CNTK::ComputationNetwork&, Microsoft::MSR::CNTK::ComputationNetworkBuilder&, std::unordered_map<CNTK::Variable,CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,CNTK::Variable>>> const&, std::unordered_map<CNTK::Variable,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase>>>&, std::unordered_map<CNTK::Variable,bool,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,bool>>>&, std::unordered_set<CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocatorCNTK::Variable> const&, bool) + 0x61e
[0x7f2ad35f2e08] std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase CNTK::CompositeFunction:: GetOutputVariableNode (CNTK::Variable const&, std::shared_ptrMicrosoft::MSR::CNTK::ComputationNetwork&, Microsoft::MSR::CNTK::ComputationNetworkBuilder&, std::unordered_map<CNTK::Variable,CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,CNTK::Variable>>> const&, std::unordered_map<CNTK::Variable,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase>>>&, std::unordered_map<CNTK::Variable,bool,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,bool>>>&, std::unordered_set<CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocatorCNTK::Variable> const&, bool) + 0x238
[0x7f2ad35ee65e] std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase CNTK::CompositeFunction:: GetNode (CNTK::Variable const&, std::shared_ptrMicrosoft::MSR::CNTK::ComputationNetwork&, Microsoft::MSR::CNTK::ComputationNetworkBuilder&, std::unordered_map<CNTK::Variable,CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,CNTK::Variable>>> const&, std::unordered_map<CNTK::Variable,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase>>>&, std::unordered_map<CNTK::Variable,bool,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,bool>>>&, std::unordered_set<CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocatorCNTK::Variable> const&, bool) + 0x61e
[0x7f2ad35f2e08] std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase CNTK::CompositeFunction:: GetOutputVariableNode (CNTK::Variable const&, std::shared_ptrMicrosoft::MSR::CNTK::ComputationNetwork&, Microsoft::MSR::CNTK::ComputationNetworkBuilder&, std::unordered_map<CNTK::Variable,CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,CNTK::Variable>>> const&, std::unordered_map<CNTK::Variable,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase>>>&, std::unordered_map<CNTK::Variable,bool,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,bool>>>&, std::unordered_set<CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocatorCNTK::Variable> const&, bool) + 0x238
[0x7f2ad35ee65e] std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase CNTK::CompositeFunction:: GetNode (CNTK::Variable const&, std::shared_ptrMicrosoft::MSR::CNTK::ComputationNetwork&, Microsoft::MSR::CNTK::ComputationNetworkBuilder&, std::unordered_map<CNTK::Variable,CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,CNTK::Variable>>> const&, std::unordered_map<CNTK::Variable,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase>>>&, std::unordered_map<CNTK::Variable,bool,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,bool>>>&, std::unordered_set<CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocatorCNTK::Variable> const&, bool) + 0x61e
[0x7f2ad35f2e08] std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase CNTK::CompositeFunction:: GetOutputVariableNode (CNTK::Variable const&, std::shared_ptrMicrosoft::MSR::CNTK::ComputationNetwork&, Microsoft::MSR::CNTK::ComputationNetworkBuilder&, std::unordered_map<CNTK::Variable,CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,CNTK::Variable>>> const&, std::unordered_map<CNTK::Variable,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase>>>&, std::unordered_map<CNTK::Variable,bool,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,bool>>>&, std::unordered_set<CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocatorCNTK::Variable> const&, bool) + 0x238
[0x7f2ad35ee65e] std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase CNTK::CompositeFunction:: GetNode (CNTK::Variable const&, std::shared_ptrMicrosoft::MSR::CNTK::ComputationNetwork&, Microsoft::MSR::CNTK::ComputationNetworkBuilder&, std::unordered_map<CNTK::Variable,CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,CNTK::Variable>>> const&, std::unordered_map<CNTK::Variable,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase>>>&, std::unordered_map<CNTK::Variable,bool,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,bool>>>&, std::unordered_set<CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocatorCNTK::Variable> const&, bool) + 0x61e
[0x7f2ad35f2e08] std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase CNTK::CompositeFunction:: GetOutputVariableNode (CNTK::Variable const&, std::shared_ptrMicrosoft::MSR::CNTK::ComputationNetwork&, Microsoft::MSR::CNTK::ComputationNetworkBuilder&, std::unordered_map<CNTK::Variable,CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,CNTK::Variable>>> const&, std::unordered_map<CNTK::Variable,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase>>>&, std::unordered_map<CNTK::Variable,bool,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,bool>>>&, std::unordered_set<CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocatorCNTK::Variable> const&, bool) + 0x238
[0x7f2ad35ee65e] std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase CNTK::CompositeFunction:: GetNode (CNTK::Variable const&, std::shared_ptrMicrosoft::MSR::CNTK::ComputationNetwork&, Microsoft::MSR::CNTK::ComputationNetworkBuilder&, std::unordered_map<CNTK::Variable,CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,CNTK::Variable>>> const&, std::unordered_map<CNTK::Variable,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase>>>&, std::unordered_map<CNTK::Variable,bool,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,bool>>>&, std::unordered_set<CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocatorCNTK::Variable> const&, bool) + 0x61e
[0x7f2ad35f2e08] std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase CNTK::CompositeFunction:: GetOutputVariableNode (CNTK::Variable const&, std::shared_ptrMicrosoft::MSR::CNTK::ComputationNetwork&, Microsoft::MSR::CNTK::ComputationNetworkBuilder&, std::unordered_map<CNTK::Variable,CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,CNTK::Variable>>> const&, std::unordered_map<CNTK::Variable,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase>>>&, std::unordered_map<CNTK::Variable,bool,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,bool>>>&, std::unordered_set<CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocatorCNTK::Variable> const&, bool) + 0x238
[0x7f2ad35ee65e] std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase CNTK::CompositeFunction:: GetNode (CNTK::Variable const&, std::shared_ptrMicrosoft::MSR::CNTK::ComputationNetwork&, Microsoft::MSR::CNTK::ComputationNetworkBuilder&, std::unordered_map<CNTK::Variable,CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,CNTK::Variable>>> const&, std::unordered_map<CNTK::Variable,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase>>>&, std::unordered_map<CNTK::Variable,bool,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,bool>>>&, std::unordered_set<CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocatorCNTK::Variable> const&, bool) + 0x61e
[0x7f2ad35f2e08] std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase CNTK::CompositeFunction:: GetOutputVariableNode (CNTK::Variable const&, std::shared_ptrMicrosoft::MSR::CNTK::ComputationNetwork&, Microsoft::MSR::CNTK::ComputationNetworkBuilder&, std::unordered_map<CNTK::Variable,CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,CNTK::Variable>>> const&, std::unordered_map<CNTK::Variable,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase>>>&, std::unordered_map<CNTK::Variable,bool,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,bool>>>&, std::unordered_set<CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocatorCNTK::Variable> const&, bool) + 0x238
[0x7f2ad35ee65e] std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase CNTK::CompositeFunction:: GetNode (CNTK::Variable const&, std::shared_ptrMicrosoft::MSR::CNTK::ComputationNetwork&, Microsoft::MSR::CNTK::ComputationNetworkBuilder&, std::unordered_map<CNTK::Variable,CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,CNTK::Variable>>> const&, std::unordered_map<CNTK::Variable,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,std::shared_ptrMicrosoft::MSR::CNTK::ComputationNodeBase>>>&, std::unordered_map<CNTK::Variable,bool,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocator<std::pair<CNTK::Variable const,bool>>>&, std::unordered_set<CNTK::Variable,std::hashCNTK::Variable,std::equal_toCNTK::Variable,std::allocatorCNTK::Variable> const&, bool) + 0x61e
[0x7f2ad35efa51] std::pair<std::shared_ptr<Microsoft::MSR::CNTK::ComputationNetwork>,std::unordered_map<CNTK::Variable,std::shared_ptr<Microsoft::MSR::CNTK::ComputationNodeBase>,std::hash<CNTK::Variable>,std::equal_to<CNTK::Variable>,std::allocator<std::pair<CNTK::Variable const,std::shared_ptr<Microsoft::MSR::CNTK::ComputationNodeBase>>>>> CNTK::CompositeFunction::CreateComputationNetwork(std::shared_ptr<CNTK::Function> const&, CNTK::DeviceDescriptor const&, std::unordered_set<CNTK::Variable,std::hash<CNTK::Variable>,std::equal_to<CNTK::Variable>,std::allocator<CNTK::Variable>> const&, std::unordered_map<CNTK::Variable,CNTK::Variable,std::hash<CNTK::Variable>,std::equal_to<CNTK::Variable>,std::allocator<std::pair<CNTK::Variable const,CNTK::Variable>>> const&, std::unordered_set<CNTK::Variable,std::hash<CNTK::Variable>,std::equal_to<CNTK::Variable>,std::allocator<CNTK::Variable>> const&, bool) + 0x251
[0x7f2ad35f18fd] std::shared_ptr<Microsoft::MSR::CNTK::ComputationNetwork> CNTK::CompositeFunction::GetComputationNetwork(CNTK::DeviceDescriptor const&, std::unordered_set<CNTK::Variable,std::hash<CNTK::Variable>,std::equal_to<CNTK::Variable>,std::allocator<CNTK::Variable>> const&, std::unordered_set<CNTK::Variable,std::hash<CNTK::Variable>,std::equal_to<CNTK::Variable>,std::allocator<CNTK::Variable>> const&, std::unordered_set<CNTK::Variable,std::hash<CNTK::Variable>,std::equal_to<CNTK::Variable>,std::allocator<CNTK::Variable>> const&, bool) + 0x69d
[0x7f2ad3500e9f] CNTK::CompositeFunction::Forward(std::unordered_map<CNTK::Variable,std::shared_ptr<CNTK::Value>,std::hash<CNTK::Variable>,std::equal_to<CNTK::Variable>,std::allocator<std::pair<CNTK::Variable const,std::shared_ptr<CNTK::Value>>>> const&, std::unordered_map<CNTK::Variable,std::shared_ptr<CNTK::Value>,std::hash<CNTK::Variable>,std::equal_to<CNTK::Variable>,std::allocator<std::pair<CNTK::Variable const,std::shared_ptr<CNTK::Value>>>>&, CNTK::DeviceDescriptor const&, std::unordered_set<CNTK::Variable,std::hash<CNTK::Variable>,std::equal_to<CNTK::Variable>,std::allocator<CNTK::Variable>> const&, std::unordered_set<CNTK::Variable,std::hash<CNTK::Variable>,std::equal_to<CNTK::Variable>,std::allocator<CNTK::Variable>> const&) + 0xf9f
[0x7f2ad3498603] CNTK::Function::Forward(std::unordered_map<CNTK::Variable,std::shared_ptr<CNTK::Value>,std::hash<CNTK::Variable>,std::equal_to<CNTK::Variable>,std::allocator<std::pair<CNTK::Variable const,std::shared_ptr<CNTK::Value>>>> const&, std::unordered_map<CNTK::Variable,std::shared_ptr<CNTK::Value>,std::hash<CNTK::Variable>,std::equal_to<CNTK::Variable>,std::allocator<std::pair<CNTK::Variable const,std::shared_ptr<CNTK::Value>>>>&, CNTK::DeviceDescriptor const&, std::unordered_set<CNTK::Variable,std::hash<CNTK::Variable>,std::equal_to<CNTK::Variable>,std::allocator<CNTK::Variable>> const&, std::unordered_set<CNTK::Variable,std::hash<CNTK::Variable>,std::equal_to<CNTK::Variable>,std::allocator<CNTK::Variable>> const&) + 0x93
[0x7f2ad41c92bd] + 0x6a2bd
[0x7f2b052425e9] PyCFunction_Call + 0xf9
[0x7f2b052c77c0] PyEval_EvalFrameEx + 0x6ba0
[0x7f2b052cab49] + 0x144b49
[0x7f2b052c9df5] PyEval_EvalFrameEx + 0x91d5
[0x7f2b052cab49] + 0x144b49
[0x7f2b052cacd8] PyEval_EvalCodeEx + 0x48
[0x7f2b05220661] + 0x9a661
[0x7f2b051ed236] PyObject_Call + 0x56
[0x7f2b052c7234] PyEval_EvalFrameEx + 0x6614
[0x7f2b052cab49] + 0x144b49
[0x7f2b052c9df5] PyEval_EvalFrameEx + 0x91d5
[0x7f2b052cab49] + 0x144b49
[0x7f2b052c9df5] PyEval_EvalFrameEx + 0x91d5
[0x7f2b052cab49] + 0x144b49
[0x7f2b052c9df5] PyEval_EvalFrameEx + 0x91d5
[0x7f2b052cab49] + 0x144b49
[0x7f2b052c9df5] PyEval_EvalFrameEx + 0x91d5
[0x7f2b052cab49] + 0x144b49
[0x7f2b052cacd8] PyEval_EvalCodeEx + 0x48
[0x7f2b052cad1b] PyEval_EvalCode + 0x3b
[0x7f2b052f0020] PyRun_FileExFlags + 0x130
[0x7f2b052f1623] PyRun_SimpleFileExFlags + 0x173
[0x7f2b0530c8c7] Py_Main + 0xca7
[0x400add] main + 0x15d
[0x7f2b042a7830] __libc_start_main + 0xf0
[0x4008b9] `

Account responsible for LFS bandwidth should purchase more data packs to restore access.

Hi,
I am unable to clone or pull from GitHub because of a git-lfs bandwidth quota.
`(base) ➜ ami git lfs clone https://github.com/staplesinLA/denoising_DIHARD18.git
WARNING: 'git lfs clone' is deprecated and will not be updated
with new flags from 'git clone'

'git clone' has been updated in upstream Git to have comparable
speeds to 'git lfs clone'.
Cloning into 'denoising_DIHARD18'...
remote: Enumerating objects: 234, done.
remote: Total 234 (delta 0), reused 0 (delta 0), pack-reused 234
Receiving objects: 100% (234/234), 123.07 KiB | 2.09 MiB/s, done.
Resolving deltas: 100% (132/132), done.
batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.
error: failed to fetch some objects from 'https://github.com/staplesinLA/denoising_DIHARD18.git/info/lfs'`

Yesterday I cloned the repository several times, because I was using Docker and it automatically re-cloned every time I ran it. I feel responsible for this and would be happy to pay for one git-lfs data pack for this month, since I also need to re-download it (but only once this time 😂). https://docs.github.com/en/github/setting-up-and-managing-billing-and-payments-on-github/upgrading-git-large-file-storage.
You can shoot me an email [email protected]
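One way to avoid burning LFS bandwidth on every container rebuild (a sketch of a standard git-lfs workflow, not a fix endorsed by the maintainers) is to clone with the smudge filter disabled, so only the small pointer files are fetched, then download the large objects a single time and reuse that checkout:

```shell
# Clone without downloading LFS objects; only the pointer files are fetched.
GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/staplesinLA/denoising_DIHARD18.git
cd denoising_DIHARD18

# Download the large files once; this is the only step that consumes LFS bandwidth.
git lfs pull
```

In a Docker workflow, mounting the already-populated checkout into the container (e.g. `docker run -v "$PWD":/opt/denoising_DIHARD18 ...`, where the image name is whatever you use) avoids re-cloning, and thus re-charging the quota, on every run.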

license?

I'm unable to find any indication of a license for this code. Any thoughts? Thanks!
