
shas's People

Contributors

gegallego, johntsi


shas's Issues

Using Wav2Vec2-base as backbone does not work

Training using "facebook/wav2vec2-base" as backbone consistently fails with the following error:

1020it [01:35, 10.71it/s]
Starting epoch 0 ...
Traceback (most recent call last):
  File "/scratch/jiranzotmp/trabajo/ICASSP2023_argumentation/software/SHAS/src/supervised_hybrid/train.py", line 365, in <module>
    train(args)
  File "/scratch/jiranzotmp/trabajo/ICASSP2023_argumentation/software/SHAS/src/supervised_hybrid/train.py", line 147, in train
    logits = sfc_model(wav2vec_hidden, out_mask)
  File "/home/jiranzo/anaconda3/envs/shas/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/scratch/jiranzotmp/trabajo/ICASSP2023_argumentation/software/SHAS/src/supervised_hybrid/models.py", line 41, in forward
    x = self.transformer(x, src_key_padding_mask=attention_mask)
  File "/home/jiranzo/anaconda3/envs/shas/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jiranzo/anaconda3/envs/shas/lib/python3.9/site-packages/torch/nn/modules/transformer.py", line 198, in forward
    output = mod(output, src_mask=mask, src_key_padding_mask=src_key_padding_mask)
  File "/home/jiranzo/anaconda3/envs/shas/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jiranzo/anaconda3/envs/shas/lib/python3.9/site-packages/torch/nn/modules/transformer.py", line 336, in forward
    x = x + self._sa_block(self.norm1(x), src_mask, src_key_padding_mask)
  File "/home/jiranzo/anaconda3/envs/shas/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jiranzo/anaconda3/envs/shas/lib/python3.9/site-packages/torch/nn/modules/normalization.py", line 189, in forward
    return F.layer_norm(
  File "/home/jiranzo/anaconda3/envs/shas/lib/python3.9/site-packages/torch/nn/functional.py", line 2347, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Given normalized_shape=[1024], expected input with shape [*, 1024], but got input of size[28, 999, 768]

Training with the default "facebook/wav2vec2-xls-r-300m" using the same setup gives me no issues.

Could this have something to do with the fact that wav2vec2-base uses "do_stable_layer_norm": false, whereas facebook/wav2vec2-xls-r-300m uses "do_stable_layer_norm": true?
My first guess would be that the assumption made here might not hold when "do_stable_layer_norm" is false:

wav2vec_model.encoder.layer_norm = torch.nn.Identity()

I will let you know if I find any additional information about this.

EDIT:

Actually it was something much simpler: the wav2vec2-base model has a different hidden dimension (768 instead of 1024). Changing the value in constants.py seems to fix everything:
https://github.com/mt-upc/SHAS/blob/main/src/supervised_hybrid/constants.py#L4
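
For anyone hitting the same error, a minimal sketch of reading the hidden size from the HuggingFace config instead of hardcoding it (the constant name below is illustrative; the real one is defined in constants.py):

from transformers import AutoConfig

# hidden sizes as reported by the model configs on the Hub
print(AutoConfig.from_pretrained("facebook/wav2vec2-base").hidden_size)        # 768
print(AutoConfig.from_pretrained("facebook/wav2vec2-xls-r-300m").hidden_size)  # 1024

# so the value hardcoded in src/supervised_hybrid/constants.py (1024, judging by
# the normalized_shape=[1024] in the error above) has to become 768 when
# wav2vec2-base is used as the backbone, e.g.:
HIDDEN_SIZE = AutoConfig.from_pretrained("facebook/wav2vec2-base").hidden_size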

Feel free to close the issue if you think this is obvious.

Hybrid W2V pad token is hardcoded

The pad token for the wav2vec hybrid segmentation method is hardcoded to the token "<pad>".

predictions = "".join(["0" if char == "<pad>" else "1" for char in tokens_preds])

This causes problems if we load a model that uses a different pad token. For example, PereLluis13/Wav2Vec2-Large-XLSR-53-catalan uses "<PAD>" instead. In that case the comparison never matches, so the prediction is always "1" and no split points are found.

This can be fixed in one line by doing the comparison against processor.tokenizer.pad_token.
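
A sketch of that one-line change (the same list comprehension as above, just comparing against the tokenizer's own pad token):

predictions = "".join(
    ["0" if char == processor.tokenizer.pad_token else "1" for char in tokens_preds]
)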

Before the fix:

[{duration: 10.06, offset: 0.0, rW: 0, speaker_id: NA, uW: 0, wav: Debate24_726.13_2273.72.wav},
  {duration: 10.06, offset: 9.94, rW: 0, speaker_id: NA, uW: 0, wav: Debate24_726.13_2273.72.wav},
  {duration: 10.06, offset: 19.94, rW: 0, speaker_id: NA, uW: 0, wav: Debate24_726.13_2273.72.wav},
...

After the fix:

[{duration: 2.82, offset: 0.0, rW: 0, speaker_id: NA, uW: 0, wav: Debate24_726.13_2273.72.wav},
  {duration: 9.74, offset: 3.5, rW: 0, speaker_id: NA, uW: 0, wav: Debate24_726.13_2273.72.wav},
  {duration: 2.44, offset: 13.76, rW: 0, speaker_id: NA, uW: 0, wav: Debate24_726.13_2273.72.wav},
...

Fixed in #3

incorrect implementation of pdac algorithm?

Hello thanks for sharing your code!

I wanted to check the correctness of the pdac recursive implementation, as I currently receive segments that are longer than max_segment_length.

I think the issues are in lines #L121-L123 and #L134-L135 of the implementation, which I believe should be deleted.

    def recusrive_split(sgm):
        if sgm.duration < max_segment_length:
            segments.append(sgm)
        else:
            j = 0
            sorted_indices = np.argsort(sgm.probs)
            while j < len(sorted_indices):
                split_idx = sorted_indices[j]
                split_prob = sgm.probs[split_idx]
                ### this clause could allow sgm.duration > max_segment_length
                if split_prob > threshold:
                    segments.append(sgm)
                    break

                sgm_a, sgm_b = split_and_trim(sgm, split_idx, threshold)
                if (
                    sgm_a.duration > min_segment_length
                    and sgm_b.duration > min_segment_length
                ):
                    recusrive_split(sgm_a)
                    recusrive_split(sgm_b)
                    break
                j += 1
            ### this clause could allow sgm.duration > max_segment_length
            else:
                segments.append(sgm)

Could you please explain the need for those two clauses? From my empirical experiments, and from matching against the algorithm in the paper, they do not seem to be needed.
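
To make the first point concrete, here is a tiny self-contained sketch (toy numbers and a toy segment object, not the repository's actual classes) of how that first clause can emit a segment longer than max_segment_length when every candidate split probability is above the threshold:

import numpy as np

# toy values for illustration only
max_segment_length = 18.0
threshold = 0.5
segments = []

class Sgm:  # minimal stand-in for the segment objects in the real code
    duration = 30.0                     # already longer than max_segment_length
    probs = np.array([0.9, 0.7, 0.95])  # even the least speech-like frame is above threshold

sgm = Sgm()

# this mirrors the first highlighted clause: the lowest-probability split point
# is still above the threshold, so the 30-second segment is appended untouched
split_idx = np.argsort(sgm.probs)[0]
if sgm.probs[split_idx] > threshold:
    segments.append(sgm)

print(segments[0].duration)  # 30.0 > max_segment_length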


segment.py passes 5 arguments but infer.py requires 4

Hi all,
I have successfully installed your repo and downloaded your English ckpt.
When I run the segment.py script I get the following error:

Traceback (most recent call last):
  File "/home/ubuntu/SHAS/./src/supervised_hybrid/segment.py", line 342, in <module>
    segment(args)
  File "/home/ubuntu/SHAS/./src/supervised_hybrid/segment.py", line 236, in segment
    probs, _ = infer(
TypeError: infer() takes 4 positional arguments but 5 were given

This is because wav_path.name is also passed here, even though it is not used by the infer function.
If you remove it, the script works perfectly.
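
For reference, this is just Python's standard behaviour when a call passes one positional argument too many; a toy reproduction (the argument names are made up, not infer()'s actual signature):

def infer(model, sfc_model, dataloader, device):  # toy 4-argument function
    return None, None

# calling it with a 5th positional argument (e.g. wav_path.name) raises:
# TypeError: infer() takes 4 positional arguments but 5 were given
probs, _ = infer(None, None, None, None, "some_file.wav")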
