mt-upc / SHAS
SHAS: Approaching optimal Segmentation for End-to-End Speech Translation
License: MIT License
Training using "facebook/wav2vec2-base" as backbone consistently fails with the following error:
1020it [01:35, 10.71it/s]
Starting epoch 0 ...
Traceback (most recent call last):
File "/scratch/jiranzotmp/trabajo/ICASSP2023_argumentation/software/SHAS/src/supervised_hybrid/train.py", line 365, in <module>
train(args)
File "/scratch/jiranzotmp/trabajo/ICASSP2023_argumentation/software/SHAS/src/supervised_hybrid/train.py", line 147, in train
logits = sfc_model(wav2vec_hidden, out_mask)
File "/home/jiranzo/anaconda3/envs/shas/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/scratch/jiranzotmp/trabajo/ICASSP2023_argumentation/software/SHAS/src/supervised_hybrid/models.py", line 41, in forward
x = self.transformer(x, src_key_padding_mask=attention_mask)
File "/home/jiranzo/anaconda3/envs/shas/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jiranzo/anaconda3/envs/shas/lib/python3.9/site-packages/torch/nn/modules/transformer.py", line 198, in forward
output = mod(output, src_mask=mask, src_key_padding_mask=src_key_padding_mask)
File "/home/jiranzo/anaconda3/envs/shas/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jiranzo/anaconda3/envs/shas/lib/python3.9/site-packages/torch/nn/modules/transformer.py", line 336, in forward
x = x + self._sa_block(self.norm1(x), src_mask, src_key_padding_mask)
File "/home/jiranzo/anaconda3/envs/shas/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jiranzo/anaconda3/envs/shas/lib/python3.9/site-packages/torch/nn/modules/normalization.py", line 189, in forward
return F.layer_norm(
File "/home/jiranzo/anaconda3/envs/shas/lib/python3.9/site-packages/torch/nn/functional.py", line 2347, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Given normalized_shape=[1024], expected input with shape [*, 1024], but got input of size[28, 999, 768]
Training with the default "facebook/wav2vec2-xls-r-300m" using the same setup gives me no issues.
Could this have something to do with the fact that wav2vec2-base uses "do_stable_layer_norm": false, whereas facebook/wav2vec2-xls-r-300m uses "do_stable_layer_norm": true?
My first guess would be that the assumptions made here might not hold if "do_stable_layer_norm": false.
SHAS/src/supervised_hybrid/models.py
Line 80 in 418b5e6
I will let you know if I find any additional information about this.
EDIT:
Actually it was something much simpler: the wav2vec2-base model has a different hidden dimension (768 instead of 1024). Changing constants.py seems to fix everything:
https://github.com/mt-upc/SHAS/blob/main/src/supervised_hybrid/constants.py#L4
Feel free to close the issue if you think this is obvious.
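For anyone hitting the same mismatch, a minimal sketch of making the hidden size depend on the backbone instead of a single hardcoded value (`HIDDEN_SIZES` and `hidden_size_for` are illustrative names, not the repo's actual code):

```python
# hypothetical mapping: constants.py hardcodes a single hidden size,
# which only matches the default xls-r-300m backbone
HIDDEN_SIZES = {
    "facebook/wav2vec2-base": 768,         # base model
    "facebook/wav2vec2-xls-r-300m": 1024,  # default SHAS backbone
}

def hidden_size_for(backbone: str) -> int:
    # for backbones not listed here, the value can be read from the
    # checkpoint's Hugging Face config (config.hidden_size) instead
    return HIDDEN_SIZES[backbone]

print(hidden_size_for("facebook/wav2vec2-base"))  # 768
```

This would also make the LayerNorm shape error above impossible, since the classifier's input dimension would always follow the backbone.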
The pad token for the wav2vec hybrid segmentation method is hardcoded to the token "<pad>".
SHAS/src/segmentation_methods/utils.py
Line 333 in a64a70f
This causes problems if we load a model that uses a different pad token. For example, PereLluis13/Wav2Vec2-Large-XLSR-53-catalan uses "<PAD>" instead, so the prediction will always be "0".
This can be fixed in one line by comparing against processor.tokenizer.pad_token.
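A minimal sketch of the proposed fix (the helper and the stand-in processor are illustrative; the real comparison lives in src/segmentation_methods/utils.py):

```python
from types import SimpleNamespace

def is_pad(predicted_token: str, processor) -> bool:
    # compare against the tokenizer's configured pad token instead of
    # the hardcoded string "<pad>"
    return predicted_token == processor.tokenizer.pad_token

# stand-in for a Wav2Vec2Processor whose tokenizer uses "<PAD>",
# as in PereLluis13/Wav2Vec2-Large-XLSR-53-catalan
catalan = SimpleNamespace(tokenizer=SimpleNamespace(pad_token="<PAD>"))
print(is_pad("<PAD>", catalan))  # True
print("<PAD>" == "<pad>")        # False: the hardcoded comparison misses it
```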
Before the fix:
[{duration: 10.06, offset: 0.0, rW: 0, speaker_id: NA, uW: 0, wav: Debate24_726.13_2273.72.wav},
{duration: 10.06, offset: 9.94, rW: 0, speaker_id: NA, uW: 0, wav: Debate24_726.13_2273.72.wav},
{duration: 10.06, offset: 19.94, rW: 0, speaker_id: NA, uW: 0, wav: Debate24_726.13_2273.72.wav},
...
After the fix:
[{duration: 2.82, offset: 0.0, rW: 0, speaker_id: NA, uW: 0, wav: Debate24_726.13_2273.72.wav},
{duration: 9.74, offset: 3.5, rW: 0, speaker_id: NA, uW: 0, wav: Debate24_726.13_2273.72.wav},
{duration: 2.44, offset: 13.76, rW: 0, speaker_id: NA, uW: 0, wav: Debate24_726.13_2273.72.wav},
...
Fixed in #3
Hello, thanks for sharing your code!
I wanted to check the correctness of the pdac recursive implementation, since I currently receive segments that are longer than max_segment_length.
I believe the issues are in lines #L121-L123 and #L134-L135 of the implementation, which should be deleted.
def recusrive_split(sgm):
    if sgm.duration < max_segment_length:
        segments.append(sgm)
    else:
        j = 0
        sorted_indices = np.argsort(sgm.probs)
        while j < len(sorted_indices):
            split_idx = sorted_indices[j]
            split_prob = sgm.probs[split_idx]
            ### this line could allow for sgm.duration > max_seglength
>           if split_prob > threshold:
>               segments.append(sgm)
>               break
            sgm_a, sgm_b = split_and_trim(sgm, split_idx, threshold)
            if (
                sgm_a.duration > min_segment_length
                and sgm_b.duration > min_segment_length
            ):
                recusrive_split(sgm_a)
                recusrive_split(sgm_b)
                break
            j += 1
        ### this clause could allow for sgm.duration > max_seglength
>       else:
>           segments.append(sgm)
Could you please explain the need for those two clauses? From my empirical experiments, and from matching against the algorithm in the paper, they do not appear to be needed.
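To make the concern concrete, here is a toy reconstruction of the loop (a "segment" is just a list of per-frame probabilities, its duration is its length, and all thresholds and values are made up for illustration). With the clauses, a low-probability split point whose children are too short is skipped, and the next candidate immediately exceeds the threshold, so the whole over-long segment is kept:

```python
# toy values, not the repo's defaults
MAX_LEN, MIN_LEN, THRESHOLD = 6, 3, 0.5

def split_and_trim(probs, idx):
    # drop the split frame itself, mimicking the trim step
    return probs[:idx], probs[idx + 1:]

def recursive_split(probs, segments, keep_clauses=True):
    if len(probs) < MAX_LEN:
        segments.append(probs)
        return
    for idx in sorted(range(len(probs)), key=probs.__getitem__):
        if keep_clauses and probs[idx] > THRESHOLD:
            segments.append(probs)  # first clause: whole segment kept as-is
            return
        a, b = split_and_trim(probs, idx)
        if len(a) > MIN_LEN and len(b) > MIN_LEN:
            recursive_split(a, segments, keep_clauses)
            recursive_split(b, segments, keep_clauses)
            return
    if keep_clauses:
        segments.append(probs)  # while-else clause: no valid split found

# one low-probability frame at index 2; everything else is above THRESHOLD
probs = [0.9, 0.9, 0.1, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9]  # length 9 > MAX_LEN

with_clauses, without = [], []
recursive_split(probs, with_clauses, keep_clauses=True)
recursive_split(probs, without, keep_clauses=False)
print([len(s) for s in with_clauses])  # [9]  -> over-long segment kept
print([len(s) for s in without])       # [4, 4] -> both under MAX_LEN
```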
Hi all,
I have successfully installed your repo and downloaded your English ckpt.
When I run the segment.py script, I get the following error:
Traceback (most recent call last):
File "/home/ubuntu/SHAS/./src/supervised_hybrid/segment.py", line 342, in <module>
segment(args)
File "/home/ubuntu/SHAS/./src/supervised_hybrid/segment.py", line 236, in segment
probs, _ = infer(
TypeError: infer() takes 4 positional arguments but 5 were given
This happens because wav_path.name is also passed here, even though the infer function does not use it.
If you remove it, the script works perfectly.
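The traceback is the standard Python arity error; a minimal reproduction with a stand-in function (the parameter names are illustrative, not the repo's actual signature):

```python
# stand-in with the same arity as the repo's infer()
def infer(model, dataloader, device, threshold):
    return None

try:
    # passing a 5th argument (e.g. wav_path.name) triggers the error
    infer("model", "dataloader", "cpu", 0.5, "file.wav")
except TypeError as e:
    print(e)  # infer() takes 4 positional arguments but 5 were given
```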