Comments (9)
Do we need the contents of the data/binary/wmt14_en_fr directory for evaluation?
from lite-transformer.
Hi Kishore,
Thank you for asking! We already included the preprocessed binary files of the test dataset in the provided checkpoint tar. You can test the checkpoint on the test set by moving the test* and dict* files to data/binary/wmt14_en_fr (run mkdir first if the directory does not exist) and calling test.sh. If you would like to test the checkpoint on the validation set, please run configs/wmt14.en-fr/prepare.sh to get the preprocessed validation dataset.
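The steps above can be sketched as a short shell sequence (a hedged sketch, assuming the checkpoint tar was extracted into the current directory and `CHECKPOINT_DIR` is a placeholder for your checkpoint path, not a name from the repo):

```shell
# Create the directory test.sh expects, if it is missing.
mkdir -p data/binary/wmt14_en_fr

# Move the preprocessed binary test files shipped in the checkpoint tar.
mv test* dict* data/binary/wmt14_en_fr/

# Run evaluation: <checkpoint dir> <GPU id> <subset>.
configs/wmt14.en-fr/test.sh "$CHECKPOINT_DIR" 0 test
```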
I am closing this issue. If you have any follow-up questions, please feel free to re-open it.
I am getting the same issue while testing the model, even though the required test* and dict* files are already in place.
Could you (@Michaelvll) please help me test the trained checkpoint by resolving the error mentioned in the original issue by @kishorepv?
Hi @tomshalini, could you please provide the command you used for testing?
Hello @Michaelvll ,
I am using the command below for testing:
configs/wmt14.en-fr/test.sh '/home/shalinis/lite-transformer/checkpoints/wmt14.en-fr/attention/multibranch_v2/embed496/checkpoint_best.pt' 0 test
Could you try configs/wmt14.en-fr/test.sh '/home/shalinis/lite-transformer/checkpoints/wmt14.en-fr/attention/multibranch_v2/embed496/' 0 test instead, passing the checkpoint directory rather than the file? The checkpoint_best.pt filename is appended automatically inside test.sh.
Thank you @Michaelvll for your help. Now I am getting the error below, even though I am running on 2 GPUs:
Traceback (most recent call last):
File "generate.py", line 192, in
cli_main()
File "generate.py", line 188, in cli_main
main(args)
File "generate.py", line 106, in main
hypos = task.inference_step(generator, models, sample, prefix_tokens)
File "/home/shalinis/lite-transformer/fairseq/tasks/fairseq_task.py", line 246, in inference_step
return generator.generate(models, sample, prefix_tokens=prefix_tokens)
File "/home/shalinis/.conda/envs/integration/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/home/shalinis/lite-transformer/fairseq/sequence_generator.py", line 146, in generate
encoder_outs = model.forward_encoder(encoder_input)
File "/home/shalinis/.conda/envs/integration/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/home/shalinis/lite-transformer/fairseq/sequence_generator.py", line 539, in forward_encoder
return [model.encoder(**encoder_input) for model in self.models]
File "/home/shalinis/lite-transformer/fairseq/sequence_generator.py", line 539, in
return [model.encoder(**encoder_input) for model in self.models]
File "/home/shalinis/.conda/envs/integration/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/shalinis/lite-transformer/fairseq/models/transformer_multibranch_v2.py", line 314, in forward
x = layer(x, encoder_padding_mask)
File "/home/shalinis/.conda/envs/integration/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/shalinis/lite-transformer/fairseq/models/transformer_multibranch_v2.py", line 693, in forward
x, _ = self.self_attn(query=x, key=x, value=x, key_padding_mask=encoder_padding_mask)
File "/home/shalinis/.conda/envs/integration/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/shalinis/lite-transformer/fairseq/modules/multibranch.py", line 37, in forward
x = branch(q.contiguous(), incremental_state=incremental_state)
File "/home/shalinis/.conda/envs/integration/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/shalinis/lite-transformer/fairseq/modules/dynamicconv_layer/dynamicconv_layer.py", line 131, in forward
output = self.linear2(output)
File "/home/shalinis/.conda/envs/integration/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/shalinis/.conda/envs/integration/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 91, in forward
return F.linear(input, self.weight, self.bias)
File "/home/shalinis/.conda/envs/integration/lib/python3.7/site-packages/torch/nn/functional.py", line 1676, in linear
output = input.matmul(weight.t())
RuntimeError: CUDA error: no kernel image is available for execution on the device
Namespace(ignore_case=False, order=4, ref='/home/shalinis/lite-transformer/checkpoints/wmt14.en-fr/attention/multibranch_v2/embed496//exp/test_gen.out.ref', sacrebleu=False, sentence_bleu=False, sys='/home/shalinis/lite-transformer/checkpoints/wmt14.en-fr/attention/multibranch_v2/embed496//exp/test_gen.out.sys')
Traceback (most recent call last):
File "score.py", line 88, in
main()
File "score.py", line 84, in main
score(f)
File "score.py", line 78, in score
print(scorer.result_string(args.order))
File "/home/shalinis/lite-transformer/fairseq/bleu.py", line 127, in result_string
return fmt.format(order, self.score(order=order), *bleup,
File "/home/shalinis/lite-transformer/fairseq/bleu.py", line 103, in score
return self.brevity() * math.exp(psum / order) * 100
File "/home/shalinis/lite-transformer/fairseq/bleu.py", line 117, in brevity
r = self.stat.reflen / self.stat.predlen
ZeroDivisionError: division by zero
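The "no kernel image is available for execution on the device" error usually means the installed PyTorch build (or a compiled CUDA extension such as dynamicconv) was not built for this GPU's compute capability. A quick diagnostic, assuming only that PyTorch is installed (not specific to this repo):

```python
import torch

# Compare the GPU's compute capability with the architectures the
# installed torch binary was compiled for. If the device's sm_XY is
# not in the arch list, CUDA kernels cannot run on it.
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"device capability: sm_{major}{minor}")
    print(f"torch built for:   {torch.cuda.get_arch_list()}")
else:
    print("CUDA not available")
```

If the capability is missing from the list, reinstalling PyTorch for the right CUDA version, and rebuilding the repo's dynamicconv CUDA extension on the target machine, is the usual fix.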
@Michaelvll could you please help me in resolving the above issue?
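For what it's worth, the second traceback is a consequence of the first: generation crashed, so the hypothesis file is empty and predlen is 0 when bleu.py computes the brevity penalty. A minimal sketch of that computation (the function name and guard are illustrative, not the repo's exact code):

```python
import math

def brevity(reflen: int, predlen: int) -> float:
    # fairseq's bleu.py computes r = reflen / predlen; with an empty
    # hypothesis file predlen is 0, hence the ZeroDivisionError above.
    if predlen == 0:  # guard added for illustration only
        return 0.0
    r = reflen / predlen
    # Standard BLEU brevity penalty: penalize hypotheses shorter
    # than the reference, no bonus for longer ones.
    return math.exp(1 - r) if r > 1 else 1.0

print(brevity(100, 0))    # 0.0 instead of ZeroDivisionError
print(brevity(100, 100))  # 1.0
```

So fixing the CUDA error should make the BLEU ZeroDivisionError disappear as well.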