easy-use-bart's Issues
Bug about "k_out" parameter in "DataTrainingArguments"
It seems that the "k_out" parameter is not defined in "DataTrainingArguments".
Could you please check it? Also, what is the meaning of "k_out"?
Thanks!
Traceback (most recent call last):
File "finetune.py", line 302, in <module>
main()
File "finetune.py", line 270, in main
metrics.update(eval_top1_acc(out_pred_path, out_pred_ref, data_args.k_out)) ## top1_metrics
AttributeError: 'DataTrainingArguments' object has no attribute 'k_out'
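For anyone hitting this before the repo is patched, here is a minimal sketch of the missing dataclass field. The default value and help text are assumptions, not the author's definition; from its use in `eval_top1_acc` it looks like the number of generated outputs kept per input, but that reading is a guess:

```python
from dataclasses import dataclass, field

@dataclass
class DataTrainingArguments:
    # ... the repo's existing fields would remain here ...
    # Hypothetical fix: declare k_out so that data_args.k_out exists.
    k_out: int = field(
        default=1,
        metadata={"help": "Number of generated outputs per input (assumed meaning)."},
    )

print(DataTrainingArguments().k_out)  # → 1
```

With the field declared, `HfArgumentParser` would also accept `--k_out` on the command line, since dataclass fields become CLI arguments.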
Error with the hf_argparser.py file
Hi, thank you for your code. I tried to run it and got an error from the hf_argparser.py file.
I tried searching around, but I don't understand why. (I guess it may be because the versions I am using are different from yours?)
I'm using:
- Python 3.8.10 (this is different)
- transformers==3.3.1
- torch==1.7.0
If you have any clues about the error, please let me know.
Thank you very much for your support.
Here are the details about the error.
File "finetune.py", line 302, in <module>
main()
File "finetune.py", line 103, in main
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments))
File "/home/xanh_2/vir_bart/lib/python3.8/site-packages/transformers/hf_argparser.py", line 40, in __init__
self._add_dataclass_arguments(dtype)
File "/home/xanh_2/vir_bart/lib/python3.8/site-packages/transformers/hf_argparser.py", line 80, in _add_dataclass_arguments
elif hasattr(field.type, "__origin__") and issubclass(field.type.__origin__, List):
File "/usr/lib/python3.8/typing.py", line 774, in __subclasscheck__
return issubclass(cls, self.__origin__)
TypeError: issubclass() arg 1 must be a class
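For reference, the crash appears to come from transformers 3.3.1 passing a type's `__origin__` into `issubclass()`; for fields annotated `Optional[...]`, that origin is `typing.Union`, which is not a class, hence "arg 1 must be a class". Upgrading transformers is the straightforward fix. A minimal sketch of a Python-3.8-safe origin check using `typing.get_origin` (illustrative only, not the library's actual code):

```python
from typing import List, Optional, get_origin

def is_list_type(tp):
    # get_origin(List[str]) returns the plain `list` class on Python >= 3.8,
    # while get_origin(Optional[int]) returns typing.Union, so this identity
    # check never feeds a non-class into issubclass().
    return get_origin(tp) is list

print(is_list_type(List[str]))            # → True
print(is_list_type(Optional[List[str]]))  # → False (origin is typing.Union)
```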
question about the runtime error
Hello, thank you for sharing this nice work~ I want to reproduce this project and then run the permGen project.
But when I run CUDA_VISIBLE_DEVICES=0 bash scripts/train_rocstory.sh, it always reports:
RuntimeError: CUDA out of memory. Tried to allocate 982.00 MiB (GPU 0; 10.76 GiB total capacity; 7.80 GiB already allocated; 603.44 MiB free; 9.16 GiB reserved in total by PyTorch)
Perhaps it is a problem with the batch_size, so I modified line 363 in seq2seq_trainer.py to:
logger.info(" Total train batch size (w. parallel, distributed & accumulation) = %d", 64)
and set per_device_train_batch_size=16, per_device_eval_batch_size=8 in train_rocstory.sh.
It finally works, but I want to know: is there a better way to fix the RuntimeError?
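A less intrusive option than editing seq2seq_trainer.py: since the project trains through Hugging Face training arguments, you can keep the same effective batch size while lowering peak memory by trading per-device batch size for gradient accumulation. The concrete numbers below are assumptions for illustration, not the repo's actual defaults:

```python
# Sketch: halve the per-device batch size and raise accumulation steps so
# the optimizer still sees the same effective batch per update.
per_device_train_batch_size = 8   # assumed original value was 16
gradient_accumulation_steps = 8   # raised so the product stays at 64
num_gpus = 1

effective_batch = per_device_train_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch)  # → 64
```

In train_rocstory.sh these would be passed as `--per_device_train_batch_size 8 --gradient_accumulation_steps 8`; enabling `--fp16` (if the GPU supports it) also reduces activation memory substantially.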