cheneydon / efficient-bert
This repository contains the code for the paper in Findings of EMNLP 2021: "EfficientBERT: Progressively Searching Multilayer Perceptron via Warm-up Knowledge Distillation".
When I run the Wikipedia-only section of create_pretrain_feature.sh, i.e.:
python create_pretrain_feature.py --lowercase --vocab_path $VOCAB_PATH --wiki_dir $WIKI_DIR
it fails with this error:
[11 07:34:01] Namespace(batch_size=64, book_dir=PosixPath('.'), concate_data_dir=PosixPath('.'), exp_dir='./exp/tmp/20220311-193401', local_rank=0, lowercase=True, merge_path='', start_epoch=1, teacher_model='bert_base', total_epochs=10, train_ratio=1, val_ratio=0, vocab_path='./pretrained_ckpt/bert-base-uncased-vocab.txt', wiki_dir=PosixPath('dataset/pretrain_data/wikipedia_nomask'))
Traceback (most recent call last):
File "create_pretrain_feature.py", line 54, in <module>
total_examples += int(num_epoch_examples[epoch % len(num_epoch_examples)] * args.train_ratio)
ZeroDivisionError: integer division or modulo by zero
I can't figure out what causes len(num_epoch_examples) == 0.
Stranger still, when I skip that section and run the Wikipedia + BooksCorpus section instead, i.e.:
# Wikipedia + BooksCorpus
python create_pretrain_feature.py --lowercase --vocab_path $VOCAB_PATH --wiki_dir $WIKI_DIR --book_dir $BOOK_DIR --concate_data_dir $CONCATE_DATA_DIR
everything works: the bookcorpus_nomask, wiki_book_nomask, and wikipedia_nomask folders each end up with five data_epoch_x files.
Where is the problem?
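A ZeroDivisionError on that line means num_epoch_examples came back empty, which usually happens when the directory passed via --wiki_dir contains no epoch data files yet. A minimal sanity check, assuming the data_epoch_* naming from your folder listing (the helper name is mine, not from the repo):

```python
import tempfile
from pathlib import Path

def count_epoch_files(data_dir):
    """Count the data_epoch_* shards a directory actually contains."""
    return len(sorted(Path(data_dir).glob("data_epoch_*")))

# Demo on a throwaway directory: zero shards reproduces the crash condition.
with tempfile.TemporaryDirectory() as tmp:
    assert count_epoch_files(tmp) == 0  # this is the state that triggers the modulo-by-zero
    for i in range(5):
        (Path(tmp) / f"data_epoch_{i}").touch()
    print(count_epoch_files(tmp))  # 5
```

Running this check against your actual wikipedia_nomask directory before create_pretrain_feature.py would tell you whether the epoch files were generated where the script expects them.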
Hi,
thanks for providing this training code and the pretrained models. How do you load the model in PyTorch, though? In your test.py you only evaluate TinyBERT, RoBERTa, etc., but never load EfficientBERT, and the code doesn't explain how to do it.
Regards
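Not the author, but the standard PyTorch pattern should apply: instantiate the model from the repo's model class, then load the checkpoint's state dict into it. A minimal sketch using a stand-in nn.Linear; the EfficientBERT class, checkpoint path, and the "model" wrapper key are all assumptions, not the repo's confirmed API:

```python
import os
import tempfile

import torch
import torch.nn as nn

# Stand-in module; with this repo you would instead instantiate the
# EfficientBERT model class (name assumed) with its config.
model = nn.Linear(4, 2)

# Many training scripts save a dict wrapping the state_dict under a key.
ckpt_path = os.path.join(tempfile.gettempdir(), "efficientbert_demo.pt")
torch.save({"model": model.state_dict()}, ckpt_path)

# Load on CPU and unwrap; a bare state_dict checkpoint loads via `state` directly.
state = torch.load(ckpt_path, map_location="cpu")
model.load_state_dict(state.get("model", state))
model.eval()
```

If loading fails with missing/unexpected keys, printing state.keys() usually reveals how the checkpoint was actually wrapped.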
I have a question about bash create_pretrain_data.sh.
In that script, the output of text_formatting.py is saved to ./dataset/pretrain_data/format_data/wikicorpus_en_format.txt, but create_data.py takes wikipedia_en_format.txt as its input. Is there a naming mismatch here?
Whenever I run
python pretrain_data_scripts/create_data.py --train_corpus $FORMAT_WIKI_PATH --output_dir $WIKI_SAVE_DIR --vocab_path $VOCAB_PATH --lowercase --epochs_to_generate 5 --max_seq_len 128 --max_predictions_per_seq 0
it fails because the wikipedia_en_format.txt file cannot be found.
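Until the mismatch is fixed in the script itself, one hypothetical workaround is to rename the formatted file to the name create_data.py expects. A sketch using only the two filenames from the question (the helper name is mine):

```python
import tempfile
from pathlib import Path

def align_format_name(fmt_dir):
    """Rename the file text_formatting.py writes to the name create_data.py reads."""
    src = Path(fmt_dir) / "wikicorpus_en_format.txt"   # actual output name
    dst = Path(fmt_dir) / "wikipedia_en_format.txt"    # expected input name
    if src.exists() and not dst.exists():
        src.rename(dst)
    return dst.exists()

# Demo on a throwaway directory standing in for ./dataset/pretrain_data/format_data
with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "wikicorpus_en_format.txt").write_text("formatted corpus")
    print(align_format_name(tmp))  # True
```

Pointing $FORMAT_WIKI_PATH at the file that actually exists would be the other obvious fix.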