

BERT-EMD

This repository contains a PyTorch implementation of the model presented in the paper "BERT-EMD: Many-to-Many Layer Mapping for BERT Compression with Earth Mover's Distance" (EMNLP 2020). The figure below illustrates a high-level view of the model's architecture. For more details about the techniques used in BERT-EMD, please refer to our paper.
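At its core, BERT-EMD treats the student's and the teacher's layers as two weighted distributions and finds the cheapest many-to-many mapping between them by solving an optimal-transport problem. The sketch below (ours, not the repository's implementation; it uses SciPy's LP solver, and the layer weights and pairwise distance matrix are assumed inputs) shows the computation in miniature:

```python
import numpy as np
from scipy.optimize import linprog

def emd(student_weights, teacher_weights, distance):
    """Earth Mover's Distance between two layer-weight distributions.

    Solves min <T, D> over transport matrices T >= 0 whose row sums equal
    student_weights and whose column sums equal teacher_weights.
    """
    m, n = distance.shape
    c = distance.reshape(-1)                 # objective: total transport cost
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):                       # row-sum (student mass) constraints
        A_eq[i, i * n:(i + 1) * n] = 1.0
    for j in range(n):                       # column-sum (teacher mass) constraints
        A_eq[m + j, j::n] = 1.0
    b_eq = np.concatenate([student_weights, teacher_weights])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun, res.x.reshape(m, n)

# Toy example: 2 student layers, 3 teacher layers, uniform layer weights.
cost, flow = emd(np.full(2, 0.5), np.full(3, 1 / 3),
                 np.array([[0.0, 1.0, 2.0],
                           [2.0, 1.0, 0.0]]))
```

In the example, each student layer sends its mass to the nearest teacher layers; only the middle teacher layer (distance 1 from both) forces any cost, so the optimal cost is 1/3.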

Installation

Run the command below to install the environment (Python 3).

pip install -r requirements.txt 

Data and Pre-trained Model Preparation

  1. Get GLUE data:
python download_glue_data.py --data_dir glue_data --tasks all

Alternatively, download from BaiduYun.

  2. Get the official BERT-Base model from here, then download and unzip it to the directory ./model/bert_base_uncased. Convert the TF checkpoint to a PyTorch model:
cd bert_finetune
python convert_bert_original_tf_checkpoint_to_pytorch.py \
--tf_checkpoint_path ../model/bert_base_uncased \
--bert_config_file ../model/bert_base_uncased/bert_config.json \
--pytorch_dump_path ../model/pytorch_bert_base_uncased

Or you can download the PyTorch version directly from Hugging Face and save it to ../model/pytorch_bert_base_uncased.

  3. Fine-tune the teacher model, taking the MRPC task as an example (working dir: ./bert_finetune):
export MODEL_PATH=../model/pytorch_bert_base_uncased/
export TASK_NAME=MRPC
python run_glue.py \
  --model_type bert \
  --model_name_or_path $MODEL_PATH \
  --task_name $TASK_NAME \
  --do_train \
  --do_eval \
  --do_lower_case \
  --data_dir ../data/glue_data/$TASK_NAME/ \
  --max_seq_length 128 \
  --per_gpu_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 4.0 \
  --save_steps 2000 \
  --output_dir ../model/$TASK_NAME/teacher/ \
  --evaluate_during_training \
  --overwrite_output_dir
  4. Get the pretrained general-distillation TinyBERT v2 student models: 4-layer and 6-layer. Unzip them to the directories model/student/layer4 and model/student/layer6, respectively. (These links may be temporarily unavailable; as an alternative, you can download from BaiduYun.)

  5. Distill the student model, taking the 4-layer student model as an example:

cd ../bert_emd
export TASK_NAME=MRPC
python emd_task_distill.py  \
--data_dir ../data/glue_data/$TASK_NAME/ \
--teacher_model ../model/$TASK_NAME/teacher/ \
--student_model ../model/student/layer4/ \
--task_name $TASK_NAME \
--output_dir ../model/$TASK_NAME/student/ \
--beta 0.01 --theta 1

update 2021/08/06

We replaced the softmax-based layer-weight update with sum normalization (each weight divided by the sum of all weights). In our experiments, this normalization is better than softmax on some datasets. Weights can range from 1e-3 to 1e+3.
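The difference between the two normalization schemes can be illustrated with a small NumPy sketch (illustrative only; the function and variable names are ours, not the repository's):

```python
import numpy as np

def softmax_normalize(w, T=1.0):
    """Previous scheme: temperature softmax over the layer weights."""
    w = np.asarray(w, dtype=float)
    e = np.exp(w / T - np.max(w / T))   # shifted for numerical stability
    return e / e.sum()

def sum_normalize(w):
    """New scheme: divide each weight by the sum of all weights."""
    w = np.asarray(w, dtype=float)
    return w / w.sum()

w = np.array([0.1, 1.0, 10.0])          # weights spanning two orders of magnitude
ratios = sum_normalize(w)               # preserves the ratios between weights
peaked = softmax_normalize(w)           # exponentiation lets the largest weight dominate
```

Sum normalization keeps the relative magnitudes of the raw weights intact, whereas softmax exaggerates the largest weight, which matters when weights span several orders of magnitude.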

update 2022/06/01

We added the hyperparameters for the best-performing models, listed below, and fixed some bugs.

Hyperparameters configurations for best-performing models

| Layer Num | Task  | alpha | beta  | T_emd | T | Learning Rate |
|-----------|-------|-------|-------|-------|---|---------------|
| 4         | CoLA  | 1     | 0.001 | 5     | 1 | 2e-5          |
| 4         | MNLI  | 1     | 0.005 | 1     | 3 | 5e-5          |
| 4         | MRPC  | 1     | 0.001 | 10    | 1 | 2e-5          |
| 4         | QQP   | 1     | 0.005 | 1     | 3 | 2e-5          |
| 4         | QNLI  | 1     | 0.005 | 1     | 3 | 2e-5          |
| 4         | RTE   | 1     | 0.005 | 1     | 1 | 2e-5          |
| 4         | SST-2 | 1     | 0.001 | 1     | 1 | 2e-5          |
| 4         | STS-b | 1     | 0.005 | 1     | 1 | 3e-5          |
| 6         | CoLA  | 1     | 0.001 | 1     | 7 | 2e-5          |
| 6         | MNLI  | 1     | 0.005 | 1     | 1 | 5e-5          |
| 6         | MRPC  | 1     | 0.005 | 1     | 1 | 2e-5          |
| 6         | QQP   | 1     | 0.005 | 1     | 1 | 2e-5          |
| 6         | QNLI  | 1     | 0.001 | 1     | 1 | 5e-5          |
| 6         | RTE   | 1     | 0.005 | 1     | 1 | 2e-5          |
| 6         | SST-2 | 1     | 0.001 | 1     | 1 | 2e-5          |
| 6         | STS-b | 1     | 0.005 | 1     | 1 | 3e-5          |
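For scripting sweeps over these settings, the table above can be expressed as a plain Python mapping keyed by (student layer count, GLUE task). This dict is a convenience we add here, not a file shipped with the repository, and the key names simply mirror the table columns:

```python
# Best-performing hyperparameters from the table above.
BEST_HPARAMS = {
    (4, "CoLA"):  dict(alpha=1, beta=0.001, T_emd=5,  T=1, lr=2e-5),
    (4, "MNLI"):  dict(alpha=1, beta=0.005, T_emd=1,  T=3, lr=5e-5),
    (4, "MRPC"):  dict(alpha=1, beta=0.001, T_emd=10, T=1, lr=2e-5),
    (4, "QQP"):   dict(alpha=1, beta=0.005, T_emd=1,  T=3, lr=2e-5),
    (4, "QNLI"):  dict(alpha=1, beta=0.005, T_emd=1,  T=3, lr=2e-5),
    (4, "RTE"):   dict(alpha=1, beta=0.005, T_emd=1,  T=1, lr=2e-5),
    (4, "SST-2"): dict(alpha=1, beta=0.001, T_emd=1,  T=1, lr=2e-5),
    (4, "STS-b"): dict(alpha=1, beta=0.005, T_emd=1,  T=1, lr=3e-5),
    (6, "CoLA"):  dict(alpha=1, beta=0.001, T_emd=1,  T=7, lr=2e-5),
    (6, "MNLI"):  dict(alpha=1, beta=0.005, T_emd=1,  T=1, lr=5e-5),
    (6, "MRPC"):  dict(alpha=1, beta=0.005, T_emd=1,  T=1, lr=2e-5),
    (6, "QQP"):   dict(alpha=1, beta=0.005, T_emd=1,  T=1, lr=2e-5),
    (6, "QNLI"):  dict(alpha=1, beta=0.001, T_emd=1,  T=1, lr=5e-5),
    (6, "RTE"):   dict(alpha=1, beta=0.005, T_emd=1,  T=1, lr=2e-5),
    (6, "SST-2"): dict(alpha=1, beta=0.001, T_emd=1,  T=1, lr=2e-5),
    (6, "STS-b"): dict(alpha=1, beta=0.005, T_emd=1,  T=1, lr=3e-5),
}

# Example lookup for a distillation run:
mrpc4 = BEST_HPARAMS[(4, "MRPC")]
```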

bert-emd's People

Contributors

lxk00


bert-emd's Issues

General Distill Stage?

Hello! The work is great and thanks for sharing the codes!

But I am confused about the general distillation stage: in the bert_emd folder, I see a file called general_distill.py, but it seems that this file is not used. Also, can you tell me what the pregenerated dataset in general_distill.py is?

Thank you very much!

The args.embedding_emd argument is not defined

File "emd_task_distill.py", line 390, in transformer_loss
if args.embedding_emd:

AttributeError: 'Namespace' object has no attribute 'embedding_emd'
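A likely cause is that emd_task_distill.py reads args.embedding_emd without ever registering the flag with argparse. A minimal workaround (the flag name comes from the traceback above; its semantics and help text are our assumption) is to define it with a default before parsing:

```python
import argparse

parser = argparse.ArgumentParser()
# Register the flag the code later reads; action="store_true" defaults it
# to False, so existing behaviour is unchanged unless the flag is passed.
parser.add_argument("--embedding_emd", action="store_true",
                    help="also align the embedding layer via EMD (assumed semantics)")

args = parser.parse_args([])                      # flag absent -> False
args_on = parser.parse_args(["--embedding_emd"])  # flag present -> True
```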

I can't reproduce the paper's result.

Hello! The work is great and thanks for sharing the codes!

I want to reproduce the BERT-EMD4's result, but there are so many hyperparameters that it is difficult to reproduce the experimental results. Can you provide the hyperparameters and experimental details used by different datasets?

Thank you very much!

Duplicated softmax on layer weights?

Thanks for sharing the code!
In emd_task_distill.py, you seem to perform softmax on the layer weights twice by default. Is this intended, or am I misunderstanding something?

First softmax:

if args.update_weight:
get_new_layer_weight(att_trans_matrix, att_distance_matrix, stu_layer_num, tea_layer_num, T=T)

  • where "get_new_layer_weight()" contains:
    student_layer_weight = softmax(student_layer_weight / T)
    teacher_layer_weight = softmax(teacher_layer_weight / T)

Second softmax:

if args.add_softmax:
att_student_weight = softmax(att_student_weight)
att_teacher_weight = softmax(att_teacher_weight)
rep_student_weight = softmax(rep_student_weight)
rep_teacher_weight = softmax(rep_teacher_weight)
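Whatever the intent in the repository, the effect the issue describes is easy to see in a small NumPy sketch: a second softmax flattens the distribution produced by the first, pulling the weights toward uniform (the function and variable names here are ours):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))   # shift for numerical stability
    return e / e.sum()

w = np.array([0.1, 1.0, 2.0])
once = softmax(w)               # one normalization pass
twice = softmax(once)           # second softmax applied to the first's output
# `twice` has a visibly smaller spread than `once`.
```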
