

Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers

License: MIT

Code for the paper "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" (ICLR 2023).

Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal, Shiwei Liu, Zhangyang Wang

Our implementation is based on the fastmoe and huggingface repositories. More training scripts and pre-trained models are coming soon.

Overview

Despite their remarkable achievements, gigantic transformers encounter significant drawbacks, including exorbitant computational and memory footprints during training, as well as severe collapse evidenced by a high degree of parameter redundancy. Sparsely-activated Mixture-of-Experts (SMoEs) have shown promise in mitigating the training-efficiency issue, yet they are prone to (1) redundant experts, due to representational collapse; and (2) poor expert scalability for inference and downstream fine-tuning, primarily caused by overfitting of the learned routing policy to the number of activated experts during training. While recent research efforts focus predominantly on improving routing policies to encourage expert specialization, this work explores the overlooked scalability bottleneck of SMoEs and leverages it to effectively scale dense transformers.

To this end, we propose a new plug-and-play training framework, SMoE-Dropout, which scales transformers to better accuracy at their full capacity without collapse. Specifically, SMoE-Dropout consists of a randomly initialized and fixed router network that activates experts, and it gradually increases the number of activated experts as training progresses. Transformers trained with SMoE-Dropout naturally exhibit a self-slimmable property subject to resource availability, offering smooth and consistent performance boosts as more experts are activated during inference or fine-tuning. The framework of SMoE-Dropout is illustrated in the figure below.
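For intuition, here is a minimal PyTorch sketch of the mechanism described above, assuming a frozen, randomly initialized linear router and top-k gating. All class and argument names are hypothetical; this is not the repository's fastmoe-based implementation.

```python
# Minimal sketch of an SMoE-Dropout-style FFN block: a frozen random router
# scores the experts, and only the top-k experts contribute per token.
# Illustrative only; not the repository's actual implementation.
import torch
import torch.nn as nn


class SMoEDropoutFFN(nn.Module):
    """One FFN block replaced by `num_experts` smaller expert FFNs."""

    def __init__(self, d_model: int, d_hidden: int, num_experts: int):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.ReLU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(num_experts)
        )
        # Router is randomly initialized and then frozen: it is never trained.
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.router.weight.requires_grad_(False)

    def forward(self, x: torch.Tensor, k: int) -> torch.Tensor:
        # x: (batch, seq, d_model); k: number of experts to activate per token.
        scores = torch.softmax(self.router(x), dim=-1)      # (B, S, E)
        topk_scores, topk_idx = scores.topk(k, dim=-1)      # (B, S, k)
        out = torch.zeros_like(x)
        # For clarity, every expert runs on every token and is masked out when
        # it is not in that token's top-k; a real SMoE dispatches sparsely.
        for e, expert in enumerate(self.experts):
            gate = (topk_scores * (topk_idx == e)).sum(-1, keepdim=True)
            out = out + gate * expert(x)
        return out
```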

Prerequisites

Usage

Pretraining Transformer-XL on enwik8:
bash script/table1/smoe_dropout.sh
bash script/table1/directly_dense_training.sh
Transferring the pretrained model to SST-2:
bash script/table2/sst2/dense_model.sh [pretrained-checkpoint]
bash script/table2/sst2/smoe_dropout.sh [pretrained-checkpoint]
Ablation:
bash script/figure5/8layers_smoe_dropout.sh
bash script/figure5/12layers_smoe_dropout.sh
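
SMoE-Dropout gradually increases the number of activated experts k during training and lets a smaller k be used at inference. The sketch below shows one simple way such a schedule could look; it is a hypothetical linear ramp, not necessarily the schedule used by the scripts above.

```python
# Hypothetical linear ramp for the number of activated experts k; the exact
# schedule used by the training scripts above may differ.
def activated_experts(step: int, total_steps: int, num_experts: int,
                      k_start: int = 2) -> int:
    """Linearly increase k from k_start to num_experts over training."""
    frac = min(step / max(total_steps, 1), 1.0)
    return round(k_start + frac * (num_experts - k_start))


# Training loop (pseudo-usage): k grows with the step counter.
#   k = activated_experts(step, total_steps, num_experts=8)  # example value
#   out = moe_ffn(x, k=k)
#
# At inference, a smaller k can be selected to trade accuracy for compute,
# which is the "self-slimmable" behavior described in the Overview.
```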

Citation

@inproceedings{chen2023sparse,
  title={Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers},
  author={Tianlong Chen and Zhenyu Zhang and Ajay Kumar Jaiswal and Shiwei Liu and Zhangyang Wang},
  booktitle={The Eleventh International Conference on Learning Representations},
  year={2023},
  url={https://openreview.net/forum?id=w1hwFUb_81}
}


Issues

What is the random routing policy of SMoE-Dropout?

I read the paper and could not find a detailed description of the random routing policy. Are you using standard dropout as the routing strategy during training? If not, why is the method called SMoE-Dropout?
I would appreciate your answer.

About the code for the modularization stage

Hi @Kyriection, thanks for the exciting work.
I noticed that you split the model's MLP into several smaller MLPs that serve as MoE experts, but I could not find any code for this stage in the repository. Did I miss something? Could you share some details about it?
Thanks a lot.
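
For illustration only, here is one plausible way such a splitting stage could be implemented, assuming the dense FFN's hidden dimension is sliced evenly across experts. This is a hypothetical sketch, not the repository's actual modularization code, and the helper name is made up.

```python
# Hypothetical illustration: split one dense FFN (d_model -> d_hidden ->
# d_model) into `num_experts` smaller FFNs by slicing the hidden dimension.
# NOT the repository's code; it only shows one plausible interpretation.
import torch.nn as nn


def split_ffn_into_experts(w1: nn.Linear, w2: nn.Linear,
                           num_experts: int) -> nn.ModuleList:
    # Assumes both linear layers have bias terms and that the hidden size
    # divides evenly across experts.
    d_hidden = w1.out_features
    assert d_hidden % num_experts == 0, "hidden size must divide evenly"
    chunk = d_hidden // num_experts

    experts = nn.ModuleList()
    for e in range(num_experts):
        lo, hi = e * chunk, (e + 1) * chunk
        fc1 = nn.Linear(w1.in_features, chunk)
        fc2 = nn.Linear(chunk, w2.out_features)
        # Copy the corresponding slice of the dense weights into each expert.
        fc1.weight.data.copy_(w1.weight.data[lo:hi, :])
        fc1.bias.data.copy_(w1.bias.data[lo:hi])
        fc2.weight.data.copy_(w2.weight.data[:, lo:hi])
        # Split the output bias evenly so an unweighted sum of all experts
        # would reproduce the original bias (an assumption of this sketch).
        fc2.bias.data.copy_(w2.bias.data / num_experts)
        experts.append(nn.Sequential(fc1, nn.ReLU(), fc2))
    return experts
```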

Question about the BERT config used in the paper

Hi, I'm curious about the config of the BERT model used in the paper. Of BERT's 12 layers, which ones use an MoE FFN layer?
Also, are you planning to share the training scripts and configs for BERT/RoBERTa?
