
Comments (7)

albertfgu commented on July 26, 2024

Hi, the WikiText-103 LM config has been added to the README. It can be run with

python -m train experiment=s4-wt103 wandb=null

Note that these experiments are quite expensive. We used 8 A100 GPUs and trained for around 5 days (for comparison, the original paper reports that the baseline Transformer took 3 days). This is because the S4 model overfits more readily on this small dataset, so we turned the regularization up very high and trained for longer.
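As a rough illustration of what "very high regularization" can mean in practice (the values below are assumptions for illustration only, not the paper's actual settings, which live in the repo's experiment configs):

```python
import torch
from torch import nn

# Illustrative stand-in for the S4 LM; all values here are hypothetical.
model = nn.Sequential(
    nn.Linear(512, 512),
    nn.Dropout(p=0.3),   # aggressive dropout
)
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=5e-4,
    weight_decay=0.25,   # strong decoupled weight decay
)
```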


thanhlt998 commented on July 26, 2024

Could you provide your pretrained S4 LM on the WikiText-103 corpus, so that I can experiment with the power of this architecture on other downstream tasks? Thanks!


yuvalkirstain commented on July 26, 2024

@albertfgu Thanks for the config update.

Can you please upload the logs from the WikiText-103 experiment? They would help a lot in reproducing the results and provide an early signal if something is wrong.
I am trying to reproduce the results, and after 23,000 steps the validation perplexity is ~29 (I expected a lower perplexity at this stage).

Thank you very much!


albertfgu commented on July 26, 2024

Hi @yuvalkirstain, I am working on exporting the logs. It is a bit complicated because, due to resource management on our cluster, this experiment was split across multiple runs with checkpointing and resuming.

That said, your perplexity after 23,000 steps actually tracks ours very closely. As noted in the paper, S4 tends to overfit on this dataset, so we used very heavy regularization, which slowed down training.
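For reference, word-level perplexity is just the exponential of the mean cross-entropy loss (in nats per token), so a perplexity of ~29 corresponds to a validation loss of about 3.37. A minimal check:

```python
import math

val_loss = 3.37                      # mean cross-entropy in nats per token
print(f"{math.exp(val_loss):.1f}")   # ~29.1, i.e. the reported perplexity
```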


gaceladri commented on July 26, 2024

Hello @albertfgu, are you planning to release your pre-trained models for text? I am very interested in them. Also, are you planning to integrate your models into HuggingFace? huggingface/transformers#14837


albertfgu commented on July 26, 2024

I think we are leaning toward not releasing the one trained for the paper, for a few reasons; among them, the model implementation is still undergoing changes and improvements. We are working with HuggingFace to release a version of the model, though.


albertfgu commented on July 26, 2024

A WikiText-103 model has been re-trained and released. Instructions for using it are located throughout the READMEs, for example here.
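As a hedged sketch of fetching a released checkpoint from the HuggingFace Hub (the repo id and filename below are placeholders; the actual instructions in the READMEs take precedence):

```python
import torch
from huggingface_hub import hf_hub_download

# Placeholder identifiers -- substitute the real ones from the README.
ckpt_path = hf_hub_download(
    repo_id="some-org/s4-wt103",     # hypothetical repo id
    filename="checkpoint.ckpt",      # hypothetical filename
)
state_dict = torch.load(ckpt_path, map_location="cpu")
```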

