Hi, the WikiText-103 LM config has been added to the README. It can be run with
python -m train experiment=s4-wt103 wandb=null
Note that these experiments are quite expensive: we used 8 A100 GPUs and trained for around 5 days (the baseline Transformer in the original paper took 3 days). This is because the S4 model overfits harder on this small dataset, so we turned the regularization up very high and trained for longer.
from s4.
Could you provide your pretrained S4 LM on the WikiText-103 corpus, so we can experiment with the power of this architecture on other downstream tasks? Thanks!
@albertfgu Thanks for the config update.
Can you please upload the logs from the WikiText-103 experiment? It will help a lot in reproducing the results and provide an early signal if something is wrong.
I am trying to reproduce the results and after 23,000 steps the validation perplexity is ~29 (I expected a lower perplexity at this stage).
Thank you very much!
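For reference when comparing numbers like these, perplexity is just the exponential of the mean per-token cross-entropy (a generic relation, not specific to this repo), so a validation perplexity of ~29 corresponds to a mean token loss of about 3.37 nats:

```python
import math

def perplexity(mean_nll_nats: float) -> float:
    # Perplexity = exp(mean per-token negative log-likelihood, in nats).
    # If your framework reports loss in bits, multiply by ln(2) first.
    return math.exp(mean_nll_nats)

# A reported validation perplexity of ~29 back-solves to a loss of ln(29) nats.
print(round(math.log(29), 2))      # mean token loss in nats
print(round(perplexity(3.37), 1))  # and back again
```

This is handy for sanity-checking a run early, since most training logs report the loss rather than the perplexity directly.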
Hi @yuvalkirstain, I am working on exporting the logs. It is a bit complicated: due to resource management on our cluster, this experiment was split into multiple runs with checkpointing and resuming.
That said, your perplexity after 23,000 steps actually tracks ours very closely. As noted in the paper, S4 tends to overfit on this dataset, so we used very high regularization, which slowed down training.
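The checkpoint-and-resume pattern mentioned above can be sketched generically (this is not the repo's actual mechanism; the file layout and state dict here are hypothetical stand-ins). The key detail is writing the checkpoint atomically, so a preempted job never leaves a truncated file:

```python
import os
import pickle
import tempfile

def save_checkpoint(path, state):
    # Write to a temp file, then rename: os.replace is atomic, so an
    # interrupted job never leaves a half-written checkpoint behind.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)

def load_or_init(path, default):
    # Resume from the last checkpoint if one exists; otherwise start fresh.
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return default

ckpt = os.path.join(tempfile.mkdtemp(), "state.pkl")

def train(max_steps):
    state = load_or_init(ckpt, {"step": 0})
    while state["step"] < max_steps:
        state["step"] += 1          # stand-in for one optimizer step
        save_checkpoint(ckpt, state)
    return state["step"]

train(3)         # first job: runs steps 1-3, then is "preempted"
print(train(5))  # second job resumes from step 3 and finishes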
Hello @albertfgu, are you planning to release your pre-trained models for text? I am very interested in them. Also, are you planning to integrate your models into HuggingFace? huggingface/transformers#14837
I think we are leaning toward not releasing the one trained for the paper, for a few reasons, such as the model implementation still undergoing changes and improvements. We are working with HuggingFace to release a version of the model, though.
A WikiText-103 model has been re-trained and released. Instructions for using it are located in the READMEs.