This is a PyTorch implementation of the Transformer model in "Attention is All You Need" (Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin, arXiv, 2017).
A novel sequence-to-sequence framework that relies on the self-attention mechanism, instead of convolution operations or recurrent structures, and achieves state-of-the-art performance on the WMT 2014 English-to-German translation task. (2017/06/12)
The official TensorFlow implementation can be found at tensorflow/tensor2tensor.
To learn more about the self-attention mechanism, you could read "A Structured Self-attentive Sentence Embedding".
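For reference, the scaled dot-product attention at the core of the model can be sketched in a few lines. NumPy is used here for a dependency-light illustration; the repo itself implements this in PyTorch, and the function and argument names below are illustrative:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V  (Eq. 1 of the paper)."""
    d_k = q.shape[-1]
    # Similarity scores between queries and keys, scaled by sqrt(d_k).
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d_k)
    if mask is not None:
        # mask is boolean: True = attend, False = block (e.g. padding/future positions).
        scores = np.where(mask, scores, -1e9)
    # Row-wise softmax (shifted by the max for numerical stability).
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights
```

Each output position is a convex combination of the value vectors, with the weights given by the softmax over query-key similarities.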
The project now supports training and translation with a trained model.
Note that this project is still a work in progress.
BPE related parts are not yet fully tested.
If you have any suggestion or find any error, feel free to file an issue to let me know. :)
Modified by GBC:
Install the dependencies according to requirements.txt. In particular, note that torchtext 0.8 is required, which in turn requires PyTorch 1.7.
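For example (a sketch; the pins that matter are torchtext 0.8 and PyTorch 1.7, and the exact patch versions below are assumptions):

```shell
# Install the pinned dependencies (assumes a fresh virtual environment).
pip install -r requirements.txt
# torchtext 0.8 requires PyTorch 1.7; the patch versions here are illustrative.
pip install torch==1.7.0 torchtext==0.8.0
```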
An example of training for the WMT'16 Multimodal Translation task (http://www.statmt.org/wmt16/multimodal-task.html).
Modified by GBC:
conda install -c conda-forge spacy # install spaCy
# python -m spacy download en
# python -m spacy download de
If the two download commands above fail due to network issues, go to GitHub, download the two model packages de_core_news_sm and en_core_web_sm, and then install them via
pip install de_core_news_sm
pip install en_core_web_sm
Modified by GBC: the shortcuts en and de are no longer valid names for the current spaCy language models, so they were changed to de_core_news_sm and en_core_web_sm. The code in preprocess.py was changed accordingly.
python preprocess.py -lang_src de_core_news_sm -lang_trg en_core_web_sm -share_vocab -save_data m30k_deen_shr.pkl
train_multi30k_de_en.sh
python translate.py -data_pkl m30k_deen_shr.pkl -model trained.chkpt -output prediction.txt
Since the interfaces are not unified, you need to switch the main function call from main_wo_bpe to main.
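The switch can be sketched as follows; the function bodies here are placeholders standing in for the repo's real entry points, not the actual implementations:

```python
def main():
    """Placeholder for the BPE-based pipeline's entry point."""
    return "bpe"

def main_wo_bpe():
    """Placeholder for the plain-vocabulary (non-BPE) pipeline's entry point."""
    return "wo_bpe"

if __name__ == "__main__":
    # Swap this call to main() when following the BPE workflow below.
    main_wo_bpe()
```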
python preprocess.py -raw_dir /tmp/raw_deen -data_dir ./bpe_deen -save_data bpe_vocab.pkl -codes codes.txt -prefix deen
python train.py -data_pkl ./bpe_deen/bpe_vocab.pkl -train_path ./bpe_deen/deen-train -val_path ./bpe_deen/deen-val -log deen_bpe -embs_share_weight -proj_share_weight -label_smoothing -output_dir output -b 256 -warmup 128000 -epoch 400
- TODO:
  - Load vocabulary.
  - Perform decoding after the translation.
- Parameter settings:
  - batch size 256
  - warmup step 4000
  - epoch 200
  - lr_mul 0.5
  - label smoothing
  - do not apply BPE and shared vocabulary
  - target embedding / pre-softmax linear layer weight sharing.
- Coming soon:
  - Evaluation on the generated text.
  - Attention weight plot.
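Among the settings above, label smoothing is the least self-explanatory. A minimal sketch of a label-smoothed cross-entropy loss (in NumPy with illustrative names; the repo computes this in PyTorch) could look like:

```python
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def label_smoothed_loss(logits, targets, eps=0.1):
    """Cross-entropy where the true class gets probability 1 - eps and the
    remaining eps is spread uniformly over the other classes."""
    n, n_class = logits.shape
    log_p = np.log(softmax(logits))
    # Build the smoothed target distribution instead of a one-hot vector.
    dist = np.full((n, n_class), eps / (n_class - 1))
    dist[np.arange(n), targets] = 1.0 - eps
    return float(-(dist * log_p).sum(axis=1).mean())
```

With eps=0 this reduces to ordinary cross-entropy; a small eps discourages over-confident predictions, which the paper reports hurts perplexity slightly but improves accuracy and BLEU.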
- The byte pair encoding parts are borrowed from subword-nmt.
- The project structure, some scripts and the dataset preprocessing steps are heavily borrowed from OpenNMT/OpenNMT-py.
- Thanks for the suggestions from @srush, @iamalbert, @Zessay, @JulesGM, @ZiJianZhao, and @huanghoujing.