
seq2seq-pytorch's Introduction

Sequence to Sequence models with PyTorch

This repository contains implementations of Sequence to Sequence (Seq2Seq) models in PyTorch.

At present it has implementations for:

* Vanilla Sequence to Sequence models

* Attention based Sequence to Sequence models from https://arxiv.org/abs/1409.0473 and https://arxiv.org/abs/1508.04025

* Faster attention mechanisms using dot products between the **final** encoder and decoder hidden states

* Sequence to Sequence autoencoders (experimental)

Sequence to Sequence models

A vanilla sequence to sequence model, presented in https://arxiv.org/abs/1409.3215 and https://arxiv.org/abs/1406.1078, consists of using a recurrent neural network such as an LSTM (http://dl.acm.org/citation.cfm?id=1246450) or GRU (https://arxiv.org/abs/1412.3555) to encode a sequence of words or characters in a source language into a fixed-length vector representation, and then decoding from that representation using another RNN in the target language.

Sequence to Sequence
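
A minimal sketch of such an encoder-decoder in PyTorch is shown below; the layer sizes and single-layer unidirectional LSTMs are illustrative and do not reproduce the exact architecture used in this repository:

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, trg_vocab, emb_dim=512, hidden_dim=1024):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.trg_emb = nn.Embedding(trg_vocab, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, trg_vocab)

    def forward(self, src, trg):
        # Encode the source sequence into a fixed-length state (h_n, c_n).
        _, state = self.encoder(self.src_emb(src))
        # Decode conditioned on that state, feeding the ground-truth target
        # tokens as inputs (teacher forcing during training).
        dec_out, _ = self.decoder(self.trg_emb(trg), state)
        return self.out(dec_out)  # (batch, trg_len, trg_vocab) logits
```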

An extension of sequence to sequence models that incorporates an attention mechanism was presented in https://arxiv.org/abs/1409.0473; it uses information from the RNN hidden states in the source language at each time step of the decoder RNN. This attention mechanism significantly improves performance on tasks like machine translation. A few variants of the attention model for the task of machine translation were presented in https://arxiv.org/abs/1508.04025.

Sequence to Sequence with attention
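
A sketch of the additive (Bahdanau-style) attention computation for a single decoder time step is shown below; module and variable names are illustrative and not taken from this repository:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    def __init__(self, enc_dim, dec_dim, attn_dim=256):
        super().__init__()
        self.w_enc = nn.Linear(enc_dim, attn_dim)
        self.w_dec = nn.Linear(dec_dim, attn_dim)
        self.v = nn.Linear(attn_dim, 1)

    def forward(self, dec_hidden, enc_outputs):
        # dec_hidden: (batch, dec_dim); enc_outputs: (batch, src_len, enc_dim)
        scores = self.v(torch.tanh(
            self.w_enc(enc_outputs) + self.w_dec(dec_hidden).unsqueeze(1)
        )).squeeze(-1)                                   # (batch, src_len)
        alpha = F.softmax(scores, dim=-1)                # attention weights
        context = torch.bmm(alpha.unsqueeze(1), enc_outputs).squeeze(1)
        return context, alpha                            # context: (batch, enc_dim)
```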

The repository also contains a simpler and faster variant of the attention mechanism that doesn't attend over the hidden states of the encoder at each decoder time step. Instead, it computes a single batched dot product between all the hidden states of the decoder and the encoder once, after the decoder has processed all inputs in the target. This, however, comes at a minor cost in model performance. One advantage of this model is that it is possible to use the cuDNN LSTM in the attention-based decoder as well, since the attention is computed after running through all the inputs in the decoder.
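
A sketch of this batched dot-product attention is shown below (shapes and names are illustrative):

```python
import torch
import torch.nn.functional as F

def fast_attention(dec_h, enc_h):
    # dec_h: (batch, trg_len, dim) -- all decoder hidden states at once
    # enc_h: (batch, src_len, dim) -- all encoder hidden states
    scores = torch.bmm(dec_h, enc_h.transpose(1, 2))   # (batch, trg_len, src_len)
    alpha = F.softmax(scores, dim=-1)
    context = torch.bmm(alpha, enc_h)                  # (batch, trg_len, dim)
    return torch.cat([context, dec_h], dim=-1)         # fed to the output projection
```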

Results on English - French WMT14

The following presents the model architecture and results obtained when training on the WMT14 English-French dataset. The training data is the English-French bitext from Europarl-v7. The validation dataset is newstest2011.

The model was trained with the following configuration:

* Source and target word embedding dimensions - 512

* Source and target LSTM hidden dimensions - 1024

* Encoder - 2 Layer Bidirectional LSTM

* Decoder - 1 Layer LSTM

* Optimization - ADAM with a learning rate of 0.0001 and batch size of 80

* Decoding - Greedy decoding (argmax)

| Model                 | BLEU  | Train Time Per Epoch |
|-----------------------|-------|----------------------|
| Seq2Seq               | 11.82 | 2h 50min             |
| Seq2Seq FastAttention | 18.89 | 3h 45min             |
| Seq2Seq Attention     | 22.60 | 4h 47min             |

Times reported are using a pre-2016 NVIDIA GeForce Titan X.

Running

To run, edit the config file and execute `python nmt.py --config <your_config_file>`.

NOTE: This only runs on a GPU for now.

seq2seq-pytorch's People

Contributors

MaximumEntropy


seq2seq-pytorch's Issues

ValueError: Expecting property name: line 6 column 3 (char 83)

File "/home/mb75502/Seq2Seq-PyTorch/data_utils.py", line 27, in read_config
json_object = json.load(open(file_path, 'r'))
File "/home/mb75502/anaconda2/lib/python2.7/json/init.py", line 291, in load
**kw)
File "/home/mb75502/anaconda2/lib/python2.7/json/init.py", line 339, in loads
return _default_decoder.decode(s)
File "/home/mb75502/anaconda2/lib/python2.7/json/decoder.py", line 364, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/home/mb75502/anaconda2/lib/python2.7/json/decoder.py", line 380, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Expecting property name: line 6 column 3 (char 83)
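
This is the JSON parser rejecting the config file itself: "Expecting property name" at a given line and column almost always means there is a trailing comma (or a comment) just before a closing brace in the JSON. A minimal reproduction, with made-up keys rather than the repository's actual config schema:

```python
import json

# json.load rejects trailing commas (and comments). A config like the one
# below -- the keys are purely illustrative -- reproduces the error above:
bad_config = """{
  "training": {
    "src": "train.en",
  }
}"""

try:
    json.loads(bad_config)
except ValueError as e:
    print(e)  # e.g. "Expecting property name: line 4 column 3 ..."
```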

Share tokenized data

@MaximumEntropy Could you, or anyone reading this, share the tokenized version of the data here? It's really important that I run this, but I can't install mosesdecoder on my server (which is shared).

RuntimeError: bool value of Tensor with more than one value is ambiguous

While running your code, I encountered this error.

Traceback (most recent call last):
  File "nmt.py", line 181, in <module>
    decoder_logit = model(input_lines_src, input_lines_trg)
  File "/home/cmaurya1/code/py2.7/local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/cmaurya1/code/seq2seq/seq2seq_maximum_entropy/model.py", line 841, in forward
    ctx_mask
  File "/home/cmaurya1/code/py2.7/local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/cmaurya1/code/seq2seq/seq2seq_maximum_entropy/model.py", line 382, in forward
    output.append(isinstance(hidden, tuple) and hidden[0] or hidden)
RuntimeError: bool value of Tensor with more than one value is ambiguous

Any hint on how to solve this?
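
A likely culprit is the line quoted in the traceback, `output.append(isinstance(hidden, tuple) and hidden[0] or hidden)` in model.py: the `and`/`or` idiom ends up calling `bool()` on a Tensor, which PyTorch 0.4+ rejects for tensors with more than one element. A possible fix (untested against this repository) is an explicit conditional:

```python
# Avoid truth-testing a Tensor: pick the hidden state explicitly instead of
# relying on `x and y or z`, which calls bool() on the intermediate Tensor.
output.append(hidden[0] if isinstance(hidden, tuple) else hidden)
```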

Download Script for WMT Data?

Hey @MaximumEntropy, thanks for such a nice, clean repo. I was wondering if there was a specific script you used to download the WMT data. Maybe you can point us to what you used?

Also, do you mind sharing how many training examples there are in the WMT data? It looks like you have ~5hr train time per epoch; I was wondering how many training examples were in each epoch.

teacher forcing

Firstly, thanks for your code, it's really helpful to me. But could I know where the teacher forcing part is? Thanks again ^_^
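
For context, seq2seq training loops like this one typically use teacher forcing implicitly: the decoder is fed the ground-truth target tokens (shifted by one position) rather than its own predictions. A generic sketch, with placeholder names (`model`, `criterion`, `src`, `trg`) that are not tied to this repository's code:

```python
# trg holds ground-truth target token ids, e.g. <s> y1 ... yn </s>.
decoder_input = trg[:, :-1]    # <s> y1 ... y_{n-1}: fed to the decoder (teacher forcing)
decoder_target = trg[:, 1:]    # y1 ... yn </s>: compared against the decoder logits
logits = model(src, decoder_input)                      # (batch, trg_len - 1, vocab)
loss = criterion(logits.reshape(-1, logits.size(-1)),   # e.g. nn.CrossEntropyLoss
                 decoder_target.reshape(-1))
```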

question about the 'batch_mask'

Hi, your code has a nice abstraction, thanks for sharing it. But I have a question about the 'attentionLSTM': it seems that you didn't use any 'ctx_mask' or 'trg_mask' in the code related to the attention part. Won't this cause errors in the attention computation? I'm new to PyTorch, hope for your reply!

how to run the code with beam search?

Dear authors,

Thanks for sharing your code. It is well structured and easy to read, but I still ran into a problem running seq2seq with beam search.

In evaluate.py you declared that beam search is a TODO, and in decode.py I found that BeamSearchDecoder has been implemented, so I tried to run decode.py, but it throws an exception like this:

Traceback (most recent call last):
  File "decode.py", line 479, in <module>
    decoder.translate()
  File "decode.py", line 242, in translate
    hypotheses, scores = decoder.decode_batch(j)
  File "decode.py", line 153, in decode_batch
    context
  File "/usr/share/Anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
TypeError: forward() takes at most 3 arguments (4 given)

So, could you tell me how to run decode.py, or give me some suggestions for implementing the beam search part? Any suggestions would be highly appreciated.

Thanks.

get_best() returns index 1

Hi. In beam_search.py you have a function get_best(). Shouldn't this return the first element of the sorted list, i.e. index 0, instead of index 1?

Could you declare the PyTorch version?

Hi, MaximumEntropy.
I ran your code on PyTorch 0.4 with CUDA 9.0 and it produces many deprecation warnings.
Could you tell me which PyTorch version you used?

Bugs in Seq2Seq model

Hi, the code has a nice abstraction and is easy to follow. Thanks!!

However, there are some issues in your implementation:

(code)

If you don't pass c_t through a Linear layer from the encoder hidden size to the decoder hidden size, the code crashes (the encoder and decoder can have different dimensions); a sketch of such a projection is shown after these points.

(code)

When self.decoder.num_layers != 1, the view call crashes because of a dimension mismatch.
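
For what it's worth, one common way to handle the first issue is to project the encoder's final states into the decoder's hidden size before using them as the initial decoder state. A sketch under illustrative dimensions, not the repository's code:

```python
import torch
import torch.nn as nn

encoder_hidden_dim, decoder_hidden_dim, batch = 1024, 512, 8  # illustrative sizes
h_t = torch.randn(batch, encoder_hidden_dim)  # stand-in for the final encoder hidden state
c_t = torch.randn(batch, encoder_hidden_dim)  # stand-in for the final encoder cell state

# Bridge the size mismatch with learned projections before initializing the decoder.
enc_to_dec_h = nn.Linear(encoder_hidden_dim, decoder_hidden_dim)
enc_to_dec_c = nn.Linear(encoder_hidden_dim, decoder_hidden_dim)
decoder_h0 = torch.tanh(enc_to_dec_h(h_t))
decoder_c0 = torch.tanh(enc_to_dec_c(c_t))
```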

Questions about the implementation

Hello!

I am reading your implementation line by line and find it nice and easy to follow. Thanks a lot! But I still have some questions. Since I haven't finished reading yet, I guess I will have more later on.

You set the requires_grad of the two initial hidden states to False (code). Could you explain why you did this? I thought they should be True for back-propagation. Also, would it be wrong to set them to True?
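
For context (a general observation, not a statement of the author's intent): zero-valued initial hidden states are constants rather than learned parameters, so they do not need `requires_grad=True`; gradients still flow back through the recurrence into the LSTM weights and the embeddings. A minimal illustration:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, num_layers=1, batch_first=True)
x = torch.randn(4, 5, 8)           # (batch, seq_len, input_size)
h0 = torch.zeros(1, 4, 16)         # requires_grad defaults to False
c0 = torch.zeros(1, 4, 16)

out, _ = lstm(x, (h0, c0))
out.sum().backward()               # gradients still reach the LSTM parameters
print(lstm.weight_ih_l0.grad is not None)  # True
```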

Does summarization.py work?

Hey, thanks for your implementations.

Trying to get the summarization code running, but I have a feeling it doesn't actually work (yet). Am I correct to assume so? For example, it calls read_nmt_data instead of read_summarization_data, and you've removed the file in your refactor branch.

Any tips on getting it to run?
