
match-lstm's Introduction

match-lstm

PyTorch implementation of "Learning Natural Language Inference with LSTM" (Shuohang Wang and Jing Jiang, 2016): https://arxiv.org/pdf/1512.08849.pdf

Env.

Requirements

Dataset

Word Embeddings

Experiment

# Create a pickle file: data/snli.pkl
$ python3 dataset.py

# Run
$ python3 main.py

Training time

  • 156 minutes per training epoch with an NVIDIA Titan Xp GPU
  • Reducing the training time is a work in progress.

Result

  • Epoch 6
  • Training loss: 0.361281, accuracy: 86.1% (mLSTM train accuracy: 92.0%)
  • Dev loss: 0.392275, accuracy: 85.8% (mLSTM dev accuracy: 86.9%)
  • Test loss: 0.397926, accuracy: 85.5% (mLSTM test accuracy: 86.1%)

Reference

https://github.com/shuohangwang/SeqMatchSeq

match-lstm's People

Contributors

donghyeonk


match-lstm's Issues

How to use multi-GPU training?

I tried setting os.environ['CUDA_VISIBLE_DEVICES'] = '0,1' in my Python script, but when running the script from the command line it still uses only one GPU. I also tried nn.DataParallel(model), but got the error "'DataParallel' object has no attribute 'req_grad_params'".
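One likely cause of that attribute error (a sketch, not the repo's actual fix): nn.DataParallel wraps the model, so custom attributes such as req_grad_params must be reached through the wrapper's .module handle. A minimal illustration with a stand-in model:

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    """Stand-in for the repo's model, with a custom attribute."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)
        # Hypothetical custom attribute, standing in for req_grad_params
        self.req_grad_params = [p for p in self.parameters() if p.requires_grad]

    def forward(self, x):
        return self.fc(x)

model = TinyModel()
wrapped = nn.DataParallel(model)  # replicates across visible GPUs at forward time

# DataParallel forwards nn.Module methods but not custom attributes;
# those live on the underlying module, reachable via .module
params = wrapped.module.req_grad_params
print(len(params))  # weight and bias of fc
```

Note that even with .module fixed, CUDA_VISIBLE_DEVICES must be set before torch initializes CUDA (e.g., before the first import of torch) to take effect.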

BrokenPipeError: [Errno 32] Broken pipe

tr_loader, _, _ = snlidata.get_dataloaders(batch_size=32, num_workers=0, pin_memory=torch.cuda.is_available())

If I do not set num_workers to 0, I get this BrokenPipeError. Does anyone have an idea why?
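This error typically appears on Windows, where DataLoader worker processes are spawned by re-importing the main script; without an entry-point guard the re-import can break the worker pipe. A self-contained sketch using a generic TensorDataset (not the repo's snlidata):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    ds = TensorDataset(torch.arange(8).float().unsqueeze(1))
    # num_workers > 0 spawns worker processes; on Windows this requires
    # the __main__ guard below, otherwise re-importing the script
    # can raise BrokenPipeError. num_workers=0 sidesteps workers entirely.
    loader = DataLoader(ds, batch_size=4, num_workers=2)
    return sum(batch[0].numel() for batch in loader)

if __name__ == '__main__':
    print(main())  # 8
```

The same pattern applies to the call in the issue: wrap the get_dataloaders call (and the rest of the training script) under `if __name__ == '__main__':`.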

training acc problems

I trained following your training steps, but my results differ substantially from those reported, which puzzles me. Thank you.

oov problem

Sorry, I found that your code does not handle out-of-vocabulary (OOV) words when constructing the vocabulary. When a new test sentence contains a word that is not in the vocabulary, there is no unknown-word index to map it to.
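For illustration only (a hypothetical minimal vocabulary, not the repo's actual code): reserving an `<unk>` index at build time and falling back to it at encode time handles OOV tokens.

```python
# Reserve index 0 for padding and index 1 for unknown words; any token
# missing from the vocabulary at test time maps to <unk>.
PAD, UNK = '<pad>', '<unk>'

def build_vocab(tokens):
    vocab = {PAD: 0, UNK: 1}
    for tok in tokens:
        vocab.setdefault(tok, len(vocab))
    return vocab

def encode(vocab, tokens):
    unk = vocab[UNK]
    return [vocab.get(tok, unk) for tok in tokens]

vocab = build_vocab(['a', 'man', 'is', 'walking'])
print(encode(vocab, ['a', 'woman', 'is', 'running']))  # [2, 1, 4, 1]
```

The embedding matrix then needs rows for the two reserved indices as well, e.g. a zero vector for `<pad>` and a random or averaged vector for `<unk>`.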

What is the explanation of using multiprocessing?

Hi,

I am kind of confused about the use of torch.multiprocessing. As far as I understand, each batch is trained twice on the model, so training 10 epochs with 2 processes is essentially training 20 epochs. Is this correct? Since I've never used torch.multiprocessing, could you please explain its use in your code? I'd appreciate it.

Best,
Zihao
