
spinn's Introduction

This repository was first used for the paper A Fast Unified Model for Sentence Parsing and Understanding, adapted for several subsequent papers, and is under active development for related future projects. It contains code for sentence understanding models that use tree structure or dynamic graph structure.

Installation

Requirements:

  • Python 3.6
  • PyTorch 0.4
  • Additional dependencies listed in python/requirements.txt

Install PyTorch following the instructions at http://pytorch.org

Install the other Python dependencies using the command below.

python3 -m pip install -r python/requirements.txt

Running the code

The main executable for the SNLI experiments in the paper is supervised_classifier.py, whose flags specify the hyperparameters of the model. To run on a GPU, set the --gpu flag to a value greater than or equal to 0; the model runs on the CPU by default.

Here's a sample command that runs a fast, low-dimensional CPU training run, training and testing only on the dev set. It assumes that you have a copy of SNLI available locally.

    PYTHONPATH=spinn/python \
        python3 -m spinn.models.supervised_classifier --data_type nli \
        --training_data_path ~/data/snli_1.0/snli_1.0_dev.jsonl \
        --eval_data_path ~/data/snli_1.0/snli_1.0_dev.jsonl \
        --embedding_data_path python/spinn/tests/test_embedding_matrix.5d.txt \
        --word_embedding_dim 5 --model_dim 10 --model_type CBOW

For full runs, you'll also need a copy of the 840B word 300D GloVe word vectors.
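GloVe files are plain text: one token per line, followed by its vector components separated by spaces. A minimal loader sketch for that format (this `load_glove` helper is illustrative, not the repo's actual loader; the test embedding matrix used in the sample command above follows the same layout):

```python
def load_glove(path, dim):
    """Load word vectors from a GloVe-style text file.

    Each line holds a token followed by `dim` space-separated floats.
    Lines with the wrong number of components are skipped.
    """
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split(" ")
            if len(parts) == dim + 1:
                vectors[parts[0]] = [float(x) for x in parts[1:]]
    return vectors
```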

Semi-Supervised Parsing

You can train SPINN using only sentence-level labels. In this case, the integrated parser samples parse transitions at training time and is optimized with the REINFORCE algorithm. The command to run this model looks slightly different:

python3 -m spinn.models.rl_classifier --data_type listops \
    --training_data_path spinn/python/spinn/data/listops/train_d20a.tsv \
    --eval_data_path spinn/python/spinn/data/listops/test_d20a.tsv  \
    --word_embedding_dim 32 --model_dim 32 --mlp_dim 16 --model_type RLSPINN \
    --rl_baseline value --rl_reward standard --rl_weight 42.0
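Concretely, REINFORCE with a value baseline scales the log-probability of the sampled parser actions by the reward's advantage over a learned baseline, which reduces gradient variance. A minimal numeric sketch of one update (the `reinforce_step` name and the running-average baseline are illustrative assumptions, not the repo's API):

```python
def reinforce_step(log_probs, reward, baseline, baseline_lr=0.1):
    """One REINFORCE update for a sampled parse (illustrative sketch).

    log_probs : log-probabilities of the sampled parser actions
    reward    : scalar task reward (e.g. 1.0 if the classifier was correct)
    baseline  : current value-baseline estimate
    Returns (pseudo_loss, new_baseline).
    """
    advantage = reward - baseline
    # Minimizing this pseudo-loss increases the probability of action
    # sequences that earned above-baseline reward.
    pseudo_loss = -advantage * sum(log_probs)
    # Simple running-average update of the baseline estimate.
    new_baseline = baseline + baseline_lr * (reward - baseline)
    return pseudo_loss, new_baseline
```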

Note: This model does not yet work well on natural language data, although it does work on the included synthetic dataset, listops. Please look at the sweep file for an idea of which hyperparameters to use.

Log Analysis

This project contains a handful of tools for easier analysis of your model's performance.

For one, every fixed number of batches, some useful statistics are printed to the file specified by --log_path. This is convenient for visual inspection, and the script parse_logs.py is an example of how to parse the log file.
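As a sketch, such a log can be scraped with a short regex script. The `Step:`/`Acc:` line format below is hypothetical; check your own log output and parse_logs.py for the actual format:

```python
import re

# Hypothetical line format; adapt the pattern to the actual --log_path output.
LINE_RE = re.compile(r"Step:\s*(\d+).*?Acc:\s*([\d.]+)")

def parse_log(lines):
    """Return (step, accuracy) pairs for every line matching LINE_RE."""
    stats = []
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            stats.append((int(m.group(1)), float(m.group(2))))
    return stats
```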

Contributing

If you're interested in proposing a change or fix to SPINN, please submit a pull request. In addition, ensure that existing tests pass, and add new tests as appropriate. To run the tests, run this command from the root directory:

nosetests python/spinn/tests

Adding Logging Fields

SPINN outputs metrics and statistics into a text protocol buffer format. When adding new fields to the proto file, the generated proto code needs to be updated.

bash python/build.sh

License

Copyright 2018, New York University

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

spinn's People

Contributors

abhirast, alexandresablayrolles, anhad13, cgpotts, ciptah, dimalik, ethancaballero, hans, manning, mrdrozdov, raghavgupta93, shivam13verma, sleepinyourhat, woojinchung, woollysocks


spinn's Issues

Scripts for collapsing/binarizing ptb trees

Hi, could you make public the scripts for unary-collapsing and binarizing the original PTB trees? I want to make sure that I use the same trees as you did. Thanks a lot!

Purpose of '(' and ')' parentheses in ListOps

Hi,

I would like to clarify the purpose of the parentheses '(' and ')' in the ListOps dataset.

In my understanding, the order in which the operators are nested is already captured by the brackets '[' and ']', so why are the parentheses '(' and ')' needed? Am I missing something?

Thanks!

Best checkpoint for NMT

Best checkpoint saving is broken for NMT: in the current state, every checkpoint with a BLEU score greater than 0 is saved as the "best" checkpoint.
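The intended behavior is to compare each new score against the best score seen so far, not against zero. A minimal sketch of that logic (this `BestCheckpointTracker` is a hypothetical helper, not the codebase's API):

```python
class BestCheckpointTracker:
    """Track the best dev BLEU so far and decide when to overwrite
    the saved 'best' checkpoint (illustrative sketch)."""

    def __init__(self):
        self.best_bleu = float("-inf")

    def update(self, bleu):
        """Return True iff this checkpoint should replace the saved best."""
        if bleu > self.best_bleu:
            self.best_bleu = bleu
            return True
        return False
```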

Switch to SacreBLEU

The codebase currently uses Moses' Perl implementation of BLEU. Switch to SacreBLEU, which is written in Python; this will offer more flexibility, and we can easily get per-sentence BLEU scores.

Please put MT architecture behind a flag.

Currently, I can't run classification experiments on master. I get the following error, since a target_vocabulary is only defined if you're doing MT.

    Traceback (most recent call last):
      File "/Users/nikita.nangia/anaconda/envs/py36/lib/python3.6/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/Users/nikita.nangia/anaconda/envs/py36/lib/python3.6/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/Users/nikita.nangia/Documents/General/NYU/ML2/research/spinn/python/spinn/models/rl_classifier.py", line 442, in <module>
        run(only_forward=FLAGS.expanded_eval_only_mode)
      File "/Users/nikita.nangia/Documents/General/NYU/ML2/research/spinn/python/spinn/models/rl_classifier.py", line 382, in run
        FLAGS.training_data_path, FLAGS.eval_data_path)
      File "/Users/nikita.nangia/Documents/General/NYU/ML2/research/spinn/python/spinn/models/base.py", line 196, in load_data_and_embeddings
        target_vocabulary=target_vocabulary) if raw_training_data is not None else None
    UnboundLocalError: local variable 'target_vocabulary' referenced before assignment

Run model on arbitrary sentence

Sorry for the seemingly amateur question here. The code runs (with some minor modifications initially) and produces models, but I don't see where classifications are output or logged, if anywhere. How do I obtain classifications for cases in the test set, for example?

Cannot run code when following instructions: Assertion Error

Here and on the master branch whenever I pull I run into the same issue:
      File "/home/fbaturst/Desktop/spinn-listops-release1/python/spinn/util/data.py", line 51, in TrimDataset
        assert allow_cropping or diff == 0, "allow_eval_cropping is false but there are over-length eval examples."
    AssertionError: allow_eval_cropping is false but there are over-length eval examples.

I wouldn't like to change the existing code too much, so I would be very grateful if someone could give some background on this issue. I haven't modified anything in the original code so far.

Best,
Filip
