
wlp-parser's Introduction

WLP-Parser: a sequence tagger and relation extractor for wet lab protocols.

This repository contains a collection of neural network models that we used to demonstrate the utility of our dataset. These networks were trained using PyTorch.

A more detailed description of the wet lab protocol corpus can be found in this paper:

An Annotated Corpus for Machine Reading of Instructions in Wet Lab Protocols
Chaitanya Kulkarni, Wei Xu, Alan Ritter, Raghu Machiraju
In Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2018)

Additional information regarding the actions, entities, and relations can be found here.

Also check out a working demo for sequence tagging and relation extraction using the methods in this repository.

Usage

[TO BE POPULATED]

Additional References

For the Maximum Entropy model used to label all the actions, entities, and relations:

A maximum entropy approach to named entity recognition.
Marek Rei and Helen Yannakoudakis
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016)
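
For orientation, the snippet below sketches what a maximum entropy tagger amounts to in practice: a multinomial logistic regression over hand-crafted token features. It is a minimal illustration written against scikit-learn, not the model or feature set used for this corpus; the example sentence and tag names are invented for the demonstration.

# Minimal MaxEnt-style token classifier: logistic regression over
# simple per-token features. All data below is illustrative only.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def token_features(tokens, i):
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_title": tok.istitle(),
        "is_digit": tok.isdigit(),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

# Toy training example: one protocol step with made-up BIO-style tags.
sentence = ["Add", "50", "ml", "of", "PBS", "to", "the", "tube"]
tags = ["B-Action", "B-Amount", "I-Amount", "O", "B-Reagent", "O", "O", "B-Location"]

X = [token_features(sentence, i) for i in range(len(sentence))]
model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X, tags)
print(model.predict([token_features(sentence, 4)]))  # tag predicted for "PBS"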

For the LSTM + CRF model used for labelling actions and entities:

Neural Architectures for Named Entity Recognition
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer
In Proceedings of the 14th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2016)
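
For orientation, here is a minimal PyTorch sketch of the BiLSTM tagging approach that the architecture above builds on. It is a simplified stand-in, not the model in this repository: the CRF decoding layer and character-level features from Lample et al. are omitted, and all sizes are placeholders.

# Minimal BiLSTM token tagger in PyTorch (illustrative sketch only).
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, tagset_size, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # Project concatenated forward/backward states to per-tag scores.
        self.hidden2tag = nn.Linear(2 * hidden_dim, tagset_size)

    def forward(self, token_ids):
        embeds = self.embedding(token_ids)      # (batch, seq, emb_dim)
        lstm_out, _ = self.lstm(embeds)         # (batch, seq, 2*hidden_dim)
        return self.hidden2tag(lstm_out)        # (batch, seq, tagset_size)

# Toy usage with made-up sizes; real training would add CRF decoding and
# feed protocol sentences mapped through the corpus vocabulary.
model = BiLSTMTagger(vocab_size=5000, tagset_size=37)
dummy_batch = torch.randint(1, 5000, (2, 12))   # 2 sentences, 12 tokens each
tag_scores = model(dummy_batch)
loss = nn.CrossEntropyLoss()(tag_scores.reshape(-1, 37),
                             torch.randint(0, 37, (2 * 12,)))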

License

Licensed under the MIT License.

wlp-parser's People

Contributors

chaitanya2334


wlp-parser's Issues

ImportError: cannot import name 'Data' from 'corpus.WLPDataset'

While trying to run the code, I get the following error:

Traceback (most recent call last):
File "Wlab_Parser/WLP-Parser/main.py", line 18, in <module>
from corpus.WLPDataset import WLPDataset
File "Wlab_Parser/WLP-Parser/corpus/WLPDataset.py", line 32, in <module>
from preprocessing.text_processing import gen_list2id_dict
File "Wlab_Parser/WLP-Parser/preprocessing/text_processing.py", line 7, in <module>
from corpus.WLPDataset import Data
ImportError: cannot import name 'Data' from 'corpus.WLPDataset' (Wlab_Parser/WLP-Parser/corpus/WLPDataset.py)

Process finished with exit code 1

Also, there is a circular dependency between text_processing.py and WLPDataset.py: each file imports from the other.
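
One common way to break such a cycle is to defer one of the imports to the point of use, so that neither module needs the other to be fully loaded at import time. The sketch below mirrors the module and symbol names from the traceback, but it is only an assumed layout, not a verified patch against the repository; moving the shared Data class into its own small module that both files import is an equally valid fix.

# preprocessing/text_processing.py (hypothetical sketch)
# Deferring the import to the point of use breaks the cycle: by the time
# gen_list2id_dict is called, corpus.WLPDataset has finished loading, so
# Data resolves without error. Deferring either side of the cycle suffices.

def gen_list2id_dict(tokens):
    from corpus.WLPDataset import Data  # deferred; used by the real implementation
    return {tok: idx for idx, tok in enumerate(tokens, start=1)}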
