nlp-club's Introduction

This is the NLP Group GitHub Repository.

Here you will find the papers we have discussed, useful links and resources, and a summary table of the benefits and drawbacks of some of these methodologies.

Papers

Before presenting a paper, please add it to the papers folder, naming the PDF after its title (<title>.pdf), so that we keep a copy and avoid presenting the same paper twice.

Don't forget to check the Wiki for videos and references.

Past Papers

Date       | Title of paper                                                                 | Source
2019-01-09 | Enriching Word Vectors with Subword Information                                | aclweb
2019-01-23 | Word Mover’s Embedding: From Word2Vec to Document Embedding                    | aclweb
2019-02-06 | Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank  | Stanford University
2019-02-20 | Deep contextualized word representations                                       | arxiv
2019-03-06 | Efficient Estimation of Word Representations in Vector Space                   | arxiv
2019-03-20 | Distributed Representations of Words and Phrases and their Compositionality    | arxiv
2019-06-26 | Distributed Representations of Sentences and Documents                         | Stanford University
2019-07-24 | Latent Dirichlet Allocation                                                    | Stanford
2019-07-31 | Mixing Dirichlet Topic Models and Word Embeddings to Make lda2vec              | arxiv
2019-09-18 | GloVe: Global Vectors for Word Representation                                  | Stanford University
2019-10-03 | Algorithms for Non-negative Matrix Factorization                               | paper
2019-10-23 | Effective Approaches to Attention-based Neural Machine Translation             | arxiv
2019-11-21 | Attention Is All You Need                                                      | arxiv
2020-02-12 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks                 | arxiv
2020-02-26 | XLNet: Generalized Autoregressive Pretraining for Language Understanding       | arxiv

Future Papers

  • Understanding Convolutional Neural Networks for Text Classification (CNN; source: aclweb)
  • An LSTM Approach to Short Text Sentiment Classification with Word Embeddings (LSTM; source: aclweb)
  • Deep contextualized word representations (ELMo, revisit; source: arxiv)
  • BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (BERT; source: arxiv)
  • Severn
  • On the Dimensionality of Word Embeddings (how many dimensions are required?)
  • Visualizing Data using t-SNE (source: Journal of Machine Learning Research)
  • Universal Sentence Encoder (source: Google)

nlp-club's People

Contributors

evie-brown, iangrimstead, stuartnewcombe, thanasions

