
word2vec-sentiments's Introduction

Sentiment Analysis using Doc2Vec

Word2Vec is dope. In short, it takes in a corpus and churns out a vector for each word in that corpus. What's so special about these vectors, you ask? Well, similar words end up near each other. Furthermore, these vectors capture how we use the words. For example, v_man - v_woman is approximately equal to v_king - v_queen, illustrating the relationship that "man is to woman as king is to queen". This process, in NLP voodoo, is called word embedding. These representations have been applied widely. This is made even more awesome with the introduction of Doc2Vec, which represents not only words, but entire sentences and documents. Imagine being able to represent an entire sentence using a fixed-length vector and then running all your standard classification algorithms on it. Isn't that amazing?
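To make the analogy concrete, here is a minimal sketch using gensim's Word2Vec. The corpus variable and parameter values are illustrative assumptions, not this repo's code (gensim < 4 names; newer versions use vector_size and model.wv.most_similar):

from gensim.models import Word2Vec

# `sentences` is assumed to be an iterable of tokenized sentences (lists of strings)
model = Word2Vec(sentences, size=100, window=5, min_count=5, workers=4)

# king - man + woman should land near queen on a large enough corpus
print(model.most_similar(positive=['king', 'woman'], negative=['man'], topn=1))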

However, Word2Vec documentation is shit. The C code is nigh unreadable (700 lines of highly optimized, sometimes weirdly optimized, code). I personally spent a lot of time untangling Doc2Vec and crashing into ~50% accuracies due to implementation mistakes. This tutorial aims to help other users get off the ground using Word2Vec and Doc2Vec for their own research. We use Doc2Vec for sentiment analysis, attempting to classify IMDB movie reviews (see Cornell's movie review data page at http://www.cs.cornell.edu/people/pabo/movie-review-data/). The specific data set used, the Large Movie Review Dataset, is available for download at http://ai.stanford.edu/~amaas/data/sentiment/.

Show Me The Code

The IPython Notebook (code + tutorial) can be found in word2vec-sentiments.ipynb

The code to just run the Doc2Vec and save the model as imdb.d2v can be found in run.py. Should be useful for running on computer clusters.
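For example, once run.py has written imdb.d2v, the trained model can be reloaded later without retraining (a minimal sketch):

from gensim.models import Doc2Vec

model = Doc2Vec.load('./imdb.d2v')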

What Does This Repo Contain

  • test-neg.txt, test-pos.txt, train-neg.txt, train-pos.txt, train-unsup.txt: training and testing data, explained in more detail in the notebook.
  • word2vec-sentiment.ipynb: the notebook (code + tutorial)
  • run.py: just the code

License

Copyright (c) 2015 Linan Qiu

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

word2vec-sentiments's People

Contributors

abe404, analystanand, laugustyniak, linanqiu, scottlingran, val314159


word2vec-sentiments's Issues

Predict sentiment of new data

@linanqiu Thanks for the wonderful presentation of your program!
However, I want to know how this model can be used to predict the sentiment of a given sentence.
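A minimal sketch of one way to do this, assuming a trained Doc2Vec model `model` (gensim >= 0.12, which provides infer_vector) and a classifier `classifier` already fitted on the training document vectors; both names are assumptions about your setup:

new_review = 'the plot was dull and the acting was wooden'
vec = model.infer_vector(new_review.lower().split())  # infer a vector for unseen text
print(classifier.predict([vec]))  # e.g. 0 = negative, 1 = positive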

How do I get a label when the input is a document?

Hello,
Thanks very much for the tutorial! It helped me a lot.
I have a question; it's a newbie one :)
After training, given a document (a sentence), I want to get the label mapped to that document, but I don't know how to do this.
Can you show me how?
Thank you so much!
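One option (a sketch, not the repo's code): infer a vector for the new document and look up the most similar trained document tags. This assumes a trained Doc2Vec model `model` and a recent gensim where docvecs.most_similar accepts raw vectors:

vec = model.infer_vector(['what', 'a', 'wonderful', 'film'])
print(model.docvecs.most_similar([vec], topn=3))  # e.g. [('TRAIN_POS_1234', 0.61), ...]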

KeyError: 'TRAIN_NEG_0'

Hi,

Thanks for your detailed explanation!
I've tried out your code but encountered some errors.

Example of my sentence:
LabeledSentence([u'I', u'went', u'to', u'the', u'restaurant', u'the', u'other', u'day', u'with', u'family.', u'The', u'food', u'was', u'tasteless,', u'the', u'service', u'was', u'really', u'slow', u'and', u'the', u'Italian', u'guy', u'very', u'rude.', u'Not', u'sure', u'why', u'people', u'even', u'bother', u'to', u'dine', u'there.', u'Not', u'going', u'again', u'for', u'sure'], ['TRAIN_NEG_1'])

After training, I tried testing out the model using

model['TRAIN_NEG_0']
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/administrator/anaconda/lib/python2.7/site-packages/gensim/models/word2vec.py", line 1259, in __getitem__
    return self.syn0[self.vocab[words].index]
KeyError: 'TRAIN_NEG_0'

May I ask if there is any resolution? >_<
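In gensim 0.12 and later, document vectors are stored separately from the word vocabulary, so indexing the model itself with a tag raises this KeyError. The lookup that should work (an assumption based on the gensim API of that era, not on this repo's exact code) is:

print(model.docvecs['TRAIN_NEG_0'])  # document vector for that tag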

Learning test sentence labels while training

As I understand the doc2vec algorithm, you must not learn labels (document vectors) for the test data while training the model; if you learn labels for all the test documents, you will get a misleadingly high accuracy.

Instead, what we should do:

  • Learn labels only for the positive and negative training sentences

  • Infer vectors for the positive and negative test documents (see the sketch below)

I am sending a pull request; kindly check.
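A minimal sketch of inferring the test vectors instead of looking up trained test tags. The names test_pos_docs and test_neg_docs are hypothetical; each entry is assumed to be a list of tokens:

import numpy as np

test_docs = test_pos_docs + test_neg_docs  # tokenized test reviews
test_arrays = np.array([model.infer_vector(words) for words in test_docs])
test_labels = np.array([1] * len(test_pos_docs) + [0] * len(test_neg_docs))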

can't clone this repo

Error downloading object: train-unsup.txt (6539951): Smudge error: Error downloading train-unsup.txt (6539951b1bfd1420c2df99fe824272dd846cd25bca4a2247a4266f12aec9f195): batch response: This repository is over its data quota. Purchase more data packs to restore access.

This repository is over its data quota. Purchase more data packs to restore access.
github.com/git-lfs/git-lfs/errors.newWrappedError
C:/Users/ttaylorr/go/src/github.com/git-lfs/git-lfs/src/github.com/git-lfs/git-lfs/errors/types.go:170: batch response
github.com/git-lfs/git-lfs/errors.newWrappedError
C:/Users/ttaylorr/go/src/github.com/git-lfs/git-lfs/src/github.com/git-lfs/git-lfs/errors/types.go:170: Error downloading train-unsup.txt (6539951b1bfd1420c2df99fe824272dd846cd25bca4a2247a4266f12aec9f195)
github.com/git-lfs/git-lfs/errors.newWrappedError
C:/Users/ttaylorr/go/src/github.com/git-lfs/git-lfs/src/github.com/git-lfs/git-lfs/errors/types.go:170: Smudge error

error with the training command

The new word2vec requires total_examples to be specified in the train call; without it, training gives the error:

ValueError: You must specify either total_examples or total_words, for proper alpha and progress calculations. The usual value is total_examples=model.corpus_count.

so I changed it to the following:

model.train(sentences.sentences_perm,total_examples=model.corpus_count())

but it gives a new error:

TypeError: 'int' object is not callable

Does anyone have an idea what to do with this?
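corpus_count is an attribute rather than a method, so the trailing parentheses cause the TypeError. A sketch of the corrected call (recent gensim also expects an epochs argument; model.iter is the older name for it, model.epochs the newer one):

model.train(sentences.sentences_perm(),
            total_examples=model.corpus_count,
            epochs=model.iter)  # use epochs=model.epochs on newer gensim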

Shouldn't test data be kept out of model building?

A quote from the original paper:
"At test time, we freeze the vector representation for each
word, and learn the representations for the sentences using
gradient descent. Once the vector representations for the
test sentences are learned, we feed them through the logistic
regression to predict the movie rating."

Shouldn't we use infer_vector instead? (Which, of course, lowers the reported results.)

ipynb file seems to be invalid

GitHub does not load the word2vec-sentiment.ipynb file; I suppose it is corrupt. Can you upload a fresh one?
Thanks for the update for the new gensim, by the way, and I'm looking forward to doing the tutorial!
Cheers, RP

Error when inspecting the model

@linanqiu So, when I run this:
model['TRAIN_NEG_0']

I get this error:
File "/Library/Python/2.7/site-packages/gensim/models/word2vec.py", line 1204, in getitem
return self.syn0[self.vocab[words].index]
KeyError: 'TRAIN_NEG_0'

I also get the same error when building the training vectors, i.e.
train_arrays[i] = model[prefix_train_pos]

MacOS 10.10.4
python 2.7.6
gensim 0.12.1
numpy 1.9.2

'numpy.ndarray' object has no attribute 'tags'

with Python 2.7.3, SciPy 0.15.1, Gensim 0.12.1, and NumPy 1.10.1, I ran into the error below
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 551, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 504, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/vagrant/ve2/local/lib/python2.7/site-packages/gensim/models/word2vec.py", line 701, in worker_loop
    if not worker_one_job(job, init):
  File "/vagrant/ve2/local/lib/python2.7/site-packages/gensim/models/word2vec.py", line 692, in worker_one_job
    tally, raw_tally = self._do_train_job(items, alpha, inits)
  File "/vagrant/ve2/local/lib/python2.7/site-packages/gensim/models/doc2vec.py", line 638, in _do_train_job
    indexed_doctags = self.docvecs.indexed_doctags(doc.tags)
AttributeError: 'numpy.ndarray' object has no attribute 'tags'

AttributeError: 'numpy.ndarray' object has no attribute 'tags'

Tried to run the code and I'm getting the following error as soon as the call to train is invoked:
model.train(sentences.sentences_perm())

File "/Users/nikos/anaconda/lib/python2.7/site-packages/gensim/models/doc2vec.py", line 638, in _do_train_job
indexed_doctags = self.docvecs.indexed_doctags(doc.tags)
AttributeError: 'numpy.ndarray' object has no attribute 'tags'

My installation is as follows:

OSX Yosemite 10.10.5

Python 2.7.10 :: Anaconda 2.3.0 (x86_64)

gensim==0.12.1
scipy==0.15.1
numpy==1.9.2
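One plausible cause (an assumption, not verified against this exact revision): if sentences_perm() shuffles with numpy.random.permutation, a list of LabeledSentence namedtuples gets converted into a 2-D object array, so each "document" handed to train() is a plain ndarray with no .tags attribute. Shuffling the list in place avoids that:

from random import shuffle

def sentences_perm(self):
    shuffle(self.sentences)  # keeps a list of LabeledSentence objects
    return self.sentences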

Poor result

I ran the source code, but the result was 57%.

I don't understand how the result can be so bad; I must be doing something wrong. Can you help?
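Near-chance accuracy in this setup most often comes from too few training passes, or from test vectors that were looked up without ever being trained or inferred. For reference, a rough sketch of the multi-epoch training pattern the tutorial relies on; the hyperparameters and helper names below are illustrative assumptions (gensim < 4 parameter names), not guaranteed to match the repo:

from gensim.models import Doc2Vec

# `sentences` is assumed to be the notebook's labelled-sentence helper;
# `all_docs` stands for its full list of LabeledSentence objects (hypothetical name)
model = Doc2Vec(min_count=1, window=10, size=100, sample=1e-4, negative=5, workers=4)
model.build_vocab(all_docs)

for epoch in range(10):
    model.train(sentences.sentences_perm(),       # reshuffle the documents each pass
                total_examples=model.corpus_count,
                epochs=1)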
