Comments (16)
I'm seeing the same thing. Any update on this?
I found that some common words are missing, like 'of', 'and', 'to', and 'a'.
I think these words must be in the GoogleNews model, but I don't know why I can't find them.
from word2vec.
I am seeing the same issue as @liu-zg15. While it reports words like 'a', 'to', and 'and' are not in the vocabulary, it has vectors for 'b', 'c', etc. This seems like it must be some sort of bug rather than a lack of vocabulary coverage... (however, it found vectors for both 'dog' and 'cat', unlike the earlier commenter).
Well, now that you mention it again, it is indeed surprising that "dog" is not included in a 3 million word vocabulary, especially when the word "cat" is included...
@sicotronic
Oh, thank you for your support.
The material you provided and your comments are very useful.
I am also facing the same issue: I could not find words like 'a', 'to', and 'of', but it appears that the corresponding words starting with uppercase 'A', 'To', and 'Of' are available.
I'm seeing the same thing. How was this resolved?
This is normally expected, as it is practically impossible to cover all the words of a given language during training. You have to decide how to handle the unknown words; some common approaches are:
- Replace all the unknown words with the same token, for example `<UNK>`, and train the model over that data to learn an average distribution for out-of-vocabulary words.
- Another option is to use the model as it is, and just assign a randomly initialized vector (with the same number of dimensions) to the unknown words (this can be further enhanced if you draw the random values from the range and distribution of the other words in the model).
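The second option can be sketched in a few lines. This is only an illustration: the embedding matrix below is a random stand-in for a real model, and all names are made up.

```python
import numpy as np

# Stand-in for a loaded embedding matrix: one 300-dim vector per known word.
rng = np.random.default_rng(42)
known_vectors = rng.normal(0.0, 0.1, size=(10000, 300))

# Match the per-dimension mean and standard deviation of the known vectors,
# then sample the unknown word's vector from that distribution.
mu = known_vectors.mean(axis=0)
sigma = known_vectors.std(axis=0)
unk_vector = rng.normal(mu, sigma)
```

Matching the distribution of the known vectors keeps distances between the unknown-word vector and real word vectors in a comparable range, instead of placing it in an empty region of the space.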
Thanks. I had assumed that the ~1000 most common English words ("dog" is ranked 754 here) would inevitably be included in a 3,000,000-word vocabulary, but I don't know enough about how the vocabulary is selected from the input corpus. (Sorry that this is off-topic for this repo.)
You're welcome.
Yes, it depends on the corpus used for the training. For example, if the model was trained only on hundreds of thousands of business emails, then even if you have more than 3 million words in your training data, I doubt you will find the word "dog" with enough frequency to be included in the vocabulary of the model (usually, the vocabulary is restricted to the top n most frequent words to limit computation time and memory usage).
These Google weights were trained on 100 billion (!) words and have a 3 million word vocab, so it's still surprising to me that a word like "dog" did not make the cut.
OK, if you check the source code you can see that the maximum "vocabulary hash" size is 3 million (https://github.com/danielfrg/word2vec/blob/master/word2vec/c/word2vec.c#L27), but it seems that the vocabulary does not cover all of the hash table space (there is a function called ReduceVocab that reduces the vocabulary to only the most frequent words here: https://github.com/danielfrg/word2vec/blob/master/word2vec/c/word2vec.c#L175). You should check the documentation of the pre-trained model, because I think the vocabulary size is one of the parameters set at training time.
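The effect of that kind of pruning can be sketched in a few lines. This only illustrates the idea of min-count pruning, not the actual C logic, and the helper name is made up:

```python
from collections import Counter

def reduce_vocab(counts, min_count):
    # Keep only words seen at least min_count times, mirroring the idea
    # behind ReduceVocab in the C code (this helper name is illustrative).
    return {w: c for w, c in counts.items() if c >= min_count}

counts = Counter("the cat sat on the mat near the cat".split())
vocab = reduce_vocab(counts, min_count=2)
# 'the' and 'cat' survive; words seen only once ('sat', 'on', ...) are dropped.
```

Any word that falls below the threshold simply never gets a vector, which is one way a seemingly common word can be absent from a trained model.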
Hello @sicotronic
As you said: "Another option is to use the model as it is, and just assign a randomly initialized vector (with the same amount of dimensions) to the unknown words (this can be further enhanced if you assign random values within the range and distribution of the other words in the model)."
But can you explain why this works, and point me to some papers or examples? I am confused about whether randomly assigning a vector makes any sense in a word2vec model, or whether it can sometimes do harm.
I don't know if we have the same problem, but I also noticed that common words were missing. Looking at `m.vocab`, it seems that the first character is missing from every word:
..., 'onductive_yarns', 'nrique_Tolentino', 'oronary_Interventions', 'nterface_NVMHCI', ...
Edit:
`m = gensim.models.KeyedVectors.load_word2vec_format(path, binary=True)`
loads the model fine, so I guess I'll use that instead.
Hi @DucVuMinh
I'm sorry for the lack of rigor. The idea behind using a randomly initialized vector for unknown words, with values drawn from the same distribution as the known words, is that you get a point in the vector space that looks like a real observed word. You can therefore apply all the distance calculations consistently with the known words, and you can also retrain the embeddings to fit your data, including a vector for your unknown words.
I co-authored a paper at IJCAI 2017 (https://www.ijcai.org/proceedings/2017/573) where we used a similar idea when assigning vectors to words we wanted to replace. (Basically, we wanted to turn question sentences into something that looks like statements, where the vectors representing the wh-question words (who/when/where) were replaced by the vectors of the words most likely to make the sentence "similar" (under a given metric) to most of the answer sentences for each question type.)
Anyway, I think it is a fairly common trick to build vectors with random values (under the same distribution as the already-known words) to initialize the vectors for unknown words, and then train them on your dataset so they come to represent the averaged distribution of the unknown words in your data. I'm not sure exactly which other papers present this idea, but it is a tip shared almost everywhere I can remember; if you google it you will find several results (answers on stackexchange.com, blogs, other repositories). I just did that and found this comment by dennybritz:
dennybritz/cnn-text-classification-tf#10 (comment)
So-called 'stop words' like articles, particles, and prepositions are eliminated in most w2v models: they take up a lot of memory (usually about half of all tokens) while carrying no independent meaning, so they are useless in this sense.
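A minimal sketch of that preprocessing step is below. Note this is a simplification: the original C word2vec tool actually downsamples frequent words probabilistically (its `-sample` parameter) rather than using a fixed list, but fixed-list filtering is a common variant in preprocessing pipelines, and the list here is an illustrative subset, not the one any specific pretrained model used.

```python
# Illustrative subset of English stop words (hypothetical, not any
# specific model's actual list).
STOP_WORDS = {"a", "an", "the", "of", "and", "to", "in", "on"}

tokens = "the dog and the cat ran to a park".split()

# Drop stop words before training, keeping only content-bearing tokens.
content_tokens = [t for t in tokens if t not in STOP_WORDS]
```

After this step, words like 'a', 'of', and 'to' never appear in the training stream, which would explain why they have no vectors in the final model.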
I am able to get vectors for `m['DOG']` and `m['CAT']` when used in uppercase. It's weird that my model only accepts uppercase words.
I am using the pretrained GoogleNews-negative300 model.
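Since the GoogleNews vectors are case-sensitive, one simple workaround is to try a few case variants before giving up. This is a sketch assuming a gensim-style model that supports `in` and `[]` lookups; the dict below is a toy stand-in for a loaded model.

```python
def lookup(model, word):
    # The GoogleNews vectors are case-sensitive, so try the word as given,
    # then common case variants, before concluding it is out of vocabulary.
    for variant in (word, word.capitalize(), word.upper(), word.lower()):
        if variant in model:
            return model[variant]
    return None

model = {"DOG": [0.1, 0.2], "Cat": [0.3, 0.4]}  # toy stand-in vocabulary
vec = lookup(model, "dog")  # 'dog' and 'Dog' miss, 'DOG' hits
```

A real `gensim.models.KeyedVectors` object supports the same `in`/`[]` operations, so the helper works unchanged with a loaded model.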