
Comments (16)

liu-zg15 commented on May 20, 2024

I'm seeing the same thing. Has this been resolved?
I found that some common words are missing, like 'of', 'and', 'to', and 'a'.
These words must occur in the GoogleNews corpus, but I don't know why I can't find them in the model.

paige-pruitt commented on May 20, 2024

I am seeing the same issue as @liu-zg15. While it reports that words like 'a', 'to', and 'and' are not in the vocabulary, it has vectors for 'b', 'c', etc. This seems like it must be some sort of bug rather than a lack of vocab coverage... (however, it found vectors for both 'dog' and 'cat', unlike for the earlier commenter).
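
For anyone who wants to reproduce this quickly, here is a minimal probe, assuming gensim 4.x (where the vocabulary mapping is key_to_index; in gensim 3.x it was vocab) and the usual GoogleNews binary file name:

```python
from gensim.models import KeyedVectors

# Load the pretrained GoogleNews vectors; the path is an assumption, adjust it.
m = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# Check which of the disputed words are actually in the vocabulary.
for w in ["a", "to", "and", "of", "b", "c", "dog", "cat", "Dog", "DOG"]:
    print(w, "->", "present" if w in m.key_to_index else "MISSING")
```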

sicotronic commented on May 20, 2024

Well, now that you mention it again, it is indeed surprising that "dog" is not included in a 3-million-word vocabulary, especially when the word "cat" is included...

DucVuMinh commented on May 20, 2024

@sicotronic
Thank you for your support.
The material you provided and your comments are very useful.

rhlbns commented on May 20, 2024

I am also facing the same issue: I could not find words like 'a', 'to', and 'of', but it appears that the corresponding words starting with uppercase, 'A', 'To', and 'Of', are available.
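
If you just need a vector and don't care about case, one hedged workaround is to fall back to the capitalized variants. A sketch, reusing the KeyedVectors model m loaded as above:

```python
def lookup(m, word):
    # Try the word as-is, then its Capitalized and UPPERCASE variants,
    # since GoogleNews keeps 'A'/'To'/'Of' but apparently not 'a'/'to'/'of'.
    for candidate in (word, word.capitalize(), word.upper()):
        if candidate in m.key_to_index:  # gensim 4.x; use m.vocab in 3.x
            return m[candidate]
    return None  # genuinely out of vocabulary

vec = lookup(m, "to")  # falls back to the vector for 'To'
```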

dkirkby commented on May 20, 2024

I'm seeing the same thing. How was this resolved?

sicotronic commented on May 20, 2024

This is normally expected, as it is practically impossible to cover all the words of a given language during training. You have to decide how to handle unknown words; some common approaches are:

  • Replace all the unknown words with one and the same token, for example "<UNK>", and train the model over that data to learn an average distribution for out-of-vocabulary words (see the sketch after this list).
  • Another option is to use the model as it is and just assign a randomly initialized vector (with the same number of dimensions) to each unknown word. This can be further enhanced if you draw the random values from the range and distribution of the other words in the model.
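
A minimal sketch of the first approach; the cutoff, the toy corpus, and all names here are illustrative assumptions, not something from this repo:

```python
from collections import Counter

MIN_COUNT = 5  # hypothetical frequency cutoff

# Toy tokenized corpus; in practice this is your real training data.
corpus = [["the", "dog", "barked"], ["the", "cat", "meowed"]]

# Count word frequencies, then map every rare word to a single "<UNK>" token.
freq = Counter(w for sent in corpus for w in sent)
corpus_unk = [[w if freq[w] >= MIN_COUNT else "<UNK>" for w in sent]
              for sent in corpus]

# Training word2vec on corpus_unk learns one averaged vector for "<UNK>",
# which later stands in for any word missing from the vocabulary.
```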

dkirkby commented on May 20, 2024

Thanks, I had assumed that the ~1000 most common English words ("dog" is ranked 754 here) would inevitably be included in a 3,000,000-word vocabulary, but I don't know enough about how the vocabulary is selected from the input corpus. (Sorry that this is off-topic for this repo.)

sicotronic commented on May 20, 2024

You're welcome.
Yes, it depends on the corpus used for training. For example, if the model was trained only on hundreds of thousands of business emails, then even with more than 3 million words in the training data I doubt you would find the word "dog" occurring frequently enough to be included in the model's vocabulary. (Usually the vocabulary is restricted to the top n most frequent words to limit computation time and memory usage.)
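
In gensim, for instance, this cutoff is exposed directly as training parameters; a sketch reusing corpus_unk from the earlier snippet and assuming the gensim 4.x parameter names min_count and max_final_vocab:

```python
from gensim.models import Word2Vec

# min_count drops words seen fewer than 5 times; max_final_vocab then caps
# the vocabulary at the most frequent N surviving words.
model = Word2Vec(sentences=corpus_unk, vector_size=300,
                 min_count=5, max_final_vocab=3_000_000, workers=4)
```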

dkirkby commented on May 20, 2024

These Google weights were trained on 100 billion (!) words and have a 3 million word vocab, so it's still surprising to me that a word like "dog" did not make the cut.

sicotronic commented on May 20, 2024

OK, if you check the source code you can see that the maximum "vocabulary hash" size is 3 million (https://github.com/danielfrg/word2vec/blob/master/word2vec/c/word2vec.c#L27), but it seems that the vocabulary does not fill the whole hash table: there is a function called ReduceVocab that trims the vocabulary down to only the most frequent words (https://github.com/danielfrg/word2vec/blob/master/word2vec/c/word2vec.c#L175). You should check the documentation of the pre-trained model, because I think the vocabulary size is one of the parameters set at training time.
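
Note that this constant is the hash-table capacity, not necessarily the number of stored words; the loaded model reports its real size directly (again assuming gensim 4.x and the model m loaded earlier):

```python
# Actual vocabulary size and dimensionality of the loaded model.
print(len(m.key_to_index))  # 3000000 for GoogleNews-vectors-negative300
print(m.vector_size)        # 300
```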

DucVuMinh commented on May 20, 2024

Hello @sicotronic
As you said: "Another option is to use the model as it is, and just assign a randomly initialized vector (with the same number of dimensions) to the unknown words (this can be further enhanced if you assign random values within the range and distribution of the other words in the model)."
Can you explain why we can do this, and point me to some papers or examples? I am confused about whether randomly assigning a vector makes any sense in a word2vec model, or whether it can sometimes do harm.

jeLee6gi commented on May 20, 2024

I don't know if we have the same problem, but I also noticed that common words were missing. Looking at m.vocab, it seems that the first character is missing from every word:

..., 'onductive_yarns', 'nrique_Tolentino', 'oronary_Interventions', 'nterface_NVMHCI', ...

Edit:
m = gensim.models.KeyedVectors.load_word2vec_format(path, binary=True) loads the model fine, I guess I'll use that instead.
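
A quick sanity check right after loading makes this kind of first-character corruption obvious (gensim 4.x; path as in the line above):

```python
# A corrupted load shows up immediately as truncated keys such as
# 'onductive_yarns' instead of 'conductive_yarns'.
print(list(m.key_to_index)[:10])
assert "the" in m.key_to_index
```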

sicotronic commented on May 20, 2024

Hi @DucVuMinh

I'm sorry for the lack of rigor. The idea behind using a randomly initialized vector, with values under the same distribution as the known words, for unknown words is that you get a point in the vector space that looks like a real observed word. You can therefore operate on it and apply all the distance calculations consistently with the known words, as well as retrain the embeddings to fit your data, including a vector for your unknown words.

I co-authored a paper at IJCAI 2017 (https://www.ijcai.org/proceedings/2017/573) where we used a similar idea when assigning vectors to words we wanted to replace. (Basically, we wanted to turn question sentences into something that looks like statements, where the vectors representing the wh-question words (who/when/where) were replaced by the vectors of the words most likely to make the sentence "similar", under a given metric, to most of the answer sentences for each question type.)

Anyway, I think it is a fairly common trick to build vectors with random values (under the same distribution as the already-known words) in order to initialize the vectors for unknown words, and then fit them to your training dataset so they represent the averaged distribution of the unknown words in your data. I'm not sure exactly which papers present this idea, but it is a technique shared almost everywhere I can remember; if you google it you will find several results (answers on stackexchange.com, blogs, other repositories). I just did that and found this comment by dennybritz here:
dennybritz/cnn-text-classification-tf#10 (comment)
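
For completeness, here is a sketch of that random-initialization trick, drawing the unknown word's vector from roughly the same distribution as the known vectors; the gensim 4.x attribute names and the crude global mean/std estimate are assumptions:

```python
import numpy as np

known = m.vectors                      # (vocab_size, 300) embedding matrix
mu, sigma = known.mean(), known.std()  # crude global distribution estimate
rng = np.random.default_rng(0)

# The unknown word gets a plausible-looking point in the embedding space,
# so distance calculations against known words stay well-behaved.
unk_vec = rng.normal(mu, sigma, size=m.vector_size).astype(known.dtype)
```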

ValeryRybakov commented on May 20, 2024

So-called 'stop words', like articles, particles, and prepositions, are eliminated in most w2v models: they take up a lot of memory (usually about half of all tokens in a corpus) while having no independent meaning, and are thus useless in this sense.

Akashtyagi commented on May 20, 2024

I am able to get vectors for m['DOG'] and m['CAT'] when used in uppercase. It's weird that my model only accepts uppercase words.

I am using the pretrained GoogleNews-negative300.
