
Spanish Word Embeddings

Below you will find links to Spanish word embeddings computed with different methods and from different corpora. Whenever possible, a description of the parameters used to compute the embeddings is included, together with simple statistics of the vectors and vocabulary, and a description of the corpus from which the embeddings were computed. Direct links to the embeddings are provided; please refer to the original sources for proper citation (see also References). Examples of how to use some of these embeddings can be found here or in this tutorial (both in Spanish).

Summary (and links) for the embeddings in this page:

#  Corpus                       Size  Algorithm  #vectors   vec-dim  Credits
1  Spanish Unannotated Corpora  2.6B  FastText   1,313,423  300      José Cañete
2  Spanish Billion Word Corpus  1.4B  FastText   855,380    300      Jorge Pérez
3  Spanish Billion Word Corpus  1.4B  GloVe      855,380    300      Jorge Pérez
4  Spanish Billion Word Corpus  1.4B  Word2Vec   1,000,653  300      Cristian Cardellino
5  Spanish Wikipedia            ???   FastText   985,667    300      FastText team
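
All of the embeddings above are distributed in the standard word2vec text format (`.vec`): a header line with the vocabulary size and dimension, followed by one token per line with its components. For the real, multi-gigabyte files a library such as gensim (`KeyedVectors.load_word2vec_format`) is the practical choice, but the format itself is simple enough to parse by hand; a minimal pure-Python sketch:

```python
def read_vec(lines):
    """Parse word2vec text (.vec) format: a 'count dim' header, then 'word v1 v2 ...' rows."""
    it = iter(lines)
    count, dim = map(int, next(it).split())
    vectors = {}
    for line in it:
        parts = line.rstrip().split(' ')
        word, values = parts[0], [float(x) for x in parts[1:]]
        assert len(values) == dim  # every row must match the declared dimension
        vectors[word] = values
    return count, dim, vectors

# tiny inline example with 2 vectors of dimension 3
sample = ["2 3", "hola 0.1 0.2 0.3", "mundo 0.4 0.5 0.6"]
count, dim, vecs = read_vec(sample)
print(count, dim, vecs["hola"])  # → 2 3 [0.1, 0.2, 0.3]
```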

FastText embeddings from SUC

Embeddings

Links to the embeddings (#dimensions=300, #vectors=1,313,423):

More vectors with different dimensions (10, 30, 100, and 300) can be found here

Algorithm

  • Implementation: FastText with Skipgram
  • Parameters:
    • min subword-ngram = 3
    • max subword-ngram = 6
    • minCount = 5
    • epochs = 20
    • dim = 300
    • all other parameters set as default
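
Assuming the standard fastText command-line tool, the parameter list above corresponds roughly to an invocation like the following (the corpus and output paths are hypothetical):

```shell
# hypothetical paths; the flags mirror the parameter list above
./fasttext skipgram -input corpus.txt -output suc-model \
    -minn 3 -maxn 6 -minCount 5 -epoch 20 -dim 300
```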

Corpus

FastText embeddings from SBWC

Embeddings

Links to the embeddings (#dimensions=300, #vectors=855,380):

Algorithm

  • Implementation: FastText with Skipgram
  • Parameters:
    • min subword-ngram = 3
    • max subword-ngram = 6
    • minCount = 5
    • epochs = 20
    • dim = 300
    • all other parameters set as default

Corpus

  • Spanish Billion Word Corpus
  • Corpus Size: 1.4 billion words
  • Post-processing: besides the post-processing of the raw corpus explained on the SBWCE page (deletion of punctuation, numbers, etc.), the following was applied:
    • Words were converted to lowercase
    • Every sequence of the 'DIGITO' keyword was replaced by a single '0'
    • All words consisting of more than 3 characters followed by a '0' were omitted (example: 'padre0')
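
The post-processing steps can be sketched as follows (this is a reconstruction, not the authors' script; in particular, the order in which lowercasing and the 'DIGITO' replacement were applied is an assumption):

```python
import re

def preprocess(tokens):
    """Sketch of the SBWC post-processing described above."""
    out = []
    for tok in tokens:
        # collapse any run of the 'DIGITO' placeholder into a single '0'
        # (done before lowercasing, since the keyword is uppercase -- an assumption)
        tok = re.sub(r'(?:DIGITO)+', '0', tok)
        tok = tok.lower()
        # drop tokens made of more than 3 characters followed by a '0', e.g. 'padre0'
        if re.fullmatch(r'\w{4,}0', tok):
            continue
        out.append(tok)
    return out

print(preprocess(['Hola', 'DIGITODIGITO', 'padre0']))  # → ['hola', '0']
```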

GloVe embeddings from SBWC

Embeddings

Links to the embeddings (#dimensions=300, #vectors=855,380):

Algorithm

  • Implementation: GloVe
  • Parameters:
    • vector-size = 300
    • iter = 25
    • min-count = 5
    • all other parameters set as default
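
With the reference GloVe implementation, those parameters map onto the standard four-step pipeline roughly as follows (paths are hypothetical; only the flags listed above are pinned down, everything else is left at its default):

```shell
# hypothetical corpus/output paths; only -min-count, -vector-size, -iter come from the list above
build/vocab_count -min-count 5 < corpus.txt > vocab.txt
build/cooccur -vocab-file vocab.txt < corpus.txt > cooccurrence.bin
build/shuffle < cooccurrence.bin > cooccurrence.shuf.bin
build/glove -input-file cooccurrence.shuf.bin -vocab-file vocab.txt \
    -vector-size 300 -iter 25 -save-file glove-sbwc.i25
```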

Corpus

Word2Vec embeddings from SBWC

Embeddings

Links to the embeddings (#dimensions=300, #vectors=1,000,653):

Algorithm

Corpus

FastText embeddings from Spanish Wikipedia

Embeddings

Links to the embeddings (#dimensions=300, #vectors=985,667):

Algorithm

  • Implementation: FastText with Skipgram
  • Parameters: FastText default parameters

Corpus

References

Contributors

jorgeperezrojas, josecannete, zuik

spanish-word-embeddings's Issues

Can't unzip .vec file (invalid compressed data)

Hello,
I recently attempted to download the glove-sbwc.i25.vec.gz file, but when I tried to uncompress it, I got the following error:

~$ gzip -d glove-sbwc.i25.vec.gz

gzip: glove-sbwc.i25.vec.gz: invalid compressed data--crc error

gzip: glove-sbwc.i25.vec.gz: invalid compressed data--length error

Also, the browser console shows the following message when I click on the file to download it:

Mixed Content: The site at 'https://github.com/dccuchile/spanish-word-embeddings#glove-embeddings-from-sbwc:1' was loaded over a secure connection, but the file at 'https://users.dcc.uchile.cl/~jperez/word-embeddings/glove-sbwc.i25.vec.gz' was redirected through an insecure connection. This file should be served over HTTPS. This download has been blocked. See https://blog.chromium.org/2020/02/protecting-users-from-insecure.html for more details.

Thanks for any help!

'.vec' format to txt

Hi,

I'm trying to use the Spanish GloVe model to compute some metrics. The problem is that I have to convert it to word2vec format, and I can't get it to work because of file-type issues. As far as I can tell, the glove2word2vec function only works with txt files, both for loading the model and for saving it. Is there any way to get the .vec files of the models in txt format?

Thanks a lot.

Word frequencies

Hi,

It would be great if you could publish the dictionaries with the frequency of each vocabulary token in the original corpus.

Thanks a lot :)

Bad GloVe SBWC size

I'm trying to use the GloVe SBWC embeddings, but the downloaded file is only 3 KB.

Why is the embedding file that size?

You don't have permission to access this resource

Hello!

When I try to download FastText embeddings from SBWC, in vector format (.vec.gz) (802 MB), I get this error.

Forbidden
You don't have permission to access this resource.Server unable to read htaccess file, denying access to be safe

I hope it can be solved soon.

Order of the embeddings

Hi,

Are the vectors ordered by frequency?

This is useful so that, for example, when doing:

from gensim.models import KeyedVectors

wordvectors_file_vec = 'fasttext-sbwc.3.6.e20.vec'
num_of_vectors = 50000
wordvectors = KeyedVectors.load_word2vec_format(wordvectors_file_vec, limit=num_of_vectors)

the (for example) 50,000 vectors that get loaded are the most frequent ones (and therefore probably the most relevant).

Regards!

How to use it?

Hi.
I really like your repository.
In the examples I've seen that you load the embedding vectors and compute similarity functions between words. But how are they used to encode text? That is, pass it a sentence, "Me gusta ir a la playa....", and have it generate a vector [0.994, 7.982, 4.887, ...... , 0.231].
Thanks.
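
One common baseline for questions like the one above (this is a general technique, not something the repository prescribes) is to average the word vectors of the tokens in the sentence. A sketch with a hypothetical 3-dimensional vocabulary (the real embeddings are 300-dimensional):

```python
def sentence_vector(sentence, vectors):
    """Average the word vectors of the in-vocabulary tokens (a common baseline)."""
    toks = [t for t in sentence.lower().split() if t in vectors]
    if not toks:
        raise ValueError("no in-vocabulary tokens in the sentence")
    dim = len(next(iter(vectors.values())))
    return [sum(vectors[t][i] for t in toks) / len(toks) for i in range(dim)]

# toy 3-dimensional vocabulary, purely for illustration
toy = {"me": [1.0, 0.0, 0.0], "gusta": [0.0, 1.0, 0.0], "ir": [0.0, 0.0, 1.0]}
print(sentence_vector("me gusta ir", toy))  # averages to [1/3, 1/3, 1/3]
```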
