
python-topic-model's Introduction

python-topic-model

Implementations of various topic models written in Python. Note that some of the implementations (the MCMC-based models) are extremely slow; I do not recommend using them for large-scale datasets.

Current implementations

At least the following models are referenced in the issues below:

  • Latent Dirichlet allocation, variational Bayes (lda_vb)
  • Supervised LDA, stochastic (Gibbs) EM (slda_gibbs.py)
  • Author-topic model (at_model.py)
  • Correlated topic model (CTM)
  • Relational topic model (RTM)

python-topic-model's People

Contributors

assios, dongwookim-ml, judaschrist, jyscardioid


python-topic-model's Issues

Input datatypes for the RTM model

Hi, @dongwookim-ml
I am going to use the RTM model; however, I am not able to work out the expected input format. Can you describe what data types RTM expects, so that I can use this model on my own data?
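The repository does not document the format, but the attribute names that recur in these issues (doc_ids, doc_cnt) suggest a sparse bag-of-words layout. A minimal sketch of that assumed format, with hypothetical values:

    # Assumed input layout, inferred from attribute names used in these
    # issues -- not confirmed documentation for this repository.
    n_voca = 5                                # vocabulary size

    # doc_ids: per document, the unique word indices occurring in it
    doc_ids = [[0, 2, 4], [1, 2], [0, 3, 4]]

    # doc_cnt: parallel lists with the count of each of those words
    doc_cnt = [[3, 1, 1], [2, 2], [1, 4, 1]]

RTM additionally needs the link structure between documents; see the later issue "How to create document links for RTM model" for a sketch of that part.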

Measure the similarity between documents

My goal is to add new documents outside the training set and then, after text preprocessing, apply the relational topic model to measure the similarity between these documents.
Are there any examples that could help with this?
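Whatever model produces them, once each document has a topic-proportion vector, similarity reduces to comparing probability vectors. A minimal sketch, independent of this repository's API (the theta vectors here are hypothetical):

    import numpy as np

    def cosine_sim(theta_a, theta_b):
        # cosine similarity between two topic-proportion vectors
        return theta_a @ theta_b / (np.linalg.norm(theta_a) * np.linalg.norm(theta_b))

    def hellinger_sim(theta_a, theta_b):
        # Hellinger distance is a proper metric on probability
        # distributions; 1 - distance gives a similarity in [0, 1]
        return 1.0 - np.sqrt(0.5 * np.sum((np.sqrt(theta_a) - np.sqrt(theta_b)) ** 2))

    theta_a = np.array([0.7, 0.2, 0.1])   # hypothetical per-document topic proportions
    theta_b = np.array([0.6, 0.3, 0.1])
    print(cosine_sim(theta_a, theta_b), hellinger_sim(theta_a, theta_b))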

In the ptm code, some problem about at_model.py?

Hi arongdari,
I ran into a problem when using your source file at_model.py: its seventh line, from six.moves import xrange, raises an error.
Could you tell me what is wrong with the code?
Many thanks!
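The most common failure on that line is an ImportError because the six compatibility package is not installed (a guess; the issue's screenshot is not available). Two possible fixes: install it,

    pip install six

or, on Python 3, drop the shim entirely in at_model.py:

    # Python 3's built-in range is already lazy, so the six shim
    # is unnecessary; replace "from six.moves import xrange" with:
    xrange = range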

Adding regularizer term for topic vectors

I need to add a regularizer term involving the beta terms to the likelihood equation. I am unable to figure out which part of the code performs the M-step update for the conditional multinomial parameter beta.
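In standard variational LDA (Blei, Ng & Jordan, 2003), the M-step for beta renormalizes the expected word counts accumulated during the E-step, and a Dirichlet-style regularizer enters as an additive pseudo-count at that point. A generic sketch, not tied to this repository's variable names:

    import numpy as np

    # phi_sum[k, w] accumulates sum_d n_dw * phi_dwk from the E-step.
    # A Dirichlet(eta) prior on each topic acts as an additive
    # regularizer (pseudo-count); names here are illustrative.
    def m_step_beta(phi_sum, eta=0.01):
        beta = phi_sum + eta                      # regularizer enters here
        beta /= beta.sum(axis=1, keepdims=True)   # renormalize each topic row
        return beta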

In the CTM code, what are the different attributes?

doc_cnt: what is its structure and what does it hold?
doc_ids: is it a 1-D array of all the document ids?
n_voca: what does it stand for?

n_topics: I believe items are the actual documents, so if I have 1000 documents, n_topics will be equal to 1000. Is that correct?
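No documentation confirms it, but the attribute names suggest the sparse bag-of-words layout sketched under the RTM data-types question above, which can be built from tokenized text as follows (hypothetical corpus). Note also that in topic models, n_topics is conventionally the number of latent topics chosen by the modeler, not the number of documents:

    from collections import Counter

    docs = [["apple", "banana", "apple"], ["banana", "cherry"]]   # tokenized corpus

    voca = sorted({w for d in docs for w in d})    # vocabulary
    word2id = {w: i for i, w in enumerate(voca)}

    doc_ids, doc_cnt = [], []
    for d in docs:
        counts = Counter(d)
        doc_ids.append([word2id[w] for w in counts])   # unique word indices per doc
        doc_cnt.append(list(counts.values()))          # matching counts

    n_voca = len(voca)    # vocabulary size
    n_topics = 10         # number of latent topics, chosen by the modeler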

Logging

Hi, maybe you should provide a way to set the log level when creating a model, or an additional getter/setter pair. Thanks in advance.
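Until such an option exists, and assuming the package logs through Python's standard logging module (an assumption; adjust the logger name to whatever the package actually uses), the level can be set from user code:

    import logging

    # silence anything below WARNING from the package's logger
    logging.getLogger('ptm').setLevel(logging.WARNING)

    # or configure the root logger once at program start
    logging.basicConfig(level=logging.INFO)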

How to create document links for RTM model

Can you explain how to create the list giving the links between documents for the RTM model?
In the RTM example notebook it is imported directly without any explanation, and I can't figure out how to create it for a new dataset.
Thanks
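One plausible encoding, judging from adjacency-list structures commonly used in RTM implementations (the exact format expected here is not documented), is a list where entry d holds the indices of documents linked to document d:

    # hypothetical citation/link pairs between documents, by document index
    edges = [(0, 2), (1, 2), (3, 0)]
    n_doc = 4

    doc_links = [[] for _ in range(n_doc)]
    for src, dst in edges:
        doc_links[src].append(dst)
        doc_links[dst].append(src)   # make links symmetric (an assumption)

    print(doc_links)   # [[2, 3], [2], [0, 1], [0]]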

Per-document topic distribution (ATM)?

Thanks for a great package -- I just got the author-topic model successfully running and I was wondering whether there is a simple way to get the per-document topic distribution for (a) the documents an author-topic model was fitted on or (b) new documents. Thanks in advance for any replies!
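If the sampler exposes the per-word topic assignments of each document (an assumption; the model here may store them under a different name, and the ATM formally ties topics to authors rather than documents), a per-document topic distribution can be estimated by counting assignments and smoothing with the Dirichlet prior:

    import numpy as np

    def doc_topic_dist(z_d, n_topic, alpha=0.1):
        # z_d: sampled topic assignment of each word token in one document
        counts = np.bincount(z_d, minlength=n_topic).astype(float)
        theta = counts + alpha        # smooth with the Dirichlet prior
        return theta / theta.sum()

    z_d = [0, 2, 2, 1, 2]             # hypothetical assignments for one document
    print(doc_topic_dist(z_d, n_topic=3))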

[Question] write a vectorized form of do_e_step method in lda_vb.

Hi, I am trying to implement LDA in TensorFlow. I am quite new to both TensorFlow and LDA. Currently I am following your lda_vb implementation.
Is it possible to have a vectorized implementation (without for loops) of the do_e_step method?
If so, it would be very helpful if you could provide some insights on how to implement it.
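The inner loop over a document's words can be replaced by matrix operations, since the standard variational updates (Blei, Ng & Jordan, 2003) are phi_dwk ∝ beta_kw · exp(ψ(gamma_dk)) and gamma_dk = alpha + Σ_w n_dw · phi_dwk. A numpy sketch for a single document (not this repository's code):

    import numpy as np
    from scipy.special import digamma

    def e_step_doc(word_ids, word_cnt, beta, alpha, n_iter=50):
        # word_ids / word_cnt: unique word indices and counts of one document
        # beta: K x V topic-word matrix; alpha: symmetric Dirichlet parameter
        K = beta.shape[0]
        gamma = np.full(K, alpha + np.sum(word_cnt) / K)
        for _ in range(n_iter):
            # phi: K x W responsibilities, no explicit loop over words
            log_phi = digamma(gamma)[:, None] + np.log(beta[:, word_ids] + 1e-100)
            phi = np.exp(log_phi - log_phi.max(axis=0))   # stabilized softmax
            phi /= phi.sum(axis=0)
            gamma = alpha + phi @ np.asarray(word_cnt, dtype=float)
        return gamma, phi

The remaining loop over iterations is inherent to the coordinate ascent; the loop over documents can in turn be batched with padding, which maps naturally onto TensorFlow ops.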

Probability distribution for a document

In the notebooks in python-topic-model/notebook/, there are no small examples of how to infer the topic distribution for a new document, or for the documents that the model was trained on.

Something like giving a list of integers (that map to the words of voca) as input for a new document, and getting the probability distribution that this document has over the trained topics. Or accessing the topics of all the trained documents.

How can this be achieved for, let's say, LDA or supervised LDA?
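For variational LDA, the standard recipe is folding-in: keep the trained topic-word matrix fixed and run only the E-step on the new document, then normalize gamma. A sketch reusing the e_step_doc function from the lda_vb question above (beta here is a random stand-in for trained topics):

    import numpy as np

    rng = np.random.default_rng(0)
    K, V = 10, 100
    beta = rng.dirichlet(np.ones(V), size=K)   # stand-in for trained topics

    new_ids = [4, 17, 42]                      # word indices into voca (hypothetical)
    new_cnt = [2, 1, 3]

    gamma, _ = e_step_doc(new_ids, new_cnt, beta, alpha=0.1)
    theta_new = gamma / gamma.sum()            # topic distribution of the new document

For the documents the model was trained on, the fitted gamma matrix (one row per document), normalized the same way, plays the same role.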

Author-topic LDA

Hi,

For the author-topic model, could you please provide an example of showing the topics of documents after the model has been trained?
I have only found ways to show the topic distribution of an author and the word distribution of a topic; however, in my case, I care much more about the topics of documents.

Thanks.

Flat topic distributions in author-topic model

I tried running the author-topic model notebook. I noticed that the topic distributions of many authors were flat, meaning that all topics were equally likely. See example below.

I did not change the notebook in any way, so I suspect there is some error in the algorithm/code, although I have no inkling of what it might be. Just thought I'd share.

Is someone else able to reproduce this, or is it just me? Or did I misunderstand, and this is actually expected to happen?

Explanations on Stochastic (Gibbs) EM implementation for sLDA

Hi Dongwoo,
I am currently looking for a Gibbs sampling estimation method for supervised LDA, and your stochastic (Gibbs) EM for sLDA (slda_gibbs.py) is exactly what I'm looking for.
I was wondering whether there are any papers or other materials that explain the math behind it, especially the matrix calculation part?

Many thanks!
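The underlying model is from Blei & McAuliffe, "Supervised Topic Models" (NIPS 2007). A plausible reading of the matrix calculation (the exact update in slda_gibbs.py may differ, e.g. by adding a ridge term for stability):

    % Response model: each document's label is a linear function of its
    % empirical topic frequencies \bar{z}_d.
    y_d \mid \bar{z}_d \sim \mathcal{N}\left(\eta^{\top}\bar{z}_d,\ \sigma^{2}\right),
    \qquad
    \bar{z}_{dk} = \frac{1}{N_d}\sum_{n=1}^{N_d}\mathbb{1}[z_{dn}=k]

    % M-step for \eta given the sampled assignments: ordinary least squares
    % over the D x K matrix \bar{Z} whose d-th row is \bar{z}_d^{\top}.
    \hat{\eta} = \left(\bar{Z}^{\top}\bar{Z}\right)^{-1}\bar{Z}^{\top}y,
    \qquad
    \hat{\sigma}^{2} = \frac{1}{D}\left\lVert y - \bar{Z}\hat{\eta}\right\rVert^{2}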
