Comments (5)
Hello,
It depends on what your objective is: any evaluation metric focuses on a specific aspect of a topic model. OCTIS includes several categories of evaluation metrics:
- topic coherence metrics, which evaluate whether the top words of each topic make sense together
- topic significance metrics, which use the document-topic and word-topic distributions to distinguish high-quality topics from junk topics. You can find the reference paper here.
- classification metrics (F1, accuracy, etc.), which use the document-topic distributions as features to train a classifier. These metrics require labeled documents.
- diversity metrics, which use the top words or the word-topic distribution and compute the distance between each topic and the others.
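To make the diversity category concrete, here is a minimal sketch of one common diversity score: the proportion of unique words across all topics' top words. This simplified formula is an illustration (with made-up topics), not necessarily the exact metric OCTIS implements:

```python
# each topic represented by its list of top words (toy data)
topics = [
    ["game", "team", "season", "player"],
    ["election", "vote", "party", "game"],
    ["protein", "cell", "gene", "dna"],
]

# proportion of unique words among all top words:
# 1.0 means the topics share no words (fully diverse)
unique_words = set(w for topic in topics for w in topic)
diversity = len(unique_words) / sum(len(topic) for topic in topics)
print(round(diversity, 3))  # 0.917 (11 unique words out of 12, "game" repeats)
```

A score well below 1.0 signals that topics repeat the same words, which often accompanies low-quality models.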
I am not sure whether BERTopic generates the document-topic and word-topic distributions (if it does not, you will not be able to compute the topic significance metrics). You may also want to consider Contextualized Topic Models (CTM), a topic model that, like BERTopic, uses pre-trained contextualized representations. CTM is part of OCTIS too.
Let me know if you have further questions,
Silvia
from octis.
Which diversity metric are you using? Can you also show the code snippet in which you call the metric?
In general, a metric in OCTIS expects as input the output of a Model(). Any topic model in OCTIS returns a dictionary with up to 4 fields; depending on the metric, the right field will be used to compute the score (see here for the details on model_output). So if you want to use a metric that computes diversity from the word-topic distribution, you will construct your model_output like this:
model_output = {"topic-word-matrix": topic_term_dist}
And then use it to compute the score of a metric, for example:

from octis.evaluation_metrics.diversity_metrics import KLDivergence

div = KLDivergence()
result = div.score(model_output)
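For intuition, a KL-based diversity score of this kind boils down to averaging pairwise KL divergences between the rows of the topic-word matrix. The following is a simplified, self-contained sketch (not OCTIS's exact implementation; the toy matrix is made up):

```python
import numpy as np

def kl(p, q, eps=1e-12):
    # Kullback-Leibler divergence between two discrete distributions;
    # eps avoids log(0) for zero-probability words
    p, q = np.asarray(p, dtype=float) + eps, np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# toy word-topic matrix: 3 topics over a 4-word vocabulary
topic_word = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.10, 0.70, 0.10, 0.10],
    [0.25, 0.25, 0.25, 0.25],
])

# average pairwise divergence: higher means more distinct topics
pairs = [(i, j) for i in range(len(topic_word))
         for j in range(len(topic_word)) if i != j]
score = sum(kl(topic_word[i], topic_word[j]) for i, j in pairs) / len(pairs)
print(round(score, 3))
```

Identical topics give a score of 0, so larger values indicate more diverse topic-word distributions.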
Let me know if it works.
Silvia
Hello Silvia,
Thank you for your feedback.
I trained an LDA model using Gensim, and I would like to evaluate it using topic significance, topic coherence, and topic diversity.
For LDA, I generated the word-topic distribution using the following code:
# get the raw topic-word estimates from the trained Gensim model
topics_terms = lda_model.state.get_lambda()
# convert the estimates to probabilities (each topic's row sums to 1)
topics_terms_proba = topics_terms / topics_terms.sum(axis=1)[:, None]
# reorder the columns to match the vocabulary order (fnames_argsort is computed elsewhere)
topic_term_dist = topics_terms_proba[:, fnames_argsort]
topic_term_dist
The resulting topic_term_dist contains the normalized word-topic distribution:
[[1.1844748e-05 4.0855210e-02 1.1844748e-05 ... 1.1844748e-05
1.1844748e-05 1.1844748e-05]
[8.0169802e-06 8.0169802e-06 8.0169802e-06 ... 8.0169802e-06
8.0169802e-06 8.0169802e-06]
[7.5956509e-06 7.5956509e-06 7.5956509e-06 ... 7.5956509e-06
7.5956509e-06 7.5956509e-06]
...
[1.2837388e-05 1.2837388e-05 1.2837388e-05 ... 1.2837388e-05
1.2837388e-05 1.2837388e-05]
[8.9911064e-06 8.9911064e-06 8.9911064e-06 ... 8.9911064e-06
8.9911064e-06 8.9911064e-06]
[1.6502319e-05 1.6502319e-05 1.6502319e-05 ... 1.6502319e-05
1.6502319e-05 1.6502319e-05]]
When I pass it to the diversity metric, I get the following error:
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
So, how can I resolve this error? Do you have example code you could point me to?
thank you
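One likely cause of an IndexError like that is a diversity metric reading the "topics" field of model_output (a list of top-word lists) rather than the raw matrix, so passing only floats leads to bad indexing. A minimal sketch of building both fields from a toy matrix (the matrix and vocabulary below are made up; in practice they come from the Gensim model):

```python
import numpy as np

# stand-ins for the objects built earlier:
# topic_term_dist: (n_topics, vocab_size) normalized word-topic matrix
# id2word: index -> word mapping from the Gensim dictionary
topic_term_dist = np.array([[0.5, 0.3, 0.1, 0.1],
                            [0.1, 0.1, 0.3, 0.5]])
id2word = {0: "apple", 1: "banana", 2: "cherry", 3: "date"}

topk = 2
# top-k words per topic, highest probability first
topics = [[id2word[i] for i in row.argsort()[::-1][:topk]]
          for row in topic_term_dist]

model_output = {
    "topics": topics,                      # used by top-word-based metrics
    "topic-word-matrix": topic_term_dist,  # used by distribution-based metrics
}
print(topics)  # [['apple', 'banana'], ['date', 'cherry']]
```

With both keys present, a metric can pick whichever field it needs from model_output.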
Yes, it works perfectly.
The link you recommended regarding the keys of model_output is also very useful.
I still have another question: if I want to apply classification metrics (precision and recall), you mentioned that my documents should be labeled. I already mapped each document to its dominant topic; is that considered a labeled document? Correct me if I am wrong.
Also, for the key *test-topic-document-matrix* in the model_output, is it the document-topic distribution on unseen documents?
thank you again for your help
Hello, sorry for the late reply.
Mapping each document to a topic is indeed a strategy to label documents. In OCTIS we provide some already-labeled corpora, such as 20 Newsgroups and BBC News; you may want to have a look at those.
And yes, test-topic-document-matrix represents the document-topic distribution for unseen documents, i.e. the documents of the testing dataset.
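To illustrate what such labels enable (a sketch with made-up labels and a hypothetical helper, not the OCTIS implementation), per-class precision and recall can be computed directly from predicted dominant topics versus gold labels:

```python
# hypothetical gold labels and predicted dominant topics for 6 documents
gold = ["sport", "sport", "tech", "tech", "politics", "politics"]
pred = ["sport", "tech",  "tech", "tech", "politics", "sport"]

def precision_recall(gold, pred, label):
    # true positives / false positives / false negatives for one class
    tp = sum(1 for g, p in zip(gold, pred) if g == p == label)
    fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
    fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

print(precision_recall(gold, pred, "tech"))  # (0.666..., 1.0)
```

Averaging these per-class values over all labels gives the usual macro-averaged scores that classification metrics report.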