nmtoan91 / lshkrepresentatives

Categorical and numerical (ordinal and non-ordinal) data clustering algorithm

Home Page: https://pypi.org/project/lshkrepresentatives/

License: MIT License

categorical-data clustering color dataset machine-learning nonmetric-data python random-forest shape unordered-data-structures-solutions

lshkrepresentatives's Introduction

Clustering algorithm for mixed categorical and numerical (ordinal and non-ordinal) data using LSH.

Notebook samples:

1. LSH-k-Representatives: clustering of categorical attributes only:

2. LSH-k-Prototypes: clustering of mixed data (categorical and numerical attributes):

3. LSH-k-Representatives-Full: clustering of very large categorical-only datasets:

4. Normalizing an unstructured dataset:



Note 1: Unlike the k-Modes algorithm, LSH-k-Representatives defines "representatives" that keep the frequencies of all categorical values within each cluster. There are three variants of the algorithm (see the notebook samples above).

Note 2: The dataset is automatically normalized if strings, disjoint values, or NaN are detected.
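The auto-normalization can be illustrated with a small sketch. This is a hypothetical re-implementation, not the package's actual preprocessing code: each column's raw values (strings, numbers, NaN) are mapped to integer codes in order of first appearance.

```python
import numpy as np

def normalize_categorical(X):
    """Map each column's raw values (strings, numbers, NaN) to integer codes.

    A hypothetical sketch of what auto-normalization could look like;
    the package's real preprocessing may differ.
    """
    X = np.asarray(X, dtype=object)
    out = np.empty(X.shape, dtype=int)
    for j in range(X.shape[1]):
        codes = {}  # value -> integer code, assigned in order of first appearance
        for i, v in enumerate(X[:, j]):
            # Treat NaN as a category of its own.
            key = 'nan' if isinstance(v, float) and np.isnan(v) else v
            out[i, j] = codes.setdefault(key, len(codes))
    return out

X = np.array([['red', 0, np.nan], ['green', 1, 1], ['blue', 0, 0]], dtype=object)
print(normalize_categorical(X))
```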

Installation:

Using pip:

pip install lshkrepresentatives numpy scikit-learn pandas networkx termcolor

Import the packages:

import numpy as np
from LSHkRepresentatives.LSHkRepresentatives import LSHkRepresentatives

Generate a simple categorical dataset:

X = np.array([['red',0,np.nan],['green',1,1],['blue',0,0],[1,5111,1],[2,2,2],[2,6513,'rectangle'],[2,3,6565]])

Using LSH-k-Representatives (categorical clustering):

# Initialize an instance of LSHkRepresentatives
kreps = LSHkRepresentatives(n_clusters=2,n_init=5)
# Cluster the dataset X
labels = kreps.fit(X)
# Print the labels for dataset X
print('Labels:',labels)
# Predict the label for a new instance x
x = np.array(['red',5111,0])
label = kreps.predict(x)
print(f'Cluster of object {x} is: {label}')

Outcome:

SKIP LOADING distMatrix because: False bd=None
Generating disMatrix for DILCA
Saving DILCA to: saved_dist_matrices/json/DILCA_None.json
Generating LSH hash table:   hbits: 2(4)  k 1  d 3  n= 7
LSH time: 0.006518099999993865 Score:  6.333333333333334  Time: 0.0003226400000130525
Labels: [1 1 1 1 0 0 0]
Cluster of object [1 2 0] is: 1

Call the built-in evaluation metrics:

y = np.array([0,0,0,0,1,1,1])
kreps.CalcScore(y)

Outcome:

Purity: 1.00 NMI: 1.00 ARI: 1.00 Sil:  0.59 Acc: 1.00 Recall: 1.00 Precision: 1.00
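The reported NMI and ARI can be cross-checked with scikit-learn (which the install line above already pulls in). Purity has no scikit-learn counterpart, so a small helper is sketched here; the labels below are taken from the run above.

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

y_true = np.array([0, 0, 0, 0, 1, 1, 1])  # ground-truth labels
y_pred = np.array([1, 1, 1, 1, 0, 0, 0])  # cluster labels from the run above

def purity(y_true, y_pred):
    # For each predicted cluster, count its most frequent true label.
    total = sum(np.bincount(y_true[y_pred == c]).max() for c in np.unique(y_pred))
    return total / len(y_true)

# All three are 1.0 here: the clustering matches the ground truth
# up to a relabeling of the clusters.
print('NMI:', normalized_mutual_info_score(y_true, y_pred))
print('ARI:', adjusted_rand_score(y_true, y_pred))
print('Purity:', purity(y_true, y_pred))
```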

Using LSH-k-Prototypes (mixed categorical and numerical attribute clustering):

For example, suppose we have a dataset with 5 attributes (3 categorical and 2 numerical).

from LSHkRepresentatives.LSHkPrototypes import LSHkPrototypes
kprototypes = LSHkPrototypes(n_clusters=2,n_init=5) 
X = np.array([['red',0,np.nan,1,1],
              ['green',1,1,0,0],
              ['blue',0,0,3,4],
              [1,5111,1,1.1,1.2],
              [2,2,2,29.0,38.9],
              [2,6513,'rectangle',40,41.1],
              ['red',0,np.nan,30.4,30.1]])

attributeMasks = [0,0,0,1,1]
# attributeMasks = [0,0,0,1,1] means the attributes are
# [categorical,categorical,categorical,numerical,numerical]
a = kprototypes.fit(X,attributeMasks,numerical_weight=2, categorical_weight=1)
print(a)
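When the data lives in a pandas DataFrame, the attribute mask can be derived from the column dtypes instead of being written by hand. This is a sketch assuming the convention above (0 = categorical, 1 = numerical); the column names are made up for illustration:

```python
import pandas as pd

df = pd.DataFrame({
    'color':  ['red', 'green', 'blue'],  # categorical
    'label':  ['a', 'b', 'a'],           # categorical
    'shape':  ['x', 'y', 'z'],           # categorical
    'width':  [1.0, 0.0, 3.0],           # numerical
    'height': [1.0, 0.0, 4.0],           # numerical
})

# 0 = categorical, 1 = numerical (the mask convention used above)
attributeMasks = [int(pd.api.types.is_numeric_dtype(dt)) for dt in df.dtypes]
print(attributeMasks)  # [0, 0, 0, 1, 1]
```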

References:

T. N. Mau and V.-N. Huynh, "An LSH-based k-Representatives Clustering Method for Large Categorical Data," Neurocomputing, Volume 463, 2021, Pages 29-44, ISSN 0925-2312, https://doi.org/10.1016/j.neucom.2021.08.050.

Bibtex:

@article{mau2021lsh,
  title={An LSH-based k-representatives clustering method for large categorical data},
  author={Mau, Toan Nguyen and Huynh, Van-Nam},
  journal={Neurocomputing},
  volume={463},
  pages={29--44},
  year={2021},
  publisher={Elsevier}
}

PyPI / GitHub repository

https://pypi.org/project/lshkrepresentatives/
https://github.com/nmtoan91/lshkrepresentatives

lshkrepresentatives's People

Contributors: nmtoan91

lshkrepresentatives's Issues

predict() function

Hi,

Thank you for this clustering method, it is very useful. However, one thing that I think would provide a lot of benefit is a function, similar to the predict() function in the scikit-learn library, with which one may assign cluster labels to new/old data on a previously trained LSH-k-representatives instance.

Best regards,
Daniil

Handling of ordinal and nominal data, respectively

Hi,

Again, thanks for the great algorithm!

I just have a question about instances where there's a mix of ordinal and binary data. Does the algorithm handle all data as nominal, or is it possible for it to handle ordinal and binary/nominal data differently? When running with mixed ordinal and binary data, the algorithm appears to weigh binary variables more (or at least distinguishes them more clearly between clusters), whereas ordinal variables end up with relatively similar value proportions, even though, knowing the particular data I've been running the algorithm on, there should naturally be groups strongly towards either end of the "spectrum".

Best regards,
Daniil

The DILCA dissimilarity is not used in LSHkRepresentatives

Hi, first of all thank you for this work. I would like to use your code in my experiments. However, it seems that the DILCA dissimilarity is not used in distance computations in your current implementation of LSHkRepresentatives. Am I wrong?
