
dbscan-on-spark's Introduction

DBSCAN on Spark

Overview

This is an implementation of the DBSCAN clustering algorithm on top of Apache Spark. It is loosely based on the paper from He, Yaobin, et al. "MR-DBSCAN: a scalable MapReduce-based DBSCAN algorithm for heavily skewed data".

I have also created a visual guide that explains how the algorithm works.

The current version of DBSCAN on Spark is com.irvingc.spark:dbscan-on-spark_2.10:0.2.0-SNAPSHOT.

Be aware that this version is not published to any official repository; to use it, you need to build it yourself.
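A minimal sketch of doing that locally, assuming the standard sbt and Maven layouts the repository ships with (the exact commands are an assumption, not project documentation):

# with sbt: compile and publish to the local Ivy repository
sbt publishLocal

# or with Maven: install to the local Maven repository, skipping tests
mvn install -DskipTests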

Getting DBSCAN on Spark

Version 0.1.0 of DBSCAN on Spark is published to bintray. If you use SBT, you can include DBSCAN on Spark in your application by adding the following to your build.sbt:

resolvers += "bintray/irvingc" at "http://dl.bintray.com/irvingc/maven"

libraryDependencies += "com.irvingc.spark" %% "dbscan" % "0.1.0"

If you use Maven or Ivy you can use a similar resolver; you just need to account for the Scala version (the example is for Scala 2.10):

...

	<repositories>
		<repository>
			<id>dbscan-on-spark-repo</id>
			<name>Repo for DBSCAN on Spark</name>
			<url>http://dl.bintray.com/irvingc/maven</url>
		</repository>
	</repositories>
...

	<dependency>
		<groupId>com.irvingc.spark</groupId>
		<artifactId>dbscan_2.10</artifactId>
		<version>0.1.0</version>
	</dependency>


DBSCAN on Spark is built against Scala 2.10.

Example usage

I have created a sample project showing how DBSCAN on Spark can be used. The following should, however, give you a good idea of how to include it in your application.

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.mllib.clustering.dbscan.DBSCAN
import org.apache.spark.mllib.linalg.Vectors

object DBSCANSample {

  def main(args: Array[String]) {

    val conf = new SparkConf().setAppName("DBSCAN Sample")
    val sc = new SparkContext(conf)

    // Input/output paths and parameters; here they are taken from the
    // command line so the example is self-contained
    val src = args(0)
    val dest = args(1)
    val eps = args(2).toFloat
    val minPoints = args(3).toInt
    val maxPointsPerPartition = args(4).toInt

    val data = sc.textFile(src)

    // Parse each line of comma-separated coordinates into a dense vector
    val parsedData = data.map(s => Vectors.dense(s.split(',').map(_.toDouble))).cache()

    println(s"EPS: $eps minPoints: $minPoints")

    val model = DBSCAN.train(
      parsedData,
      eps = eps,
      minPoints = minPoints,
      maxPointsPerPartition = maxPointsPerPartition)

    // Each labeled point carries its coordinates and its assigned cluster id
    model.labeledPoints.map(p => s"${p.x},${p.y},${p.cluster}").saveAsTextFile(dest)

    sc.stop()
  }
}

License

DBSCAN on Spark is available under the Apache 2.0 license. See the LICENSE file for details.

Credits

DBSCAN on Spark is maintained by Irving Cordova ([email protected]).

dbscan-on-spark's People

Contributors

eliasah, irvingc, zbytt


dbscan-on-spark's Issues

how to predict data

Hi! I want to use my model to predict on new data, but I get the error 'scala.NotImplementedError: an implementation is missing'. How can I solve this? Thank you.
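As context for readers hitting this: predict is indeed unimplemented (see the issue further below quoting its source). A minimal sketch of a workaround, not part of the library: assign a new point to the cluster of its nearest labeled training point, treating it as noise when that point is farther than eps away. It assumes DBSCANLabeledPoint exposes x, y and cluster as in the README example, and that 0 marks noise.

import org.apache.spark.mllib.clustering.dbscan.DBSCANLabeledPoint
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.rdd.RDD

// Hypothetical helper, not the library's predict: label a new point with the
// cluster of its nearest labeled point, or 0 (noise) if it is farther than eps
def predictByNearest(labeled: RDD[DBSCANLabeledPoint], v: Vector, eps: Double): Int = {
  val (cluster, dist) = labeled
    .map(p => (p.cluster, math.hypot(p.x - v(0), p.y - v(1))))
    .reduce((a, b) => if (a._2 <= b._2) a else b)
  if (dist <= eps) cluster else 0
}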

The result is different from sklearn DBSCAN?

It seems that the clustering result of dbscan-on-spark is not right.
I used sklearn to generate 100 data points, and clustered them with sklearn. The result is different from that of dbscan-on-spark. Here are my experimental data and results.
Is the problem on my side or in the dbscan-on-spark code?
The 100 points are listed below: the first and second columns are x and y, and the third is the cluster id output by dbscan-on-spark with parameters maxPointsPerPartition = 12, eps = 0.3, minPoints = 10.

-1.0158802720216003,0.147342211484412,0
-1.0146038471276722,0.03688725664273079,0
-0.9891239738704828,0.022213809184072942,0
-0.9638043562674901,0.19370487762607208,0
-0.9528948845748842,0.2606655111008295,0
-0.9333963520783779,0.3087590886721549,2
-0.9075869251302896,0.41694606375014176,2
-0.8897644802049538,0.3652758661784908,2
-0.880180077035953,0.47937007252773894,2
-0.877911880574905,0.586132545648214,2
-0.7753025837331028,0.6042502634413288,2
-0.7707474677539745,0.6849960870555312,2
-0.7311006936368126,0.7227208343955539,2
-0.6983566813753926,0.720816991846437,2
-0.5938859100499629,0.8422336677663952,2
-0.5846965684242103,0.7441899166472538,2
-0.5343754904916749,0.8321019363347932,2
-0.47424038807062,0.8758860679090658,2
-0.3878784206919627,0.915309044770749,2
-0.3289511302371282,0.9573149852131584,2
-0.2959324774808664,0.9325603737985263,2
-0.21870067123075473,0.9970866099086676,2
-0.11691604427847285,0.9778576019025226,2
-0.10195211460944066,0.9709550110857252,2
-0.029517735653622493,0.507649328605902,0
-0.023708638395017974,1.0069445660821037,2
-0.005849731617721753,1.0054354462743158,2
0.004878608148313203,0.33471329223760526,0
0.013060570355586543,0.43901096175191556,0
0.013738246924096587,0.24394131643956118,0
0.015089734435828273,0.3515797069345012,0
0.047196112670520006,0.2206600035440831,0
0.05576443581177526,0.15151041876834784,4
0.07973418606854518,0.054405388341981394,4
0.11072653052995088,0.9518445839926504,2
0.14329032472059547,0.9604927564401988,2
0.15054141777418886,0.03451008884781809,4
0.16128620811774508,-0.09013998774134449,4
0.18846566254628738,-0.05019026114410958,4
0.20397463243581263,0.9737643345398193,2
0.22359870290598133,-0.15409491439129092,0
0.2751214191247985,0.9347761383211116,2
0.287593102624648,-0.24638341424476687,0
0.36088367672771,-0.2411234194792873,3
0.37624208564171424,-0.27216645323731653,3
0.38265901863925844,0.9187882235965241,2
0.3887195306260235,0.9033706015027824,2
0.40914260501491323,-0.3033592048800921,3
0.43346670763470607,0.8748254472162161,2
0.4756292725872716,-0.35619531114932845,3
0.496985556291109,0.8394683116286269,2
0.5240072940588782,-0.415381768593055,3
0.5482086350246445,0.7973898622196185,2
0.5790022726885077,-0.43888757836793596,3
0.6062194604607748,0.7624055296463416,2
0.6168393841819796,-0.4254723492171427,3
0.6776865344598341,0.7288515644281421,2
0.7167688763117702,0.6947432873340993,2
0.7476336573593888,-0.43956394127856613,3
0.7572920472149428,-0.436367630296194,3
0.7740957194369665,0.6125896453298072,2
0.8011753211137118,0.57376379359662,2
0.8417223438915941,0.5526331446967239,2
0.8532880875726063,-0.49795719019316187,3
0.863385424874792,0.4747468867223078,2
0.880275606232329,-0.4927970362051949,3
0.9032177435804262,0.40648142738558235,2
0.9394614201995631,0.37705763226924555,2
0.9446468417742363,0.18276454590281604,2
0.974681291427821,0.04097122039961243,0
0.9767100737539655,-0.02241667025607359,0
0.981331751323225,0.31018095161038134,2
0.9895718078881754,0.1459207265521259,2
0.9924132799217354,0.2407697250631794,2
1.0185336471622448,-0.5338521320792655,3
1.0284991718568057,-0.5059328635382044,3
1.0737327870386202,-0.5064573873771955,3
1.155279215360569,-0.49812019299551963,3
1.2290952646189792,-0.46969221135524847,3
1.2916449149878702,-0.4412009995446211,3
1.3291052136060821,-0.44642696791708286,3
1.4442640860583817,-0.4319727123957646,3
1.4632640963338075,-0.3945052322897577,3
1.476901232024859,-0.3357710068006996,3
1.5790424791153852,-0.2927273617730516,1
1.6839521844358782,-0.2766017069970544,1
1.7288137825879888,-0.24294452147350262,1
1.7460881272956796,-0.19290929215511726,1
1.7596749110664895,-0.1514486544693371,1
1.7891687192218997,-0.10216824559688067,1
1.8414791557100734,0.007705154849708373,1
1.87128705469536,-0.024324243892344288,1
1.8961568159672442,0.14023128237704588,1
1.9140044814018815,0.025879420371462798,1
1.9458748140234137,0.1348425674970267,1
1.9629414250510497,0.2391046347601252,1
1.9968692003241,0.525507288705618,1
2.0040410445424053,0.3872492455105901,1
2.012481427686388,0.4616919308479025,1
2.013488317583532,0.2822368891808566,1

Plot the dbscan-on-spark result:

import matplotlib.pyplot as plt
import numpy as np
c = 'C:\\Users\\iamshg\\IdeaProjects\\dbscan-on-spark-master\\src\\test\\resources\\res.txt\\res.csv'
x, y, cluster_id = np.loadtxt(c, delimiter=',', usecols=(0, 1,2),unpack=True)
plt.scatter(x, y, c=cluster_id)
plt.show()

[image: scatter plot of the dbscan-on-spark clustering result]
Plot the sklearn DBSCAN result:

from sklearn.cluster import DBSCAN
X = np.array([(i,j) for i,j in zip(x,y)])
y_pred = DBSCAN(eps=0.3,min_samples=10).fit_predict(X)
plt.scatter(x, y, c=y_pred)
plt.show()

[image: scatter plot of the sklearn DBSCAN clustering result]

So, there are some differences between the two images. Is there something wrong with what I did, or with the code?
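An editorial note on comparing the two outputs: DBSCAN cluster ids are arbitrary integers, so the label columns cannot be compared number by number. A small sketch (my own helper, plain Scala, not part of the library) that compares two labelings up to renaming via the Rand index; note it treats all noise points as one cluster:

// Rand index: fraction of point pairs on which both labelings agree,
// invariant to how the clusters are numbered
def randIndex(a: Array[Int], b: Array[Int]): Double = {
  require(a.length == b.length)
  // a pair "agrees" when both labelings put it together, or both apart
  val pairs = for {
    i <- a.indices
    j <- (i + 1) until a.length
  } yield (a(i) == a(j)) == (b(i) == b(j))
  pairs.count(identity).toDouble / pairs.length
}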

How to run with your src

Hi,
I just found your repo. The readme is good for programmers, but if a user wants to run the compiled jar file, can you give an example command line?
Like:
spark-submit --class org.apache.spark.mllib.clustering.dbscan target/dbscan-on-spark_2.10-0.2.0-SNAPSHOT.jar

Thanks
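For reference, a hedged sketch of what such an invocation could look like, assuming the jar bundles the DBSCANSample class from the README above and that it reads its paths and parameters from the command line (the class name, master, and arguments are assumptions):

spark-submit \
  --class DBSCANSample \
  --master local[4] \
  target/dbscan-on-spark_2.10-0.2.0-SNAPSHOT.jar \
  input.csv output 0.3 10 250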

Only two dimensional

It looks as if this code is built only for two dimensions. What's up with that? Please add some kind of note to make it obvious.
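Until such a note exists: the implementation does read only the first two components of each vector (see the ArrayIndexOutOfBoundsException issue below). One possible workaround, sketched here as an assumption rather than a supported path, is to project higher-dimensional data down to two components with MLlib's PCA (available from Spark 1.4 on) before training; whether a 2-D projection preserves your density structure is data-dependent.

import org.apache.spark.mllib.feature.PCA
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.rdd.RDD

// Sketch: reduce n-dimensional vectors to the two components this
// implementation supports, using a PCA model fit on the data itself
def projectTo2D(data: RDD[Vector]): RDD[Vector] = {
  val pca = new PCA(2).fit(data)
  data.map(pca.transform)
}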

java.lang.StackOverflowError

Why are there so many new Point allocations? Why not reuse mutable (var) variables when constructing Point objects, to avoid frequent new operations?

For String data

Is there a way we can use this code to cluster strings?
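Not directly: the algorithm clusters numeric vectors in Euclidean space, so strings must first be encoded as numbers, and (per the issues above) into exactly two dimensions for this implementation. A toy sketch under those assumptions, with two placeholder features that you would replace with ones meaningful for your data:

import org.apache.spark.mllib.linalg.Vectors

// Hypothetical 2-D encoding of strings: length and number of distinct characters
val strings = sc.parallelize(Seq("spark", "scala", "dbscan"))
val encoded = strings.map(s => Vectors.dense(s.length.toDouble, s.distinct.length.toDouble))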

ArrayIndexOutOfBoundsException: 1 if my vector has only one dimension

When testing with a vector like (0, 0, 10, 10, 10, 0, 0) it fails: the code tries to take the "y" component, which does not exist.

java.lang.ArrayIndexOutOfBoundsException: 1 at org.apache.spark.mllib.linalg.DenseVector.apply(Vectors.scala:597) at dbscan.DBSCANPoint.y(DBSCANPoint.scala:24) at dbscan.DBSCAN.dbscan$DBSCAN$$toMinimumBoundingRectangle(DBSCAN.scala:295) at dbscan.DBSCAN$$anonfun$3.apply(DBSCAN.scala:85) at dbscan.DBSCAN$$anonfun$3.apply(DBSCAN.scala:85) ...

BTW, does this library work with more than 2 dimensions?
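Judging by the "Only two dimensional" issue above and this stack trace, it is 2-D only. For the 1-D crash specifically, a sketch of a guard (my own, not the library's): pad 1-D input with a zero y component before training, so the y distances are uniformly zero.

import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.rdd.RDD

// Pad 1-D vectors to 2-D so reading the y component does not index out of bounds
def padTo2D(data: RDD[Vector]): RDD[Vector] =
  data.map {
    case v if v.size >= 2 => v
    case v                => Vectors.dense(v(0), 0.0)
  }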

Is the method implemented now?

/**
 * Find the appropriate label to the given vector
 * This method is not yet implemented
 */
def predict(vector: Vector): DBSCANLabeledPoint = {
  throw new NotImplementedError
}

cluster id is 0

Hello, may I ask whether the cluster with id 0 contains all the noise, that is, the points that do not satisfy the eps and minPoints constraints?
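Assuming cluster id 0 does mark points that were never assigned to any cluster (an assumption drawn from the data in the sklearn-comparison issue above, not from the source), noise can be separated from the clustered points like this, reusing the model variable from the README example:

// Split the labeled points into noise and clustered points
val noise     = model.labeledPoints.filter(_.cluster == 0)
val clustered = model.labeledPoints.filter(_.cluster != 0)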

java.util.NoSuchElementException: key not found while iterating over model for the second time

Hello !
I'm rather new to the RDD approach, but I think I have found rather strange behavior in the code.
It goes as follows:

  1. Train model with DBSCAN.train()
  2. Access model.labeledPoints with an 'action' method
  3. Access model.labeledPoints again, exception is thrown

The problem is easily reproducible using your TestSuite:

test("dbscan") {

val data = sc.textFile(getFile(dataFile).toString())
val parsedData = data.map(s => Vectors.dense(s.split(',').map(_.toDouble)))
val model = DBSCAN.train(parsedData, eps = 0.3F, minPoints = 10, maxPointsPerPartition = 250)
// first eager access
model.labeledPoints.foreach(println(_))
// second access
val secondAccess = model.labeledPoints
  .map(p => (p, p.cluster))
  .collectAsMap()
  .mapValues(x => corresponding(x))

}

For the code above, first all the points are printed, and then:

18/01/10 13:00:52 ERROR Executor: Exception in task 2.0 in stage 13.0 (TID 32)
java.util.NoSuchElementException: key not found: (2,3)
at scala.collection.MapLike$class.default(MapLike.scala:228)
at scala.collection.AbstractMap.default(Map.scala:59)
at scala.collection.MapLike$class.apply(MapLike.scala:141)
at scala.collection.AbstractMap.apply(Map.scala:59)
at org.apache.spark.mllib.clustering.dbscan.DBSCAN$$anonfun$16.apply(DBSCAN.scala:196)
at org.apache.spark.mllib.clustering.dbscan.DBSCAN$$anonfun$16.apply(DBSCAN.scala:192)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
at scala.collection.AbstractIterator.to(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:936)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:936)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2062)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2062)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
18/01/10 13:00:52 ERROR Executor: Exception in task 0.0 in stage 13.0 (TID 30)
java.util.NoSuchElementException: key not found: (0,2)
at scala.collection.MapLike$class.default(MapLike.scala:228)
at scala.collection.AbstractMap.default(Map.scala:59)
at scala.collection.MapLike$class.apply(MapLike.scala:141)
at scala.collection.AbstractMap.apply(Map.scala:59)
at org.apache.spark.mllib.clustering.dbscan.DBSCAN$$anonfun$16.apply(DBSCAN.scala:196)
at org.apache.spark.mllib.clustering.dbscan.DBSCAN$$anonfun$16.apply(DBSCAN.scala:192)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
at scala.collection.AbstractIterator.to(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:936)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:936)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2062)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2062)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Is that a feature or a bug? :)
Thanks & have a good day !

Unable to implement dbscan in spark.

I am trying to implement DBSCAN in Spark but I am facing lots of issues. Can anyone please tell me how to implement DBSCAN in Spark with Scala? I include my main class and POM.xml below. Please tell me if any version is wrong or any dependency is missing.

This is My main class:

package DBScanClustering

import org.apache.spark.mllib.clustering.dbscan.DBSCAN
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
import org.apache.spark.mllib.linalg.Vectors

object MyMain {

  def main(args: Array[String]) {

    val conf = new SparkConf().setAppName("DBSCAN Sample")
    val sc = new SparkContext(conf)
    val data = sc.textFile("hdfs://localhost:8020/DBScan/labeled_data.csv")
    val parsedData = data.map(s => Vectors.dense(s.split(',').map(_.toDouble))).cache()

    val model = DBSCAN.train(
      parsedData,
      eps = 0.6,
      minPoints = 5,
      maxPointsPerPartition = 8)

    model.labeledPoints.map(p => s"${p.x},${p.y},${p.cluster}").saveAsTextFile("hdfs://localhost:8020/DBScan/OPTS")

    sc.stop()
  }
}

and this is my POM.xml

<modelVersion>4.0.0</modelVersion>

<groupId>com.irvingc.spark</groupId>
<artifactId>dbscan-on-spark_2.10</artifactId>
<version>0.2.0-SNAPSHOT</version>

<name>Distributed DBSCAN on Apache Spark</name>
<url>http://www.irvingc.com/dbscan-on-spark</url>
<packaging>jar</packaging>

<scm>
	<connection>scm:git:https://github.com/irvingc/dbscan-on-spark</connection>
	<developerConnection>scm:git:[email protected]:irvingc/dbscan-on-spark.git</developerConnection>
	<url>https://github.com/irvingc/dbscan-on-spark</url>
</scm>

<developers>
	<developer>
		<id>irvingc</id>
		<name>Irving Cordova</name>
		<email>[email protected]</email>
		<url>http://www.irvingc.com</url>
	</developer>
</developers>
<properties>
	<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
	<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
	<scala.version>2.10.4</scala.version>
	<scala.binary.version>2.10</scala.binary.version>
	<scalatest.version>2.2.1</scalatest.version>
	<spark.version>2.1.0</spark.version>
</properties>

<repositories>
	<repository>
		<id>central</id>
		<!-- This should be at top, it makes maven try the central repo first 
			and then others and hence faster dep resolution -->
		<name>Maven Repository</name>
		<url>https://repo1.maven.org/maven2</url>
		<releases>
			<enabled>true</enabled>
		</releases>
		<snapshots>
			<enabled>false</enabled>
		</snapshots>
	</repository>
	<repository>
		<id>dbscan-on-spark-repo</id>
		<name>Repo for DBSCAN on Spark</name>
		<url>http://dl.bintray.com/irvingc/maven</url>
	</repository>
	<repository>
		<id>archery-repo</id>
		<name>Repo for Archery R-Tree</name>
		<url>http://dl.bintray.com/meetup/maven</url>
	</repository>
</repositories>

<dependencies>
<dependency>
		<groupId>org.scala-lang</groupId>
		<artifactId>scala-library</artifactId>
		<version>2.10.4</version>
	</dependency> 
	
 <dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-compiler</artifactId>
    <version>2.10.4</version>
</dependency>
			
	<dependency>
		<groupId>org.apache.spark</groupId>
		<artifactId>spark-mllib_2.10</artifactId>
		<version>1.3.0</version>
	</dependency>
	<dependency>
		<groupId>com.meetup</groupId>
		<artifactId>archery_2.10</artifactId>
		<version>0.3.0</version>
	</dependency>
	<dependency>
		<groupId>org.scalatest</groupId>
		<artifactId>scalatest_2.10</artifactId>
		<version>3.2.0-SNAP5</version>
		<scope>test</scope>
	</dependency>
	<dependency>
	<groupId>com.irvingc.spark</groupId>
	<artifactId>dbscan_2.10</artifactId>
	<version>0.1.0</version>
</dependency>

	 <dependency>     
	 <groupId>org.apache.spark</groupId>   
	   <artifactId>spark-core_2.10</artifactId>
	        <version>1.3.0</version>
	         </dependency>
	         
	         <dependency>
		<groupId>org.apache.spark</groupId>
		<artifactId>spark-core_2.11</artifactId>
		<version>2.2.1</version>
	</dependency>
	
	<dependency>
		<groupId>org.apache.spark</groupId>
		<artifactId>spark-sql_2.11</artifactId>
		<version>2.2.1</version>
	</dependency>
	
</dependencies>

<build>
	<plugins>
		<plugin>
			<groupId>net.alchim31.maven</groupId>
			<artifactId>scala-maven-plugin</artifactId>
			<version>3.2.0</version>
			<executions>
				<execution>
					<goals>
						<goal>compile</goal>
						<goal>testCompile</goal>
					</goals>
				</execution>
			</executions>
		</plugin>
		<plugin>
			<groupId>org.scalastyle</groupId>
			<artifactId>scalastyle-maven-plugin</artifactId>
			<version>0.6.0</version>
			<configuration>
				<verbose>false</verbose>
				<failOnViolation>true</failOnViolation>
				<includeTestSourceDirectory>true</includeTestSourceDirectory>
				<failOnWarning>true</failOnWarning>
				<sourceDirectory>${basedir}/src/main/scala</sourceDirectory>
				<testSourceDirectory>${basedir}/src/test/scala</testSourceDirectory>
				<configLocation>${basedir}/scalastyle-config.xml</configLocation>
				<outputFile>${project.build.directory}/scalastyle-output.xml</outputFile>
			</configuration>
			<executions>
				<execution>
					<goals>
						<goal>check</goal>
					</goals>
				</execution>
			</executions>
		</plugin>
		<plugin>
			<groupId>org.scalatest</groupId>
			<artifactId>scalatest-maven-plugin</artifactId>
			<version>1.0</version>
			<configuration>
				<reportsDirectory>${project.build.directory}/scalatest-reports</reportsDirectory>
				<junitxml>.</junitxml>
			</configuration>
			<executions>
				<execution>
					<id>test</id>
					<goals>
						<goal>test</goal>
					</goals>
				</execution>
			</executions>
		</plugin>
		
		<plugin>
    <artifactId>maven-assembly-plugin</artifactId>
    <version>3.0.0</version>
    <configuration>
      <descriptorRefs>
        <descriptorRef>jar-with-dependencies</descriptorRef>
      </descriptorRefs>
      <archive>
        <manifest>
          <mainClass>DBScanClustering.MyMain</mainClass>     // specify your main class here
        </manifest>
      </archive>
    </configuration>
    <executions>
      <execution>
        <id>make-assembly</id> <!-- this is used for inheritance merges -->
        <phase>package</phase> <!-- bind to the packaging phase -->
        <goals>
          <goal>single</goal>
        </goals>
      </execution>
    </executions>
  </plugin>
	</plugins>
</build>
