
dcmonitor's People

Contributors

flowbehappy, qleelulu, sundy-li


dcmonitor's Issues

Questions about ZK cluster and Kafka monitoring

Can multiple ZooKeeper clusters be configured? For example, a cluster on port 2181 and another on port 2191.
Also, the ZK monitoring displays a lot of "unknown" entries and I don't know why.
Can the Kafka monitoring support Kafka 0.7?

Thanks.

kafka message rate data doesn't show up in UI

Hi,

May I ask for a favor?

I deployed this project today and found the error below. Please advise whether I configured something wrong or this is a bug. Thanks.

2016-04-29 12:12:18,149 ERROR [Thread-3] kafka.consumer.TopicCount$ - error parsing consumer json string events
kafka.common.KafkaException: error constructing TopicCount : events
at kafka.consumer.TopicCount$.constructTopicCount(TopicCount.scala:76)
at kafka.utils.ZkUtils$$anonfun$getConsumersPerTopic$1.apply(ZkUtils.scala:671)
at kafka.utils.ZkUtils$$anonfun$getConsumersPerTopic$1.apply(ZkUtils.scala:670)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.utils.ZkUtils$.getConsumersPerTopic(ZkUtils.scala:670)
at kafka.utils.ZkUtils.getConsumersPerTopic(ZkUtils.scala)
at com.sf.monitor.kafka.KafkaInfos.getActiveTopicMap(KafkaInfos.java:299)
at com.sf.monitor.kafka.KafkaStats.fetchKafkaPartitionInfos(KafkaStats.java:32)
at com.sf.monitor.kafka.KafkaStats.fetchCurrentInfos(KafkaStats.java:24)
at com.sf.monitor.kafka.KafkaInfoFetcher$1.run(KafkaInfoFetcher.java:41)
2016-04-29 12:12:18,149 ERROR [Thread-3] com.sf.monitor.kafka.KafkaInfos - could not get consumers for group messagegroup
kafka.common.KafkaException: error constructing TopicCount : events
at kafka.consumer.TopicCount$.constructTopicCount(TopicCount.scala:76)
at kafka.utils.ZkUtils$$anonfun$getConsumersPerTopic$1.apply(ZkUtils.scala:671)
at kafka.utils.ZkUtils$$anonfun$getConsumersPerTopic$1.apply(ZkUtils.scala:670)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.utils.ZkUtils$.getConsumersPerTopic(ZkUtils.scala:670)
at kafka.utils.ZkUtils.getConsumersPerTopic(ZkUtils.scala)
at com.sf.monitor.kafka.KafkaInfos.getActiveTopicMap(KafkaInfos.java:299)
at com.sf.monitor.kafka.KafkaStats.fetchKafkaPartitionInfos(KafkaStats.java:32)
at com.sf.monitor.kafka.KafkaStats.fetchCurrentInfos(KafkaStats.java:24)
at com.sf.monitor.kafka.KafkaInfoFetcher$1.run(KafkaInfoFetcher.java:41)
2016-04-29 12:12:20,222 WARN [Thread-3] com.sf.monitor.kafka.KafkaStats - kafka - topic:[events],consumer:[kefu-statistic] - consum lag: current[1904],threshold[200], topic lag illegal!
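
The parse failure happens on whatever is stored under /consumers/messagegroup/ids: the message "error parsing consumer json string events" suggests the registration data is the bare string "events" rather than the JSON layout that 0.8-style high-level consumers write, which is typically what an older or custom client leaves behind. A hedged diagnostic sketch (the group name and ZooKeeper address are placeholders) that simply dumps the raw znode data so you can see what the parser is choking on:

import java.nio.charset.StandardCharsets;
import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.serialize.BytesPushThroughSerializer;

// Hedged diagnostic: dump the raw consumer-registration data that
// constructTopicCount fails to parse. Group name and ZK address are examples.
public class ConsumerRegistrationDump {
  public static void main(String[] args) {
    ZkClient zk = new ZkClient("localhost:2181", 10000, 10000, new BytesPushThroughSerializer());
    try {
      for (String id : zk.getChildren("/consumers/messagegroup/ids")) {
        byte[] data = zk.readData("/consumers/messagegroup/ids/" + id, true);
        System.out.println(id + " -> "
            + (data == null ? "null" : new String(data, StandardCharsets.UTF_8)));
      }
    } finally {
      zk.close();
    }
  }
}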

Problem monitoring the Kafka cluster

When there are currently no active topics, startup throws the following exception:
.671: [GC [PSYoungGen: 268361K->20976K(271872K)] 292396K->56096K(354816K), 0.0225720 secs] [Times: user=0.03 sys=0.00, real=0.02 secs]
[2015-10-13 17:04:15.781] boot - 13404 ERROR [Thread-3] --- KafkaStats:
java.lang.NullPointerException: null value in entry: topic=null
at com.google.common.collect.CollectPreconditions.checkEntryNotNull(CollectPreconditions.java:33)
at com.google.common.collect.ImmutableMap.entryOf(ImmutableMap.java:135)
at com.google.common.collect.ImmutableMap.of(ImmutableMap.java:99)
at com.sf.monitor.kafka.KafkaStats.createPoint(KafkaStats.java:100)
at com.sf.monitor.kafka.KafkaStats.parsePartitionInfos(KafkaStats.java:80)
at com.sf.monitor.kafka.KafkaStats.fetchStormKafkaPartitionInfos(KafkaStats.java:50)
at com.sf.monitor.kafka.KafkaStats.fetchCurrentInfos(KafkaStats.java:25)
at com.sf.monitor.kafka.KafkaInfoFetcher$1.run(KafkaInfoFetcher.java:41)
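
For illustration only (this is not the actual KafkaStats code): ImmutableMap.of at the top of the stack rejects null keys and values, so when there are no active topics the tag values need a fallback before the map is built. A minimal sketch of that kind of guard, with illustrative names:

import com.google.common.collect.ImmutableMap;
import java.util.Map;

// Hypothetical guard; the real fix would live in KafkaStats.createPoint.
public class NullTopicGuardSketch {
  static Map<String, String> tagsFor(String topic, String consumer) {
    // ImmutableMap.of throws exactly the NPE above on null values,
    // so fall back to a placeholder when no active topic exists yet.
    String safeTopic = (topic == null) ? "unknown" : topic;
    String safeConsumer = (consumer == null) ? "unknown" : consumer;
    return ImmutableMap.of("topic", safeTopic, "consumer", safeConsumer);
  }

  public static void main(String[] args) {
    System.out.println(tagsFor(null, "messagegroup")); // {topic=unknown, consumer=messagegroup}
  }
}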

InfluxDB will be replaced in the next release

I'm really tired of InfluxDB!
In my experience it is very unstable, uses a lot of resources (CPU, disk IO), and its Java client is not developer friendly. Since DCMonitor is meant to be a lightweight tool, it should be easy to use and maintain and frugal with resources, but relying on InfluxDB makes that difficult.
Luckily, the history metric storage is only used like a key-value store with a time range and group-by, and plenty of storage systems can handle that.
Maybe good old MySQL is a nice option.
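
As a very rough sketch of that idea (the table name, columns, and connection details below are made up, not a committed schema), the whole access pattern fits in a couple of plain JDBC statements against MySQL:

import java.sql.*;

// Minimal MySQL-backed metric store sketch. Assumes mysql-connector-java on
// the classpath and a local database named "dcmonitor"; all names are examples.
public class MysqlMetricStoreSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection(
        "jdbc:mysql://localhost:3306/dcmonitor", "root", "")) {
      try (Statement st = conn.createStatement()) {
        st.execute("CREATE TABLE IF NOT EXISTS metric_points ("
            + "name VARCHAR(64), tags VARCHAR(255), metric_value DOUBLE, ts TIMESTAMP, "
            + "KEY idx_name_ts (name, ts))");
      }
      // Write one point.
      try (PreparedStatement ps = conn.prepareStatement(
          "INSERT INTO metric_points (name, tags, metric_value, ts) VALUES (?, ?, ?, ?)")) {
        ps.setString(1, "kafka_consume.lag");
        ps.setString(2, "topic=events,consumer=messagegroup");
        ps.setDouble(3, 1904);
        ps.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
        ps.executeUpdate();
      }
      // Time-range query with group-by: the only access pattern the UI needs.
      try (PreparedStatement ps = conn.prepareStatement(
          "SELECT tags, MAX(metric_value) FROM metric_points "
          + "WHERE name = ? AND ts BETWEEN ? AND ? GROUP BY tags")) {
        ps.setString(1, "kafka_consume.lag");
        ps.setTimestamp(2, new Timestamp(System.currentTimeMillis() - 15 * 60_000L));
        ps.setTimestamp(3, new Timestamp(System.currentTimeMillis()));
        try (ResultSet rs = ps.executeQuery()) {
          while (rs.next()) {
            System.out.println(rs.getString(1) + " -> " + rs.getDouble(2));
          }
        }
      }
    }
  }
}

Whether MySQL actually wins depends on retention and write volume, but for a lightweight monitor the operational simplicity is the main argument.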

Exception when monitoring Storm consuming from Kafka

If I configure a wrong zkroot path, the following exception is reported:
[2015-10-20 16:26:08.573] boot - 2642 WARN [Thread-3] --- DCMZkUtils: read children of [/storm_kafka_comsumerx] from zookeeper failed!
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /storm_kafka_comsumerx
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1590)
at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:214)
at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:203)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107)
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.pathInForeground(GetChildrenBuilderImpl.java:199)
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:191)
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:38)
at com.sf.monitor.utils.DCMZkUtils.getZKChildren(DCMZkUtils.java:72)
at com.sf.monitor.kafka.KafkaInfos.getStormKafkaClients(KafkaInfos.java:348)
at com.sf.monitor.kafka.KafkaStats.fetchStormKafkaPartitionInfos(KafkaStats.java:48)
at com.sf.monitor.kafka.KafkaStats.fetchCurrentInfos(KafkaStats.java:25)
at com.sf.monitor.kafka.KafkaInfoFetcher$1.run(KafkaInfoFetcher.java:41)
But my exception looks like this:
Oct 20, 2015 3:13:12 PM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is java.lang.NullPointerException] with root cause
java.lang.NullPointerException
at com.sf.monitor.kafka.KafkaInfos.getStormKafkaClients(KafkaInfos.java:353)
at com.sf.monitor.controllers.KafkaController.stormkafka(KafkaController.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:215)
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:132)
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:749)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:689)
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:83)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:938)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:870)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:961)
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:852)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:620)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:837)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:77)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:108)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:501)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:683)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1040)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1721)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1679)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Looking at the source code, the zkroot has already been read; the code that fails is at line 354 of KafkaInfos.java.
[screenshot omitted]
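
Not a definitive fix, but a hedged reading of the failure: the DCMZkUtils warning suggests the children read under the configured zkroot can come back null (or a per-client read below it can), and getStormKafkaClients then dereferences that result. A defensive sketch with hypothetical helper names:

import java.util.Collections;
import java.util.List;

// Illustrative only; the real guard would belong inside KafkaInfos.getStormKafkaClients.
public class StormKafkaClientsSketch {
  static List<String> safeChildren(List<String> children) {
    // Treat a failed/empty ZooKeeper read as "no storm-kafka clients" instead of NPE-ing.
    return children == null ? Collections.<String>emptyList() : children;
  }

  public static void main(String[] args) {
    for (String client : safeChildren(null)) {
      System.out.println(client); // never reached when the zkroot has no children
    }
    System.out.println("no storm-kafka clients found");
  }
}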

"stormKafkaRoot": "/storm_kafka" error when i ran this project as it is

My question may be silly; I'm new to Kafka and I'm trying to understand how it works. I created /storm_kafka on a standalone Kafka setup and ran the specified steps, but it keeps failing. Can you please help me get rid of this issue?

Could you provide details on what exactly I'm missing in the configuration or in running this project?

My configuration is as below.
I'm using standalone Kafka: kafka_2.12-2.0.0

cd /refresh/home/Software/kafka_2.12-2.0.0
bin/zookeeper-server-start.sh config/zookeeper.properties (default ZooKeeper)
bin/kafka-server-start.sh config/server.properties (zookeeper.connect=localhost:2181/storm_kafka)
I also created the /storm_kafka node under the root path.

Topic creation:
bin/kafka-topics.sh --zookeeper localhost:2181/storm_kafka --create --topic Test1 --partitions 2 --replication-factor 1

org.I0Itec.zkclient.exception.ZkNoNodeException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /consumers
at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:47)
at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:685)
at org.I0Itec.zkclient.ZkClient.getChildren(ZkClient.java:413)
at org.I0Itec.zkclient.ZkClient.getChildren(ZkClient.java:409)
at kafka.utils.ZkUtils$.getChildren(ZkUtils.scala:468)
at kafka.utils.ZkUtils.getChildren(ZkUtils.scala)
at com.sf.monitor.kafka.KafkaInfos.getActiveTopicMap(KafkaInfos.java:293)
at com.sf.monitor.kafka.KafkaStats.fetchKafkaPartitionInfos(KafkaStats.java:32)
at com.sf.monitor.kafka.KafkaStats.fetchCurrentInfos(KafkaStats.java:24)
at com.sf.monitor.kafka.KafkaInfoFetcher$1.run(KafkaInfoFetcher.java:41)
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /consumers
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1472)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1500)
at org.I0Itec.zkclient.ZkConnection.getChildren(ZkConnection.java:99)
at org.I0Itec.zkclient.ZkClient$2.call(ZkClient.java:416)
at org.I0Itec.zkclient.ZkClient$2.call(ZkClient.java:413)
at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:675)
... 8 more
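
A likely explanation, offered tentatively: with zookeeper.connect=localhost:2181/storm_kafka the broker keeps its whole registry under that chroot, so /consumers has to be looked up relative to it; and with kafka_2.12-2.0.0 the modern consumer stores offsets in the __consumer_offsets topic rather than in ZooKeeper, so /consumers may never be created at all unless an old-style (ZooKeeper-based) consumer group registers. A small sketch (local addresses are examples) to check where, if anywhere, the node actually exists:

import org.I0Itec.zkclient.ZkClient;

// Hedged diagnostic: look for /consumers with and without the /storm_kafka chroot.
public class ConsumersPathCheck {
  public static void main(String[] args) {
    ZkClient plain = new ZkClient("localhost:2181", 10000, 10000);
    ZkClient chrooted = new ZkClient("localhost:2181/storm_kafka", 10000, 10000);
    try {
      System.out.println("/storm_kafka/consumers exists (no chroot): "
          + plain.exists("/storm_kafka/consumers"));
      System.out.println("/consumers exists (with chroot): "
          + chrooted.exists("/consumers"));
    } finally {
      plain.close();
      chrooted.close();
    }
  }
}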

How to notify by email once Kafka lag exceeds a specific size

Hi shunfei,

I tried to configure the lag notification as below, but it failed: I don't get any email and I'm not sure what I missed. Would you please advise how to configure it? Thanks.

"kafka": {
"warning": true,
"warnDefaultLag": 1000,
"warnLagSpec": {
"test|statistics": 1000
},
"@ignoreConsumerRegex": "set ignoreConsumerRegex to ignore sending warning on those test consumers",
"ignoreConsumerRegex": "^console-consumer-.+$",
"stormKafkaRoot": "/storm_kafka"
},
"notify": {
"doSend": true,
"appName": "kafka",
"url": "http://notify.com",
"emails": ["[email protected]"],
"phones": []
},
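
One thing worth checking, since the fetcher clearly detects the lag (the "topic lag illegal!" warning in an earlier report comes from the same check): the email presumably only goes out if the notify block's doSend is true and the configured url accepts the alert request. A hedged connectivity sketch; the JSON payload below is an invented example, not DCMonitor's actual notify format:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Verify the notify endpoint from the config is reachable and accepts a POST.
public class NotifyEndpointCheck {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://notify.com"); // the "url" value from the notify config
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    conn.setRequestProperty("Content-Type", "application/json");
    byte[] body = "{\"appName\":\"kafka\",\"message\":\"test alert\"}" // made-up payload
        .getBytes(StandardCharsets.UTF_8);
    try (OutputStream os = conn.getOutputStream()) {
      os.write(body);
    }
    // Anything other than 2xx points at the notify service rather than DCMonitor.
    System.out.println("notify endpoint responded with HTTP " + conn.getResponseCode());
  }
}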

Exception when monitoring Storm consuming from Kafka

Monitoring Storm's Kafka consumption reports the following exception:
Oct 20, 2015 3:13:12 PM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is java.lang.NullPointerException] with root cause
java.lang.NullPointerException
at com.sf.monitor.kafka.KafkaInfos.getStormKafkaClients(KafkaInfos.java:353)
at com.sf.monitor.controllers.KafkaController.stormkafka(KafkaController.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:215)
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:132)
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:749)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:689)
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:83)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:938)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:870)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:961)
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:852)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:620)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:837)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:77)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:108)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:501)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:683)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1040)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1721)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1679)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

ZK address configuration issue

Kafka and ZK use the same address config (adds).
For example, Kafka's storage path is xx.com:2181/file/path

This address is also used for the ZK monitoring, where parsing it fails.

Please consider the case where Kafka data is not stored under the default ZK path.
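
A small sketch of the splitting this implies, assuming the usual host:port[,host:port]/chroot form of a ZooKeeper connect string: the host part can be reused for plain ZK monitoring, while the chroot must be prefixed onto the Kafka paths.

// Illustrative only; not DCMonitor's actual config parsing.
public class ZkConnectSplitSketch {
  public static void main(String[] args) {
    String connect = "xx.com:2181/file/path";
    int slash = connect.indexOf('/');
    String hosts = slash < 0 ? connect : connect.substring(0, slash);
    String chroot = slash < 0 ? "/" : connect.substring(slash);
    System.out.println("hosts:  " + hosts);  // xx.com:2181 -> usable for plain ZK monitoring
    System.out.println("chroot: " + chroot); // /file/path  -> prefix for Kafka znode paths
  }
}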

Error when writing to InfluxDB

Hi,
I've installed DCMonitor and InfluxDB 0.9 RC30.
Looking at the logs I see:
2015-05-14 18:11:29,369 ERROR [Thread-3] com.sf.influxdb.impl.InfluxDBImpl - database: [dcmonitor], retentionPolicy: [], points: [16]
java.lang.RuntimeException: {"error":"field "size" is type string, mapped as type number"}

2015-05-14 18:11:34,457 ERROR [Thread-3] com.sf.influxdb.impl.InfluxDBImpl - database: [dcmonitor], retentionPolicy: [], points: [16]
java.lang.RuntimeException: {"error":"field "offs" is type string, mapped as type number"}

and so on

Can you please specify which InfluxDB version (RC version) you used?

Thanks
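
Regardless of the exact RC, the error itself presumably means the "size" and "offs" fields had already been mapped as numbers and a later point sent them as strings; InfluxDB 0.9 does not let a field change type. A toy illustration of the difference, with plain maps standing in for whatever point structure the com.sf.influxdb client uses:

import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: the same field written as a string vs. as a number.
public class FieldTypeSketch {
  public static void main(String[] args) {
    String rawSize = "1904"; // example value held as a string
    Map<String, Object> conflicting = new LinkedHashMap<>();
    conflicting.put("size", rawSize);                // string field -> type conflict on write
    Map<String, Object> compatible = new LinkedHashMap<>();
    compatible.put("size", Long.parseLong(rawSize)); // numeric field -> matches the mapped type
    System.out.println("conflicting: " + conflicting);
    System.out.println("compatible:  " + compatible);
  }
}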

Graph not displayed

Hi,
I've updated my version of DCMonitor and can now see the Druid section, which is really awesome.
Now I have an issue with the Kafka graphs: they are not displayed.
My current version of InfluxDB is 0.9.0-rc30.
Looking at the InfluxDB logs I can see that reads and writes happen, but running the query produces an error like the one below:

select size, offs, lag from kafka_metrics where consumer='druid' and topic='buck_bidding' and partition='-1' and time >= '2015-07-23T15:25:32.665Z' and time <= '2015-07-23T15:40:32.665Z' group by topic, consumer
ERR: measurement not found: "dcmonitor"..kafka_metrics (/root/.gvm/pkgsets/go1.4.2/global/src/github.com/influxdb/influxdb/tx.go:60)

Looking at the tables, I don't see kafka_metrics; instead I see kafka_consume.

select size, offs, lag from kafka_consume where consumer='druid' and topic='buck_bidding' and partition='-1' and time >= '2015-07-23T15:25:32.665Z' and time <= '2015-07-23T15:40:32.665Z' group by topic, consumer
name: kafka_consume
tags: consumer=druid, topic=buck_bidding
time size offs lag

How can I fix this issue?

Thanks
Maurizio

Druid metrics not showing up in Prometheus

Hi, we have set up DCMonitor for Druid and configured Prometheus to scrape metrics from DCMonitor, but it seems none of the Druid metrics are showing up in the Prometheus metrics list, even though the DCMonitor UI shows all Druid instances running. Please advise.
