
kube-iptables-tailer's Introduction

kube-iptables-tailer


kube-iptables-tailer is a service that gives you better visibility into networking issues in your Kubernetes cluster by detecting traffic denied by iptables and surfacing the corresponding information to the affected Pods via Kubernetes events.

kube-iptables-tailer itself runs as a Pod in your cluster and watches for changes to the iptables log file mounted from the host. If traffic from or to a Pod is denied by your iptables rules, iptables drops the packet and records a log entry on the host with the relevant information. kube-iptables-tailer detects these changes and then tries to locate both the sender and the receiver (as running Pods in your cluster) by their IPs. For IPs that do not match any Pod in your cluster, a DNS lookup is performed to identify the subjects involved in the packet drop.

As a result, kube-iptables-tailer submits an event in near real time to each Pod it locates successfully inside your cluster. Pod owners can then learn about iptables packet drops simply by running the following command:

$ kubectl describe pods --namespace=YOUR_NAMESPACE

...
Events:
  FirstSeen   LastSeen   Count   From                   Type      Reason       Message
  ---------   --------   -----   ----                   ----      ------       -------
  1h          5s         10      kube-iptables-tailer   Warning   PacketDrop   Packet dropped when receiving traffic from example-service-2 (22.222.22.222) on port 5678/TCP.
  3h          2m         5       kube-iptables-tailer   Warning   PacketDrop   Packet dropped when sending traffic to example-service-1 (11.111.11.111) on port 1234/TCP.

NOTE: The content shown under the From, Reason, and Message columns in the output above can be configured in your container spec. Refer to the corresponding environment variables below for a more detailed explanation.

Requirements

Installation

Download the source code package:

$ git clone git@github.com:box/kube-iptables-tailer.git

Build the container from the source code (make sure you have Docker running):

$ cd <path-to-the-source-code>
$ make container

Usage

Setup iptables Log Prefix

kube-iptables-tailer uses the log-prefix defined in your iptables chains to parse the corresponding packet-drop logs. You can set the log-prefix by executing the following command (root permission may be required):

$ iptables -A CHAIN_NAME -j LOG --log-prefix "EXAMPLE_LOG_PREFIX: "

Any packet dropped by this chain will be logged with the given log prefix:

2019-02-04T10:10:12.345678-07:00 hostname EXAMPLE_LOG_PREFIX: SRC=SOURCE_IP DST=DESTINATION_IP ...

For more information on the iptables command, refer to its Linux man page.
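For example, a chain that logs and then drops might be set up as follows (the chain name here is illustrative; use your own):

$ iptables -N EXAMPLE_CHAIN
$ iptables -A EXAMPLE_CHAIN -j LOG --log-prefix "EXAMPLE_LOG_PREFIX: "
$ iptables -A EXAMPLE_CHAIN -j DROP

Because the LOG target does not terminate rule traversal, a packet hits the LOG rule first and is then dropped by the following rule, producing exactly one log entry per dropped packet.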

Mounting iptables Log File

The parent directory of your iptables log file needs to be mounted for kube-iptables-tailer to handle log rotation properly. If you mount only the log file itself, the service cannot pick up new content after the file is rotated. This is because files are mounted into the container by inode number, which stays the same even when the file is renamed on the host (as usually happens during rotation). kube-iptables-tailer also keeps a fingerprint of the current log file, both to handle rotation and to avoid re-reading the entire file every time its content is updated.
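As an illustration, mounting the parent directory with a hostPath volume might look like the following sketch (the paths are examples; adjust them to your host's layout):

volumes:
  - name: iptables-logs
    hostPath:
      path: /var/log              # mount the parent directory, not the file
containers:
  - name: kube-iptables-tailer
    volumeMounts:
      - name: iptables-logs
        mountPath: /var/log/host
        readOnly: true

With this layout, IPTABLES_LOG_PATH (see below) would point at /var/log/host/iptables.log inside the container.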

Container Spec

We suggest running kube-iptables-tailer as a DaemonSet in your cluster. An example YAML spec can be found in demo/.

Environment Variables

Required:

  • IPTABLES_LOG_PATH or JOURNAL_DIRECTORY: (string) Absolute path to your iptables log file, or journald directory including the full path.
  • IPTABLES_LOG_PREFIX: (string) Log prefix defined in your iptables chains. The service will only handle the logs matching this log prefix exactly.

Optional:

  • KUBE_API_SERVER: (string) Address of the Kubernetes API server. By default, the discovery of the API server is handled by kube-proxy. If kube-proxy is not set up, the API server address must be specified with this environment variable. Authentication to the API server is handled by service account tokens. See Accessing the Cluster for more info.
  • KUBE_EVENT_DISPLAY_REASON: (string, default: PacketDrop) Brief, UpperCamelCase text shown under the Reason section of the events sent by this service.
  • KUBE_EVENT_SOURCE_COMPONENT_NAME: (string, default: kube-iptables-tailer) Name shown under the From section to indicate the source of the Kubernetes event.
  • METRICS_SERVER_PORT: (int, default: 9090) Port for the service to host its metrics.
  • PACKET_DROP_CHANNEL_BUFFER_SIZE: (int, default: 100) Size of the internal channel buffering packet drops awaiting processing. You may need to increase this value if you have a high rate of packet drops being recorded.
  • PACKET_DROP_EXPIRATION_MINUTES: (int, default: 10) Expiration of a packet drop in minutes. Any dropped packet log entries older than this duration will be ignored.
  • REPEATED_EVENTS_INTERVAL_MINUTES: (int, default: 2) Interval of ignoring repeated packet drops in minutes. Any dropped packet log entries with the same source and destination will be ignored if already submitted once within this time period.
  • WATCH_LOGS_INTERVAL_SECONDS: (int, default: 5) Interval of detecting log changes in seconds.
  • POD_IDENTIFIER: (string, default: namespace) How to identify pods in the logs. name, label, namespace or name_with_namespace are currently supported. If label, uses the value of the label key specified by POD_IDENTIFIER_LABEL.
  • POD_IDENTIFIER_LABEL: (string) Pod label key with which to identify pods if POD_IDENTIFIER is set to label. If this label doesn't exist on the pod, the pod name is used instead.
  • PACKET_DROP_LOG_TIME_LAYOUT: (string) Go time layout used to parse the log timestamp.
  • LOG_LEVEL: (string, default: info) Log level. debug, info, warn, error are currently supported.
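For illustration, the environment section of a container spec using a few of these variables might look like the following sketch (the values are examples only; the log path and prefix must match your own setup):

env:
  - name: IPTABLES_LOG_PATH
    value: /var/log/host/iptables.log   # or set JOURNAL_DIRECTORY for journald
  - name: IPTABLES_LOG_PREFIX
    value: "EXAMPLE_LOG_PREFIX:"        # must match the --log-prefix of your iptables rule
  - name: REPEATED_EVENTS_INTERVAL_MINUTES
    value: "2"                          # Kubernetes env values are strings, so quote numbers
  - name: POD_IDENTIFIER
    value: namespace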

Metrics

Metrics are exposed in Prometheus format, hosted on the service's web server at /metrics. The service exports a counter named packet_drops_count with the following labels:

  • src: The namespace of the sender Pod involved in a packet drop.
  • dst: The namespace of the receiver Pod involved in a packet drop.
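For example, scraping the endpoint might yield counter lines like the following (the namespaces here are illustrative):

$ curl -s localhost:9090/metrics | grep packet_drops_count
packet_drops_count{src="namespace-a",dst="namespace-b"} 12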

Logging

Logging uses the zap library to provide a structured log output.

Contribution

All contributions are welcome to this project! Please review our contributing guidelines to facilitate the process of getting your contribution merged.

Support

Need to contact us directly? Email [email protected] and be sure to include the name of this project in the subject.

Copyright and License

Copyright 2019 Box, Inc. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

kube-iptables-tailer's People

Contributors

7felf, christophlehmann, dilyar85, jackkleeman, kairen, kgtw, mjlshen


kube-iptables-tailer's Issues

How to add iptables logging rules using calico ?

Hi Guys,

Nice project! I would like to try it out in my environment, but I am having trouble configuring the iptables LOG rule.

We use Calico for networking. Calico's default behavior is to insert its iptables rules before any host rules and to re-check them regularly, so after we configure the needed rules manually, Calico pushes them to the end of the chain or even removes them... in any case, nothing is being logged.

So how should iptables logging rules be added when using Calico?

It might be a good idea to document this better.

Kind regards,
Vlad

Still maintained?

Is this project still maintained? I see that the last few people who managed PRs here no longer work at Box.

fix_journal_watcher_cgo fails to compile

It looks like #23 introduced a bug in fix_journal_watcher_cgo. Specifically, DefaultPacketDropLogTimeLayout needs to reference the utils package:

make build-cgo

...

# github.com/box/kube-iptables-tailer/drop
drop/journal_watcher_cgo.go:46:13: undefined: DefaultPacketDropLogTimeLayout
godep: go exit status 2
make: *** [Makefile:18: build-cgo] Error 1

Validation error for default demo/daemonset.yaml

If you try to use it out of the box, spec.selector must be defined first.

kubectl apply -f demo/daemonset.yaml --validate --dry-run

error: error validating "demo/daemonset.yaml": error validating data: ValidationError(DaemonSet.spec): missing required field "selector" in io.k8s.api.apps.v1.DaemonSetSpec; if you choose to ignore these errors, turn validation off with --validate=false
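A sketch of the missing field, assuming the Pod template carries a matching label (the label key and value here are illustrative):

spec:
  selector:
    matchLabels:
      app: kube-iptables-tailer   # must match .spec.template.metadata.labels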

Install instructions are outdated

Hello, I tried to install kube-iptables-tailer following the instructions in the README, but ran into the following problems:

  • The command git clone github.com/box/kube-iptables-tailer fails. The correct command is git clone https://github.com/box/kube-iptables-tailer.git
  • The command cd $GOPATH/src/github.com/box/kube-iptables-tailer fails with "No such file or directory". The correct command is cd kube-iptables-tailer
  • I do not know what "CHAIN_NAME" to use in the command iptables -A CHAIN_NAME -j LOG --log-prefix "EXAMPLE_LOG_PREFIX: "

What's the proper chain to use when adding logging for use with this project?

The setup instructions in the README just have this: iptables -A CHAIN_NAME -j LOG --log-prefix "EXAMPLE_LOG_PREFIX: "

What is typically used as CHAIN_NAME? I assume this should be a chain with a policy of DROP, as a chain with a policy of ACCEPT would just result in logging everything that passes through, and this project would assume those are all dropped packets, no?

Also, should this rule be added to the filter table or would there be a reason to add it to the nat table instead?

What's the standard chain name to use? We have a pretty basic Kubernetes setup with the following chains:

*filter
:INPUT ACCEPT [7233:2389351]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [7974:2056167]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-SERVICES - [0:0]

Is FORWARD typically the best place to add our LOG rule for this project?
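For reference, one placement consistent with the reasoning above (a sketch, not project guidance): append a LOG rule as the last rule of the FORWARD chain, so that only packets that matched no earlier rule, and are therefore about to be dropped by the chain's DROP policy, get logged:

$ iptables -A FORWARD -j LOG --log-prefix "EXAMPLE_LOG_PREFIX: "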

Errors will happen when upgrading the libraries

(The purpose of this report is to alert box/kube-iptables-tailer to possible problems when it tries to upgrade the following dependencies.)

An error will happen when upgrading the libraries coreos/go-systemd and prometheus/client_golang.

For example:

github.com/coreos/go-systemd

  • Latest version: v22.1.0 (latest commit b51e752, 26 days ago)
  • Where it is used: https://github.com/box/kube-iptables-tailer/search?l=Go&q=github.com%2Fcoreos%2Fgo-systemd
  • Detail:

github.com/coreos/go-systemd/go.mod

module github.com/coreos/go-systemd/v22
go 1.12
require github.com/godbus/dbus/v5 v5.0.3 

github.com/coreos/go-systemd/sdjournal/functions.go

package sdjournal
import (
	"github.com/coreos/go-systemd/v22/internal/dlopen"
	…
) 

This problem was introduced in coreos/go-systemd v22.0.0. You currently use version v19. If you try to upgrade coreos/go-systemd to v22.0.0 or above, you will get an error: no package exists at "github.com/coreos/go-systemd/v22".

A similar issue can also happen when upgrading prometheus/client_golang.

I investigated the release information of these libraries (coreos/go-systemd and prometheus/client_golang) and found that the root cause of this issue is:

  1. These dependencies have all added Go modules in recent versions.

  2. They all comply with the specification of "Releasing Modules for v2 or higher" available in the Modules documentation. Quoting the specification:

A package that has migrated to Go Modules must include the major version in the import path to reference any v2+ modules. For example, Repo github.com/my/module migrated to Modules on version v3.x.y. Then this repo should declare its module path with MAJOR version suffix "/v3" (e.g., module github.com/my/module/v3), and its downstream project should use "github.com/my/module/v3/mypkg" to import this repo’s package.

  1. This "github.com/my/module/v3/mypkg" is not the physical path. So earlier versions of Go (including those that don't have minimal module awareness) plus all tooling (like dep, glide, govendor, etc) don't have minimal module awareness as of now and therefore don't handle import paths correctly See golang/dep#1962, golang/dep#2139.

Note: creating a new branch is not required. If instead you have been previously releasing on master and would prefer to tag v3.0.0 on master, that is a viable option. (However, be aware that introducing an incompatible API change in master can cause issues for non-modules users who issue a go get -u given the go tool is not aware of semver prior to Go 1.11 or when module mode is not enabled in Go 1.11+).
Pre-existing dependency management solutions such as dep currently can have problems consuming a v2+ module created in this way. See for example dep#1962.
https://github.com/golang/go/wiki/Modules#releasing-modules-v2-or-higher

Solution

1. Migrate to Go Modules.

Go Modules are the general trend of the ecosystem; if you want a better package-upgrade experience, migrating to Go Modules is a good choice.

Migrating to modules is accompanied by the introduction of virtual import paths (as discussed above).

This "github.com/my/module/v3/mypkg" is not a physical path, so Go versions older than 1.9.7 and 1.10.3, plus all third-party dependency management tools (like dep, glide, govendor, etc.), have no module awareness and therefore do not handle such import paths correctly.

Downstream projects might then see their builds negatively affected if they are module-unaware (Go versions older than 1.9.7 and 1.10.3, or using third-party dependency management tools such as dep, glide, govendor…).

2. Maintaining v2+ libraries that use Go Modules in Vendor directories.

If box/kube-iptables-tailer wants to keep using dependency management tools (like dep, glide, govendor, etc.) and still wants to upgrade the dependencies, it can choose this fix strategy:
manually download the dependencies into the vendor directory and handle compatibility (materialize the virtual path, or delete the virtual part of the path). Avoid fetching the dependencies by virtual import paths. This may add some maintenance overhead compared to using modules.

As import paths have different meanings between module and non-module repos, materializing the virtual path is the better way to solve the issue while ensuring compatibility with downstream module users. A textbook example is provided by the repo github.com/moby/moby:
https://github.com/moby/moby/blob/master/VENDORING.md
https://github.com/moby/moby/blob/master/vendor.conf
In its vendor directory, github.com/moby/moby adds the /vN subdirectory to the corresponding dependencies.
This helps more downstream module users work well with your package.
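Under this strategy, the vendor tree contains the major-version subdirectory explicitly, for example (a sketch of the layout, not an exact listing):

vendor/github.com/coreos/go-systemd/v22/sdjournal/functions.go
vendor/github.com/coreos/go-systemd/v22/sdjournal/journal.go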

3. Request upstream to do compatibility processing.

coreos/go-systemd has 478 module-unaware users on GitHub, such as docker/docker-ce, moby/moby, pamarquez/K3S…
https://github.com/search?q=coreos%2Fgo-systemd+filename%3Avendor.conf+filename%3Avendor.json+filename%3Aglide.toml+filename%3AGodep.toml+filename%3AGodep.json

Summary

You can make a choice when you meet this dependency-management issue by balancing your own development schedule and mode against the effects on downstream projects.

For this issue, Solution 1 maximizes your benefits, with minimal impact on your downstream projects and the ecosystem.

References

Do you plan to upgrade the libraries in the near future?
Hope this issue report can help you ^_^
Thank you very much for your attention.

Best regards,
Kate

Parse error on ICMP (and probably other non IP protocols)

{"level":"error","timestamp":"2021-12-05T14:45:02.006Z","caller":"drop/parser.go:78","msg":"Cannot parse the log line","log":"2021-12-05T14:44:59.341406+00:00 node-10 kernel: [2276706.763083] calico-packet: IN=calid2a5761e0a8 OUT=caliec92613fa1b MAC=ee:ee:ee:ee:ee:ee:96:a1:cb:af:30:69:08:00 SRC=10.2.118.254 DST=10.2.118.201 LEN=84 TOS=0x00 PREC=0x00 TTL=63 ID=6362 DF PROTO=ICMP TYPE=8 CODE=0 ID=92 SEQ=6 ","error":"Missing field=SPT","stacktrace":"github.com/box/kube-iptables-tailer/drop.RunParsing\n\t/go/src/github.com/box/kube-iptables-tailer/drop/parser.go:78\nmain.startParsing\n\t/go/src/github.com/box/kube-iptables-tailer/main.go:95"}

Hello,
As the log above shows, the parser tries to extract the SPT and DPT fields, which are not present for non-TCP/UDP protocols such as ICMP. This leads to no metrics and no events.

How to use it with canal ?

Hello,
I tried to use your app in a k8s deployment with canal but I encountered some issues.
I cannot build your image behind a proxy (with the proxy set on my Docker daemon); I get a Go error (below). So I used docker.io/boxinc/kube-iptables-tailer:v0.1.0 instead.

I've configured rsyslog and created a DaemonSet file plus a service account and ClusterRole for RBAC usage; I can share these in your demo dir and README page as a PR if you want.

As I use Canal (which is Calico plus Flannel), I need to put the iptables logger on the right chains. But how can that be done automatically? Finding the right chains is complicated and needs root access, and the default behavior must be modified, because a rule added manually to a chain is directly overwritten by Canal, so there is no logging.

Maybe as a new feature: I need to automatically remove the logger, since iptables is very verbose and consumes a lot of disk space. Maybe it is possible to trigger that from a label on a Pod, to enable and disable the logger on the right chain?

Thanks.

$ make container
docker build -f Dockerfile -t kube-iptables-tailer:v0.1.0 .
Sending build context to Docker daemon 57.49MB
Step 1/8 : FROM golang:1.11.5 as builder
---> 1454e2b3d01f
Step 2/8 : WORKDIR $GOPATH/src/github.com/box/kube-iptables-tailer
---> Using cache
---> 64b237b0c563
Step 3/8 : COPY . $GOPATH/src/github.com/box/kube-iptables-tailer
---> Using cache
---> 0a76f2e456f2
Step 4/8 : RUN make build
---> Running in 6eba4d68fc6a
rm -f kube-iptables-tailer
#go get github.com/tools/godep
find . -path ./vendor -prune -o -name '*.go' -print | xargs -L 1 -I % gofmt -s -w %
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 $GOPATH/bin/godep go build -o kube-iptables-tailer
/bin/sh: 1: /go/bin/godep: not found
Makefile:21: recipe for target 'build' failed
make: *** [build] Error 127
The command '/bin/sh -c make build' returned a non-zero code: 2
make: *** [Makefile:29: container] Error 2

/go/bin/godep exists

Golang parser time format does not match actual iptables logging time format, causing error to scrape logs

After I created the DaemonSet on the remote cluster, kube-iptables-tailer was running but it threw this error:

E0903 11:47:30.085747       1 parser.go:31] Error retrieving log time to check expiration: parsing time "Sep" as "2006-01-02T15:04:05.000000-07:00": cannot parse "Sep" as "2006"

Time format expected by kube-iptables-tailer

2019-02-04T10:10:12.345678-07:00 hostname EXAMPLE_LOG_PREFIX: SRC=SOURCE_IP DST=DESTINATION_IP ...

Actual iptables log time format:

Sep  3 11:38:30 ip-xx-xx-xx-xx kernel: [324352.855587] flannel-drop: ....

This is my environment spec:

Distributor ID:	Ubuntu
Description: Ubuntu 16.04.5 LTS
Release: 16.04
Codename: xenial
Cloud Provider: AWS on EC2 Instance

Kubernetes API version

Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T20:56:12Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

with kube-proxy installed

The time formats seem mismatched; did you somehow customize your iptables logging time format? What is your iptables log time format?
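For reference, the README's PACKET_DROP_LOG_TIME_LAYOUT variable accepts a Go time layout; a sketch for a syslog-style timestamp like the one shown above (the layout is Go's time.Stamp reference format):

env:
  - name: PACKET_DROP_LOG_TIME_LAYOUT
    value: "Jan _2 15:04:05"   # Go reference time; matches entries like "Sep  3 11:38:30"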

No drops detected with systemd >= 246

When running kube-iptables-tailer on a system with systemd >= 246, no drops are detected.

My workaround is to add an override for systemd-journald and restart the service:

# cat /etc/systemd/system/systemd-journald.service.d/override.conf
[Service]
Environment="SYSTEMD_JOURNAL_KEYED_HASH=0"
[Journal]
Compress=no
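After adding the override, reload systemd and restart journald (standard systemd commands):

# systemctl daemon-reload
# systemctl restart systemd-journald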

From https://github.com/systemd/systemd/blob/main/NEWS#L1493:

        * systemd-journald gained support for zstd compression of large fields
          in journal files. The hash tables in journal files have been hardened
          against hash collisions. This is an incompatible change and means
          that journal files created with new systemd versions are not readable
          with old versions. If the $SYSTEMD_JOURNAL_KEYED_HASH boolean
          environment variable for systemd-journald.service is set to 0 this
          new hardening functionality may be turned off, so that generated
          journal files remain compatible with older journalctl
          implementations.

Update the ClusterRole in demo/daemonset.yaml to allow creating Events

The ClusterRole in demo/daemonset.yaml does not allow Events to be created. This causes the following kube-iptables-tailer error:

E0828 01:46:25.194683       1 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"test-b4dd57c68-jx9xk.162f4b4c9eba9a10", GenerateName:"", Namespace:"test", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"test", Name:"test-b4dd57c68-jx9xk", UID:"362e4f30-e1d1-4990-91a4-4003114647ec", APIVersion:"v1", ResourceVersion:"16813", FieldPath:""}, Reason:"PacketDrop", Message:"Packet dropped when sending traffic to 1.2.3.4 on port 80/TCP", Source:v1.EventSource{Component:"kube-iptables-tailer", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfca365c4b787010, ext:84567187309, loc:(*time.Location)(0x1b61320)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfca365c4b787010, ext:84567187309, loc:(*time.Location)(0x1b61320)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:serviceaccount:kube-system:kube-iptables-tailer" cannot create resource "events" in API group "" in the namespace "test"' (will not retry!)

I would update the ClusterRole, as shown in this diff:

    - apiGroups: ["v1"]
      resources: ["pods"]
      verbs: ["get", "list", "watch"]
+   - apiGroups: [""]
+     resources: ["events"]
+     verbs: ["create"]
