
strimzi / strimzi-kafka-operator


Apache Kafka® running on Kubernetes

Home Page: https://strimzi.io/

License: Apache License 2.0

Languages: Java 98.30%, Shell 1.07%, Groovy 0.26%, Makefile 0.23%, Mustache 0.08%, Dockerfile 0.06%, Awk 0.01%, 1C Enterprise 0.01%
Topics: kafka, kubernetes, openshift, docker, messaging, enmasse, kafka-connect, kafka-streams, data-streaming, data-stream

strimzi-kafka-operator's Introduction

Strimzi

Run Apache Kafka on Kubernetes and OpenShift


Strimzi provides a way to run an Apache Kafka® cluster on Kubernetes or OpenShift in various deployment configurations. See our website for more details about the project.

Quick Starts

To get up and running quickly, check out our Quick Starts for Minikube, OKD (OpenShift Origin), and Kubernetes Kind.

Documentation

Documentation for the current main branch as well as all releases can be found on our website.

Roadmap

The roadmap of the Strimzi Operator project is maintained as a GitHub Project.

Getting help

If you encounter any issues while using Strimzi, you can get help using:

Strimzi Community Meetings

You can join our regular community meetings:

Resources:

Contributing

You can contribute by:

  • Raising any issues you find using Strimzi
  • Fixing issues by opening Pull Requests
  • Improving documentation
  • Talking about Strimzi

All bugs, tasks, and enhancements are tracked as GitHub issues. Issues which might be a good start for new contributors are marked with the "good-start" label.

The Dev guide describes how to build Strimzi. Before submitting a patch, please make sure you understand how to test your changes; see the Test guide before opening a PR.

The Documentation Contributor Guide describes how to contribute to Strimzi documentation.

If you want to get in touch with us first before contributing, you can use:

License

Strimzi is licensed under the Apache License, Version 2.0

Container signatures

From the 0.38.0 release, Strimzi containers are signed using the cosign tool. Strimzi currently does not use keyless signing or the transparency log. To verify a container, you can copy the following public key into a file:

-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAET3OleLR7h0JqatY2KkECXhA9ZAkC
TRnbE23Wb5AzJPnpevvQ1QUEQQ5h/I4GobB7/jkGfqYkt6Ct5WOU2cc6HQ==
-----END PUBLIC KEY-----

And use it to verify the signature:

cosign verify --key strimzi.pub quay.io/strimzi/operator:latest --insecure-ignore-tlog=true

Software Bill of Materials (SBOM)

From the 0.38.0 release, Strimzi publishes the software bill of materials (SBOM) of our containers. The SBOMs are published as an archive with SPDX-JSON and Syft-Table formats signed using cosign. For releases, they are also pushed into the container registry. To verify the SBOM signatures, please use the Strimzi public key:

-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAET3OleLR7h0JqatY2KkECXhA9ZAkC
TRnbE23Wb5AzJPnpevvQ1QUEQQ5h/I4GobB7/jkGfqYkt6Ct5WOU2cc6HQ==
-----END PUBLIC KEY-----

You can use it to verify the signature of the SBOM files with the following command:

cosign verify-blob --key cosign.pub --bundle <SBOM-file>.bundle --insecure-ignore-tlog=true <SBOM-file>

Strimzi is a Cloud Native Computing Foundation incubating project.


strimzi-kafka-operator's People

Contributors

ajborley, asorokhtey, chonton, dependabot[bot], dmuntima, frawless, fvaleri, henryzrncik, im-konge, jankalinic, katheris, kornys, kyguy, michalxo, mikeedgar, mimaison, mstruk, ncbaratta, paulrmellor, ppatierno, samuel-hawker, scholzj, see-quick, serrss, showuon, shubhamrwt, sknot-rh, tombentley, tomncooper, zzhu2


strimzi-kafka-operator's Issues

Kafka standalone installation guide

List of remaining TODOs for the Kafka standalone installation guide:

  • Logging for Zookeeper
  • Logging for Kafka
  • Kafka Connect
  • Sample configuration files

Configurable logging level

Right now, the logging level is hard coded in the images. It would be better to make it configurable (via env vars) so changing the logging level at least wouldn't require a new image.
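As an illustration (not the actual image code), the container entrypoint could read a hypothetical KAFKA_LOG_LEVEL environment variable, defaulting to INFO, and rewrite the root logger line before starting Kafka:

# Hypothetical env var; the path assumes the standard Kafka config location.
KAFKA_LOG_LEVEL="${KAFKA_LOG_LEVEL:-INFO}"
sed -i "s/^log4j.rootLogger=.*/log4j.rootLogger=${KAFKA_LOG_LEVEL}, stdout/" \
  /opt/kafka/config/log4j.properties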

Kafka should use service DNS names and not Pod IP addresses as advertised host

Kafka is currently using the pod IP address as the advertised host name. I think that is not correct, as the Pod IP address is not necessarily stable. It should be using either the IP address exposed through the headless service or probably even better the hostname exposed through the headless service.

I'm not sure in detail how the different Kafka clients work and how often they recheck the advertised addresses. But having them stable should not hurt, it can only help.

"Kafka version" parameter in OpenShift template

The openshift-template.yaml for the Kafka StatefulSet seems to contain the following parameter, which is not used:

- description: Kafka Version
  displayName: Kafka Version
  name: KAFKA_VERSION
  value: "0.11.0.0"

We should clarify its purpose and either remove it or use it. Was it supposed to determine the image version?

Renaming services for Kafka and Zookeeper

Both Kafka and Zookeeper have two separate services. The headless service makes sure that DNS records are created for the individual pods so they can be accessed directly. In ZK this is needed to bootstrap the cluster; in Kafka this is needed to access the partition leaders directly, etc.

The second service is a ClusterIP and load balances between the available servers. For ZK this is used for client access and for Kafka as the bootstrap server address. IMHO these services are currently a bit unfortunately named. The headless services are always named ...-headless, so the DNS hostnames used to access the pods always look a bit strange, such as kafka-0.kafka-headless.myproject... or zookeeper-0.zookeeper-headless.myproject.... The names also don't make it clear which service should be used for what.

I would suggest renaming the services in the following way to make it clearer what they should be used for, as well as to simplify the DNS names:

  • kafka -> kafka-bootstrap
  • kafka-headless -> kafka
  • zookeeper -> zookeeper-client
  • zookeeper-headless -> zookeeper

WDYT?

Include `ps` in the images?

I noticed that the images currently lack ps. This can make life slightly harder if you have to oc exec -ti bash -li into the nodes (you have to poke around in /proc instead), and also means that the (kafka|zookeeper)-stop-server.sh scripts wouldn't work (not that they're used). Is it worth installing procps-ng?

Error when publishing to Kafka when brokers scaled to 3

I am trying to use the image with one change ...

Using ADVERTISED_LISTENERS and LISTENERS instead of the hostname.
Below are the configurations:

advertised.listeners="EXTERNAL://stb-openshift..........:30062,INTERNAL://kafka:9094"
listeners="INTERNAL://:9094,EXTERNAL://:9092"

But when scaling up the pods, there is an error when publishing messages:

WARN Error while fetching metadata with correlation id 1 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

It works fine with 1 Zookeeper and 1 Kafka node.

Define a base Docker image

It could be useful to have a base Docker image with Kafka only.
That way, the Docker images for the Kafka and Kafka Connect deployments could be based on it, differing only where needed (i.e. the scripts to run).
It would also save bandwidth on Docker pulls.

Adding Kafka Connect metrics

Other than getting metrics from Kafka brokers and Zookeeper, we should add metrics support for Kafka Connect as well. Maybe it's something already provided by the Debezium project? We should check on the project itself and with @gunnarmorling.

Enhance configuration possibilities of Kafka, Kafka Connect and Zookeeper

Our Docker images currently accept only a few environment variables to affect the configuration. Most of the different configuration options (for Kafka brokers and Kafka Connect as well as for Zookeeper) are not exposed and are basically fixed to the default values. We should make it possible to configure all the different aspects.

Some ideas how to do it:

  1. "Brute force approach"

We take all environment variables starting with some prefix (e.g. KAFKA_CONFIG_) and translate them as follows:

  • Remove the prefix
  • Replace _ with .
  • Change to lowercase

The result is dumped into the config file, e.g. $KAFKA_CONFIG_LOG_RETENTION_BYTES would turn into log.retention.bytes (a minimal sketch of this approach follows the list of options below).

This is trivial to implement and very versatile. On the other hand, there is for example no way to validate the config. And one has to be careful when selecting the prefix so that it doesn't collide with environment variables set by OpenShift / Kubernetes. For example, a simple prefix of KAFKA_ would collide with a service named kafka, and environment variables such as $KAFKA_SERVICE_PORT_SSL would get dumped into the config file as well. Also, Deployment / StatefulSet / Pod YAML files containing tens of different options look rather messy.

  2. Templating / generated code

Based on some templating / code generation (given the amount of configuration options this cannot be done manually) we create the configuration file with selected configuration parameters. For example from a simple list such as:

listeners string
log.dir string
log.flush.interval.messages long

we can easily generate a shell script which will check some environment variables and validate them.

This would make sure that the configuration file will be valid. But it will be really hard to cover all possible situations as the Kafka configuration can be quite complex and some parameter names are actually dynamically generated, so we would have problems supporting them this way.

  3. Pass whole config sections

Another option, which I have used in the past, is passing whole sections of the config files through environment variables. A complete section is passed and inserted into the config file. We would need to check that this doesn't overwrite options which should not be freely configurable (security, placement of files, etc.). But that would be feasible.

  4. Config Maps

The user creates a configuration and stores it in a (Kubernetes / OpenShift) ConfigMap or Secret. It is mounted as a file which is processed by the scripts in the image. We would need to check that this doesn't overwrite options which should not be freely configurable (security, placement of files, etc.). It seems a bit redundant with option 3, since you can also mount ConfigMaps or Secrets into environment variables.
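For option 1, a minimal shell sketch of the prefix translation might look like the following (variable names and file paths are illustrative, not the actual image layout):

CONFIG_FILE=/opt/kafka/config/server.properties
# Translate e.g. KAFKA_CONFIG_LOG_RETENTION_BYTES=1073741824
# into log.retention.bytes=1073741824 appended to the config file.
env | grep '^KAFKA_CONFIG_' | while IFS='=' read -r name value; do
  key=$(echo "${name#KAFKA_CONFIG_}" | tr '[:upper:]' '[:lower:]' | tr '_' '.')
  echo "${key}=${value}" >> "$CONFIG_FILE"
done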

Define the Pod Management Policy

Currently, our stateful sets use the default OrderedReady podManagementPolicy for both Kafka and Zookeeper.

OrderedReady means that the pods will be started one by one as they become ready, and deleted one by one. This can cause startup to take longer (because of the initial readiness delay, etc.) and shutdown to take longer (waiting for each broker to shut down, etc.). Using the Parallel policy would start and shut down the stateful sets faster.

However, this policy also applies to scale-ups and scale-downs. There, OrderedReady would take it through multiple rebalancing stages whereas Parallel would do it in one big step. Again, Parallel could be much faster. However, in big scale-downs it can cause data loss: imagine running 6 Kafka replicas and scaling down to 3 ... without proper use of rack IDs etc., it can easily happen that some topics have replicas only on the 3 pods being taken down, and the data will be lost. This could probably be worked around by always scaling down by 1 pod (through the future controller) while starting / stopping / scaling up in big steps.

It seems a bit unfortunate that Kubernetes does not allow specifying a different policy for each of these phases.

Zookeeper address/service should not be hard coded

The Kafka images currently always connect to a Zookeeper service named zookeeper. This means that we currently cannot rename the deployments or run multiple deployments in one namespace. The Zookeeper address should be configurable to make things more flexible.
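A possible sketch of the change in the startup script (the variable name is an assumption) could fall back to the current hard-coded service when nothing is set:

# Hypothetical KAFKA_ZOOKEEPER_CONNECT variable; defaults to the current behaviour.
KAFKA_ZOOKEEPER_CONNECT="${KAFKA_ZOOKEEPER_CONNECT:-zookeeper:2181}"
sed -i "s|^zookeeper.connect=.*|zookeeper.connect=${KAFKA_ZOOKEEPER_CONNECT}|" \
  /opt/kafka/config/server.properties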

Make it possible to run Zookeeper in HA

Currently we always run only a single instance of the Zookeeper service. This is true for both the in-memory and the statefulset Kafka setups. We should add support for running Zookeeper ensembles with 3 or more nodes using stateful sets.
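As a rough sketch of what the image's startup script could generate (service, path, and variable names are assumptions), the ensemble members can be listed via the headless service DNS names:

# Generate the ensemble definition for an N-node Zookeeper statefulset.
NODE_COUNT="${ZOOKEEPER_NODE_COUNT:-3}"
for i in $(seq 0 $((NODE_COUNT - 1))); do
  echo "server.$((i + 1))=zookeeper-${i}.zookeeper-headless:2888:3888" \
    >> /opt/zookeeper/conf/zoo.cfg
done
# Each pod would also need its own myid file, e.g. derived from the ordinal in HOSTNAME.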

Use cases for authnz

We're dealing with a number of systems each with their own authentication and authorisation:

  • Kafka
  • ZooKeeper
  • k8s/OpenShift
  • Any authnz that we give barnabas itself

We need use cases to motivate requirements for what we should support.

Create a builder image from Connect image

Most Connect images require the addition of the connectors used by the user. There are two approaches available:

  • mounting a volume with the connector JARs
  • creating an image that derives from the Connect image and adds the plugins to the plugin dir

We propose a third way to solve the problem: the Connect image should support s2i builds in the form of binary builds. The user can then just use oc new-build --binary=true to create the binary build from the Connect image and oc start-build to pass the plugins into Connect. The net result is an image stream that can be consumed by the current Connect template.

The assembly script for this case just copies the source files into the plugins directory.
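The user-facing workflow could then look roughly like this (the image stream and build names are placeholders, not existing resources):

# Create a binary build based on the Connect builder image stream.
oc new-build kafka-connect --binary=true --name=my-connect
# Upload a local directory containing the connector plugin JARs and build.
oc start-build my-connect --from-dir=./my-plugins --follow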

Automated testing

We need automated tests for the configurations we're publishing. At a basic level, we should know that the deployments to OpenShift and Kubernetes actually work and don't produce errors, but we also need system-level testing with failure scenarios. https://fabric8.io/guide/testing.html might be a good place to start.

Kafka template should support configuration of replication factor params

As it is possible to configure the number of replicas for Kafka, it would also be useful to configure default replication factors for:

offsets.topic.replication.factor
transaction.state.log.replication.factor
config.storage.replication.factor
offset.storage.replication.factor
status.storage.replication.factor

default.replication.factor

The default value for the first 5 is 3, and for the last one it is 1. So I propose to provide two parameters - one for the system topics and one for the default. The reason is that if you deploy a cluster with fewer than 3 brokers (typically one, for a dev environment), it is by definition non-functional, as the settings require at least 3 replicas.
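A sketch of how the two proposed parameters could be applied in the broker startup script (parameter names are illustrative; the Connect-side *.storage.replication.factor settings would be handled analogously in the Connect image):

SYSTEM_TOPIC_RF="${KAFKA_SYSTEM_TOPIC_REPLICATION_FACTOR:-3}"
DEFAULT_RF="${KAFKA_DEFAULT_REPLICATION_FACTOR:-1}"
cat >> /opt/kafka/config/server.properties <<EOF
offsets.topic.replication.factor=${SYSTEM_TOPIC_RF}
transaction.state.log.replication.factor=${SYSTEM_TOPIC_RF}
default.replication.factor=${DEFAULT_RF}
EOF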

Single-file deployments

Currently we have a file-per-entity for the zookeeper and kafka deployments. This makes instructions and examples (for example in the README.md) more verbose as we have to list and use multiple files. We could merge the entities into a single YAML file to alleviate this.

Better health check for zookeeper

We're currently using echo ruok | nc 127.0.0.1 2181, but the documentation says this will return imok whether the zookeeper instance is in the quorum or not. We could do better with something like echo stat | nc 127.0.0.1 2181 | grep -qE 'Mode: (follower|leader)'
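A sketch of what a stricter readiness script could look like (adding standalone so a single-node deployment still passes; this is an assumption, not existing code):

#!/usr/bin/env bash
# Readiness check: require the node to report a quorum role, not merely "imok".
MODE=$(echo stat | nc 127.0.0.1 2181 | grep '^Mode:')
echo "$MODE" | grep -qE 'Mode: (leader|follower|standalone)'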

Multi-tenant ZK ensembles?

It may be possible for a single ZK ensemble to support multiple Kafka clusters. To know whether that would be worthwhile we need a better understanding of:

  1. How heavily utilised the ensemble is by a single cluster of a given size (in particular, peak utilisation), and thus some idea of "sharing ratio".
  2. How we could migrate a Kafka cluster using a multi-tenant ensemble to a dedicated one.

kafka_run.sh script not found error

kafka-statefulsets/resources/openshift-template.yaml on line 109 references a script /opt/kafka_2.11-${KAFKA_VERSION}/kafka_run.sh that doesn't exist.

The path should be updated to /opt/kafka/kafka_run.sh.

Mismatch between Connect s2i and Connect templates

Connect s2i is able to build a Connect image with the requested plugin(s). The resulting image is stored in an image stream. The Connect template, on the other hand, uses a Kubernetes Deployment object, which can consume only plain images, not image streams.

I propose to extend the s2i template with the same deployment as in the Connect template, but enrich it with alpha.image.policy.openshift.io/resolve-names: '*' - see https://trello.com/c/BW0FS66Q/845-8-integrate-imagestreams-with-non-origin-resources
The net result will be that the deployment is automatically started when the build is completed.

Future of HostPath provisioner

Currently, we have a hostPath provisioner in kafka-statefulsets/resources. Since minishift and minikube now provide storage on their own, this doesn't seem to be needed anymore. We should decide what to do with it.

To me it sounds like:

  • Users without a lot of Kubernetes / OpenShift experience will use minishift / Minikube and not need it
  • Experienced users with more production-like clusters will be able to deal with storage in their own way and will not need this either.

So I would suggest deleting it.

Is it okay to make these port definitions consistent?

kafka-statefulsets/resources/zookeeper.yaml

            - name: clientport
              containerPort: 2181
              protocol: TCP
            - containerPort: 2888
              name: clustering
              protocol: TCP
            - containerPort: 3888
              name: leaderelection
              protocol: TCP

The 1st line is - name: clientport and 2nd line is containerPort: 2181

The 4th line is - containerPort: 2888 and 5th line is name: clustering

The 7th line is - containerPort: 3888 and 8th line is name: leaderelection

Kafka Connect workers should set advertised hostname

Kafka Connect workers need to communicate with each other over the REST interface. The rest.advertised.host.name option should be used to facilitate this through some stable and reliable interface. Basically, one of the workers acts as a leader, and some requests (such as creating a new connector) are simply forwarded from the non-leaders to the leader's REST interface.

There seem to be two options:

  1. Use the Pod IP address. This is not necessarily stable across restarts but it is not clear if this would matter to Connect.
  2. Convert it from a deployment to a stateful set. This would give the pods stable DNS addresses.

This doesn't seem to be critical, as Kafka Connect seems to work fine (it probably uses the Pod IP address by default - but even in that case this should be formalised in the code).
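For option 1, a minimal sketch (assuming the Pod IP is injected into the container as POD_IP via the downward API; the config path is also an assumption) would be:

# Advertise the pod IP to the other Connect workers.
cat >> /opt/kafka/config/connect-distributed.properties <<EOF
rest.advertised.host.name=${POD_IP}
EOF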

Fix CI

Currently CI builds for PRs seem to consistently fail. The last one failed with:

Removing intermediate container 9633bde4df15
Successfully built c1369e1e90d5
Error: Cannot perform an interactive login from a non TTY device
make[1]: *** [all] Error 1
make[1]: Leaving directory `/home/travis/build/EnMasseProject/barnabas/kafka-base'
make: *** [kafka-base] Error 2
The command "make" exited with 2.

Update Docker images to Fedora 27

Our Docker images still seem to be using Fedora 26. We should update to Fedora 27, which has been released, or consider moving to CentOS, which is a bit more enterprise-friendly.

Add support for `broker.rack`

Kafka has a concept of racks. Racks group Kafka nodes into different segments based on the probability that they will all fail together, e.g. all machines in a single rack will fail when the rack fails - hence "rack". In a public cloud, for example, the role of the rack is taken over by the availability zone. The idea is that when a topic has N replicas, the replicas should be spread across different racks to minimise the probability that one disaster event takes down all replicas of a particular topic / partition.

In the Kubernetes world, nodes often have labels describing the availability zone or some similar concept of failure domain (failure-domain.beta.kubernetes.io/zone). We should:

  • Make sure we have Kafka nodes in each rack (node selectors? pod anti-affinity? In some environments this might be easier, as the pods are unable to move between the partitions because the disks are limited to a single partition (e.g. AWS); in other environments this might be harder)
  • Configure Kafka nodes with the proper broker.rack property
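A rough sketch of the second point, assuming kubectl is available in the image, the node name is injected as NODE_NAME via the downward API, and the pod's service account is allowed to read node objects (none of this exists in the images today):

# Look up the availability zone of the node this pod runs on and use it as the rack.
ZONE=$(kubectl get node "$NODE_NAME" \
  -o jsonpath='{.metadata.labels.failure-domain\.beta\.kubernetes\.io/zone}')
if [ -n "$ZONE" ]; then
  echo "broker.rack=${ZONE}" >> /opt/kafka/config/server.properties
fi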

Find better health check for Kafka

As #56 shows (a similar discussion was already in #3), bin/kafka-broker-api-versions.sh is not a good health check. In the long term, we should try to identify a better healthcheck.

Use code generation for the YAML files

Currently we have several YAML files in kafka-statefulset and kafka-inmemory which are very similar. Having some simple code generator would be helpful, so that we can change the YAML definitions in only a single place.

Doesn't work on kubernetes/minikube

# kubectl apply -f kafka-statefulsets/resources/cluster-volumes.yaml
persistentvolume "pv0001" created
persistentvolume "pv0002" created
persistentvolume "pv0003" created
persistentvolume "pv0004" created
Error from server (Invalid): PersistentVolume "pv0005" is invalid: spec.accessModes: Required value
Error from server (Invalid): PersistentVolume "pv0006" is invalid: spec.accessModes: Required value

After editing kafka-statefulsets/resources/cluster-volumes.yaml as follows:

- apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv0005
  spec:
    capacity:
      storage: 5Gi
    accessModes:
      - ReadWriteOnce
    persistentVolumeReclaimPolicy: Recycle
    hostPath:
      path: /tmp/pv0005
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv0006
  spec:
    capacity:
      storage: 5Gi
    accessModes:
      - ReadWriteOnce
    persistentVolumeReclaimPolicy: Recycle
    hostPath:
      path: /tmp/pv0006

This YAML works fine. But running kubectl apply -f kafka-statefulsets/resources/zookeeper.yaml gives:
error: error converting YAML to JSON: yaml: line 34: did not find expected key
I then swapped name: clientport and containerPort: 2181:

            - name: clientport
              containerPort: 2181
              protocol: TCP

The error was the same: error: error converting YAML to JSON: yaml: line 34: did not find expected key
How can I make this work?
Env:

# minikube version
minikube version: v0.22.3
# kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-10-07T00:39:16Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Fix logging

Currently we pull the Kafka distribution from Apache when building the image, and thus inherit the default log4j config, which logs to console and file. Since k8s/openshift will log what gets output to the console, we should fix the log4j.properties to not use a file logger.
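A console-only log4j.properties could be baked into the image along these lines (a sketch; the exact pattern layout is a placeholder):

cat > /opt/kafka/config/log4j.properties <<'EOF'
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
EOF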

Kafka Connect sidecar container

Currently, Barnabas provides only a distributed Kafka Connect cluster. In some cases users might also be interested in running standalone Kafka Connect, for example as a sidecar container which loads file records written by another container in the same Pod into Kafka (or out of Kafka).

For this it would be great if we could provide a Docker image which would contain standalone Kafka Connect and which could be easily included in Pods as a sidecar.

Future of the "inmemory" setup

I would like to discuss the future of the inmemory setup. Currently there seem to be two main differences between the two setups:

  • inmemory is using deployment instead of stateful set
  • inmemory is using emptyDir as a storage instead of PersistentVolumeClaim

Note: inmemory is actually not in-memory as the name might suggest. It still uses disk, just not a persistent volume.

The problem from my perspective is that the inmemory setup is significantly different from the stateful set one and makes every change more complicated. And its usability is currently poor anyway, since it cannot, for example, scale up.

My suggestion would be to:

  1. Remove the inmemory part completely.

or

  2. Keep it, but change it to a stateful set so that it is exactly the same as the stateful set deployment apart from using emptyDir as storage. This would make it easier to maintain and support while adding features for users (such as scaling up). We would also need one less image.

WDYT?

Requirements for Controller

We expect, eventually, to need some kind of controller which will use the k8s REST API to manage the zookeeper and kafka deployments. This issue is about trying to understand what the requirements for such a controller would be (it is related to #31, but that would be a build-time process, whereas this is about doing something similar at runtime).

Healthcheck doesn't work with advertised hostname using DNS

PRs #52 and #51 don't seem to work together. The problem is that we advertise the hostname based on the headless service, but by default the headless service does not route to a pod until its healthcheck passes. However, bin/kafka-broker-api-versions.sh is configured through the bootstrap server, where it gets the advertised address and tries to connect over it - which doesn't work until the healthcheck passes.

The solution should be to change the configuration of the headless service so that the DNS names resolve even before the first healthcheck. Since this is a headless service and not a load balancer / ClusterIP, this should not have any undesired side effects (in the worst case it would change some errors from "cannot resolve" to "cannot connect").
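A sketch of the proposed change, using the alpha annotation available on Kubernetes versions of that era (newer clusters would set spec.publishNotReadyAddresses instead); the service name assumes the current headless naming:

# Allow the headless service to publish DNS records for not-yet-ready pods.
kubectl annotate service kafka-headless \
  service.alpha.kubernetes.io/tolerate-unready-endpoints="true"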

Add Kafka Connect support

In order to use Debezium on OpenShift via Barnabas, support for Kafka Connect will be needed. Debezium comes in the form of a set of Kafka Connect source connectors, so we'd need a way to add one or more of the Debezium connectors to the Connect image.
