cluster-api-provider-digitalocean's Introduction

Kubernetes Cluster API Provider DigitalOcean



Kubernetes-native declarative infrastructure for DigitalOcean.

What is the Cluster API Provider DigitalOcean

The Cluster API brings declarative, Kubernetes-style APIs to cluster creation, configuration and management.

The API itself is shared across multiple cloud providers allowing for true DigitalOcean hybrid deployments of Kubernetes. It is built atop the lessons learned from previous cluster managers such as kops and kubicorn.

Project Status

This project is currently a work-in-progress, in an Alpha state, so it may not be production ready. There is no backwards-compatibility guarantee at this point. For more details on the roadmap and upcoming features, check out the project's issue tracker on GitHub.

Launching a Kubernetes cluster on DigitalOcean

Check out the getting started guide for launching a cluster on DigitalOcean.

Features

  • Native Kubernetes manifests and API
  • Support for single and multi-node control plane clusters
  • Choice of Linux distribution (as long as a current cloud-init is available)

Compatibility with Cluster API and Kubernetes Versions

This provider's versions are compatible with the following versions of Cluster API:

  • DigitalOcean Provider v1alpha1 (v0.1, v0.2) works with Cluster API v1alpha1 (v0.1)
  • DigitalOcean Provider v1alpha2 (v0.3) works with Cluster API v1alpha2 (v0.2)
  • DigitalOcean Provider v1alpha3 (v0.4) works with Cluster API v1alpha3 (v0.3)
  • DigitalOcean Provider v1alpha4 (v0.5) works with Cluster API v1alpha4 (v0.4)
  • DigitalOcean Provider v1 (v1.0) works with Cluster API v1 (v1.0)

This provider's versions are able to install and manage the following versions of Kubernetes (the exact set depends on the provider version):

  • Kubernetes 1.19
  • Kubernetes 1.20
  • Kubernetes 1.21
  • Kubernetes 1.22
  • Kubernetes 1.23

NOTE: As the versioning for this project is tied to the versioning of Cluster API, future modifications to this policy may be made to more closely align with other providers in the Cluster API ecosystem.

Documentation

Documentation is in the /docs directory.

Getting involved and contributing

More about development and contributing practices can be found in CONTRIBUTING.md.

cluster-api-provider-digitalocean's People

Contributors

alvaroaleman, ameukam, andrewsykim, bavarianbidi, cpanato, dependabot[bot], detiber, gottwald, inductor, johntylerrupert, lldrlove, mo-rieger, morrislaw, ncdc, nikhita, normanjoyner, prajyot-parab, praveenghuge, prksu, rlenferink, roberthbailey, sbueringer, solidnerd, srikiz, stmcginnis, timoreimann, varshavaradarajan, vincepri, whenhellfreezes, xmudrii


cluster-api-provider-digitalocean's Issues

Make error handling consistent across all packages

User story

In #87 the error handling was improved for some packages. We should check the other packages and make sure error checking and handling is consistent across all of them.

We could also research whether it is possible to improve logging and error handling overall. For ideas, we can look at how other providers handle this.

Acceptance criteria

  • Make error handling consistent across all packages
  • Potentially, find a better approach to error handling and logging (see the sketch below)
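
One possible convention, sketched below, is to define package-level sentinel errors and wrap every returned error with operation context via fmt.Errorf and the %w verb, so callers can test with errors.Is. The function and error names here are hypothetical:

package cloud

import (
    "errors"
    "fmt"
)

// ErrDropletNotFound is a sentinel error callers can test for with errors.Is.
var ErrDropletNotFound = errors.New("droplet not found")

// reconcileDroplet illustrates the convention: every error passed upward is
// wrapped with enough context to identify the operation that failed.
func reconcileDroplet(name string) error {
    if _, err := getDroplet(name); err != nil {
        return fmt.Errorf("reconciling droplet %q: %w", name, err)
    }
    return nil
}

func getDroplet(name string) (string, error) {
    // A real implementation would call the DigitalOcean API and map a 404
    // onto the sentinel error.
    return "", ErrDropletNotFound
}

A caller can then branch on errors.Is(err, ErrDropletNotFound), no matter how many layers wrapped the error.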

Add support for DigitalOcean Custom Images

Introduction

DigitalOcean now supports Custom Images, allowing users to upload their own images in one of the supported formats and use them to provision a Droplet.

User story

A user builds an image with Kubernetes installed and pre-configured, and uses that image to bootstrap a Droplet. The provisioning script then only contains the functions for initializing a cluster or joining an existing one.

This is especially important for users running in production, as:

  • Users can use better mechanisms for building images than Bash scripts, which may be easier to maintain and version
  • It is easier to test whether an image works as expected
  • It is easier to share images between users and customers

Prerequisites

  • There is API support for Custom images (API documentation)
  • godo supports working with Custom Images

Acceptance criteria

  • Users can specify a custom image to be used rather than only being able to use the default images (see the sketch below)
  • Documentation is updated to state how to use custom images
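
For reference, godo already supports importing custom images by URL through its images service. A minimal sketch, assuming an access token in DIGITALOCEAN_ACCESS_TOKEN; the image name, URL, and region are placeholders:

package main

import (
    "context"
    "fmt"
    "log"
    "os"

    "github.com/digitalocean/godo"
)

func main() {
    client := godo.NewFromToken(os.Getenv("DIGITALOCEAN_ACCESS_TOKEN"))

    // Ask DigitalOcean to fetch and process the image from a public URL.
    image, _, err := client.Images.Create(context.Background(), &godo.CustomImageCreateRequest{
        Name:         "k8s-ubuntu-18-04",
        Url:          "https://example.com/images/k8s.qcow2",
        Region:       "fra1",
        Distribution: "Ubuntu",
        Description:  "Pre-baked Kubernetes image",
        Tags:         []string{"cluster-api"},
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("created custom image %d (status %s)\n", image.ID, image.Status)
}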

Migrate to CRDs

Upstream Cluster-API has moved from aggregated API servers to a CRD-based approach. This issue contains the tasks needed to switch to CRDs.

Relevant PR: kubernetes-sigs/cluster-api#494

Tasks:

  • Move clusterctl into cmd/clusterctl (#123)
  • Move cloud into pkg (#124)
  • Remove cluster and machine actuators (#125)
  • Update Cluster-API to CRDs version (#126)
  • Update Cluster-API to the latest version once CRDs start working (#126)

clusterctl: improve function for installing docker runtime

Previously, we had methods for choosing the best Docker runtime package and version, but those methods are no longer used since we switched to provider-components.yaml (see #42).

We should extend the function for installing the Docker runtime to check which version is the best candidate.

[Testing] Implement E2E tests

We should write end-to-end tests and set them up to run automatically in CircleCI.

The E2E tests can be a simple bash script that is going to:

  1. Generate manifests using the generate-yaml.sh script
  2. Run clusterctl to provision a master+node cluster. clusterctl should point to an existing Kubernetes cluster instead of handling cluster provisioning itself.
  3. Verify the number of nodes and that all nodes are ready, plus simple checks to make sure all components are ready and scheduling works (see the sketch below).
  4. Create a new node and make sure it comes up
  5. Delete the first node and make sure it disappears both from the cloud and from kubectl get nodes

The following points must be completed:

  • Set up Minikube in CircleCI to be used as an external cluster for clusterctl. Note: Currently we can set up a 1.10 cluster, but we may have to switch to 1.11.
  • Write a bash script to do the above-mentioned steps
  • Clean up cloud resources after the testing cycle is finished (in both fail and success cases)

Clean-up: We must be sure that cloud resources are deleted even if tests fail. One possibility would be to apply a tag to all resources and later delete all resources with that tag.
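
The readiness check in step 3 could also live in a small Go helper rather than ad-hoc kubectl calls. A sketch with client-go (a recent client-go API with context arguments is assumed; the function name is hypothetical):

package e2e

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// allNodesReady reports whether the cluster has exactly `want` nodes and
// every node has a Ready condition with status True.
func allNodesReady(ctx context.Context, client kubernetes.Interface, want int) (bool, error) {
    nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    if err != nil {
        return false, err
    }
    if len(nodes.Items) != want {
        return false, nil
    }
    for _, node := range nodes.Items {
        ready := false
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                ready = true
                break
            }
        }
        if !ready {
            return false, nil
        }
    }
    return true, nil
}

The bash script can poll a helper like this (compiled into a small binary) until it returns true or a timeout expires.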

UMBRELLA: Variants

In the future we should support Variants, so users are able to use multiple types of clusters.

Supported cluster types could be:

  1. Unmanaged DigitalOcean cluster. This is what we have right now.
  2. Managed DigitalOcean cluster. DO is going to release their Kubernetes product today. It would be nice to integrate their product with our cluster-api-provider.

The scope of this issue can also include other setups, such as setups with a bastion host or control planes in different regions.

Requirements

Relevant issues:

Before Variants can be implemented, there are changes that must be made to upstream clusterctl to support specifying Variants.

A new ProviderConfig should be implemented for each Variant we plan to support (in our case, we need two ProviderConfigs). Reusing one ProviderConfig is a bad idea, as that can lead to confusion.

This issue will be split into multiple issues once we have action items.

clusterctl: put resources in the kube-system namespace instead of default

When creating a new cluster with clusterctl, all resources are deployed in the default namespace, including controllers, but also cluster and machine objects.

This is bad practice and we should change it as soon as possible. We should either use the kube-system namespace or create a new namespace for those resources.

Create DigitalOcean Secret properly if cluster is not deployed in kube-system namespace

User story

We're creating a Secret with the DigitalOcean API access token in the same namespace as the Cluster object. That Secret is used by the Cluster-API controllers to communicate with the DigitalOcean API.

However, the same Secret is used by DigitalOcean components, including the DO Cloud Controller Manager and the DO CSI plugin, which are deployed in the kube-system namespace by default.

If the Cluster is not deployed in the kube-system namespace, the Secret will not be deployed in kube-system either, and CCM and CSI will fail to start.

Acceptance criteria

  • The DigitalOcean Secret is created in both the kube-system and the Cluster's namespace (see the sketch below).
  • CCM and CSI work as expected when the cluster is not deployed in the kube-system namespace.
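
A sketch of the fix with client-go: create the Secret in both namespaces and tolerate the case where it already exists. The Secret name and key below are illustrative, not necessarily what the provider uses today:

package cloud

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// reconcileDOSecret ensures the DigitalOcean token Secret exists both in the
// Cluster's namespace (for the Cluster-API controllers) and in kube-system
// (for CCM and the CSI plugin).
func reconcileDOSecret(ctx context.Context, client kubernetes.Interface, clusterNamespace, token string) error {
    for _, ns := range []string{clusterNamespace, metav1.NamespaceSystem} {
        secret := &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "digitalocean", Namespace: ns},
            StringData: map[string]string{"access-token": token},
        }
        _, err := client.CoreV1().Secrets(ns).Create(ctx, secret, metav1.CreateOptions{})
        if err != nil && !apierrors.IsAlreadyExists(err) {
            return err
        }
    }
    return nil
}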

Implement static IP management using Floating IPs

User story

Implement static IP management, allowing users to predefine which IP address should be used by a node.

This should be backed by DigitalOcean Floating IPs. Users could reserve an IP and use that IP for a given instance.

How exactly to implement this is up for discussion.

Acceptance criteria

  • The user can choose the Floating IP address to be used for a Machine or for a Cluster (see the sketch below)
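
For reference, godo already covers the DigitalOcean API calls this would need: reserving a Floating IP and assigning it to a Droplet. A minimal sketch (region and Droplet ID are placeholders):

package main

import (
    "context"
    "fmt"
    "log"
    "os"

    "github.com/digitalocean/godo"
)

func main() {
    ctx := context.Background()
    client := godo.NewFromToken(os.Getenv("DIGITALOCEAN_ACCESS_TOKEN"))

    // Reserve a Floating IP in a region without binding it to a Droplet yet.
    fip, _, err := client.FloatingIPs.Create(ctx, &godo.FloatingIPCreateRequest{Region: "fra1"})
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("reserved", fip.IP)

    // Later, assign the reserved IP to the Droplet backing a Machine.
    const dropletID = 12345678 // hypothetical Droplet ID
    if _, _, err := client.FloatingIPActions.Assign(ctx, fip.IP, dropletID); err != nil {
        log.Fatal(err)
    }
}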

UMBRELLA: clusterctl doesn't work with Minikube > 0.28

Hello,

Trying out the getting started part of the README, clusterctl got stuck on the Creating master in namespace "kube-system" step. The underlying problem was this:

$ k logs -n kube-system clusterapi-controllers-59c87fb589-x4jrf digitalocean-machine-controller -f
I0930 09:39:50.136817       1 leaderelection.go:174] attempting to acquire leader lease...
E0930 09:39:50.151867       1 leaderelection.go:224] error retrieving resource lock kube-system/digitalocean-machine-controller: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/digitalocean-machine-controller: dial tcp 127.0.0.1:8443: connect: connection refused

I worked around this by setting hostNetwork: true in the clusterapi-controllers deployment, because otherwise localhost refers to the lo interface in the pod's network namespace and not the root network namespace where the minikube apiserver runs.

Apply consistent tags on Droplets

User story

When creating a Machine (Droplet) we're applying one tag with the UUID of the machine, plus tags specified by the user (optional).

The UUID tag is used by the cluster-api controllers to make sure they are operating on the correct machine.

Besides the UUID tag, we should add several more tags annotating that the Droplet is part of a specific cluster and that the Droplet was created by the cluster-api provider.

Acceptance criteria

  • A tag annotating that the Droplet is part of a specific cluster is added
  • A tag annotating that the Droplet was created by cluster-api is added (see the sketch below)
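
A sketch of what the Droplet create request could look like with godo; the exact tag names are hypothetical and up for discussion:

package cloud

import "github.com/digitalocean/godo"

// buildDropletRequest applies the machine UUID tag we already use, plus
// cluster-membership and provenance tags, plus any user-specified tags.
func buildDropletRequest(machineUID, clusterName string, userTags []string) *godo.DropletCreateRequest {
    tags := append([]string{
        "machine-" + machineUID,
        "cluster-" + clusterName,             // Droplet belongs to this cluster
        "managed-by-cluster-api-provider-do", // Droplet was created by this provider
    }, userTags...)

    return &godo.DropletCreateRequest{
        Name:   clusterName + "-worker-0",
        Region: "fra1",
        Size:   "s-2vcpu-4gb",
        Image:  godo.DropletCreateImage{Slug: "ubuntu-18-04-x64"},
        Tags:   tags,
    }
}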

Document alignment with supported Cluster API and Kubernetes versions

Let's add a section to the README that describes which versions of Cluster API work with which versions of this repository.

Let's also add a section that describes which versions of Kubernetes this provider is able to provision.

Example:
This provider's versions are compatible with the following versions of Cluster API:

                     Cluster API 0.1   Cluster API 0.2
  XYZ Provider 0.1
  XYZ Provider 0.2

This provider's versions are able to install and manage the following versions of Kubernetes:

                     Kubernetes 1.11   Kubernetes 1.12   Kubernetes 1.13   Kubernetes 1.14
  XYZ Provider 0.1
  XYZ Provider 0.2

/kind documentation
/priority important-soon

Add prowjob for e2e test

We have prowjobs for most of the circleci tests except the e2e test because the e2e test is more nuanced.

TODO:

  • Discuss how prowjobs for the e2e test should behave.
  • Create a PR against the kubernetes/test-infra repo to add the prowjob.
  • Verify that the test runs correctly.

/cc @xmudrii

Implement Metrics

In our machine-controller we handle some types of metrics.

We should see whether we can port some of those metrics to the cluster-api provider and expose them like we do for machine-controller.

Exactly how this should be done and what it should look like is up for discussion; a sketch follows.
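
A minimal sketch with prometheus/client_golang; the metric name is hypothetical:

package controllers

import (
    "net/http"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

// dropletCreates counts Droplet create calls, partitioned by result, similar
// in spirit to the metrics exposed by machine-controller.
var dropletCreates = prometheus.NewCounterVec(
    prometheus.CounterOpts{
        Name: "capdo_droplet_creates_total",
        Help: "Number of Droplet create calls, partitioned by result.",
    },
    []string{"result"},
)

func init() {
    prometheus.MustRegister(dropletCreates)
}

// serveMetrics exposes the registered metrics for Prometheus to scrape.
func serveMetrics(addr string) error {
    http.Handle("/metrics", promhttp.Handler())
    return http.ListenAndServe(addr, nil)
}

The actuator would then call dropletCreates.WithLabelValues("success").Inc() (or "error") after each API call.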

Build and share Custom Images with Kubernetes installed and configured

Relevant to #92

User story

We could build and test images ourselves and provide them for Cluster-API DO provider users. That would let users get started with the provider more easily, with everything tested and preconfigured. Users wouldn't have to take care of Bash scripts, which would make it easier to operate with clusterctl.

On the project-maintenance side, we could potentially deprecate the Bash scripts (i.e. stop hosting them ourselves while still letting users use them), or provide additional features, such as shipping custom distributions of cloud-init that have better Kubernetes support.

This is already done by some Cluster-API providers, such as AWS, which provides AMIs with Kubernetes installed and pre-configured.

Research status

We have researched using DigitalOcean Custom Images with Kubernetes installed and configured. So far it has worked very well. Bootstrapping was fast, as everything was already in place.

There are some problems with cloud-init not working properly, and we're in contact with DigitalOcean about resolving that.

Prerequisites

  • It is possible to share images among users and customers. One possibility is to store images in S3, i.e. Spaces, and allow users to import images by URL.

Acceptance criteria

  • We have a mechanism for building images
  • We have images built and shared so users can fetch them
  • Documentation is updated to specify how to use pre-built images and which ones are available

Provided operating systems and versions

The operating systems supported could be Ubuntu 16.04 and 18.04, CentOS 7, and CoreOS.

The supported Kubernetes versions could be 1.11 and 1.12.

Whether to use Docker or containerd is still to be decided.

Building images

The problem that arises here is how to build images.

We could store bash scripts and use those scripts to build images. Or, we could use Ansible and Packer to build and test images.

Support cluster-api v1alpha2

CAPI already has the v1alpha2 API types. It seems this Cluster API provider doesn't support them yet.
I am interested in updating the repository to support them. I have already started doing this and will open a PR when ready. The progress can be tracked here.

/cc @cluster-api-do-maintainers

Correctly handle Certificates

User story

In #72 we added structures to the machine actuator and to the bootstrap scripts allowing custom Certificates to be installed.

However, it currently isn't possible to provide certificates, and it's not documented how this works.

To make this feature usable, it is important to define how certificates should be provided. One idea is to add appropriate structures to the Cluster providerConfig, such as:

apiVersion: "cluster.k8s.io/v1alpha1"
kind: Cluster
metadata:
  name: $CLUSTER_NAME
  namespace: $NAMESPACE
spec:
    clusterNetwork:
        services:
            cidrBlocks: ["10.96.0.0/12"]
        pods:
            cidrBlocks: ["10.244.0.0/16"]
        serviceDomain: "cluster.local"
    providerConfig:
      value:
        apiVersion: "digitaloceanproviderconfig/v1alpha1"
        kind: "DigitaloceanClusterProviderConfig"
        certificates:
          ca-cert: <base64-encoded-ca-cert>
          ca-key: <base64-encoded-ca-key>

If certificates are not provided by the operator, we have two options:

  1. Let kubeadm generate certificates automatically (this is what we currently do),
  2. Generate certificates manually, on the operator's machine, and provide them to the cluster like custom certificates.

The second approach would allow us to refactor the GetKubeConfig function to build the kubeconfig file from the certificates, rather than downloading it over SSH. This would improve stability and speed by a large margin.

For example, a similar approach is used by the AWS provider; a sketch follows.
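
A sketch of that refactor using client-go's clientcmd API, assuming the client certificate is signed by the cluster CA (the function name is hypothetical):

package cloud

import (
    "k8s.io/client-go/tools/clientcmd"
    clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// buildKubeconfig assembles a kubeconfig from the cluster CA and an admin
// client certificate instead of downloading the file over SSH.
func buildKubeconfig(server string, caCert, clientCert, clientKey []byte) ([]byte, error) {
    cfg := clientcmdapi.Config{
        Clusters: map[string]*clientcmdapi.Cluster{
            "cluster": {Server: server, CertificateAuthorityData: caCert},
        },
        AuthInfos: map[string]*clientcmdapi.AuthInfo{
            "admin": {ClientCertificateData: clientCert, ClientKeyData: clientKey},
        },
        Contexts: map[string]*clientcmdapi.Context{
            "default": {Cluster: "cluster", AuthInfo: "admin"},
        },
        CurrentContext: "default",
    }
    return clientcmd.Write(cfg)
}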

Acceptance criteria

  • User can specify custom certificates to be used by the cluster
  • Potentially, certificate generation is handled by us rather than by kubeadm

Add e2e test for MachineDeployment api

Detailed Description

We have added an initial e2e test in PR #148. That PR creates a cluster with a single control plane and one worker, both of which use only the Machine API, so the MachineDeployment API is not covered yet.

We need to add an e2e test for the MachineDeployment API by creating a cluster with a single control plane using the Machine API and multiple workers using MachineDeployment. It would be better if we also add a scaling scenario.

Make sure Private Networking is used correctly by the cluster

User story

On clusters with Private Networking enabled, it is important to be sure that the Cluster is utilizing the Private Networking interface correctly. In other words, all in-cluster communication should happen only over the private interface.

I've done some tests and it's most likely that this isn't working as expected.

I had a cluster with Sonobuoy running conformance tests. The following screenshots show the networking stats from the master and from one of the nodes.

Networking utilization on Master:

[screenshot: interface bandwidth graphs]

The Private Networking interface utilization is always at zero, while there is always some activity on the public interface.

Networking utilization on one of Nodes:

[screenshot: interface bandwidth graphs]

Same as for the master, the Private interface is never being used.

However, I'm not sure whether this is a good indicator of whether private networking works as expected, but I find it weird that it's never being used.

Acceptance criteria

  • In-cluster communication is done only over Private Networking if it is enabled
  • If possible and appropriate, Private Networking is used wherever it can be, such as for proxy masquerading

Potential decisions and changes

  • Should we enforce Private Networking on all clusters instead of allowing users to choose?

Relevant: #35, #34, #70

How to bump cluster-api version ?

Hey,
I would also find it really interesting to update the cluster-api version in this project, e.g. to use kind as a bootstrapper. Currently I have no real clue how to update the cluster-api library and test this.

Thanks for your feedback on this

Drain and cordon node before upgrading

User story

When upgrading nodes we're just deleting the Droplet without checking whether anything is scheduled on it. This is very disruptive and can lead to application failures and downtime.

Acceptance criteria

  • Before upgrading a node, we should cordon and drain it (see the sketch below)
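
A sketch using the drain helpers from k8s.io/kubectl; field names have shifted across releases (DeleteEmptyDirData was previously DeleteLocalData), so treat this as illustrative:

package cloud

import (
    "context"
    "os"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/kubectl/pkg/drain"
)

// cordonAndDrain marks the node unschedulable and evicts its pods before we
// touch the underlying Droplet.
func cordonAndDrain(ctx context.Context, client kubernetes.Interface, node *corev1.Node) error {
    helper := &drain.Helper{
        Ctx:                 ctx,
        Client:              client,
        Force:               true, // also evict pods not managed by a controller
        IgnoreAllDaemonSets: true,
        DeleteEmptyDirData:  true,
        GracePeriodSeconds:  -1, // use each pod's own grace period
        Out:                 os.Stdout,
        ErrOut:              os.Stderr,
    }
    if err := drain.RunCordonOrUncordon(helper, node, true); err != nil {
        return err
    }
    return drain.RunNodeDrain(helper, node.Name)
}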

Do not install prips on nodes

User story

On the master we install prips to get a specific IP address of the subnet. The nodes have no need for that, so we shouldn't install prips there.

Acceptance criteria

  • Prips is not installed on nodes anymore.

clusterctl: verify and correctly handle private networking

We should verify that when the Private Networking feature is enabled, the cluster uses that interface whenever possible.

Currently, as far as observed (but not tested), the cluster uses Public Networking for all needs. Bandwidth used over Public Networking is charged, while bandwidth used over Private Networking is free.

We should research how we can allow users to fully utilize the private interface to save bandwidth.

We also need to be sure that, if we change the bootstrap script, it will still work correctly when private networking is not enabled.

Depends on #34

Potential places to set the IP address/interface to be used:

  • API Server: --advertise-address flag,
  • Kubelet: IP address used with kubeadm join,
  • Kubelet: --node-ip flag.

UMBRELLA: Cluster Actuator

Issue for discussions about Cluster Actuator.

For now we don't have a Cluster Controller/Actuator implemented, nor do we deploy a cluster controller on clusters.

The reasoning behind that is that there are no appropriate resources to be managed by a cluster controller at this point.

Potentially, cluster controller could manage:

  • IP addresses - making sure the API endpoints for the cluster are correct; it could also manage Floating IPs.
  • Tags - cleaning up tags related to the cluster if they're empty.
  • Firewalls - allowing users to create a Firewall to additionally protect a cluster. The problem here is that users would have to manually open the ports needed by their applications, as the DO CCM doesn't handle that. This is a planned feature for CCM.

Clean-up and improve documentation

We should clean up the documentation and make sure it is up-to-date with the latest changes to the project.

The following tasks must be accomplished before closing the issue:

  • Make sure instructions in README are correct and working
  • Revisit the project status. In the README we say that the project is a work-in-progress. As most things are done, it could make sense to graduate the project to alpha or beta.
  • Remove TODO markers that are not relevant anymore or create issues for ones that are relevant.
  • Add READMEs to important packages such as cloud/digitalocean.

The following tasks can be accomplished but are not required:

  • Move development instructions to CONTRIBUTING.md.
  • Add release instructions and release cycle info to CONTRIBUTING.md.
  • Document contributing and code review processes.

[Testing] Implement unit tests and testing framework for DO

We should implement unit tests for the components that can be tested in isolation. That includes helper functions, but also some parts of machine-controller and cluster-controller.

The machine-controller and cluster-controller unit tests may require us to mock the DigitalOcean API. We could implement a simple DO testing framework that mocks the parts of the API we're using; a sketch follows.
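
A sketch of such a mock with net/http/httptest, pointing godo at the fake server via its SetBaseURL client option:

package cloud_test

import (
    "context"
    "fmt"
    "net/http"
    "net/http/httptest"
    "testing"

    "github.com/digitalocean/godo"
)

func TestGetDroplet(t *testing.T) {
    // Fake DigitalOcean API that serves a canned droplet.
    server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if r.URL.Path != "/v2/droplets/123" {
            http.NotFound(w, r)
            return
        }
        fmt.Fprint(w, `{"droplet": {"id": 123, "name": "test-droplet"}}`)
    }))
    defer server.Close()

    client, err := godo.New(http.DefaultClient, godo.SetBaseURL(server.URL))
    if err != nil {
        t.Fatal(err)
    }

    droplet, _, err := client.Droplets.Get(context.Background(), 123)
    if err != nil {
        t.Fatal(err)
    }
    if droplet.Name != "test-droplet" {
        t.Errorf("got droplet name %q, want %q", droplet.Name, "test-droplet")
    }
}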

machine-controller: handle node updates/upgrades

The Update method of the machine-controller is not implemented. Updating the machines has no effect.

The update process for nodes is less complicated than for masters. It could look something like this:

  1. Create a new node,
  2. Wait for it to become ready,
  3. Cordon old node,
  4. Once it doesn't have any pods on it, delete the machine.

ci: create quay.io robot account for project's circleci

CircleCI will automatically push a new release to quay.io when it is tagged, but at this point that will fail, as there are no credentials set. A robot account should be created, and its credentials added as secret environment variables on CircleCI.

Also, we should make sure that the secret environment variables are not enabled on forks.

Drain and cordon node before deleting the Droplet

Relevant to #96

User story

When deleting nodes we're just deleting the Droplet without checking whether anything is scheduled on it. This is very disruptive and can lead to application failures and downtime.

Acceptance criteria

  • Before deleting a node, we should cordon and drain it (as sketched in the upgrade issue above)

machine-controller: master in-place upgrades

The Update method of the machine-controller is not implemented. Updating the machines has no effect.

For master instances, the update/upgrade process is a bit different than for nodes. To ensure minimal downtime, with a workflow that is easier to implement and follow, in-place master upgrades could be the best solution.

In-place upgrading means we execute appropriate commands over SSH that upgrade the cluster to a newer version. We could use kubeadm upgrade for this.

The implementation details are up for discussion; a sketch follows.
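
A sketch of driving kubeadm upgrade over SSH with golang.org/x/crypto/ssh; the command line and function name are illustrative:

package cloud

import (
    "fmt"

    "golang.org/x/crypto/ssh"
)

// upgradeControlPlane runs an in-place kubeadm upgrade on the master node.
func upgradeControlPlane(addr string, config *ssh.ClientConfig, version string) error {
    client, err := ssh.Dial("tcp", addr, config)
    if err != nil {
        return fmt.Errorf("dialing %s: %w", addr, err)
    }
    defer client.Close()

    session, err := client.NewSession()
    if err != nil {
        return err
    }
    defer session.Close()

    // kubeadm pre-pulls images, validates the plan, and applies the version.
    cmd := fmt.Sprintf("sudo kubeadm upgrade apply -y %s", version)
    if out, err := session.CombinedOutput(cmd); err != nil {
        return fmt.Errorf("kubeadm upgrade failed: %v: %s", err, out)
    }
    return nil
}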

Find a better way to handle Flannel CNI

User story

We're deploying the Flannel CNI in the Provider Components manifest, as part of the bootstrap script. This is bad practice: per the feature model, CNI should not be handled by Cluster API.

At this point it isn't a problem, as the bootstrap script and the provider components manifest are easily customizable, but it may become a problem if we move the bootstrap scripts out of the repository or once we implement custom images.

Instead, we should use the cluster add-ons if possible or find a better way to handle CNIs.

Using the cluster add-ons could be a great fit, but add-ons are deployed only once the cluster is ready, and the cluster will never become ready if no CNI is deployed.

This is also important so operators can choose CNI other than Flannel.

Acceptance criteria

  • Flannel CNI can be deployed as Cluster Add-on or there is a better way to handle CNIs.

UMBRELLA: Move Cluster API provider for DigitalOcean to Upstream

User story

At the Cluster-API meetings held on 8/22 and 9/19, we discussed making the Cluster API provider for DigitalOcean a Cluster-API project and moving it from the kubermatic organization to kubernetes-sigs.

The overall feedback is positive, but we need to fulfill the ownership requirement. The latest requirement is to have multiple owners from multiple companies.

This issue is for following up and discussing how we can continue the process.

Prerequisites

  • Add CONTRIBUTING.md (#91)
  • Establish OWNERS and OWNER_ALIASES files (#107, #112)
  • Add SECURITY_CONTACTS file (#108)
  • Add copyright boilerplate header to all files (#110)
  • Add script to verify boilerplate header (#111)
  • License Auditing
  • Creating an issue on kubernetes/org (kubernetes/org#196)
  • Enabling the Circle CI status contexts as required in kubernetes/test-infra (kubernetes/test-infra#9938)
  • Update prow/config.yaml in kubernetes/test-infra to set merge method (kubernetes/test-infra#9938)
  • Disable integration with Kubermatic internal Prow
  • Ensure relevant webhooks are enabled.
    • cncf-cla webhook
    • prow's webhook
  • Update CONTRIBUTING.md to mention Kubernetes slack channels.
  • Find how to correctly handle Docker images
  • Update README and docs to use new repository URL
  • Update Go import paths to sigs.k8s.io/cluster-api-provider-digitalocean/...
  • Update sigs.yaml file in kubernetes/community (kubernetes/community#2864)

Acceptance criteria

  • Cluster-API provider for DigitalOcean is upstreamed.

Allow HA cluster setups

cluster-api-provider-digitalocean can only bootstrap a single-master cluster right now.

This is currently an upstream issue; an update will follow.

RBAC error

Hi,
there seems to be an issue with RBAC. I followed the getting started doc and I'm getting this error:

E0626 19:30:55.727687 1 reflector.go:134] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to list *v1alpha1.Machine: machines.cluster.k8s.io is forbidden: User "system:serviceaccount:do-provider-system:default" cannot list resource "machines" in API group "cluster.k8s.io" at the cluster scope
E0626 19:30:56.017643 1 reflector.go:134] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to list *v1alpha1.Cluster: clusters.cluster.k8s.io is forbidden: User "system:serviceaccount:do-provider-system:default" cannot list resource "clusters" in API group "cluster.k8s.io" at the cluster scope

Create a Project and add all relevant resources to that Project automatically

User story

DigitalOcean organizes resources with Projects. Compared to tags, Projects allow all relevant resources to be associated with the project, not only Droplets. That includes Load Balancers and Block Storage Volumes.

The user should be able to specify which Project should be used for the Cluster. If no Project is specified, a new Project should be created for that cluster. That makes it easier to operate the cluster, as unrelated resources will end up in other Projects.

Prerequisites

  • DigitalOcean API supports Projects
  • DigitalOcean Cloud Controller Manager supports Projects
  • DigitalOcean CSI plugin supports Projects

Acceptance criteria

  • Resources created by the Cluster API provider are associated with a non-default, user-specified or automatically created Project (see the sketch below).
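
For reference, godo already exposes the Projects API. A minimal sketch of creating a project and assigning a Droplet to it by URN; names and IDs are placeholders:

package main

import (
    "context"
    "fmt"
    "log"
    "os"

    "github.com/digitalocean/godo"
)

func main() {
    ctx := context.Background()
    client := godo.NewFromToken(os.Getenv("DIGITALOCEAN_ACCESS_TOKEN"))

    // Create a dedicated project when the user didn't specify one.
    project, _, err := client.Projects.Create(ctx, &godo.CreateProjectRequest{
        Name:        "my-cluster",
        Purpose:     "Kubernetes cluster managed by cluster-api",
        Environment: "Production",
    })
    if err != nil {
        log.Fatal(err)
    }

    // Associate a Droplet with the project via its URN ("do:droplet:<id>").
    if _, _, err := client.Projects.AssignResources(ctx, project.ID, "do:droplet:12345678"); err != nil {
        log.Fatal(err)
    }
    fmt.Println("assigned resources to project", project.ID)
}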
