
stolon-chart's Introduction

Helm chart to install Stolon (HA PostgreSQL cluster)

Stolon is a cloud native PostgreSQL manager for PostgreSQL high availability. It's cloud native because it lets you keep a highly available PostgreSQL inside your containers (Kubernetes integration), but it also works on every other kind of infrastructure (cloud IaaS, old-style infrastructures, etc.).

The chart is partially based on the statefulset example from the stolon repo.

This chart was accepted into the official kubernetes/charts repository and can be installed from there as well, but the two charts are not interchangeable: https://github.com/helm/charts/tree/master/stable/stolon

Requirements

  • Kubernetes >1.5
  • PV support on the underlying infrastructure
  • Helm >= 2.2.0 (for conditions and flags support)
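
A typical installation might look like the following (a sketch; the repository alias is an assumption, and charts.lwolf.org is the chart repository referenced later in this document):

    # add the chart repository and install with a release name and superuser password
    helm repo add lwolf-charts https://charts.lwolf.org
    helm repo update
    helm install --name my-stolon lwolf-charts/stolon \
      --set superuserPassword=changeme \
      --set replicationPassword=changeme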

TODO:

  • Automate initial stolon cluster creation
  • Do not manage etcd dependency, do not rely on etcd chart
  • Add support for consul backend
  • Add support for kubernetes backend (experimental)
  • Add support for Kubernetes 1.6

Support the project

If you want to support the project, you can do it using Beerpay.


stolon-chart's People

Contributors

elexy, flowkap, jeremyxu2010, lwolf, negashev


stolon-chart's Issues

upgrade to stolon v0.9

  • upgrade docker image version to stolon 0.9 (pg9.6 vs pg10 as default)
  • add support for etcdv2 and etcdv3 stores
  • add support for --store-prefix

Load only distributed to one of the keepers

I'm not sure if this is expected or not, but I'm using a 3-replica cluster with your chart, and directing my node.js application at it using the service endpoint, like so:

postgres://username:[email protected]/db

I have three web application pods making query requests against this connection string and what I'm seeing is that only one of the keeper pods is taking all of the load of the application. Is this expected? Is it possible to use all three replicas in round robin fashion (or whatever) to share the load?

Ability to configure postgres settings

Hi, thanks for the great postgres chart.
Just wondering, is it possible to change the postgres parameters contained in the stolon configmap annotations?
I want to change the maximum connections parameter.

Thanks
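
For reference, the chart exposes a pgParameters value that is applied at cluster creation, and parameters can also be changed on a running cluster with stolonctl; a sketch, assuming the kubernetes store backend and the cluster name kube-stolon (adjust both to your deployment):

    stolonctl --cluster-name kube-stolon \
      --store-backend kubernetes --kube-resource-kind configmap \
      update --patch '{"pgParameters": {"max_connections": "1000"}}'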

our keeper data is not available, waiting for it to appear

Hello;
I have a bare-metal Kubernetes 1.18 cluster and attempted to install stolon, but it gives me the following error:

2020-04-11T13:31:58.388Z INFO cmd/keeper.go:976 our keeper data is not available, waiting for it to appear
2020-04-11T13:31:58.461Z ERROR cmd/keeper.go:778 failed to update keeper info {"error": "update failed: failed to get latest version of pod: v1.Pod.ObjectMeta: v1.ObjectMeta.readObjectFieldAsBytes: expect : after object field, but found u, error found in #10 byte of ...|:{},\"k:{\\\"uid\\\":\\\"e0|..., bigger context ...|on-cluster\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"e0956964-6d30-4afb-97dc-1d0ae4500447\\\"}\":{|..."}

I've seen that many people have faced this problem, but I couldn't find any solutions.
Is there a way to accomplish the installation, or a planned fix to make it run?

Thanks & Regards

persistence vs persistentVolume variable naming

In commit 1cd50e4, the chart was changed but the values.yaml wasn't.

I believe it should have a change such as:

-    {{- if .Values.persistentVolume.storageClassName }}
-    {{- if (eq "-" .Values.persistentVolume.storageClassName) }}
+    {{- if .Values.persistence.storageClassName }}
+    {{- if (eq "-" .Values.persistence.storageClassName) }}

-        storageClassName: "{{ .Values.persistentVolume.storageClassName }}"
+        storageClassName: "{{ .Values.persistence.storageClassName }}"

stolon chart deployment not working

Hi all!
I set up a VM with minikube on one of the hosts I manage and tested with simple pods.
I tried the stolon chart: the installation runs smoothly (no errors reported, running with the debug option enabled), but the service pods (proxy, sentinel, keeper) and the psql pods all enter an error state, and psql pods are created repeatedly (I counted over 300 pods when left running for some time).
I tried reducing overall resource consumption by lowering the replica count to 1, the memory request to 256, and the CPU to 50m, but with no success.
I didn't change anything except the above-mentioned numbers in the chart files.
Logs are uploaded on gist here: https://gist.github.com/solidiris/362b6e5b29559e0a13680a5ded025d41
Have you ever encountered the same problem? Is there something wrong?

Thank you

cannot set STKEEPER_PG_LISTEN_ADDRESS to *

What happened: I am deploying the stolon helm chart with istio proxy sidecar injection enabled. As per the istio documentation, we need to set the listen address to * so that clients can connect to the database. To do that I set the environment variable STKEEPER_PG_LISTEN_ADDRESS to *. The keeper, however, still continues to listen on the $POD_IP.

    Ports:         8080/TCP, 5432/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      /bin/bash
      -ec
      # Generate our keeper uid using the pod index
      IFS='-' read -ra ADDR <<< "$(hostname)"
      export STKEEPER_UID="keeper${ADDR[-1]}"
      export POD_IP=$(hostname -i)
      export STKEEPER_PG_LISTEN_ADDRESS=$POD_IP
      export STOLON_DATA=/stolon-data
      chown stolon:stolon $STOLON_DATA
      exec gosu stolon stolon-keeper --data-dir $STOLON_DATA

    State:          Running
      Started:      Wed, 14 Oct 2020 16:26:12 +0530
    Ready:          True
    Restart Count:  0
    Environment:
      POD_NAME:                         empirix-stolon-keeper-0 (v1:metadata.name)
      STKEEPER_CLUSTER_NAME:            empirix-stolon
      STKEEPER_STORE_BACKEND:           kubernetes
      STKEEPER_KUBE_RESOURCE_KIND:      configmap
      STKEEPER_PG_REPL_USERNAME:        <set to the key 'pg_repl_username' in secret 'db-replica'>  Optional: false
      STKEEPER_PG_REPL_PASSWORDFILE:    /etc/secrets/stolon-db-replica/pg_repl_password
      STKEEPER_PG_SU_USERNAME:          <set to the key 'pg_su_username' in secret 'db-admin'>  Optional: false
      STKEEPER_PG_SU_PASSWORDFILE:      /etc/secrets/stolon-db-admin/pg_su_password
      STKEEPER_METRICS_LISTEN_ADDRESS:  0.0.0.0:8080
      STKEEPER_DEBUG:                   false
      STKEEPER_PG_LISTEN_ADDRESS:       *
    Mounts:
      /etc/secrets/stolon-db-admin from stolon-secret-db-admin (rw)
      /etc/secrets/stolon-db-replica from stolon-secret-db-replica (rw)
      /stolon-data from data (rw)

What you expected to happen: Setting STKEEPER_PG_LISTEN_ADDRESS to * should be honoured and not be overwritten by the command.

Without this working the database cannot work with istio sidecar injection enabled.

I need to set these two variables, but that is currently not possible:

    - name: STKEEPER_PG_LISTEN_ADDRESS
      value: "*"
    - name: STKEEPER_PG_ADVERTISE_ADDRESS
      value: "$POD_IP"
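
A possible workaround, sketched under two assumptions: that the chart's keeper startup script can be overridden, and that the stolon release in use supports STKEEPER_PG_ADVERTISE_ADDRESS (the pg-advertise-address option exists in recent stolon versions). The idea is to listen on all interfaces for istio but advertise the routable pod IP:

    # hypothetical replacement for the tail of the generated keeper command
    export POD_IP=$(hostname -i)
    export STKEEPER_PG_LISTEN_ADDRESS="*"           # listen on all interfaces (istio requirement)
    export STKEEPER_PG_ADVERTISE_ADDRESS=$POD_IP    # advertise the pod IP to the rest of the cluster
    export STOLON_DATA=/stolon-data
    chown stolon:stolon $STOLON_DATA
    exec gosu stolon stolon-keeper --data-dir $STOLON_DATA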

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Stolon version: helm 1.6.2 stolon 0.16.0
  • Stolon running environment (if useful to understand the bug):
  • Others:

How to run stolon as non-root user inside containers

There are environments where applications inside containers must not be executed as the root user, but as a non-root user.
Currently stolon is executed as the root user inside the container. Is there a way to make it run as non-root?

Non-root user should be used inside all of the containers: keepers, sentinel and proxy containers.
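
A minimal sketch of what this could look like, assuming the chart exposed a pod-level securityContext for each component (the UID is an assumption; the stolon image defines a stolon user, but its UID should be verified against the image in use):

    # hypothetical values fragment applied to keeper/sentinel/proxy pod specs
    securityContext:
      runAsUser: 1000       # assumed UID of the non-root "stolon" user in the image
      runAsNonRoot: true
      fsGroup: 1000         # make the data volume writable by that user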

This topic is a duplicate of #29.
(Can/should these be merged together?)

Upgrade problems

In the official chart, the app label is not updated after creation:

app: {{ template "stolon.name" . }}
https://github.com/helm/charts/blob/4527ebe8a36180ddd16973acddf476abe7264cfe/stable/stolon/templates/proxy-service.yaml#L6

In the lwolf chart, an upgrade always fails when a new version is deployed; here the label is

app: {{ template "stolon.proxy.fullname" . }}

and the app label change causes problems on Rancher. Installing fails with:
Failed to install app stolon-p-nkww4. Error: UPGRADE FAILED: Deployment.apps "stolon-p-nkww4-stolon-proxy" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"release":"stolon-p-nkww4", "stolon-cluster":"kube-stolon", "app":"stolon-p-nkww4-stolon-proxy", "chart":"stolon-0.8.1", "component":"stolon-proxy"}: `selector` does not match template `labels` && Deployment.apps "stolon-p-nkww4-stolon-sentinel" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"chart":"stolon-0.8.1", "component":"stolon-sentinel", "release":"stolon-p-nkww4", "stolon-cluster":"kube-stolon", "stolon-sentinel":"true", "app":"stolon-p-nkww4-stolon-sentinel"}: `selector` does not match template `labels` && StatefulSet.apps "stolon-p-nkww4-stolon-keeper" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"chart":"stolon-0.8.1", "component":"stolon-keeper", "release":"stolon-p-nkww4", "stolon-cluster":"kube-stolon", "app":"stolon-p-nkww4-stolon-keeper"}: `selector` does not match template `labels`

Error: chart metadata (Chart.yaml) missing

Running helm dep build gives an Error: chart metadata (Chart.yaml) missing error. Am I doing anything wrong?

I'm running the versions below:

[root@kube-master stolon-chart]# helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}

[root@kube-master stolon-chart]# kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T16:51:36Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
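
This error usually means helm dep build was pointed at a directory that does not contain Chart.yaml. A minimal sketch, assuming the chart sources live in a stolon/ subdirectory of this repository:

    # run from the repository root; "stolon" is assumed to be the directory with Chart.yaml
    helm dep build ./stolon

    # or change into the chart directory first
    cd stolon && helm dep build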

Publish 0.2.0 chart to repo

Are there plans to publish the current chart version (0.2.0) on charts.lwolf.org? The latest version there is still 0.1.1.

Data persistence is not working in postgres with this chart version

Hi, we have a problem: when we install the chart for the first time, delete the release, and install again, all the data we added is lost.

We found that the cluster-create-job.yaml file is responsible for creating the cluster, and it doesn't need to run twice. Is there any way to check the cluster status and not run the create-cluster job every time? It resets the cluster and deletes all the data.

Please let me know what the solution for this is.

I always set autoCreateCluster: true and autoUpdateClusterSpec: true. I guess this is what is creating the issue for me (a possible workaround is sketched after the values below).

Below is the values.yaml I use:

image:
  repository: sorintlab/stolon
  tag: v0.13.0-pg10
  pullPolicy: IfNotPresent

# used by create-cluster-job when store.backend is etcd
etcdImage:
  repository: k8s.gcr.io/etcd-amd64
  tag: 2.3.7
  pullPolicy: IfNotPresent

debug: false

persistence:
  enabled: true
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  size: 10Gi

rbac:
  create: true

serviceAccount:
  create: true
  # The name of the ServiceAccount to use. If not set and create is true, a name is generated using the fullname template
  name:

superuserSecret:
  name: ""
  usernameKey: pg_su_username
  passwordKey: pg_su_password

replicationSecret:
  name: ""
  usernameKey: pg_repl_username
  passwordKey: pg_repl_password

superuserUsername: "stolon"
## password for the superuser (REQUIRED if superuserSecret is not set)
superuserPassword:

replicationUsername: "repluser"
## password for the replication user (REQUIRED if replicationSecret is not set)
replicationPassword:

## backend could be one of the following: consul, etcdv2, etcdv3 or kubernetes
store:
  backend: kubernetes
#  endpoints: "http://stolon-consul:8500"
  kubeResourceKind: configmap

pgParameters: {}
  # maxConnections: 1000

ports:
  stolon:
    containerPort: 5432
  metrics:
    containerPort: 8080

job:
  autoCreateCluster: true
  autoUpdateClusterSpec: true

clusterSpec: {}
  # sleepInterval: 1s
  # maxStandbys: 5

keeper:
  replicaCount: 2
  annotations: {}
  resources: {}
  priorityClassName: ""
  service:
    type: ClusterIP
    annotations: {}
    ports:
      keeper:
        port: 5432
        targetPort: 5432
        protocol: TCP
  nodeSelector: {}
  affinity: {}
  tolerations: []
  volumes: []
  volumeMounts: []
  hooks:
    failKeeper:
      enabled: false
  podDisruptionBudget:
    # minAvailable: 1
    # maxUnavailable: 1

proxy:
  replicaCount: 2
  annotations: {}
  resources: {}
  priorityClassName: ""
  service:
    type: ClusterIP
#    loadBalancerIP: ""
    annotations: {}
    ports:
      proxy:
        port: 5432
        targetPort: 5432
        protocol: TCP
  nodeSelector: {}
  affinity: {}
  tolerations: []
  podDisruptionBudget:
    # minAvailable: 1
    # maxUnavailable: 1

sentinel:
  replicaCount: 2
  annotations: {}
  resources: {}
  priorityClassName: ""
  nodeSelector: {}
  affinity: {}
  tolerations: []
  podDisruptionBudget:
    # minAvailable: 1
    # maxUnavailable: 1
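
A possible workaround under these values (a sketch, not a verified fix): keep autoCreateCluster enabled only for the very first install, then disable both job toggles before any reinstall or upgrade so the create-cluster job cannot reset the existing store:

    # after the initial install has succeeded (release name is an assumption)
    helm upgrade my-release . \
      --set job.autoCreateCluster=false \
      --set job.autoUpdateClusterSpec=false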

cannot install stolon-chart

When I ran the following command, it generated an error. Do I need to do anything else before issuing this command?

▶ helm install --name my-release -f values.yaml .
Error: release my-release failed: roles.rbac.authorization.k8s.io "my-release-stolon" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["*"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["*"]}] user=&{system:serviceaccount:kube-system:tiller 66ab3855-5185-11e8-9cac-080027e29233 [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]
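
This is the classic Helm 2 RBAC error: Tiller runs under a service account that lacks the privileges the chart's Role tries to grant. A common fix is to give Tiller its own service account (a sketch; cluster-admin is very broad, so scope it down for production clusters):

    kubectl create serviceaccount tiller --namespace kube-system
    kubectl create clusterrolebinding tiller \
      --clusterrole=cluster-admin \
      --serviceaccount=kube-system:tiller
    helm init --service-account tiller --upgrade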

Release.IsInstall not being set to true during helm install

I was referred to Stolon while working on a helm chart for a rails app recently, and the first problem I observed was that postgres was not being initialized on the keeper pods. After reading through some blog comments, I traced the problem to this line:

{{ if .Release.IsInstall }}

When I comment out the line, postgres initializes and starts up normally. I am not sure why IsInstall would not be set to true, as this is the invocation I'm using for helm:

helm install --namespace foobar --name some-release ./my-chart-dir

You asked me to file an issue, and I'm happy to do so. I will follow up with whatever else I can find out about this behavior. It does not seem like Release.IsInstall is behaving as it should.

User cannot get configmaps in the namespace

Hello,
I am deploying this in Kubernetes and getting the following errors; any idea what I am missing?
I am deploying this as a non-admin user.

$ kubectl logs stolon-sentinel-7754964b89-8vmv4
2019-01-29T21:22:26.067Z INFO cmd/sentinel.go:1962 sentinel uid {"uid": "dfa105e8"}
2019-01-29T21:22:26.071Z INFO cmd/sentinel.go:80 Trying to acquire sentinels leadership
ERROR: logging before flag.Parse: I0129 21:22:26.071509 1 leaderelection.go:174] attempting to acquire leader lease...
ERROR: logging before flag.Parse: E0129 21:22:26.146967 1 leaderelection.go:224] error retrieving resource lock k8poc-sathya/stolon-cluster-kube-stolon: configmaps "stolon-cluster-kube-stolon" is forbidden: User "system:serviceaccount:k8poc-sathya:default" cannot get configmaps in the namespace "k8poc-sathya"
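
The log shows the sentinel running as the namespace's default service account (system:serviceaccount:k8poc-sathya:default), which has no access to configmaps. With this chart, the first thing to try is letting it create its own RBAC objects and service account, as in the values.yaml shown earlier:

    # values fragment: create a dedicated ServiceAccount plus Role/RoleBinding
    rbac:
      create: true
    serviceAccount:
      create: true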

Etcd cluster in CrashLoopBackOff loop

I've been running a Stolon cluster for about a week (very successfully), but today I noticed that I have lost an etcd pod completely and another is in a CrashLoopBackOff cycle:

postgresql               postgresql-etcd-0                                  0/1       CrashLoopBackOff   159        13h
postgresql               postgresql-etcd-2                                  1/1       Running            0          1d
postgresql               postgresql-stolon-keeper-0                         1/1       Running            0          13h
postgresql               postgresql-stolon-keeper-1                         1/1       Running            0          1d
postgresql               postgresql-stolon-keeper-2                         1/1       Running            0          1d
postgresql               postgresql-stolon-proxy-3377369672-4vq28           0/1       Running            0          13h
postgresql               postgresql-stolon-proxy-3377369672-5jsd5           0/1       Running            0          13h
postgresql               postgresql-stolon-proxy-3377369672-qrxm6           0/1       Running            0          1d
postgresql               postgresql-stolon-sentinel-2884560845-fwc9w        1/1       Running            0          1d
postgresql               postgresql-stolon-sentinel-2884560845-r34nv        1/1       Running            0          13h
postgresql               postgresql-stolon-sentinel-2884560845-wgp4q        1/1       Running            0          1d

The logs for postgresql-etcd-0 are the following:

Re-joining etcd member
cat: can't open '/var/run/etcd/member_id': No such file or directory

Have you seen this before? Is there any way to easily restart the etcd portion of the cluster manually?
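
One possible recovery, sketched under the assumption that the etcd statefulset uses one PVC per pod and that the remaining members still have quorum: wipe only the broken member's volume so it rejoins as a fresh member. The PVC name below is a guess derived from the pod name; verify it with kubectl get pvc first.

    # WARNING: destroys local state of the broken member only; the other members must be healthy
    kubectl -n postgresql delete pvc data-postgresql-etcd-0   # PVC name is an assumption
    kubectl -n postgresql delete pod postgresql-etcd-0        # the statefulset recreates the pod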

Fails to get installed on Rancher v2

Hello,

I am trying to install Stolon on Rancher v2 and I get the following error:

Installing Helm template failed. Error: render error in "stolon/templates/secret.yaml": template: stolon/templates/secret.yaml:12:22: executing "stolon/templates/secret.yaml" at <required "A valid .V...>: error calling required: A valid .Values.superuserPassword entry is required! : exit status 1

It gets stuck there. This happens even if I try to pass an environment variable with the password.

Regards,
Ali Nebi

sentinel can't create events in the deployed namespace - kubernetes backend store

ERROR: logging before flag.Parse: E1121 13:19:14.420233       1 event.go:200] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"stolon-cluster-stolon.1569263488575d71", GenerateName:"", Namespace:"postgres", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"ConfigMap", Namespace:"postgres", Name:"stolon-cluster-stolon", UID:"0a997107-ed90-11e8-9510-000d3a5da51c", APIVersion:"v1", ResourceVersion:"927620", FieldPath:""}, Reason:"LeaderElection", Message:"de609be8 became leader", Source:v1.EventSource{Component:"stolon-sentinel", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbef574b498f4a971, ext:149754453, loc:(*time.Location)(0x2196340)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbef574b498f4a971, ext:149754453, loc:(*time.Location)(0x2196340)}}, Count:1, Type:"Normal"}': 'events is forbidden: User "system:serviceaccount:postgres:stolon" cannot create events in the namespace "postgres"' (will not retry!)
'events is forbidden: User "system:serviceaccount:postgres:stolon" cannot create events in the namespace "postgres"' (will not retry!)
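
The election itself succeeded ("de609be8 became leader"); only the event recording is rejected, and it is not retried. If the events are wanted, the stolon service account's Role needs a rule for events; a hedged sketch of the missing rule:

    # addition to the stolon Role in the "postgres" namespace (a sketch)
    - apiGroups: [""]
      resources: ["events"]
      verbs: ["create", "patch"]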

"error": "client: etcd cluster is unavailable or misconfigured"

I get the below error:


2018-01-10T15:10:36.576Z    ERROR   keeper/keeper.go:648    error retrieving cluster data   {"error": "client: etcd cluster is unavailable or misconfigured"}
2018-01-10T15:10:36.576Z    INFO    postgresql/postgresql.go:264    stopping database
pg_ctl: directory "/stolon-data/postgres" does not exist
2018-01-10T15:10:36.596Z    ERROR   keeper/keeper.go:838    error retrieving cluster data   {"error": "client: etcd cluster is unavailable or misconfigured"}
2018-01-10T15:10:41.610Z    ERROR   keeper/keeper.go:838    error retrieving cluster data   {"error": "client: etcd cluster is unavailable or misconfigured"}
2018-01-10T15:10:46.623Z    ERROR   keeper/keeper.go:838    error retrieving cluster data   {"error": "client: etcd cluster is unavailable or misconfigured"}
2018-01-10T15:10:51.639Z    ERROR   keeper/keeper.go:838    error retrieving cluster data   {"error": "client: etcd cluster is unavailable or misconfigured"}

but this is healthy

kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}

I have my etcd endpoints defined in the values.yaml like so:
endpoints: "https://148.251.81.30:2379,https://136.243.23.162:2379,https://148.251.23.210:2379"

What else can I check or do about this issue?
Thanks
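
kubectl get cs only proves the control plane can reach etcd, not that the keeper pods can. One way to narrow this down is to probe an endpoint from inside the cluster; a sketch, assuming the etcd v2 API (with https endpoints, missing client TLS certificates are a common cause, in which case the keepers also need the ca/cert/key files and the matching chart options):

    # throwaway pod probing one endpoint (image tag is the one from this chart's values)
    kubectl run etcd-test --rm -it --restart=Never \
      --image=k8s.gcr.io/etcd-amd64:2.3.7 --command -- \
      etcdctl --endpoints "https://148.251.81.30:2379" cluster-health
    # for TLS-secured etcd, add --ca-file/--cert-file/--key-file as required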

Cannot connect and proxies not running

I just installed with:

helm install --name pg-ha-test .

and I see the following:

default       po/pg-ha-test-stolon-proxy-fc8c59589-2ksdf       0/1       Running   0          4m
default       po/pg-ha-test-stolon-proxy-fc8c59589-5htht       0/1       Running   0          4m
default       po/pg-ha-test-stolon-proxy-fc8c59589-dqnzt       0/1       Running   0          4m

It appears they are not ready?

Also I cannot connect to the keeper. How do I set up my connection string?

$ psql -h pg-ha-test-stolon-keeper.deis.minikube
psql: could not connect to server: Connection refused
	Is the server running on host "pg-ha-test-stolon-keeper.deis.minikube" (192.168.99.100) and accepting
	TCP/IP connections on port 5432?

From the kubernetes dashboard I see this for the proxy pods:

Readiness probe failed: dial tcp 172.17.0.7:5432: getsockopt: connection refused

I'm using dnsmasq on Mac:

cat /etc/resolver/minikube 
nameserver 127.0.0.1

Any help would be great. Thanks!
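
The proxy pods stay unready until the stolon cluster has been initialized in the store, so the first check is whether cluster data exists. A sketch using stolonctl from inside one of the sentinel pods (cluster name and store settings are assumptions and must match what the chart generated):

    # kubectl exec -it <sentinel-pod> -- sh, then:
    stolonctl --cluster-name kube-stolon \
      --store-backend etcdv2 --store-endpoints http://pg-ha-test-etcd:2379 status

    # if no cluster data is reported, initialize the cluster:
    stolonctl --cluster-name kube-stolon \
      --store-backend etcdv2 --store-endpoints http://pg-ha-test-etcd:2379 init

Note that clients are meant to connect through the proxy service, not to the keeper pods directly; the proxy routes connections to the current master.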

How to enable SSL for postgres client connections

I am looking for a way to enable ssl certs for postgres client connections.
Looks like it can be done with stolonctl pgparameters.

Would it be in the scope of this project to make that an option in the chart?
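
Until the chart grows such an option, a hedged sketch of doing it directly with stolonctl (the certificate paths are assumptions, and the certificate and key files must already be mounted into every keeper pod, e.g. via the keeper.volumes/volumeMounts values):

    stolonctl --cluster-name kube-stolon \
      --store-backend kubernetes --kube-resource-kind configmap \
      update --patch '{"pgParameters": {"ssl": "on", "ssl_cert_file": "/etc/ssl/pg/server.crt", "ssl_key_file": "/etc/ssl/pg/server.key"}}'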
