istio-ecosystem / admiral
Admiral provides automatic configuration generation, syncing, and service discovery for multicluster Istio service mesh.
License: Apache License 2.0
Describe the bug
When a service name is accessed in a cluster, it's possible for it to bounce between clusters.
Steps To Reproduce
Deploy Service 1 to Cluster 1 and Cluster 2
Deploy Service 2 to Cluster 2
Make a call from Service 2 to Service 1 using the global name created by Admiral
Notice that some of the calls made from Service 2 to Service 1 take a very long time. This is because at the istio-ingressgateway they might be routed back to the cluster where the request originated, since that cluster is also a possible destination for the service.
Expected behavior
Service calls using global names should not bounce between clusters.
Proposed solution:
Create a virtual service that always routes calls to the local instance of a service and attach it to the istio-ingressgateway gateway.
Example:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: default.greeting.global-default-vs
  namespace: admiral-sync
spec:
  exportTo:
  - '*'
  gateways:
  - istio-multicluster-ingressgateway
  hosts:
  - default.greeting.global
  http:
  - route:
    - destination:
        host: greeting.sample.svc.cluster.local
        port:
          number: 8080
After reading the docs and code of Admiral: it mainly auto-generates ServiceEntry and DestinationRule resources for routing, and I noticed that the created DestinationRule contains the new Distribute field, which decides the weight. So how can Admiral be compatible with older Istio versions (like v1.2.3) that did not have the Distribute field at all?
Is your feature request related to a problem? Please describe.
Migrate the CRD group from admiral.io to admiralproj.io
Describe the solution you'd like
The dependency and globaltrafficpolicy CRDs should use the new group admiralproj.io.
Also make sure the fake client generated for these new CRDs works as expected.
Describe the bug
When an SE is created for a deployment/service in its cluster with another instance running remotely, the SE has only one endpoint
Steps To Reproduce
Create a deployment 1 in two clusters (cluster 1 and cluster 2)
Create a dependent deployment 2 in cluster 1
Create a dependency for deployment 2 on deployment 1
See that the SE for deployment 1 has only one endpoint, pointing to cluster 1
Expected behavior
The SE for deployment 1 has i) one local endpoint pointing to cluster 1 and ii) one remote endpoint pointing to cluster 2
Is your feature request related to a problem? Please describe.
Describe the solution you'd like
For example: app A accesses app B with locality routing (A in us-east to B in us-east, A in us-west to B in us-west). Now traffic with URL prefix /api/v1 has a problem, and I want to shift 50% of app A's us-east traffic to B in us-west, but only for requests whose URL prefix is /api/v1. How can this be achieved with GlobalTrafficPolicy?
Thanks!
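A GlobalTrafficPolicy targets a DNS name as a whole (the GTP examples elsewhere in this tracker match on dns:), so a path-scoped split would likely need a hand-written VirtualService on top. A sketch, where the us-east/us-west subsets are hypothetical and would have to be defined in a matching DestinationRule:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: appb-api-v1-shift
spec:
  hosts:
  - default.appb.global
  http:
  - match:
    - uri:
        prefix: /api/v1
    route:
    - destination:
        host: default.appb.global
        subset: us-east      # local region
      weight: 50
    - destination:
        host: default.appb.global
        subset: us-west      # shifted region
      weight: 50
  - route:                   # all other paths keep locality routing
    - destination:
        host: default.appb.global
```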
Is your feature request related to a problem? Please describe.
Admiral should have integration tests to cover the following scenarios:
K8s version + Istio version
Is your feature request related to a problem? Please describe.
While the creation of the Dependency CR is optional, we need to provide default behavior in order to make inter-cluster service communication feasible. This will enable a service to call other services in another cluster without adding any configuration.
Describe the solution you'd like
ServiceEntry of service to be created in all watched clusters by default
Describe the bug
If there are multiple deployments running, in multiple clusters, with the same Identity, in the same locality, Admiral currently creates a service entry with endpoints pointing to every single one. This weights the cross-cluster calls equally to the in-cluster calls.
Steps To Reproduce
Spin up two clusters in the same AWS region and deploy the same app into both. Examine the service entry, and you'll see one endpoint pointing to *.svc.cluster.local, and one to the ingress-gateway load balancer DNS name.
Expected behavior
If a destination is present in the same cluster as the source, Admiral's default behavior should be to always route the request to the local destination, regardless of other possible destinations with the same identity.
Possibly related to #62
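One way to express "always prefer the local destination" is Istio's locality load-balancer distribute setting on the DestinationRule, rather than weighting every endpoint equally; a sketch (localities are placeholders):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default.greeting.global-local-first-dr
spec:
  host: default.greeting.global
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true
        distribute:
        - from: us-west-2/*
          to:
            "us-west-2/*": 100   # keep traffic in the source locality
```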
Describe the bug
When Admiral ignores a service that was previously synced, the service entry does not get updated in the admiral-sync namespace.
Steps To Reproduce
Pick a service which is currently not ignored by Admiral (the admiral.io/ignore annotation is false). Update the annotation to true; Admiral will now not sync this service, but the corresponding service entry does not get updated in the admiral-sync namespace.
Expected behavior
An update to the deployment annotation should update the service entry accordingly.
Describe the bug
After deploying the sample service in the cluster and running the command
"kubectl exec --namespace=sample -it $(kubectl get pod -l "app=webapp" --namespace=sample -o jsonpath='{.items[0].metadata.name}') -c webapp -- curl -v http://default.greeting.global" throws the following error:
curl: (6) Could not resolve host: default.greeting.global
Steps To Reproduce
k8s cluster running in AWS with Istio version 1.5.2. Installed Admiral and followed the steps up to installing the sample app, then ran the above command.
Expected behavior
The deployed sample app should be resolved by admiral.
Unable to proceed further with multi-cluster deployment as the above steps are not resolving the sample app. Any help much appreciated!
Describe the bug
After setting up the sync step by step from the README, the connection between clusters is not working.
curl only gets a 503 (upstream connect error or disconnect/reset before headers).
If I try to restart the ingressgateway pod, the gateway gets stuck at
Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?)
and won't start up successfully until I remove the ServiceEntry configs created by Admiral. It looks like some kind of config conflict or breakage, but I can't find any useful log from the other Istio components. This problem occurred on my two clusters at the same time.
The Istio mesh spanning multiple clusters works if I set it up manually via the official Istio documentation. In the end I could only remove Admiral and its auto-generated configs and use the manual way to set up those services. Still looking forward to further development of Admiral; this project looks great.
other notes:
Admiral crashed several times during startup; sorry, I forgot to save the logs. Might this be related to the Kubernetes version? I wonder what Kubernetes version Admiral is currently developed on. It looks like Admiral does not really work well with Kubernetes 1.16.
Steps To Reproduce
Not sure if this can be reproduced on a similar environment:
Cluster 1 rancher deployed kubernetes 1.16.6
Cluster 2 azure aks 1.16.4
All with rancher istio 1.4.3
Deployed sample from README
Expected behavior
The deployed sample should work as expected.
Admiral uses the external IP of istio-ingressgateway to create service entries for cross-cluster communication.
Currently, Admiral crashes if istio-ingressgateway doesn't have an external IP assigned (in non-cloud installations like minikube).
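A defensive sketch of the lookup (the type is a hypothetical stand-in for the k8s core/v1 LoadBalancerIngress; on AWS the ingress carries a hostname, on GCP an IP, and on minikube neither), returning an empty string to skip/retry instead of crashing:

```go
package main

import "fmt"

// LoadBalancerIngress is a minimal stand-in for the k8s core/v1 type.
type LoadBalancerIngress struct {
	IP       string
	Hostname string
}

// ingressAddress returns the first usable gateway address, or "" when
// no external address has been assigned yet, so the caller can skip or
// retry rather than crash.
func ingressAddress(ingress []LoadBalancerIngress) string {
	for _, i := range ingress {
		if i.IP != "" {
			return i.IP
		}
		if i.Hostname != "" {
			return i.Hostname
		}
	}
	return ""
}

func main() {
	fmt.Println(ingressAddress([]LoadBalancerIngress{{Hostname: "abc.elb.amazonaws.com"}}))
	fmt.Println(ingressAddress(nil) == "") // true: minikube case handled gracefully
}
```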
Hi!
As you know, for Amazon EKS a kubeconfig is not enough to get access to the cluster; AWS credentials and the IAM authenticator binary are needed as well.
Describe the solution you'd like
Maybe it's a misunderstanding, but it looks like there is currently no option to connect a bare EKS cluster to Admiral using the standard EKS auth scheme.
Is your feature request related to a problem? Please describe.
Docs are mostly in the README. It would be good to separate the documentation for easy access and organization.
Describe the solution you'd like
Documentation similar to https://argoproj.github.io/argo-cd/ would be good. This could also host the Admiral API documentation.
Is your feature request related to a problem? Please describe.
Currently, Admiral generates names that end in .global (e.g. stage.greeting.global), which works for a default Istio installation but not if we want to customize it to something like .mymesh (e.g. stage.greeting.mymesh).
Also, greeting comes from the identity label on the service and deployment. Short term, we can allow this to be parameterized.
Describe the solution you'd like
The name suffix should be configurable with a --name-suffix startup command option to Admiral. Ex: --name-suffix mymesh
The identity label should be configurable with --identity-label
Currently there isn't any documentation on which Admiral features are supported for a given Istio version. While we will always stay on the latest Istio API, the cluster that Admiral operates on might be running an older Istio version, and a feature might not work as expected.
Is your feature request related to a problem? Please describe.
Currently, when a dependency is updated, it doesn't take effect
Describe the solution you'd like
A dependency update should be handled, similar to a dependency add.
Currently, creating tags isn't triggering a release build in CircleCI.
We need the Makefile to do the following when a new tag (say new_tag) is created:
i) Publish an image tagged new_tag
ii) Package an artifact with the output of make gen-yaml into admiral-install-{new_tag}.tar.gz
iii) Create a release with the name new_tag and attach the artifact from ii) to this release
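A rough sketch of the Makefile targets (the registry, output directory, and target names are assumptions, not the project's actual layout):

```make
TAG   ?= $(shell git describe --tags --abbrev=0)
IMAGE ?= docker.io/admiralproj/admiral

# i) build and publish the image for the new tag
docker-publish:
	docker build -t $(IMAGE):$(TAG) .
	docker push $(IMAGE):$(TAG)

# ii) package the gen-yaml output as the release artifact
release-artifact: gen-yaml
	tar -czf admiral-install-$(TAG).tar.gz out/

# iii) creating the GitHub release and attaching the artifact from ii)
# would be driven from a tag-filtered CircleCI workflow
```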
Describe the bug
Following the instructions from the README via admiral-install-v0.1-beta.tar.gz to deploy the single-cluster demo on Kubernetes 1.16.6 and Istio 1.4.3 (deployed via Rancher), Admiral keeps reporting the error below.
2020-02-07T08:35:50.643231Z info Waiting for informer caches to sync
2020-02-07T08:35:50.647657Z warn Failed to refresh configmap state Error: configmaps "se-address-configmap" is forbidden: User "system:serviceaccount:admiral:admiral" cannot get resource "configmaps" in API group "" in the namespace "admiral-sync"
2020-02-07T08:35:50.647701Z info getting kubeconfig from: ""
ERROR: logging before flag.Parse: W0207 08:35:50.647710 1 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2020-02-07T08:35:50.648535Z info Initializing default secret resolver
2020-02-07T08:35:50.648547Z info Setting up event handlers
...
...
...
2020-02-07T08:35:50.852004Z info op=Event type=service name=expose-operator-metrics cluster= message=Received, doing nothing
2020-02-07T08:35:50.852777Z error Could not get unique address after 3 retries. Failing to create serviceentry name=default.webapp.global
2020-02-07T08:35:50.852832Z info op=GetMeshPorts type=service name=webapp cluster=enmd message=No mesh ports present, defaulting to first port
2020-02-07T08:35:50.852848Z info op=Event type=deployment name=greeting cluster= message=Received
2020-02-07T08:35:50.852854Z info op=GetMeshPorts type=service name=greeting cluster=enmd message=No mesh ports present, defaulting to first port
2020-02-07T08:35:50.853501Z error Could not get unique address after 3 retries. Failing to create serviceentry name=default.greeting.global
2020-02-07T08:35:50.853516Z info op=GetMeshPorts type=service name=greeting cluster=enmd message=No mesh ports present, defaulting to first port
2020-02-07T08:35:51.776399Z info op=Event type=add name=konga cluster=enmd message=No dependent clusters found
2020-02-07T08:35:51.776579Z info op=Event type=add name=test cluster=enmd message=No dependent clusters found
Steps To Reproduce
Simply follow the instructions in the README.
Expected behavior
Admiral should successfully generate the ServiceEntry.
Note
I fixed it by adding an RBAC Role myself; not sure what the best practice is, so I gave Admiral full permissions on configmaps in admiral-sync.
This problem might come from this PR, which lost the RBAC part:
#42
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: admiral-configmap-role
  namespace: admiral-sync
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - create
  - update
  - delete
  - patch
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: admiral-configmap-role-binding
  namespace: admiral-sync
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: admiral-configmap-role
subjects:
- kind: ServiceAccount
  name: admiral
  namespace: admiral
And there is another problem here:
admiral/admiral/pkg/clusters/serviceentry.go
Lines 47 to 50 in 7b4e5fd
Describe the bug
This is not a bug in the usage of Admiral per se. I am following the docs https://istio.io/latest/blog/2020/multi-cluster-mesh-automation/ to understand the idea behind Admiral, but encountered the following issue:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: se-test
  namespace: caas-sentinel
spec:
  endpoints:
  - address: sample-app.caas-sentinel.svc.cluster.local
    locality: jpe1/jpe1b
    ports:
      http: 80
  hosts:
  - productpage.global
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 80
    protocol: http
  addresses:
  - 240.0.0.10
  resolution: DNS
the Envoy configuration does not look correct to me:
"lb_endpoints": [
  {
    "endpoint": {
      "address": {
        "socket_address": {
          "address": "sample-app.caas-sentinel.svc.cluster.local",
          "port_value": 80
        }
      }
    },
    "load_balancing_weight": 1
  }
],
"load_balancing_weight": 1
}
After I changed to using STATIC resolution and the actual pod IP, the configuration looks correct and the sidecar proxy does direct pod load balancing. I am not sure whether this is a bug (probably in Istio) or by design, but it would be great if someone could help confirm.
The second issue is that, with the same service entry above, the mTLS connection to sample-app.caas-sentinel does not work. I got the upstream connect error or disconnect/reset before headers. reset reason: connection termination error.
Steps To Reproduce
Istio 1.6
Create above service entry
Turn on target remote service mTLS as shown here: https://istio.io/latest/docs/tasks/security/authentication/authn-policy/
Expected behavior
sidecar proxy should do the pod load balancing instead of calling the service FQDN directly.
mTLS should work with local service.
Thanks a lot for your help!
If the name field is missing in the service port definition then the service entry is generated without a port name and is invalid.
Example
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
    identity: nginx
  name: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
The serviceEntry should add ports based on those present in the service it will be pointing to. Currently it is hardcoded to default http ports (and named http).
Thanks to Martin Baillie for the report!
For those at Intuit more familiar with the code: I was thinking these lines should not be hardcoded and should instead be inferred from the respective Service ports and port names?
https://github.com/istio-ecosystem/admiral/blob/master/admiral/pkg/clusters/serviceentry.go#L84-L85
Since my Service ports are named grpc, I am being bitten by: https://istio.io/docs/reference/config/networking/service-entry/#ServiceEntry-Endpoint
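A possible direction, sketched with a hypothetical helper (the function and the "http-N" default name are not Admiral's actual code): derive the ServiceEntry ports from the Service's ports, keeping existing names like grpc and defaulting only unnamed ports.

```go
package main

import "fmt"

// ServicePort is a minimal stand-in for the k8s core/v1 ServicePort
// fields that matter here.
type ServicePort struct {
	Name string
	Port int32
}

// inferSEPorts maps Service ports onto ServiceEntry-style named ports.
// Named ports (e.g. "grpc") are preserved so Istio applies the right
// protocol; only unnamed ports fall back to a default name.
func inferSEPorts(ports []ServicePort) []ServicePort {
	out := make([]ServicePort, 0, len(ports))
	for i, p := range ports {
		name := p.Name
		if name == "" {
			name = fmt.Sprintf("http-%d", i) // hypothetical default
		}
		out = append(out, ServicePort{Name: name, Port: p.Port})
	}
	return out
}

func main() {
	fmt.Println(inferSEPorts([]ServicePort{{Name: "grpc", Port: 8080}, {Port: 80}}))
}
```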
Describe the bug
I followed the multi-cluster setup sample to set up my AKS clusters.
Steps To Reproduce
Simply follow the steps in the example. I use Istio 1.15.1 and k8s 1.15.10, Admiral 0.9.
Expected behavior
After completing the steps, I should get "Hello World! - Admiral!!" and a greeting from the remote cluster.
Actual behavior
It always comes back from local, so I only see "Hello World! - Admiral!!". I also added a GTP, but it is still the same.
ServiceEntry is created as expected with correct values.
How do I troubleshoot?
Describe the bug
Currently, Admiral updates the Istio Sidecar custom resource if configured. This only imports the cluster-local endpoint and not the Admiral-generated CNAME, so VirtualService configuration created for a service by the namespace owner would not be imported into the client's namespace.
Steps To Reproduce
host in workload B's namespace
Expected behavior
The VirtualService configuration is imported when using the Istio Sidecar custom resource for Admiral-generated CNAMEs.
Describe the bug
The updateCacheController hook is designed to delete and refresh the cache controllers for all tracked remote clusters. Right now, the goroutines allocated for the cache controllers aren't being released when this happens. Instead, they're being parked (https://golang.org/src/runtime/proc.go). Work is needed to figure out why these goroutines are being parked and resolve that issue. Made #123 to address the symptoms, but needs work to identify and fix the root cause.
Steps To Reproduce
Run admiral with at least one remote cluster (the more there are, the faster the leak) and pprof enabled, and take goroutine dumps periodically. You will see large (and increasing) numbers of parked goroutines and a corresponding memory increase.
Expected behavior
Admiral is able to refresh its controllers without seeing a permanent increase in parked goroutines or memory usage.
Is your feature request related to a problem? Please describe.
Admiral uses the identity label on k8s services and deployments as a hardcoded value (instead of using the identityLabel field in the dependency record) when creating service names. If an organization is using a different label, they would have to add a new label to all of their services and deployments.
Describe the solution you'd like
Users should be able to specify any label as unique/global identifier in a dependency record/k8s service & deployment.
Describe the bug
The multi-cluster setup for the IstioOperator automatically creates an EnvoyFilter that rewrites .global traffic. This conflicts with Admiral, which contains instructions to remove the EnvoyFilter. However, in the declarative setup for the IstioOperator, it is assumed that all configuration changes to Istio should be handled in its declarative spec.
For example, here is our IstioOperator config:
# https://istio.io/latest/docs/setup/additional-setup/cni/#basic-installation
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istio-controlplane
spec:
  # https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/#IstioOperatorSpec
  addonComponents:
    grafana:
      enabled: true
    istiocoredns:
      enabled: true
  components:
    cni:
      enabled: true
    egressGateways:
    - name: istio-egressgateway
      enabled: true
  values:
    cni:
      excludeNamespaces:
      - istio-system
      - kube-system
      logLevel: info
    gateways:
      istio-egressgateway:
        env:
          # Needed to route traffic via egress gateway if desired.
          ISTIO_META_REQUESTED_NETWORK_VIEW: "external"
    global:
      controlPlaneSecurityEnabled: true
      multiCluster:
        enabled: true
Steps To Reproduce
Install Istio 1.6+ using the IstioOperator (I used 1.6.5).
Follow the pre-req steps here: https://github.com/istio-ecosystem/admiral/blob/master/docs/Examples.md
Expected behavior
I think there should be a documented way to do at least one of the following:
Maybe there is a better solution.
For what it's worth, when I paused the IstioOperator controller, removed the EnvoyFilter, and tested Admiral on 1.6, it appeared to work as intended.
Describe the bug
The docker image for admiral is not pushed for release/tag builds
Steps To Reproduce
Create a tag/release and see that Docker Hub doesn't have the image with this new tag.
Expected behavior
Should see the new tag (release) in docker hub.
Is your feature request related to a problem? Please describe.
The Sidecar resource needs to be created/updated in a workload's namespace based on its defined dependencies. This will be useful to limit the amount of configuration pushed out to the Envoy sidecar instances within the mesh.
Is your feature request related to a problem? Please describe.
Currently, admiral finds k8s deployments based on the dependency CR to create global CNAMEs in the dependent clusters. Argo rollouts is another way to deploy workloads and admiral doesn't support watching them as a dependent workload.
Describe the solution you'd like
An Argo Rollout CR should be treated similarly to a k8s deployment when identifying dependent workloads (based on the identifier mechanism used for deployments) and creating global CNAMEs for workloads.
Is there any way to set outlierDetection? It seems to be hardcoded at the moment; can we use a GTP or another resource to define it?
outlierDetection:
  baseEjectionTime: 120s
  consecutive5xxErrors: 10
  interval: 5s
Currently, the Admiral project uses vendor for its dependencies. Switch to Go modules for better dependency management.
Is your feature request related to a problem? Please describe.
Admiral syncs Istio resources for a given hostname it generates across clusters. However, certain resources are intended for only the namespace where the workload runs, for example client side override for failover testing etc, which doesn't need to be replicated/synced to other clusters.
Describe the solution you'd like
Do not sync an Istio resource if exportTo: "." is set on it.
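The check itself is simple; a sketch (function name is hypothetical) that treats a resource exported only to its own namespace as namespace-local and skips replication:

```go
package main

import "fmt"

// shouldSync reports whether an Istio resource should be replicated to
// other clusters. A resource exported only to its own namespace
// (exportTo: ".") is namespace-local by definition and must be skipped;
// an empty exportTo means the Istio default (exported everywhere).
func shouldSync(exportTo []string) bool {
	for _, e := range exportTo {
		if e == "." {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(shouldSync([]string{"."})) // false: keep it local
	fmt.Println(shouldSync([]string{"*"})) // true
	fmt.Println(shouldSync(nil))           // true: default export
}
```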
Describe the bug
Admiral crashes on a GTP add followed by a delete operation.
Steps To Reproduce
Expected behavior
Admiral should not crash.
Describe the bug
Currently, the image for admiral is hardcoded to
Steps To Reproduce
Generate a release (by creating a tag) and then see that the installed artifact downloaded has image: docker.io/admiralproj/admiral:v0.1-alpha in the file yaml/demosinglecluster.yaml.
Expected behavior
When a release is published, the image should point to the new tag being released.
Ex: If a tag v0.1 is published, the image field should be image: docker.io/admiralproj/admiral:v0.1
This requires making the docker image tag parameterized under install/admiral/base/deployments.yaml, with a default set to latest.
Deleting deployments doesn't clean up the service entries that were created in the admiral-sync namespace.
Implement the global routing CRD that is referenced in the documentation.
Describe the bug
If a weight is declared, 10 consecutive 5xx errors won't trigger failover to the other region.
Steps To Reproduce
apiVersion: admiral.io/v1alpha1
kind: GlobalTrafficPolicy
metadata:
  name: gtp-admiral-sample
  namespace: sample-admiral
  labels:
    env: default
    identity: webapp-sample-admiral
spec:
  policy:
  - dns: default.webapp-sample-admiral.global
    lbType: 1 # 0 represents TOPOLOGY, 1 represents FAILOVER
    target:
    - region: us-west-2
      weight: 10
    - region: us-east-1
      weight: 90
Expected behavior
If a service returns 500 ten times, it does not get ejected when GTP weights (90/10) are applied.
Without the GTP, failover works after 10 consecutive 500 errors.
Is your feature request related to a problem? Please describe.
Right now the example documentation is based on Admiral coexisting in one of the participant clusters, and the CRDs in the installation are not well separated to support Admiral running in a dedicated cluster.
Describe the solution you'd like
The example installation should have a clean separation between what Admiral itself needs and what is needed in a cluster that Admiral monitors. Once we have that, add a section documenting the process of running Admiral in a production setting.
Hi!
In our installation, all cluster API endpoints are located behind a domain with a self-signed certificate.
I tried to add our CA into the default system cert store inside the Admiral image, but it still produces errors about an untrusted certificate.
Describe the solution you'd like
The ability to add a CA into the Admiral cert store, and a flag like insecure-skip-tls-verify for Admiral as well.
P.S.
If I specify the skip-TLS option in kubeconfig, Admiral returns the error:
"error during create of clusterID: some-cluster Error with GlobalTrafficController controller init: failed to create global traffic controller crd client: specifying a root certificates file with the insecure flag is not allowed"
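At the kubeconfig level, the two options are mutually exclusive per cluster, which matches the error above; a sketch (cluster names, server URLs, and paths are placeholders):

```yaml
clusters:
- name: some-cluster
  cluster:
    server: https://api.some-cluster.example.com
    # either: trust the private CA explicitly...
    certificate-authority: /etc/admiral/ca.crt
- name: other-cluster
  cluster:
    server: https://api.other-cluster.example.com
    # ...or: skip verification (cannot be combined with a CA file)
    insecure-skip-tls-verify: true
```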
Test admiral with istio 1.3 release and report/create issues found.
Typo
admiral/admiral/pkg/clusters/registry.go
Lines 277 to 305 in e46dcb0
"custerID" should be "clusterID"
2020-01-08T21:19:42.921401Z info starting service controller custerID: tmp.kubeconfig
Describe the bug
We have a number of self-hosted clusters with unique names like test-env, dev-env, etc.
When Admiral creates a ServiceEntry, it does not respect the cluster name: in the endpoints section I always get some.service.svc.cluster.local instead of some.service.svc.your-cluster-name.
Steps To Reproduce
Deploy Admiral in a cluster with a unique name in terms of DNS resolution.
Expected behavior
Make Admiral understand which name should be used, or provide a flag to specify it at Admiral runtime.
The README has become a monolith; organize it under a docs folder and publish it as GitHub Pages.
Describe the bug
If there is a "-" in the deployment name, then the env will not be default but lastword (the final segment of the name).
Steps To Reproduce
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-lastword
  namespace: sample
  labels:
    identity: webapp-identity
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp-lastword
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels:
        app: webapp-lastword
        identity: webapp-identity
    spec:
      containers:
      - command:
        - /bin/sleep
        - 3650d
        image: pstauffer/curl
        imagePullPolicy: IfNotPresent
        name: webapp-lastword
Expected behavior
The ServiceEntry created should use the default env; instead it is lastword.webapp-identity.global
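A sketch of the suspected behavior next to the expected one (both function names, and the env label key, are hypothetical illustrations rather than Admiral's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// envFromNameBuggy reproduces the suspected derivation: taking the
// segment after the last "-" in the deployment name, which turns
// "webapp-lastword" into env "lastword".
func envFromNameBuggy(name string) string {
	if i := strings.LastIndex(name, "-"); i >= 0 {
		return name[i+1:]
	}
	return "default"
}

// envFromLabels sketches the expected behavior: use an explicit env
// label if present, otherwise fall back to "default" regardless of
// hyphens in the deployment name.
func envFromLabels(labels map[string]string) string {
	if env, ok := labels["env"]; ok && env != "" {
		return env
	}
	return "default"
}

func main() {
	fmt.Println(envFromNameBuggy("webapp-lastword")) // lastword
	fmt.Println(envFromLabels(map[string]string{"identity": "webapp-identity"})) // default
}
```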
Create a way for clients to set timeouts, retries, circuit breakers, faults, and time delays that only apply to that specific client of the service and no other clients. This API design, with the introduction of a new type, will prevent the client from having access to the more sensitive routing, security, and load-balancing configuration used by the service, without relying on the service team to make changes on the client's behalf.
Describe the bug
It doesn't seem to affect functionality, but there are repeated logs of Failed to list *v1alpha1.Rollout: the server could not find the requested resource (get rollouts.argoproj.io) when Admiral is installed into a cluster that doesn't have Argo Rollouts.
Steps To Reproduce
Follow the demoSingleCluster example in a cluster that lacks the rollouts CRD.
Expected behavior
Admiral shouldn't log errors when argo rollouts aren't present in a cluster.
Is your feature request related to a problem? Please describe.
Currently, admiral doesn't expose any APIs for simple use cases like:
i) Clusters currently being monitored
ii) CRs created and last updated
Describe the solution you'd like
GET (READ) api for clusters being watched
GET (READ) for admiral generated endpoints (CNAMEs) created
I have the Admiral server installed in us-east-1, and both us-east-1 and us-west-2 are added as remote clusters. I try to apply a GTP; however, the destinationrule doesn't get updated.
However, the logs show:
time="2020-07-31T07:03:13Z" level=info msg="op=Update type=VirtualService name=default.greeting.global-default-vs cluster=ps2-nonprod-dev-us-east-1, e=Success"
But no locality is added in:
Name:         default.greeting.global-default-dr
Namespace:    admiral-sync
Labels:
Annotations:
API Version:  networking.istio.io/v1beta1
Kind:         DestinationRule
Metadata:
  Creation Timestamp:  2020-07-28T19:51:13Z
  Generation:          1
  Resource Version:    165744955
  Self Link:           /apis/networking.istio.io/v1beta1/namespaces/admiral-sync/destinationrules/default.greeting.global-default-dr
  UID:                 491194c2-8fe8-4e88-afee-2fb627c6c2c2
Spec:
  Host:  default.greeting.global
  Traffic Policy:
    Outlier Detection:
      Base Ejection Time:    120s
      consecutive5xxErrors:  10
      Interval:              5s
    Tls:
      Mode:  ISTIO_MUTUAL
Events: