
kne's People

Contributors

alexmasi, alshabib, anjan-keysight, ashutshkumr, bormanp, bstoll, carlmontanari, dang100, davnerson-dn, defo89, dependabot[bot], frasieroh, greg-dennis, guoshiuan, hellt, jasdeep-hundal, jinsun-yoo, kmo7, liulk, marcushines, mhines01, mojiiba, n0shut, nehamanjunath, nitinsoniism, raballew, robshakir, sachendras, shubh90, wenovus


kne's Issues

Low rates of throughput cEOS

Hi,

I am testing a topology with 10 cEOS (Arista) nodes and I am getting low throughput rates across the scenario. Has this been tested? Do you know what could be causing such low throughput?


Add support for OTG (IXIA_TG) node with multiple interfaces

When deploying IXIA_TG as part of a KNE topology, here's what previously used to happen:

  • User would describe one IXIA_TG node for each interface (connected to a DUT node interface)
  • The number of port pods deployed would be equal to the number of IXIA_TG nodes present in the topology
  • Additionally, one controller pod would be deployed to manage those port pods
  • Users couldn't specify which controller services to expose
  • Users were limited to using only one interface, named eth1, for each IXIA_TG node
  • The controller service address was hardcoded in ONDATRA (reservation logic)
  • Each port would be addressed using its corresponding service address instead of its corresponding interface name
  • Users couldn't configure more than one interface for a given port

The new OTG refactor aims to address all of the above shortcomings; a topology sketch follows the list below.

  • Only one OTG node should be deployed for a given KNE topology; this will result in one controller pod.
  • Through that node, users shall be able to expose gNMI/gRPC/HTTPS services.
  • Multiple interfaces (connected to DUT node interfaces) can be configured; this will result in one port pod per interface by default.
  • To configure multiple interfaces as part of one port pod, they can be annotated with the same group name.
  • Each port or interface shall be addressed using the interface name used in the KNE topology when pushing OTG configuration.

Error pulling private images for nodes deployed by KNE in kind cluster

When,

  • deploying a KNE topology consisting of OTG and arista
  • both OTG and arista images are posted to private registry
  • using a kind cluster

There's currently no way to specify the registry secret for arista (or any other images deployed by KNE).
For ixia-c, creating a secret called ixia-pull-secret works.

I'm wondering, maybe we should define the name of the secret expected by KNE as well?
And maybe mandate that vendor-specific controllers use the same name for the secret?

Another issue is that pull secrets are only visible within a given namespace (at least by default).
So ideally one would have to create the secret before topology creation, in the same namespace used by the topology.

This needs more thought. Meanwhile, I unblocked my internal work with a kind cluster using this change: https://github.com/google/kne/tree/kne-pull-secret

This is the set of commands I currently execute to pull images from the private registry:

# ${1} => namespace of the KNE topology
set_pull_secrets() {
    inf "Setting pull secrets for namespace ${1} ..."   # inf: local logging helper defined elsewhere in the script
    # Create the namespace up front, then register the same registry
    # credentials under both secret names (KNE nodes and ixia-c).
    kubectl create ns "${1}" \
    && kubectl create secret -n "${1}" docker-registry kne-pull-secret \
        --docker-server=us-central1-docker.pkg.dev \
        --docker-username=_json_key \
        --docker-password="$(cat "${GCLOUD_SVC_ACC_KEY}")" \
        --docker-email="${GCLOUD_EMAIL}" \
    && kubectl create secret -n "${1}" docker-registry ixia-pull-secret \
        --docker-server=us-central1-docker.pkg.dev \
        --docker-username=_json_key \
        --docker-password="$(cat "${GCLOUD_SVC_ACC_KEY}")" \
        --docker-email="${GCLOUD_EMAIL}"
}
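As a possible interim workaround (a standard Kubernetes mechanism rather than anything KNE-specific), the namespace's default service account can also be patched to reference the pull secret, so every pod created in that namespace inherits it regardless of which secret name KNE or a controller expects:

kubectl patch serviceaccount default -n "${1}" \
    -p '{"imagePullSecrets": [{"name": "kne-pull-secret"}]}'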

Problems initializing SRLinux

Hi,
I'm deploying the topology 2node-srl-ixr6-with-oc-services.pbtxt, but the containers are unable to stay 'ready' and 'running'; they keep restarting constantly.

Deploying topology:

kne create 2node-srl-ixr6-with-oc-services.pbtxt
I0209 10:00:54.369740 3846339 root.go:119] /home/mw/kne/examples/nokia/srlinux-services
I0209 10:00:54.371543 3846339 topo.go:117] Trying in-cluster configuration
I0209 10:00:54.371573 3846339 topo.go:120] Falling back to kubeconfig: "/home/mw/.kube/config"
I0209 10:00:54.374046 3846339 topo.go:253] Adding Link: srl1:e1-1 srl2:e1-1
I0209 10:00:54.374077 3846339 topo.go:291] Adding Node: srl1:NOKIA
I0209 10:00:54.424631 3846339 topo.go:291] Adding Node: srl2:NOKIA
I0209 10:00:54.459290 3846339 topo.go:358] Creating namespace for topology: "2-srl-ixr6"
I0209 10:00:54.484813 3846339 topo.go:368] Server Namespace: &Namespace{ObjectMeta:{2-srl-ixr6    4b34dc30-d2b2-4340-a901-8967fb08c69e 82945402 0 2024-02-09 10:00:54 +0000 UTC <nil> <nil> map[kubernetes.io/metadata.name:2-srl-ixr6] map[] [] [] [{kne Update v1 2024-02-09 10:00:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:kubernetes.io/metadata.name":{}}}} }]},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,Conditions:[]NamespaceCondition{},},}
I0209 10:00:54.485491 3846339 topo.go:395] Getting topology specs for namespace 2-srl-ixr6
I0209 10:00:54.485510 3846339 topo.go:324] Getting topology specs for node srl1
I0209 10:00:54.485574 3846339 topo.go:324] Getting topology specs for node srl2
I0209 10:00:54.485610 3846339 topo.go:402] Creating topology for meshnet node srl1
I0209 10:00:54.507333 3846339 topo.go:402] Creating topology for meshnet node srl2
I0209 10:00:54.522376 3846339 topo.go:375] Creating Node Pods
I0209 10:00:54.522726 3846339 nokia.go:201] Creating Srlinux node resource srl1
I0209 10:00:54.537059 3846339 nokia.go:206] Created SR Linux node srl1 configmap
I0209 10:00:54.631596 3846339 nokia.go:265] Created Srlinux resource: srl1
I0209 10:00:54.764968 3846339 topo.go:380] Node "srl1" resource created
I0209 10:00:54.765040 3846339 nokia.go:201] Creating Srlinux node resource srl2
I0209 10:00:54.780052 3846339 nokia.go:206] Created SR Linux node srl2 configmap
I0209 10:00:54.910542 3846339 nokia.go:265] Created Srlinux resource: srl2
I0209 10:00:55.028768 3846339 topo.go:380] Node "srl2" resource created
I0209 10:04:15.460792 3846339 topo.go:448] Node "srl1": Status RUNNING

Status of the pods:

k get pods -n 2-srl-ixr6 
NAME   READY   STATUS    RESTARTS     AGE
srl1   0/1     Running   1 (9s ago)   13s
srl2   0/1     Running   1 (9s ago)   13s

k get pods -n 2-srl-ixr6 
NAME   READY   STATUS                  RESTARTS     AGE
srl1   0/1     Init:CrashLoopBackOff   1 (8s ago)   16s
srl2   0/1     Init:CrashLoopBackOff   1 (8s ago)   16s

 k get pods -n 2-srl-ixr6 
NAME   READY   STATUS   RESTARTS   AGE
srl1   0/1     Error    2          32s
srl2   0/1     Error    2          32s

Events for the container srl1:

Events:
 Type     Reason          Age                   From               Message
 ----     ------          ----                  ----               -------
 Normal   Scheduled       6m9s                  default-scheduler  Successfully assigned 2-srl-ixr6/srl1 to k8worker4
 Normal   Killing         5m59s (x2 over 6m5s)  kubelet            Stopping container srl1
 Warning  BackOff         5m56s                 kubelet            Back-off restarting failed container init-srl1 in pod srl1_2-srl-ixr6(600952b1-695d-44c3-95a0-a68ba2f9be5a)
 Normal   SandboxChanged  5m55s (x3 over 6m5s)  kubelet            Pod sandbox changed, it will be killed and re-created.
 Normal   Pulled          5m50s (x3 over 6m8s)  kubelet            Container image "ghcr.io/srl-labs/init-wait:latest" already present on machine
 Normal   Created         5m50s (x3 over 6m8s)  kubelet            Created container init-srl1
 Normal   Started         5m50s (x3 over 6m7s)  kubelet            Started container init-srl1
 Warning  BackOff         5m50s                 kubelet            Back-off restarting failed container srl1 in pod srl1_2-srl-ixr6(600952b1-695d-44c3-95a0-a68ba2f9be5a)
 Normal   Pulled          5m49s (x3 over 6m6s)  kubelet            Container image "ghcr.io/nokia/srlinux" already present on machine
 Normal   Created         5m48s (x3 over 6m6s)  kubelet            Created container srl1
 Normal   Started         5m48s (x3 over 6m6s)  kubelet            Started container srl1
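To narrow down why the init container and the SR Linux container keep exiting, the previous container logs and the Srlinux custom resource status are the usual first things to check (standard kubectl commands; the last one assumes the srl-controller CRD resolves the srlinux resource name):

kubectl -n 2-srl-ixr6 logs srl1 -c init-srl1 --previous
kubectl -n 2-srl-ixr6 logs srl1 -c srl1 --previous
kubectl -n 2-srl-ixr6 describe pod srl1
kubectl -n 2-srl-ixr6 get srlinux srl1 -o yaml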

KNE build failure

Running "go build" in the main directory I get the following error:

$ go build
# go.universe.tf/metallb/api/v1beta2
../../../../pkg/mod/go.universe.tf/[email protected]/api/v1beta2/bgppeer_webhook.go:39:27: cannot use &BGPPeer{} (value of type *BGPPeer) as admission.Validator value in variable declaration: *BGPPeer does not implement admission.Validator (wrong type for method ValidateCreate)
		have ValidateCreate() error
		want ValidateCreate() (admission.Warnings, error)
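The signature mismatch suggests the pinned go.universe.tf/metallb module predates the controller-runtime change that made admission.Validator methods return (admission.Warnings, error). A quick way to see which versions the build is resolving, using plain Go tooling:

go list -m go.universe.tf/metallb sigs.k8s.io/controller-runtime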

Use VM in KNE

Is it possible to support VMs in a KNE topology?

Not allowed to deploy cluster without controller

Hi All,

Is there any way to deploy the cluster without any controllers?
There is no need for one with Cisco XRd images.

$ cat kind-bridge.yaml
# kind-bridge.yaml cluster config file sets up a kind cluster where default PTP CNI plugin
# is swapped with the Bridge CNI plugin.
# Bridge CNI plugin is required by some Network OSes to operate.
cluster:
  kind: Kind
  spec:
    name: kne
    recycle: True
    version: v0.17.0
    image: kindest/node:v1.26.0
    config: ../../kind/kind-no-cni.yaml
    additionalManifests:
      - ../../manifests/kind/kind-bridge.yaml
ingress:
  kind: MetalLB
  spec:
    manifest: ../../manifests/metallb/manifest.yaml
    ip_count: 100
cni:
  kind: Meshnet
  spec:
    manifest: ../../manifests/meshnet/grpc/manifest.yaml
$ kne deploy kind-bridge.yaml
Error: no controllers specified

Thanks!

Specify address pool for MetalLB

I am trying to deploy KNE on an existing k8s cluster with dockerd and flannel-cni; the nodes in the cluster are all VMs. kne deploy is stuck as shown below. It seems that KNE can't derive the right address pool. Is there a way to specify the addresses, as one would when deploying MetalLB directly with kubectl apply?

k8s@master:~$ kne deploy kne/deploy/kne/external.yaml

INFO[0001] Applying metallb ingress config
WARN[0001] Failed to create address polling (will retry 5 times)
WARN[0007] Failed to create address polling (will retry 4 times)
WARN[0012] Failed to create address polling (will retry 3 times)
WARN[0017] Failed to create address polling (will retry 2 times)
WARN[0023] Failed to create address polling (will retry 1 times)
Error: Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io": failed to call webhook: Post "https://webhook-service.metallb-system.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s": EOF

OS: Ubuntu 20.04 server
K8s version: v1.26.3
Thanks
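The EOF from the validating webhook usually means the MetalLB controller pod backing webhook-service isn't ready yet, so kubectl get pods -n metallb-system is the first thing to check. Once MetalLB itself is healthy, the pool can also be supplied directly with standard MetalLB v1beta1 resources instead of relying on KNE's auto-detection; the address range below is a placeholder that must be reachable in your environment:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kne-service-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.18.100-192.168.18.150   # placeholder range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: kne-l2-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
    - kne-service-pool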

Add check for topology create to validate that images exist before starting cluster

Add at least a warning that reports whether the images referenced in the topology are present on the cluster.
This should be added in the form of an API check on the node implementations.
If the node implementation is provided by a controller, the controller should respond to an RPC from the CLI indicating whether it can serve that image version. A sketch of what such a check could look like follows.
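A purely hypothetical sketch of the shape such a check could take; neither the interface nor the method exists in KNE today, and the names are invented for illustration:

package node // hypothetical placement

import "context"

// ImageChecker is a hypothetical, optional capability that node or controller
// implementations could satisfy so the CLI can validate images before
// creating a topology.
type ImageChecker interface {
    // CheckImages returns an error if any of the images referenced by the
    // node's topology spec cannot be served by the cluster or controller.
    CheckImages(ctx context.Context, images []string) error
}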

CI steps need to be enhanced to incorporate a more e2e validation

Every time a collaborator makes a change in KNE, apart from unit test results, there is no other way for them to gain confidence that nothing broke.

At a very minimum, the Actions CI should include the following:

  • Deploy at least one topology consisting of each node type
  • Deploy topology with some variations (e.g. grouped interfaces, back-to-back interfaces, etc.)
  • Deploy topology with negative use cases (e.g. duplicate interface name per node, duplicate group name, duplicate node name, non-existent vendor, etc.)

It would be nice to include the following post-deployment validations as part of CI:

  • meshnet CRD is correct
  • services exposed are appropriately reachable (socket connection check)
  • output of KNE topology doesn't break ONDATRA's reservation logic

My initial suggestion would be to trigger these on pull requests rather than on pushes; a minimal workflow sketch follows.
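A minimal sketch of what such a workflow could look like (job layout, action versions and repository paths are assumptions, not an existing workflow in this repo):

name: e2e
on:
  pull_request:
jobs:
  kind-topology:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: stable
      - name: Install kne
        run: go install ./kne_cli                         # path is an assumption
      - name: Deploy kind cluster with meshnet + metallb
        run: kne_cli deploy deploy/kne/kind-bridge.yaml
      - name: Create an example topology
        run: kne_cli create examples/2node-host.pb.txt    # placeholder topology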

KNE should set Kernel core pattern and unify the core pattern path on the host

KNE should unify the kernel core pattern for containers so that a container can mount the host's core-pattern path (or the kind node's core-pattern path) at a desired path inside the container. This lets containers see the core dumps on the host, and NOS CLI commands that display core dumps will work seamlessly. It would be even better if containers could choose where the host core path is mounted inside the container, so that they don't require any additional change.
Perhaps a common Node interface implementation could parse the kernel core pattern and return the path for the vendor layer to use. Dumping the core directly in the container appears to be distro-dependent, hence this unification may help.

KNE should also set "ulimit -c unlimited", allowing apps in the container to dump core. A host-level sketch of the pieces involved follows.
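For reference, this is what the host-level pieces look like (the pattern path is a placeholder; KNE does not configure any of this today):

# Inspect the current kernel core pattern; the leading directory is what a
# container would need mounted to see host-side core dumps.
cat /proc/sys/kernel/core_pattern

# Set a unified, absolute core pattern on the host (placeholder path).
sudo sysctl -w kernel.core_pattern='/var/core/core.%e.%p.%t'

# Remove the core-size limit so processes can actually dump core.
ulimit -c unlimited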

embedded controller manifests break coupling to upstream controllers

Hi,
With #221 merged, I am worried about who is going to keep the embedded manifests in sync with the ones from the upstream repos.

For example, the embedded controller manifest for srlinux-controller contains an outdated controller image version:

image: ghcr.io/srl-labs/srl-controller:0.3.1

Current version is 0.4.3.

Basically any change in the upstream controller spec will require someone to update the generated manifest for the said controller...

Keeping track of what has changed upstream may be unbearable. Why don't we instruct kne deploy to use the upstream installation procedures instead? For example, deploying srl-controller can be as simple as shelling out and calling kubectl apply -k <URL>.

This is of course a brute-force approach, which requires the cluster to have internet access for kustomize to fetch artifacts from the srl-controller git repository. But this seems to be what users outside of Google will use all the time. For internal use cases the checked-in manifest might be the only option, but I still don't think it should be the default way of deploying third-party controllers to KNE.
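For illustration, the upstream srl-controller installation is a single kustomize apply of the repository's config (URL per the srl-labs/srl-controller docs; pinning a ref keeps it reproducible):

kubectl apply -k https://github.com/srl-labs/srl-controller/config/default
# or pinned to a release tag, e.g.:
kubectl apply -k "https://github.com/srl-labs/srl-controller/config/default?ref=v0.4.3"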

Unable to transport ISIS, LDP and RSVP packets between PODs in diff k8 nodes - MESHNET

Hi Team

We are running KNE in a K8s cluster with 4 VMs (one controller and 3 workers).

a) We are running into the following issue: we are unable to get ISIS, LDP and RSVP sessions up across pods that are hosted on different VMs.

Troubleshooting done:

  1. LLDP and BGP protocol sessions do come up across pods hosted on different VMs.
  2. On the other hand, these protocols (ISIS, LDP and RSVP) do come up when the pods are hosted on the same VM.
  3. Used Arista cEOS and Cisco 8201 to test this behavior, with the same result.
  4. Tried flipping between VXLAN and GRPC as the value for “INTER_NODE_LINK_TYPE” in the meshnet daemon, with no luck.

Pod reset

Hi,
If a pod is evicted from its node because it ran out of ephemeral storage, how can the pod be configured, or restarted, using kne?


kne deployment stuck at metallb

Hi,

The kne deployment is stuck at MetalLB and does not move past it; it's been at this for quite a while.

Please help!

$ kne deploy deploy/kne/kind-bridge.yaml
I0328 05:26:00.336471    5578 deploy.go:141] Deploying cluster...
I0328 05:26:00.341568    5578 deploy.go:404] kind version valid: got v0.17.0 want v0.17.0
I0328 05:26:00.341600    5578 deploy.go:411] Attempting to recycle existing cluster "kne"...
W0328 05:26:00.416844    5578 deploy.go:52] (kubectl): error: context "kind-kne" does not exist
I0328 05:26:00.421744    5578 deploy.go:436] Creating kind cluster with: [create cluster --name kne --image kindest/node:v1.26.0 --config /home/lab/github/kne/kind/kind-no-cni.yaml]
W0328 05:26:00.855286    5578 deploy.go:52] (kind): Creating cluster "kne" ...
W0328 05:26:00.855353    5578 deploy.go:52] (kind):  • Ensuring node image (kindest/node:v1.26.0) 🖼  ...
W0328 05:26:01.091577    5578 deploy.go:52] (kind):  ✓ Ensuring node image (kindest/node:v1.26.0) 🖼
W0328 05:26:01.091625    5578 deploy.go:52] (kind):  • Preparing nodes 📦   ...
W0328 05:26:05.466629    5578 deploy.go:52] (kind):  ✓ Preparing nodes 📦
W0328 05:26:05.684309    5578 deploy.go:52] (kind):  • Writing configuration 📜  ...
W0328 05:26:06.867260    5578 deploy.go:52] (kind):  ✓ Writing configuration 📜
W0328 05:26:06.867309    5578 deploy.go:52] (kind):  • Starting control-plane 🕹️  ...
W0328 05:26:24.256105    5578 deploy.go:52] (kind):  ✓ Starting control-plane 🕹️
W0328 05:26:24.256152    5578 deploy.go:52] (kind):  • Installing StorageClass 💾  ...
W0328 05:26:25.630065    5578 deploy.go:52] (kind):  ✓ Installing StorageClass 💾
W0328 05:26:26.817403    5578 deploy.go:52] (kind): Set kubectl context to "kind-kne"
W0328 05:26:26.817446    5578 deploy.go:52] (kind): You can now use your cluster with:
W0328 05:26:26.817456    5578 deploy.go:52] (kind): kubectl cluster-info --context kind-kne
W0328 05:26:26.817478    5578 deploy.go:52] (kind): Have a nice day! 👋
I0328 05:26:26.819203    5578 deploy.go:440] Deployed kind cluster: kne
I0328 05:26:26.819266    5578 deploy.go:454] Found manifest "/home/lab/github/kne/manifests/kind/kind-bridge.yaml"
I0328 05:26:27.094859    5578 deploy.go:49] (kubectl): clusterrole.rbac.authorization.k8s.io/kindnet created
I0328 05:26:27.100779    5578 deploy.go:49] (kubectl): clusterrolebinding.rbac.authorization.k8s.io/kindnet created
I0328 05:26:27.109264    5578 deploy.go:49] (kubectl): serviceaccount/kindnet created
I0328 05:26:27.122330    5578 deploy.go:49] (kubectl): daemonset.apps/kindnet created
I0328 05:26:27.126673    5578 deploy.go:145] Cluster deployed
I0328 05:26:27.222933    5578 deploy.go:49] (kubectl): Kubernetes control plane is running at https://127.0.0.1:40179
I0328 05:26:27.223214    5578 deploy.go:49] (kubectl): CoreDNS is running at https://127.0.0.1:40179/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
I0328 05:26:27.223265    5578 deploy.go:49] (kubectl): To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
I0328 05:26:27.226428    5578 deploy.go:149] Cluster healthy
I0328 05:26:27.229605    5578 deploy.go:160] Checking kubectl versions.
I0328 05:26:27.324849    5578 deploy.go:192] Deploying ingress...
I0328 05:26:27.324981    5578 deploy.go:696] Creating metallb namespace
I0328 05:26:27.325012    5578 deploy.go:715] Deploying MetalLB from: /home/lab/github/kne/manifests/metallb/manifest.yaml
I0328 05:26:27.646335    5578 deploy.go:49] (kubectl): namespace/metallb-system created
I0328 05:26:27.664638    5578 deploy.go:49] (kubectl): customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
I0328 05:26:27.673353    5578 deploy.go:49] (kubectl): customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
I0328 05:26:27.684365    5578 deploy.go:49] (kubectl): customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
I0328 05:26:27.695721    5578 deploy.go:49] (kubectl): customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
I0328 05:26:27.703335    5578 deploy.go:49] (kubectl): customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
I0328 05:26:27.710580    5578 deploy.go:49] (kubectl): customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
I0328 05:26:27.723166    5578 deploy.go:49] (kubectl): customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
I0328 05:26:27.731601    5578 deploy.go:49] (kubectl): serviceaccount/controller created
I0328 05:26:27.740044    5578 deploy.go:49] (kubectl): serviceaccount/speaker created
I0328 05:26:27.750851    5578 deploy.go:49] (kubectl): role.rbac.authorization.k8s.io/controller created
I0328 05:26:27.758685    5578 deploy.go:49] (kubectl): role.rbac.authorization.k8s.io/pod-lister created
I0328 05:26:27.765958    5578 deploy.go:49] (kubectl): clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
I0328 05:26:27.772927    5578 deploy.go:49] (kubectl): clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
I0328 05:26:27.782890    5578 deploy.go:49] (kubectl): rolebinding.rbac.authorization.k8s.io/controller created
I0328 05:26:27.794688    5578 deploy.go:49] (kubectl): rolebinding.rbac.authorization.k8s.io/pod-lister created
I0328 05:26:27.801979    5578 deploy.go:49] (kubectl): clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
I0328 05:26:27.808204    5578 deploy.go:49] (kubectl): clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
I0328 05:26:27.818133    5578 deploy.go:49] (kubectl): secret/webhook-server-cert created
I0328 05:26:27.832263    5578 deploy.go:49] (kubectl): service/webhook-service created
I0328 05:26:27.849563    5578 deploy.go:49] (kubectl): deployment.apps/controller created
I0328 05:26:27.865062    5578 deploy.go:49] (kubectl): daemonset.apps/speaker created
I0328 05:26:27.880289    5578 deploy.go:49] (kubectl): validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created
I0328 05:26:27.905772    5578 deploy.go:720] Creating metallb secret
I0328 05:26:27.910443    5578 deploy.go:1115] Waiting on deployment "metallb-system" to be healthy

Thanks!

[ENH] Need support for loading YAML/JSON file for topology

Writing .pb.txt by hand is more error-prone (compared to YAML/JSON), because:

  • Adding lists / maps is a little counter-intuitive
  • No stable support for linting in popular editors (found some in VSCode marketplace, but they require the format to be .pbtext)
  • No online tools available for quick validation / formatting

Moreover, the YAML config seems more compact (a rough illustration follows); support for JSON is optional.
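For illustration only, a hypothetical YAML rendering of a small topology; KNE does not accept this format today, it just shows the requested shape:

# Hypothetical YAML topology (not a supported input format)
name: 2node-example
nodes:
  - name: r1
    vendor: ARISTA
    config:
      image: ceos:latest
  - name: r2
    vendor: ARISTA
    config:
      image: ceos:latest
links:
  - a_node: r1
    a_int: eth1
    z_node: r2
    z_int: eth1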

Support for specified interface IP addresses

I created a topology using examples/gobgp/gobgp.pb.txt, but I can't ping from r1 to r2 (I changed the router-id of both nodes). I noticed that there's hardly a way to specify the IP address of the pods' interfaces. The interface IP can be specified by adding a Kubernetes object to a cluster with meshnet-cni installed, but the same method doesn't work in a kne-deployed cluster. Is it because LocalIP and PeerIP remain empty strings in this code?

kne/topo/node/node.go

Lines 148 to 155 in 476a6df

links = append(links, topologyv1.Link{
    UID:       int(ifc.Uid),
    LocalIntf: ifcName,
    PeerIntf:  ifc.PeerIntName,
    PeerPod:   ifc.PeerName,
    LocalIP:   "",
    PeerIP:    "",
})
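For comparison, this is roughly how an interface IP is expressed when creating a meshnet Topology custom resource directly (field names follow the meshnet-cni examples; addresses are placeholders):

apiVersion: networkop.co.uk/v1beta1
kind: Topology
metadata:
  name: r1
spec:
  links:
    - uid: 1
      peer_pod: r2
      local_intf: eth1
      peer_intf: eth1
      local_ip: 12.12.12.1/24
      peer_ip: 12.12.12.2/24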

KNE support for nested "kind" environments.

Evaluating the capability to support kind https://kind.sigs.k8s.io/ (or potentially other k8s implementations) in a KNE topology build.

Associated use case: to be able to test and validate topologies that include CNI-specific capabilities. In that case, the nested kind cluster must be able to run off-the-shelf CNIs connected to the emulated network topology (see attached diagram).

[attached diagram omitted]

Create topology in specific namespace

As described here, I would like to use KNE to deploy a topology into a specific namespace. Unfortunately, it seems that KNE lacks some convenience features, such as taking into account the current context set in the kubeconfig file or allowing the target namespace to be defined (similar to kubectl create -n ...); it always deploys the topology into the namespace defined by the name parameter in the topology file.

So far, the only workaround I have found is to modify the topology file at runtime, as shown below.

namespace=3-node-ceos-with-traffic
kubectl create namespace $namespace
# The topology's "name" field determines the namespace KNE deploys into,
# so append the desired namespace as the topology name.
echo "name: \"$namespace\"" >> topology.pb.txt
kne create topology.pb.txt --kubecfg $KUBECONFIG

Interacting with the Topology Manager

I have observed the following in the repository:

  • Topology manager is the public API for allowing external users to manipulate
    the link state in the topology.
  • The topology manager will run as a service in k8s environment.

How can I interact with the Topology Manager in the Kubernetes cluster? How can I dynamically discover topologies (at a minimum, the meshnet resources can be listed as sketched below)?
Is it possible to dynamically retrieve the file with which a scenario was created?
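The meshnet topology custom resources that KNE creates can at least be discovered with standard kubectl (the CRD group is the one visible in the meshnet logs further down this page; the namespace is a placeholder):

kubectl get topologies.networkop.co.uk -A
kubectl get topologies.networkop.co.uk -n <topology-namespace> -o yaml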

Port numbers need to be swapped after commit of multiple ports on same internal listener #198

The #198 commit fixed the issue of multiple ports on the same internal listener; however, the inside and outside port values need to be swapped in the code.

service.Outside = uint32(p.TargetPort.IntVal)

Outside is configured as 1000 and inside as 50051
services: {
  key: 1000
  value: {
    name: "gribi"
    outside: 1000
    inside: 50051
  }
}
services: {
  key: 1001
  value: {
    name: "gnmi"
    outside: 1001
    inside: 50051
  }
}

But the deployed service shows this:

services: {
  key: 1000
  value: {
    name: "gribi"
    inside: 50051
    outside: 50051
    inside_ip: "10.96.122.136"
    outside_ip: "172.18.0.100"
    node_port: 31488
  }
}
services: {
  key: 1001
  value: {
    name: "gnmi"
    inside: 50051
    outside: 50051
    inside_ip: "10.96.122.136"
    outside_ip: "172.18.0.100"
    node_port: 30967
  }
}
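If the intent is for inside to track the pod's target port and outside to track the service port, the fix is presumably a swap along these lines (variable names follow the snippet above; this is a sketch, not the merged change):

// p is the corev1.ServicePort created for the node's service.
service.Inside = uint32(p.TargetPort.IntVal) // container-side port, e.g. 50051
service.Outside = uint32(p.Port)             // service-side port, e.g. 1000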

Need support for using gRPC (instead of VXLAN) meshnet link between pods deployed on separate nodes

As of networkop/meshnet-cni@f26c193, one can now opt in to using a gRPC link between pods deployed on separate nodes, like so:

git clone https://github.com/networkop/meshnet-cni.git
cd meshnet-cni && kubectl apply -k manifests/overlays/grpc-link

kne_cli deploy should anchor to the commit noted above and expose a configurable option to switch between the VXLAN and gRPC link types, until the latter is stable enough to be used as the default.

@kingshukdev FYI

Pods in pending state

I installed kne per the documentation and tried to deploy the Nokia 2srl-certs topology. It has been in a pending state for 15 minutes. Am I missing something? This is the message I get:

INFO[0002] Node "r2" resource created
INFO[0002] r1 - generating self signed certs
INFO[0002] r1 - waiting for pod to be running

KNE stdout could be made more readable

KNE has quite helpful logging in place, but when it comes to data structures, they're just dumped as-is.

It would be really helpful if those could be pretty-printed; a sketch follows.
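A minimal sketch of what that could look like for the proto messages in the logs, using prototext multiline output (this is not how KNE logs today; the helper is illustrative):

package logutil

import (
    "log"

    "google.golang.org/protobuf/encoding/prototext"
    "google.golang.org/protobuf/proto"
)

// LogPretty prints a proto message with indentation instead of a one-line dump.
func LogPretty(prefix string, m proto.Message) {
    out, err := prototext.MarshalOptions{Multiline: true, Indent: "  "}.Marshal(m)
    if err != nil {
        log.Printf("%s: %v", prefix, m) // fall back to the default formatting
        return
    }
    log.Printf("%s:\n%s", prefix, out)
}

For Kubernetes API objects, such as the Namespace spec dumped in some of the logs on this page, json.MarshalIndent or sigs.k8s.io/yaml would serve the same purpose.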

Cannot connect to routers with host name that do not match a certain regexp

Scrapli cannot establish a connection with routers whose hostname does not match a certain regexp.

When establishing a connection with a router (WaitCliReady), scrapli tries to look for the command prompt by running the client's message through a regular expression (scrapli link). For example, scrapli is looking for the message in the red box in the picture below.

[screenshot omitted]

If the hostname does not match the regular expression then, from scrapli's perspective, the client never issued the command prompt, and scrapli will hang indefinitely until it "sees" the command prompt.
Specifically, in my case, root@testname> would successfully match the expression, but [email protected]> would not.

The regular expression is hardcoded in scrapli and cannot be fixed from this repository. (Pattern Link) An illustrative example follows.
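An illustrative (not scrapli's actual) pattern that shows the failure mode: a prompt regexp built from \w cannot match a hostname that contains extra characters such as a dot.

package main

import (
    "fmt"
    "regexp"
)

func main() {
    // Illustrative prompt pattern only; scrapli's real pattern differs.
    prompt := regexp.MustCompile(`^\w+@\w+[>#]\s*$`)
    fmt.Println(prompt.MatchString("root@testname>"))  // true
    fmt.Println(prompt.MatchString("root@test.name>")) // false: hypothetical dotted hostname, '.' is not \w
}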

KinD cluster deploy issue - timeout - kindnet Init:ImagePullBackOff

Any advice?


root@ubuntu2:~/kne# kne deploy deploy/kne/kind-bridge.yaml
I0330 02:04:27.956818 15078 deploy.go:141] Deploying cluster...
I0330 02:04:28.136665 15078 deploy.go:404] kind version valid: got v0.17.0 want v0.17.0
I0330 02:04:28.136751 15078 deploy.go:411] Attempting to recycle existing cluster "kne"...
W0330 02:04:28.785704 15078 deploy.go:52] (kubectl): error: context "kind-kne" does not exist
I0330 02:04:28.790215 15078 deploy.go:436] Creating kind cluster with: [create cluster --name kne --image kindest/node:v1.26.0 --config /root/kne/kind/kind-no-cni.yaml]
W0330 02:04:29.309393 15078 deploy.go:52] (kind): Creating cluster "kne" ...
W0330 02:04:29.309451 15078 deploy.go:52] (kind): • Ensuring node image (kindest/node:v1.26.0) 🖼 ...
W0330 02:04:39.030434 15078 deploy.go:52] (kind): ✓ Ensuring node image (kindest/node:v1.26.0) 🖼
W0330 02:04:39.030476 15078 deploy.go:52] (kind): • Preparing nodes 📦 ...
W0330 02:04:54.563861 15078 deploy.go:52] (kind): ✓ Preparing nodes 📦
W0330 02:04:54.810156 15078 deploy.go:52] (kind): • Writing configuration 📜 ...
W0330 02:04:55.984071 15078 deploy.go:52] (kind): ✓ Writing configuration 📜
W0330 02:04:55.984101 15078 deploy.go:52] (kind): • Starting control-plane 🕹️ ...
W0330 02:05:12.591261 15078 deploy.go:52] (kind): ✓ Starting control-plane 🕹️
W0330 02:05:12.591287 15078 deploy.go:52] (kind): • Installing StorageClass 💾 ...
W0330 02:05:16.025463 15078 deploy.go:52] (kind): ✓ Installing StorageClass 💾
W0330 02:05:17.530321 15078 deploy.go:52] (kind): Set kubectl context to "kind-kne"
W0330 02:05:17.530343 15078 deploy.go:52] (kind): You can now use your cluster with:
W0330 02:05:17.530347 15078 deploy.go:52] (kind): kubectl cluster-info --context kind-kne
W0330 02:05:17.530355 15078 deploy.go:52] (kind): Thanks for using kind! 😊
I0330 02:05:17.532867 15078 deploy.go:440] Deployed kind cluster: kne
I0330 02:05:17.532919 15078 deploy.go:454] Found manifest "/root/kne/manifests/kind/kind-bridge.yaml"
I0330 02:05:17.945531 15078 deploy.go:49] (kubectl): clusterrole.rbac.authorization.k8s.io/kindnet created
I0330 02:05:17.953701 15078 deploy.go:49] (kubectl): clusterrolebinding.rbac.authorization.k8s.io/kindnet created
I0330 02:05:17.961998 15078 deploy.go:49] (kubectl): serviceaccount/kindnet created
I0330 02:05:17.972441 15078 deploy.go:49] (kubectl): daemonset.apps/kindnet created
I0330 02:05:17.977346 15078 deploy.go:145] Cluster deployed
I0330 02:05:18.061496 15078 deploy.go:49] (kubectl): Kubernetes control plane is running at https://127.0.0.1:33603
I0330 02:05:18.061527 15078 deploy.go:49] (kubectl): CoreDNS is running at https://127.0.0.1:33603/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
I0330 02:05:18.061537 15078 deploy.go:49] (kubectl): To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
I0330 02:05:18.063741 15078 deploy.go:149] Cluster healthy
I0330 02:05:18.094258 15078 deploy.go:160] Checking kubectl versions.
WARNING: version difference between client (1.24) and server (1.26) exceeds the supported minor version skew of +/-1
I0330 02:05:18.192486 15078 deploy.go:192] Deploying ingress...
I0330 02:05:18.205668 15078 deploy.go:696] Creating metallb namespace
I0330 02:05:18.205720 15078 deploy.go:715] Deploying MetalLB from: /root/kne/manifests/metallb/manifest.yaml
I0330 02:05:18.508827 15078 deploy.go:49] (kubectl): namespace/metallb-system created
I0330 02:05:18.521972 15078 deploy.go:49] (kubectl): customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
I0330 02:05:18.530773 15078 deploy.go:49] (kubectl): customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
I0330 02:05:18.544036 15078 deploy.go:49] (kubectl): customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
I0330 02:05:18.557350 15078 deploy.go:49] (kubectl): customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
I0330 02:05:18.566570 15078 deploy.go:49] (kubectl): customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
I0330 02:05:18.575258 15078 deploy.go:49] (kubectl): customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
I0330 02:05:18.593987 15078 deploy.go:49] (kubectl): customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
I0330 02:05:18.601097 15078 deploy.go:49] (kubectl): serviceaccount/controller created
I0330 02:05:18.606749 15078 deploy.go:49] (kubectl): serviceaccount/speaker created
I0330 02:05:18.614287 15078 deploy.go:49] (kubectl): role.rbac.authorization.k8s.io/controller created
I0330 02:05:18.622656 15078 deploy.go:49] (kubectl): role.rbac.authorization.k8s.io/pod-lister created
I0330 02:05:18.634749 15078 deploy.go:49] (kubectl): clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
I0330 02:05:18.642580 15078 deploy.go:49] (kubectl): clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
I0330 02:05:18.652723 15078 deploy.go:49] (kubectl): rolebinding.rbac.authorization.k8s.io/controller created
I0330 02:05:18.663793 15078 deploy.go:49] (kubectl): rolebinding.rbac.authorization.k8s.io/pod-lister created
I0330 02:05:18.671275 15078 deploy.go:49] (kubectl): clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
I0330 02:05:18.682155 15078 deploy.go:49] (kubectl): clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
I0330 02:05:18.708048 15078 deploy.go:49] (kubectl): secret/webhook-server-cert created
I0330 02:05:18.738683 15078 deploy.go:49] (kubectl): service/webhook-service created
I0330 02:05:18.758494 15078 deploy.go:49] (kubectl): deployment.apps/controller created
I0330 02:05:18.777772 15078 deploy.go:49] (kubectl): daemonset.apps/speaker created
I0330 02:05:18.798662 15078 deploy.go:49] (kubectl): validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created
I0330 02:05:18.908225 15078 deploy.go:720] Creating metallb secret
I0330 02:05:18.945038 15078 deploy.go:1115] Waiting on deployment "metallb-system" to be healthy
...
...

root@ubuntu2:# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-787d4945fb-lmdgp 0/1 Pending 0 6m8s
kube-system coredns-787d4945fb-m89zq 0/1 Pending 0 6m8s
kube-system etcd-kne-control-plane 1/1 Running 0 6m21s
kube-system kindnet-dcgtk 0/1 Init:ImagePullBackOff 0 6m8s
kube-system kube-apiserver-kne-control-plane 1/1 Running 0 6m21s
kube-system kube-controller-manager-kne-control-plane 1/1 Running 0 6m21s
kube-system kube-proxy-rb7wg 1/1 Running 0 6m8s
kube-system kube-scheduler-kne-control-plane 1/1 Running 0 6m21s
local-path-storage local-path-provisioner-c8855d4bb-jss7v 0/1 Pending 0 6m8s
metallb-system controller-8bb68977b-rjqc8 0/1 Pending 0 6m8s
root@ubuntu2:#
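kindnet is stuck pulling its init image, so everything that schedules after it stays Pending. Standard commands to see which image is failing and whether it ever made it onto the kind node (pod and cluster names taken from the output above):

kubectl -n kube-system describe pod kindnet-dcgtk
# List the images already loaded into the kind node's container runtime:
docker exec -it kne-control-plane crictl images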

External-IP access

I am trying to access the services from an external host through their EXTERNAL-IP but I can't. How can I access these services from outside the host where the topology is deployed?

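With the default kind-based deployment, the MetalLB EXTERNAL-IPs come out of the kind Docker bridge network, which is normally only routable from the host running the cluster; reaching them from another machine requires either a route/tunnel to that subnet or a host-side forward. A quick per-service workaround with standard kubectl (namespace, service name and port are placeholders):

kubectl -n <topology-namespace> port-forward --address 0.0.0.0 service/service-r1 9339:9339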

ixiatg-op-system has intermittent kne deploy health check failure

Alex Bortok asked me to create an issue for this, which I reported to him today. It is happening on the 0.0.1-4557 no-license-required ixia release. It may or may not happen on other releases.

  • kne deploy deploy/kne/kind-bridge.yaml command leads to below intermittently:
    I1201 10:50:18.795513 9726 deploy.go:1283] Waiting on deployment "ixiatg-op-system" to be healthy
    Error: failed to check if controller is healthy: context canceled before "ixiatg-op-system" healthy

+++ On the deployment VM:
root@controller:~/kne# git log -1
commit c8895cb (HEAD -> main, origin/main, origin/HEAD)
Author: jasdeep-hundal [email protected]
Date: Thu Nov 30 14:51:40 2023 -0800

Update KENG configmap for release 0.1.0-81 (#465)

root@controller:~/kne#

root@controller:~/kne# cat /root/kne/manifests/keysight/ixiatg-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ixiatg-release-config
  namespace: ixiatg-op-system
data:
  versions: |
    {
      "release": "0.0.1-9999",
      "images": [
        {
          "name": "controller",
          "path": "ghcr.io/open-traffic-generator/keng-controller",
          "tag": "0.1.0-81"
        },
        {
          "name": "gnmi-server",
          "path": "ghcr.io/open-traffic-generator/otg-gnmi-server",
          "tag": "1.13.2"
        },
        {
          "name": "traffic-engine",
          "path": "ghcr.io/open-traffic-generator/ixia-c-traffic-engine",
          "tag": "1.6.0.100"
        },
        {
          "name": "protocol-engine",
          "path": "ghcr.io/open-traffic-generator/ixia-c-protocol-engine",
          "tag": "1.00.0.339"
        },
        {
          "name": "ixhw-server",
          "path": "ghcr.io/open-traffic-generator/keng-layer23-hw-server",
          "tag": "0.13.2-2"
        }
      ]
    }

+++ The loaded IXIA containers:
root@controller:~/kne# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
ghcr.io/open-traffic-generator/licensed/ixia-c-controller 0.0.1-4557 f243c3b3c3f6 3 days ago 189MB
ncptx 22.3X80-202310010340.0-EVO adf87a0a65ca 2 months ago 5.06GB
ncptx latest adf87a0a65ca 2 months ago 5.06GB
ghcr.io/open-traffic-generator/ixia-c-traffic-engine 1.6.0.85 649b50fbb3a8 2 months ago 270MB
ghcr.io/open-traffic-generator/licensed/ixia-c-protocol-engine 1.00.0.331 1292a35c6102 2 months ago 447MB
ghcr.io/open-traffic-generator/ixia-c-gnmi-server 1.12.7 68405a8c5d99 2 months ago 170MB
kindest/node v1.26.0 6d3fbfb3da60 11 months ago 931MB

+++ The actual ixiatg-configmap.yaml for the 4557 release (which is not in the KNE repo -- as expected):
cat /root/ixiatg-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ixiatg-release-config
  namespace: ixiatg-op-system
data:
  versions: |
    {
      "release": "0.0.1-4557",
      "images": [
        {
          "name": "controller",
          "path": "ghcr.io/open-traffic-generator/licensed/ixia-c-controller",
          "tag": "0.0.1-4557"
        },
        {
          "name": "gnmi-server",
          "path": "ghcr.io/open-traffic-generator/ixia-c-gnmi-server",
          "tag": "1.12.7"
        },
        {
          "name": "traffic-engine",
          "path": "ghcr.io/open-traffic-generator/ixia-c-traffic-engine",
          "tag": "1.6.0.85"
        },
        {
          "name": "protocol-engine",
          "path": "ghcr.io/open-traffic-generator/licensed/ixia-c-protocol-engine",
          "tag": "1.00.0.331"
        },
        {
          "name": "ixhw-server",
          "path": "ghcr.io/open-traffic-generator/ixia-c-ixhw-server",
          "tag": "0.12.5-1"
        }
      ]
    }

Specifying a command for HOST nodes in a topology file does not override a default value

There is a default value for the command parameter for HOST nodes: /bin/sh -c sleep 2000000000000. This value should be replaced if a command is present in the topology definition, but instead the two are merged. For example:

nodes: {
    name: "host1"
    type: HOST
    config: {
        image: "ubuntu-host:latest"
        command: "/start.sh"
        args: "-s"
    }
}

results in the following POD spec

    Command:
      /start.sh
      /bin/sh
      -c
      sleep 2000000000000
      /start.sh
    Args:
      -s
      -s

Alternatively, if a command is defined via an array like this:

nodes: {
    name: "host2"
    type: HOST
    config: {
        image: "ubuntu-host:latest"
        command: ["/start.sh", "-s"]
    }
}

we get

    Command:
      /start.sh
      -s
      /bin/sh
      -c
      sleep 2000000000000
      /start.sh
      -s

The issue seems to originate at https://github.com/google/kne/blob/main/topo/node/host/host.go#L26. From the Merge func documentation, the expected behavior matches what is observed:

The elements of every list field in src is appended to the corresponded list fields in dst.
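The usual fix for this class of bug is to apply the default only when the user left the field empty, instead of proto-merging a defaults message into the user's config. A sketch (type and field names loosely follow the KNE topology proto and are assumptions):

// Config mirrors the relevant fields of the KNE topology Config message
// (names are assumptions for illustration).
type Config struct {
    Command []string
    Args    []string
}

// defaultCommand fills in the HOST sleep command only when the user did not
// specify one, so a user-provided command is never appended to.
func defaultCommand(cfg *Config) {
    if len(cfg.Command) == 0 {
        cfg.Command = []string{"/bin/sh", "-c", "sleep 2000000000000"}
    }
}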

Issue with cluster IPs not being reachable for pods after meshnet initialization

Starting meshnetd daemon
time="2021-09-11T22:52:25Z" level=info msg="Trying in-cluster configuration"
time="2021-09-11T22:52:25Z" level=info msg="Starting meshnet daemon..."
time="2021-09-11T22:52:25Z" level=info msg="GRPC server has started on port: 51111"
time="2021-09-11T22:55:38Z" level=info msg="Retrieving bar's metadata from K8s..."
time="2021-09-11T22:55:38Z" level=info msg="Reading pod bar from K8s"
time="2021-09-11T22:55:38Z" level=error msg="Failed to read pod bar from K8s"
time="2021-09-11T22:55:38Z" level=error msg="finished unary call with code Unknown" error="topologies.networkop.co.uk "bar" not found" grpc.code=Unknown grpc.method=Get grpc.service=meshnet.v1beta1.Local grpc.start_time="2021-09-11T22:55:38Z" grpc.time_ms=5.82 peer.address="[::1]:33110" span.kind=server system=grpc
time="2021-09-11T22:55:38Z" level=info msg="[transport] transport: loopyWriter.run returning. connection error: desc = "transport is closing"" system=system
time="2021-09-11T22:55:51Z" level=info msg="Retrieving bar's metadata from K8s..."
time="2021-09-11T22:55:51Z" level=info msg="Reading pod bar from K8s"
time="2021-09-11T22:55:51Z" level=error msg="Failed to read pod bar from K8s"
time="2021-09-11T22:55:51Z" level=error msg="finished unary call with code Unknown" error="topologies.networkop.co.uk "bar" not found" grpc.code=Unknown grpc.method=Get grpc.service=meshnet.v1beta1.Local grpc.start_time="2021-09-11T22:55:51Z" grpc.time_ms=2.693 peer.address="[::1]:33230" span.kind=server system=grpc
time="2021-09-11T22:55:51Z" level=info msg="[transport] transport: loopyWriter.run returning. connection error: desc = "transport is closing"" system=system
time="2021-09-11T22:58:20Z" level=info msg="Retrieving vm-1's metadata from K8s..."
time="2021-09-11T22:58:20Z" level=info msg="Reading pod vm-1 from K8s"
time="2021-09-11T22:58:20Z" level=info msg="finished unary call with code OK" grpc.code=OK grpc.method=Get grpc.service=meshnet.v1beta1.Local grpc.start_time="2021-09-11T22:58:20Z" grpc.time_ms=5.426 peer.address="[::1]:34388" span.kind=server system=grpc
time="2021-09-11T22:58:20Z" level=info msg="Setting vm-1's SrcIp=172.18.0.2 and NetNs=/var/run/netns/cni-f825ebc5-2799-5e50-d6ee-4181b13b71f4"
time="2021-09-11T22:58:20Z" level=info msg="Reading pod vm-1 from K8s"
time="2021-09-11T22:58:20Z" level=info msg="Update pod status vm-1 from K8s"

Refactor topo to use node.Implementation as the interface to vendor-specific implementations

Currently we have an "extra" level of indirection:

Topo -> Node -> Vendor Node

This was originally done to allow for reuse between Nodes and to keep k8s clients out of the vendor-specific implementation details. This is no longer the case: for any controller-based implementation the vendor has to talk to k8s directly anyway.

We need to refactor this to remove the unneeded layer, which will make future code maintenance easier.

Error: no address pooling when deploying external-compose

Cluster version: 1.27
CNI: Flannel, up to date and working
All nodes are working and I can deploy anything on the cluster and workers.
Hairpin mode is enabled in the Flannel config.
I do not have an ingress controller, but I don't see one required anywhere in the setup guide.
I have a CRI (dockerd, the Mirantis plugin, and Docker).

After doing an apply, I get a "no route to host" error, as shown in the image:
[screenshot omitted]

Bonus: if I ignore all failurePolicy settings of the webhook configuration, or if I delete it and create MetalLB independently of the external-compose.yaml deploy, I can deploy all the controllers and the CNI in the file, but it doesn't deploy on the right network; everything stays in Init and the host ends up in the Error status.

Add a flag to kne cli root that can dump k8 logs on failure

It would be useful to have an optional flag, applying mainly to kne deploy and kne create, that dumps kubectl output for a number of things on command failure.

Maybe:
- kubectl get pods -A
- kubectl logs for each pod that's not healthy
- kubectl describe for each pod that's not healthy

Node creation should happen concurrently

Here's a rough flow of the sequence of operations:

  1. Get the meshnet topology spec for each node, in sequence
  2. Review each spec to ensure the peer node name is the actual pod name for the given peer node (since node != pod)
  3. Create a meshnet CR for each spec, in sequence
  4. Delegate node creation to the vendor-specific implementation, in sequence
  5. Loop through all nodes and call the vendor-specific implementation to ensure the corresponding node is ready, in sequence

Steps 4 and 5 are specifically a problem because the operation on a given node blocks the operation on subsequent nodes. This causes significant slowness as the number of nodes in a topology increases.

It should be feasible to enhance them to operate concurrently (one goroutine per node), roughly as sketched below.

There's also an opportunity to make steps 1 and 3 concurrent, but I do not have enough data to confirm that they cause significant slowness.
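A sketch of how steps 4 and 5 could fan out with one goroutine per node using errgroup (the nodeCreator interface and its method names are stand-ins for the vendor-specific calls, not actual KNE names):

package topo

import (
    "context"

    "golang.org/x/sync/errgroup"
)

// nodeCreator stands in for the vendor-specific node implementation.
type nodeCreator interface {
    Create(ctx context.Context) error
    WaitReady(ctx context.Context) error
}

// createConcurrently runs create + readiness-wait for every node in parallel;
// the first error cancels the context shared by the remaining goroutines.
func createConcurrently(ctx context.Context, nodes []nodeCreator) error {
    g, ctx := errgroup.WithContext(ctx)
    for _, n := range nodes {
        n := n // capture the loop variable
        g.Go(func() error {
            if err := n.Create(ctx); err != nil {
                return err
            }
            return n.WaitReady(ctx)
        })
    }
    return g.Wait()
}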

allow omitting cluster spec from the deployment yaml

Hi @alexmasi @marcushines

To allow automated KNE cluster installation on a standalone k8s cluster (non-kind), the KNE deployment routine needs to let users supply a deployment yaml that contains just the ingress/cni/controllers spec.

Currently, when users remove the cluster object from a deployment yaml, the deploy command fails:

[reth@mvsrlstbsim01 kne]$ kne deploy kb1.yml
INFO[0000] Reading deployment config: "/home/reth/kne/deploy/kne/kb1.yml"
Error: cluster type not supported:

By allowing the cluster to be omitted (or by specifying the cluster type as external), we will enable KNE users to deploy the KNE prerequisites automatically while using their own k8s cluster. A sketch of such a deployment yaml is shown below.

/cc @bpandipp
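A sketch of the requested deployment yaml with the cluster object omitted (KNE rejects this today with the error above; the ingress/cni stanzas are copied from the kind-bridge example earlier on this page, and any controllers would be listed in the same file):

# Hypothetical deployment yaml for an existing, externally managed cluster.
ingress:
  kind: MetalLB
  spec:
    manifest: ../../manifests/metallb/manifest.yaml
    ip_count: 100
cni:
  kind: Meshnet
  spec:
    manifest: ../../manifests/meshnet/grpc/manifest.yaml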

Add timeout as for kne deploy

During kne deploy kne/deploy/kne/kind-bridge.yaml, the deployment sometimes times out due to slowness. At present there is no way to provide a timeout value; please provide one.

Missing quickstart (not the installation steps)

This repository doesn't have a quickstart guide, which is pretty important for quick onboarding in terms of understanding what a topology means. I did notice there are installation steps in the deploy/ dir, but they're not "quick" enough.

Ideally it should consist of 3 main components (a condensed sketch follows the list):

  1. A minimal topology configuration, providing a quick overview of nodes and links,
    • preferably in YAML, since users wouldn't have to learn .prototxt syntax
    • preferably consisting of node images available for free, requiring minimal configuration (and no special permissions)
  2. Installation steps:
    • The only manual steps required should be installing Go (and adding the Go bin directory to PATH) and Docker
    • Install KNE without needing to clone it: go install github.com/google/kne/kne_cli@latest (or use a git tag)
    • kne_cli deploy should ideally set up kind, meshnet and metallb. It should also copy kubectl and the kubeconfig from the kind container to the host (to ensure we're always using a compatible kubectl)
  3. Creation and deletion of a topology
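A condensed version of that flow, using commands that already appear elsewhere on this page (the topology path is a placeholder, and the example configs assume a checkout of the kne repo):

# Prerequisites: Go and Docker installed, Go bin directory on PATH.
go install github.com/google/kne/kne_cli@latest

# Bring up kind with meshnet and metallb.
kne_cli deploy deploy/kne/kind-bridge.yaml

# Create, then later tear down, a topology.
kne_cli create examples/2node-host.pb.txt    # placeholder topology file
kne_cli delete examples/2node-host.pb.txt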

Support for Cisco 8808 docker image

Hi Team,

I can see KNE supports the Cisco 8201 model and the 'xrd' model. Can it support a dockerized version of the Cisco 8808 model with the device types and models available today, or does this need to be added?

Thanks!
