ingress-gce's Introduction

GLBC

GLBC is a GCE L7 load balancer controller that manages external load balancers configured through the Kubernetes Ingress API.

Overview

See here for high-level concepts on Ingress in Kubernetes.

For GCP-specific documentation, please visit here (core use-cases) and here (other cool features).

Releases

Please visit the changelog for both high-level release notes and a detailed changelog.

Documentation

Please visit our docs for more information on how to run, contribute, troubleshoot and much more!

GKE Version Mapping

The list below describes which version of Ingress-GCE runs by default on each GKE version. Note that these versions are simply the defaults.

Format: k8s version -> glbc version ('+' indicates that version or above)

   * 1.12.7-gke.16+ -> v1.5.2
   * 1.13.7-gke.5+ -> v1.6.0
   * 1.14.10-gke.31+ -> v1.6.2
   * 1.14.10-gke.42+ -> v1.6.4
   * 1.15.4-gke.21+ -> v1.7.2
   * 1.15.9-gke.22+ -> v1.7.3
   * 1.15.11-gke.15+ -> v1.7.4
   * 1.15.12-gke.3+ -> v1.7.5
   * 1.16.8-gke.3+ -> v1.9.1
   * 1.16.8-gke.12+ -> v1.9.2
   * 1.16.9-gke.2+ -> v1.9.3
   * 1.16.10-gke.6+ -> v1.9.7
   * 1.17.6-gke.11+ -> v1.9.7
   * 1.18.4-gke.1201+ -> v1.9.7
   * 1.16.13-gke.400+ -> v1.9.8
   * 1.17.9-gke.600+ -> v1.9.8
   * 1.18.6-gke.500+ -> v1.9.8
   * 1.18.6-gke.4800+ -> v1.9.9
   * 1.18.10-gke.1500+ -> v1.10.8
   * 1.18.10-gke.2300+ -> v1.10.9
   * 1.18.12-gke.1200+ -> v1.10.13
   * 1.18.18-gke.1200+ -> v1.10.15
   * 1.18.19-gke.1400+ -> v1.11.1
   * 1.18.20-gke.5100+ -> v1.11.5
   * 1.19.14-gke.1900 -> v1.11.5
   * 1.20.10-gke.301 -> v1.11.5
   * 1.21.3-gke.210 -> v1.13.4

ingress-gce's Issues

GLBC ingress: only handle annotated ingress

From @tamalsaha on December 14, 2016 21:49

Hi,
We are running an HAProxy-based ingress controller in our clusters, but for a few services we would like to run the GLBC ingress controller. I did not see any way to tell ingress controllers which Ingress resources they should handle. Could Ingress controllers handle only Ingresses that have a specific annotation applied to them (similar to how schedulers do it):

"ingress.alpha.kubernetes.io/controller": glbc

Here is a way I could see being implemented. Glbc controller adds a new flag --ingress-controller.

If the --ingress-controller flag value is empty, then glbc should handle any Ingress that has no "ingress.alpha.kubernetes.io/controller" annotation, or that has it set to the empty string.

If the --ingress-controller flag is not empty, then glbc should only handle Ingresses whose "ingress.alpha.kubernetes.io/controller" annotation matches the flag's value.

Thanks.

Copied from original issue: kubernetes/ingress-nginx#59

Ingress creates wrong firewall rule after `default-http-backend` service was clobbered.

From @MrHohn on December 8, 2016 18:46

From kubernetes/kubernetes#36546.

During development, I found the ingress firewall test failing instantly on my cluster. It turned out that for some reason the default-http-backend service had been deleted and recreated, and was then allocated a different nodePort. Because the ingress controller records this nodePort at startup but never refreshes it (ref1 and ref2), it keeps creating the firewall rule with the stale nodePort afterwards.

@bprashanth

Copied from original issue: kubernetes/ingress-nginx#48

ingress path confusing

  paths:
  - path: /posts/*
    backend:
      serviceName: post-service
      servicePort: 8585

Requests to /posts are sent to the default backend (404!), while sub-path requests are forwarded accordingly.

There needs to be sufficient documentation around this so that we know how to configure the ingress forwarding. I've tried adding two separate paths, one for /posts and one for /posts/*, and that didn't work either.

How can I send all requests for /posts to my post-service backend?
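
For reference, here is a hedged sketch of the two-path configuration mentioned above, using the service name and port from the snippet at the top of this issue. Under GCE URL map matching, /posts is an exact match and /posts/* matches sub-paths; the reporter notes this did not work for them, so treat it as a starting point for debugging rather than a confirmed fix.

  paths:
  - path: /posts
    backend:
      serviceName: post-service
      servicePort: 8585
  - path: /posts/*
    backend:
      serviceName: post-service
      servicePort: 8585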

GCE: ingress only shows the first backend's healthiness in `backends` annotation

From @MrHohn on September 20, 2017 1:43

From kubernetes/enhancements#27 (comment).

We attach a backends annotation to the ingress object after LB creation:

  ...
  backends:		{"k8s-be-30910--7b4223ab4c1af15d":"UNHEALTHY"}

And from the implementation:
https://github.com/kubernetes/ingress/blob/937cde666e533e4f70087207910d6135c672340a/controllers/gce/backends/backends.go#L437-L452

Using only the first backend's healthiness to represent the healthiness for all backends seems incorrect.

cc @freehan

Copied from original issue: kubernetes/ingress-nginx#1395

GLBC: Ingress can't be properly created: Insufficient Permission

From @bbzg on July 16, 2017 11:36

I recently upgraded to kubernetes 1.7 with RBAC on GKE, and I am seeing this problem:

  FirstSeen	LastSeen	Count	From			SubObjectPath	Type		Reason		Message
  ---------	--------	-----	----			-------------	--------	------		-------
  6h		6m		75	loadbalancer-controller			Warning		GCE :Quota	googleapi: Error 403: Insufficient Permission, insufficientPermissions

I have double-checked my quotas, and they are all green.

I have also tried granting the Node service account Project > Editor permissions, and I have added the Node service account to the cluster-admin ClusterRole, just in case it had anything to do with that (which it should not, right?).

GKE Cluster logs (slightly redacted):

{
 insertId:  "x"   
 jsonPayload: {
  apiVersion:  "v1"    
  involvedObject: {
   apiVersion:  "extensions"     
   kind:  "Ingress"     
   name:  "ingress-testing"     
   namespace:  "default"     
   resourceVersion:  "425826"     
   uid:  "x"     
  }
  kind:  "Event"    
  message:  "googleapi: Error 403: Insufficient Permission, insufficientPermissions"    
  metadata: {
   creationTimestamp:  "2017-07-15T12:54:37Z"     
   name:  "ingress-testing.x"     
   namespace:  "default"     
   resourceVersion:  "53520"     
   selfLink:  "/api/v1/namespaces/default/events/ingress-testing.14d1822c5ed30595"     
   uid:  "x"     
  }
  reason:  "GCE :Quota"    
  source: {
   component:  "loadbalancer-controller"     
  }
  type:  "Warning"    
 }
 logName:  "projects/x/logs/events"   
 receiveTimestamp:  "2017-07-15T19:11:59.117152623Z"   
 resource: {
  labels: {
   cluster_name:  "app-cluster"     
   location:  ""     
   project_id:  "x"     
  }
  type:  "gke_cluster"    
 }
 severity:  "WARNING"   
 timestamp:  "2017-07-15T19:11:54Z"   
}

I have tried figuring out what the cause might be, but have not found anything that was applicable.

What can I do to get Ingress working again in my cluster?

Thanks!

Copied from original issue: kubernetes/ingress-nginx#975

Document how to avoid 502s

From @esseti on September 20, 2017 9:22

Hello,
I have a problem with the ingress: the 502 page pops up when there are "several" requests. I have JMeter spinning up 10 threads 20 times, and I get the 502 more than 50 times out of 2000 calls in total (roughly 2.5%).

Reading the readme, it says that this error is probably due to:

The loadbalancer is probably bootstrapping itself.

But the load balancer is already there, so does that mean all the pods serving that URL are busy? Is there a way to avoid the 502 while waiting for a pod to become free?

If not, is there a way to customize the 502 page? I expose APIs in JSON format, and I would like to return a JSON error rather than an HTML page.
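
Not an authoritative answer to this issue, but a commonly suggested mitigation for transient 502s is to combine a readiness probe with a short preStop sleep, so the load balancer stops sending traffic to a pod and drains in-flight connections before the pod exits. A minimal sketch of a pod template container (all names and values here are illustrative):

      containers:
      - name: api
        image: example/api:latest
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          periodSeconds: 5
        lifecycle:
          preStop:
            exec:
              # Keep serving briefly so the load balancer can drain in-flight requests.
              command: ["sleep", "30"]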

Copied from original issue: kubernetes/ingress-nginx#1396

TLS certificate validations causes tls creation to fail

Due to the lack of output, I couldn't track down what's going wrong.
A chain like the one provided in the attachment causes TLS backend creation to be silently skipped:
example.pem.gz
It's OK with a self-signed certificate, but constantly fails with this chain, or any chain issued by our PKI.
If we manually add the forwarding rules and https-proxy, it's also OK.

Use GCE load balancer controller with backend buckets

From @omerzach on February 28, 2017 1:20

We're happily using the GCE load balancer controller in production to route traffic to a few different services. We'd like to have some paths point at backend buckets in Google Cloud Storage instead of backend services running in Kubernetes.

Right now, if we manually create this backend bucket and then configure the load balancer to point certain paths at it, the UrlMap is updated appropriately but almost immediately reverted to its previous setting, presumably because the controller sees it doesn't match the YAML we initially configured the Ingress with.

I have two questions:

  1. Is there any immediate workaround where we could continue using the Kubernetes controller but manually modify the UrlMap to work with a backend bucket?
  2. Would a pull request to add support for backend buckets in the load balancer controller be something the community is interested in? We're willing to invest engineering effort into this, though none of our team has experience with Go or the Kubernetes codebase so we might need a bit of guidance.

(For some context, we'd like to do something like this: https://cloud.google.com/compute/docs/load-balancing/http/using-http-lb-with-cloud-storage)

Copied from original issue: kubernetes/ingress-nginx#353

support for session affinity

The GCE load balancer has support for session affinity, either by client IP or a generated cookie. As far as I know there is no way to specify this in an ingress manifest.

It would be very useful to have this functionality.
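
For readers on newer ingress-gce releases: session affinity later became configurable through the BackendConfig CRD attached to the Service, rather than through the Ingress manifest itself. A minimal sketch, assuming a recent GKE/ingress-gce version that supports the cloud.google.com/v1 BackendConfig (names and values are illustrative):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  sessionAffinity:
    # CLIENT_IP is the other supported affinity type.
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 60
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Associate the BackendConfig above with this Service's ports.
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080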

Disabled HttpLoadBalancing, unable to create Ingress with glbc:0.9.1

From @tonglil on February 14, 2017 1:42

Update

I was able to create the Ingress after this comment: kubernetes/ingress-nginx#267 (comment)

Does this mean that in order to run your own GCE Ingress controller, you always have to set this file? No information about this is provided in the docs.

Original Issue

  1. I disabled the cluster's addon via:
gcloud container clusters update tony-test --update-addons HttpLoadBalancing=DISABLED
  2. Then I kubectl apply -f this rc.yaml: https://github.com/kubernetes/ingress/blob/master/controllers/gce/rc.yaml

  3. Then I apply the following config:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: echo-app
  name: echo-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: echo-app
    spec:
      containers:
      - image: gcr.io/google_containers/echoserver:1.4
        name: echo-app
        ports:
        - containerPort: 8080
          protocol: TCP
        readinessProbe:
          failureThreshold: 10
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          periodSeconds: 1
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-app-tls
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  backend:
    serviceName: echo-app
    servicePort: 88
---
apiVersion: v1
kind: Service
metadata:
  name: echo-app
spec:
  type: NodePort
  selector:
    app: echo-app
  ports:
    - name: http
      port: 88
      protocol: TCP
      targetPort: 8080

  4. What I get when kubectl describe ingress echo-app-tls:
Name:                   echo-app-tls
Namespace:              default
Address:
Default backend:        echo-app:88 (10.254.33.15:8080)
Rules:
  Host  Path    Backends
  ----  ----    --------
  *     *       echo-app:88 (10.254.33.15:8080)
Annotations:
  backends:     {"k8s-be-30659--d785be79bbf6d463":"UNHEALTHY"}
  url-map:      k8s-um-default-echo-app-tls--d785be79bbf6d463
Events:
  FirstSeen     LastSeen        Count   From                            SubObjectPath   Type            Reason  Message
  ---------     --------        -----   ----                            -------------   --------        ------  -------
  2m            2m              1       {loadbalancer-controller }                      Normal          ADD     default/echo-app-tls
  1m            <invalid>       16      {loadbalancer-controller }                      Warning         GCE     instance not found
  1m            <invalid>       16      {loadbalancer-controller }                      Normal          Service default backend set to echo-app:30659

I can let it wait for >1 hour and it is the same.

Copied from original issue: kubernetes/ingress-nginx#267

Support multiple addresses (including IPv6)

The ingress spec supports specifying an ip address with the kubernetes.io/ingress.global-static-ip-name annotation, but the ingress-gce controller assumes that it is an ipv4 IP address.

GCLB supports specifying both IPv4 and IPv6 addresses, as per: https://cloud.google.com/compute/docs/load-balancing/http/cross-region-example.

Are there plans to support IPv6?
I tried to find an existing issue but didn't find one, hence filing this. Feel free to close as a duplicate if there is an existing issue.

cc @bowei @nicksardo @csbell

Why does GCE ingress defer promoting static IP

From @tonglil on March 8, 2017 17:40

Something I have thought about for a while but haven't been able to reason about: https://github.com/kubernetes/ingress/blob/master/controllers/gce/loadbalancers/loadbalancers.go#L603-L608.

It seems like both :80 and :443 need to be set before an IP can be promoted to static. Is there a reason why? I've git-blamed, and the history doesn't show a good reason for it either: kubernetes-retired/contrib@a2f82d5#diff-0ca32fb01d30b7ffd4f37d77adcce2e5R483.

I think the IP should either always be promoted to static, or be promoted via a separate annotation when the ingress creates an IP.

It's bad UX currently that the user has to run another gcloud command or go into the console to promote the address when they only want HTTPS (via kubernetes.io/ingress.allow-http: "false").

Thoughts?

Copied from original issue: kubernetes/ingress-nginx#405

[GLBC] Changing front-end configuration does not remove unnecessary target proxies/ssl-certs

From @tonglil on March 20, 2017 17:59

Porting this issue from contrib over: kubernetes-retired/contrib#1517.

There is no enforcement of the annotation kubernetes.io/ingress.allow-http: "false" when it is set to false, after previously being set to true or unset.

See the edge hop: no deletion/cleanup enforcement happens if the annotation is set to false.

Copied from original issue: kubernetes/ingress-nginx#468

multiple TLS certs are not correctly handled by GCE (no SNI support)

From @ensonic on July 12, 2017 10:9

I have set up an ingress for 3 microservices under 3 subdomains, each having its own cert.
When I start up the ingress I see this in the l7-lb-controller log:
W0712 10:01:30.733403 1 tls.go:58] Ignoring 2 certs and taking the first for ingress default/tls-termination

IMHO that cannot work, and indeed I get a single cert applied to all 3 subdomains; as expected, browsers complain about the mismatch. I would expect the requested host (SNI) to be used to select the appropriate cert.

This is what the config looks like for 2 hosts; example.com is just used for illustration:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tls-termination
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - hosts:
    - api.example.com
    secretName: api-tls
  - hosts:
    - www.example.com
    secretName: www-tls
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: api
          servicePort: 80
  - host: www.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: www
          servicePort: 80

Using a single cert covering all subdomains is maybe doable, but would not be nice, since the services should not need to know about each other.

Copied from original issue: kubernetes/ingress-nginx#952

[GLBC] Garbage collection runs too frequently - after every sync

From @nicksardo on April 28, 2017 22:5

GC runs at the end of every sync for every ingress. If there are many ingress objects, this results in many GCP calls. It's even more troublesome in the case of federated clusters.

We should perform a loadbalancer-specific GC on delete notification, but only run cluster-wide GC on a less frequent basis, such as resyncPeriod (set to 10min).
cc @madhusudancs @csbell

Copied from original issue: kubernetes/ingress-nginx#674

[GLBC] LB garbage collection orphans named ports in instance groups

From @nicksardo on May 8, 2017 18:54

The GLBC does not remove the named port from an instance group when a backend is being deleted.

If users are frequently creating/deleting services and ingresses, instance groups will become polluted with old node ports. Eventually, users will hit a max limit.

Exceeded limit 'MAX_DISTINCT_NAMED_PORTS' on resource 'k8s-ig--aaaaaaaaaaaa'

Temporary workaround

region=us-central1
cluster_id=$(kubectl get configmaps ingress-uid -o jsonpath='{.data.uid}' --namespace=kube-system)
ports=$(gcloud compute backend-services list --global --format='value(port,port)' |  xargs printf 'port%s:%s,')
for zone in b c f; do
  gcloud compute instance-groups unmanaged set-named-ports k8s-ig--$cluster_id --zone=$region-$zone --named-ports=$ports
done

Modify the region and the list of zone suffixes in the script.

Copied from original issue: kubernetes/ingress-nginx#695

Invalid value for field 'namedPorts[*].port': '0'

I'm trying to create a new ingress controller but I'm getting this error:

googleapi: Error 400: Invalid value for field 'namedPorts[12].port': '0'. Must be greater than or equal to 1, invalid

Then I checked the other ingresses; they still work, but I'm getting the same exact error. The new ingress does not work at all.

I found this answer, but I don't have a port 0 among my ports. I notice I have exactly 12 named ports in my instance group, and I'm guessing the namedPorts array is zero-based, so namedPorts[12] would refer to an element just past my 12 existing ports, which might be causing the issue.

I'm not exactly sure what triggered it, but I updated to 1.8.2 recently.

This is my ingress

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.gcp.kubernetes.io/pre-shared-cert: certificate
    ingress.kubernetes.io/force-ssl-redirect: "true"
    kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/ingress.global-static-ip-name: static-ip-name
  generation: 1
  labels:
    app: core
    chart: core-0.1.0
    heritage: Tiller
    release: core
  name: core
  namespace: develop
spec:
  backend:
    serviceName: core
    servicePort: 80
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: core
          servicePort: 80
status:
  loadBalancer: {}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.6", GitCommit:"4bc5e7f9a6c25dc4c03d4d656f2cefd21540e28c", GitTreeState:"clean", BuildDate:"2017-09-14T06:55:55Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.2-gke.0", GitCommit:"52ea03646e64b35a5b092ab32bb529400c296aa6", GitTreeState:"clean", BuildDate:"2017-10-24T23:31:18Z", GoVersion:"go1.8.3b4", Compiler:"gc", Platform:"linux/amd64"}

Any ideas?

it may be related to #43

controllers/gce/README.md doc review

From @ensonic on July 12, 2017 9:54

https://github.com/kubernetes/ingress/blob/master/controllers/gce/README.md#the-ingress

Though glbc doesn't support HTTPS yet, security configs would also be global.

You probably want to say that it does not support https when communicating with the backends. There is a chapter on TLS termination below.


https://github.com/kubernetes/ingress/blob/master/controllers/gce/README.md#creation

kubectl create -f rc.yaml
replicationcontroller "glbc" created

This seems to be outdated:

kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress/master/controllers/gce/rc.yaml
service "default-http-backend" created
replicationcontroller "l7-lb-controller" created

This also means that the log commands are outdated and should be updated to e.g. kubectl logs --follow l7-lb-controller-fw4ps l7-lb-controller

Go to your GCE console and confirm that the following resources have been created through the HTTPLoadbalancing panel

There is no HTTPLoadbalancing panel, but there is this page:
https://pantheon.corp.google.com/networking/loadbalancing/loadBalancers/list


https://github.com/kubernetes/ingress/blob/master/controllers/gce/README.md#updates

Say you don't want a default backend ...

If you omit the default backend you seem to get some implicit default backend, which is always unhealthy since it returns 404. Having a default that has a readiness check would be nice, so that GLBC would actually use it.


https://github.com/kubernetes/ingress/blob/master/controllers/gce/README.md#paths

Some YAML is shown as plain text.


There is probably more; let's fix it iteratively.

Copied from original issue: kubernetes/ingress-nginx#951

[GLBC] Expose GCE backend parameters in Ingress object API

From @itamaro on February 7, 2017 10:9

When using the GCE Ingress controller (GLBC), it provisions GCE backends with a bunch of default parameters.

It would be great if it were possible to tweak the parameters that are currently "untweakable" from the Ingress object API (i.e., from my YAMLs).

Specific use case: GCE backends are provisioned with a default timeout of 30 seconds, which is not sufficient for some long requests. I'd like to be able to control the timeout per-backend.

Copied from original issue: kubernetes/ingress-nginx#243

Large file upload fails after 30 seconds

I am using GLBC on a Google Cloud cluster - Kubernetes v1.8. In this instance, the ingress is for the Nexus repository manager and is defined as follows:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-nexus-https
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.allow-http: "false"
  labels:
    app: ops-nginx
spec:
  tls:
  - hosts:
    - mycorp.com
    secretName: testssl
  backend:
    serviceName: my-nexus
    servicePort: 8081

If I then attempt to upload a file to Nexus via curl, it runs for about 30 seconds then fails as follows:

curl -i -k -T bigfile https://mycorp.com/repository/testbigfiles/

* GnuTLS recv error (-110): The TLS connection was non-properly terminated.
* Closing connection 0
curl: (56) GnuTLS recv error (-110): The TLS connection was non-properly terminated.

Logs in Nexus indicate client just dropped:

Caused by: org.eclipse.jetty.io.EofException: Early EOF
	at org.eclipse.jetty.server.HttpInput$3.noContent(HttpInput.java:791)
	at org.eclipse.jetty.server.HttpInput.read(HttpInput.java:157)
	at org.sonatype.nexus.common.hash.MultiHashingInputStream.read(MultiHashingInputStream.java:66)
	at com.google.common.io.CountingInputStream.read(CountingInputStream.java:63)
	at java.security.DigestInputStream.read(DigestInputStream.java:161)
	at java.io.FilterInputStream.read(FilterInputStream.java:133)
	at java.io.FilterInputStream.read(FilterInputStream.java:107)
	at com.google.common.io.ByteStreams.copy(ByteStreams.java:106)
	at org.sonatype.nexus.blobstore.file.internal.SimpleFileOperations.create(SimpleFileOperations.java:60)
	at org.sonatype.nexus.blobstore.file.FileBlobStore.lambda$0(FileBlobStore.java:287)
	at org.sonatype.nexus.blobstore.file.FileBlobStore.tryCreate(FileBlobStore.java:350)

I did not expect this behavior. This does not happen if I use the Type=LoadBalancer L4 load balancer. Thinking it might have been the L7's health check, I examined it in the cluster dashboard, but it is reporting healthy.

      "ingress.kubernetes.io/backends": "{\"k8s-be-31391--74104f0a9bad5b66\":\"HEALTHY\"}",

Any help is appreciated.
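
For what it's worth, the 30-second cutoff matches the default timeout of the GCE backend service that the controller creates. On newer ingress-gce versions this can be raised per Service via a BackendConfig; a rough sketch, assuming a release that supports the cloud.google.com/v1 BackendConfig (my-nexus is the service name from the manifest above, the timeout value is illustrative):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: nexus-backendconfig
spec:
  # Raise the backend service timeout from the 30s default so large
  # uploads are not cut off mid-transfer.
  timeoutSec: 600
---
apiVersion: v1
kind: Service
metadata:
  name: my-nexus
  annotations:
    cloud.google.com/backend-config: '{"default": "nexus-backendconfig"}'
spec:
  type: NodePort
  selector:
    app: nexus
  ports:
  - port: 8081
    targetPort: 8081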

Some links in 'ingress' repo GCE release notes are 404

Ingress Healthcheck Configuration

From @freehan on May 15, 2017 21:25

On GCE, the ingress controller sets up a default health check for backends. The health check points at the NodePort of the backend services on every node. Currently, there is no way to describe the detailed configuration of the health check in the Ingress. On the other hand, each application may want to handle health checks differently. To bypass this limitation, on Ingress creation the ingress controller scans all backend pods, picks the first ReadinessProbe it encounters, and configures the health check accordingly. However, the health check will not be updated if the ReadinessProbe is later updated. (Refer: kubernetes/ingress-nginx#582)
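
For context, here is a minimal sketch of the kind of ReadinessProbe the controller picks up under today's behavior (names and values are illustrative); the resulting GCE health check probes the same path on the service's NodePort:

      containers:
      - name: web
        image: example/web:latest
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            # The controller copies this path into the GCE health check at
            # Ingress creation time, but does not track later updates to it.
            path: /healthz
            port: 8080
          periodSeconds: 10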

I see 3 options going forward with healthcheck

  1. Expand the Ingress or Service spec to include more configuration for healthcheck. It should include the capabilities provided by major cloud providers, GCP, AWS...

  2. Keep using the readiness probe for health check configuration:
    a) Keep today's behavior and communicate the expectation clearly. However, this still breaks the abstraction and declarative nature of k8s.
    b) Let the ingress controller watch the backend pods for any ReadinessProbe updates. This seems expensive and complicated.

  3. Only set up a default health check for ingresses. The ingress controller will only periodically ensure the health check exists, but will not care about its detailed configuration. Users can configure it directly through the cloud provider.

I am in favor of option 3). There are always more bells and whistles on different cloud providers; the higher up the stack we go, the more features we can utilize. For an L7 LB, there is no clean, simple way to describe every intention, and the same goes for health checks. To ensure a smooth experience, k8s still sets up the basics. For advanced use cases, users will have to configure them through the cloud provider.

Thoughts? @kubernetes/sig-network-misc

Copied from original issue: kubernetes/ingress-nginx#720

Add tests for multi cluster ingresses

Tasks:

  • Add an e2e test to ensure that instance groups annotation is added to an ingress with the multicluster ingress class annotation.
  • Ensure that the instance groups annotation is added for each zone in a multi zonal cluster.

cc @nicksardo @G-Harmon

[GLBC] GCE resources of non-snapshotted ingresses are not deleted

From @nicksardo on March 13, 2017 20:34

For the GLBC to know about load balancer GCE resources, it must first have snapshotted an ingress object that created those resources. The problem occurs when an ingress object is deleted while the GLBC is offline or starting up.

The following are test logs from gce-gci-latest-upgrade-etcd:

Successful Test #334 : glbc.log

I0313 19:04:50.100770       5 utils.go:166] Syncing e2e-tests-ingress-upgrade-pjrvv/static-ip
... # GLBC has listed existing ingress objects and knows about the `static-ip` ingress
E0313 19:04:55.101688       5 utils.go:168] Requeuing e2e-tests-ingress-upgrade-pjrvv/static-ip, err Waiting for stores to sync
... # Ingress is being requeued because other stores have not finished syncing
I0313 19:04:55.107613       5 utils.go:166] Syncing e2e-tests-ingress-upgrade-pjrvv/static-ip
I0313 19:04:55.107669       5 controller.go:295] Syncing e2e-tests-ingress-upgrade-pjrvv/static-ip
... # Ingresses are listed from apiserver
I0313 19:04:55.594861       5 loadbalancers.go:165] Creating loadbalancers [0xc42067a2d0]
# Following the above line, the ingress obj will be added to the L7.snapshotter (cache)
I0313 19:04:55.755621       5 controller.go:128] Delete notification received for Ingress e2e-tests-ingress-upgrade-pjrvv/static-ip
# Ingress watcher notices the ingress object was deleted

Failed Test #332 : glbc.log

I0313 17:11:53.401594       5 utils.go:166] Syncing e2e-tests-ingress-upgrade-9shgs/static-ip
... 
I0313 17:11:54.570150       5 controller.go:128] Delete notification received for Ingress e2e-tests-ingress-upgrade-9shgs/static-ip
... # Watcher notices the ingress object has been deleted; however, we have not yet snapshotted the ingress object
E0313 17:11:58.401711       5 utils.go:168] Requeuing e2e-tests-ingress-upgrade-9shgs/static-ip, err Waiting for stores to sync
...
I0313 17:11:58.401773       5 controller.go:295] Syncing e2e-tests-ingress-upgrade-9shgs/static-ip
... # Ingresses are listed from apiserver but because the ingress was deleted, the list is empty
I0313 17:11:58.401852       5 loadbalancers.go:165] Creating loadbalancers []
# Ingress object is not created 

The L7Pool GC func deletes resources of ingresses stored in the l7.snapshotter that are not mentioned by name in the arg slice. Because the test ingress was never stored in the snapshot cache, the GCE resources are never deleted.

The failed test log also contains multiple blocks of the following:

I0313 17:12:27.161699       5 backends.go:336] GCing backend for port 32397
I0313 17:12:27.161767       5 backends.go:233] Deleting backend k8s-be-32397--179702b6ab620fd3
...
E0313 17:12:27.390153       5 utils.go:168] Requeuing e2e-tests-ingress-upgrade-9shgs/static-ip, err Error during sync <nil>, error during GC googleapi: Error 400: The backend_service resource 'k8s-be-32397--179702b6ab620fd3' is already being used by 'k8s-um-e2e-tests-ingress-upgrade-9shgs-static-ip--179702b6ab620', resourceInUseByAnotherResource

The GLBC knows about the extraneous backends because the BackendPool uses the CloudListingPool. This implementation calls a List func to reflect the current state of GCE.

Copied from original issue: kubernetes/ingress-nginx#431

Is there nginx-controller like session affinity support

I've been trying to find a solid answer on how to use (if it is even possible) session affinity with a GCE ingress resource.

My understanding is that nginx supports this by going around the service's VIP and straight to the pods, updating the nginx configuration as needed when the pods change -- since, while Services support client-IP affinity, that affinity would be based on the nginx IP rather than the client making the HTTP request, the service must be bypassed.

Can the GCE ingress do similar or is there an alternative way to get affinity?

Add e2e testing

From @bprashanth on November 10, 2016 18:48

kubernetes-retired/contrib#1441 (comment)

e2e testing: If we could figure out a way to set up an e2e builder that runs https://github.com/kubernetes/kubernetes/blob/master/test/e2e/ingress.go#L58 for every commit just like the cadvisor repo https://github.com/google/cadvisor, that would be great. I'm sure our test-infra people would be more than willing to help with this problem (file an issue like kubernetes/test-infra#939, but more descriptive maybe :)

@porridge fyi

Copied from original issue: kubernetes/ingress-nginx#5

GCE health check does not pick up changes to pod readinessProbe

From @ConradIrwin on April 11, 2017 4:22

Importing this issue from kubernetes/kubernetes#43773 as requested.

Kubernetes version (use kubectl version): 1.4.9

Environment:

  • Cloud provider or hardware configuration: GKE
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

What happened:

I created an ingress with type GCE

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: admin-proxy
  labels:
    name: admin-proxy
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
    - secretName: ingress-admin-proxy
  backend:
    serviceName: admin-proxy
    servicePort: 80

This set up the health-check on the google cloud backend endpoint to call "/" as documented.

Unfortunately my service doesn't return 200 on "/", so I added a readinessProbe to the pod as suggested by the documentation.

What you expected to happen:

I expected the health check to be automatically updated.

Instead I had to delete the ingress and re-create it in order for the health-check to update.

How to reproduce it (as minimally and precisely as possible):

  1. Create a deployment with no readiness probe.
  2. Create an ingress pointing to the pods created by that deployment.
  3. Add a readiness probe to the deployment.

Anything else we need to know:

Copied from original issue: kubernetes/ingress-nginx#582

examples/websocket/server.go: concurrent write to socket

In the handleWSConn function, the goroutine calls c.WriteMessage and the other for loop also calls c.WriteMessage. But according to the Gorilla WebSocket docs, WriteMessage is not goroutine-safe, so we need a mutex to protect against concurrent writes.

I saw a couple of goroutine panics because of this while doing some light stress tests.

GCE: respect static-ip assignment via update

From @bprashanth on January 31, 2017 23:4

Currently if you create an ingress then take its static-ip and assign the annotation, the controller is not smart enough to not delete the IP. To make this work:

  1. Gate Static IP cleanup on a call to getEffectiveIP(): https://github.com/kubernetes/ingress/blob/master/controllers/gce/loadbalancers/loadbalancers.go#L828 (just like we do here https://github.com/kubernetes/ingress/blob/master/controllers/gce/loadbalancers/loadbalancers.go#L548)
  2. Pipe the runtimeInfo into GC, instead of just the name (https://github.com/kubernetes/ingress/blob/master/controllers/gce/controller/controller.go#L324, https://github.com/kubernetes/ingress/blob/master/controllers/gce/controller/controller.go#L299)
  3. Store the updated runtimeInfo in the L7 struct (), before calling delete()

This way the call to getEffectiveIP will observe the annotation value right before deleting the IP

Copied from original issue: kubernetes/ingress-nginx#196

Point gce ingress health checks at the node for onlylocal services

From @bprashanth on November 18, 2016 22:35

We now have a new beta annotation on Services, external-traffic (http://kubernetes.io/docs/user-guide/load-balancer/#loss-of-client-source-ip-for-external-traffic). With this annotation set to OnlyLocal, NodePort Services only proxy to local endpoints. If there are no local endpoints, iptables is configured to drop packets. Currently, sticking an OnlyLocal Service behind an Ingress works, but does so in a suboptimal way.
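
For reference, a minimal sketch of a NodePort Service using that beta annotation (the annotation key and value are the beta ones documented at the time; in later Kubernetes releases this became the externalTrafficPolicy: Local field; names are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-onlylocal-service
  annotations:
    # Only proxy to endpoints on the local node; nodes without endpoints
    # drop NodePort traffic, so they should fail the LB health check quickly.
    service.beta.kubernetes.io/external-traffic: "OnlyLocal"
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080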

The issue is that, currently, the best way to configure LB health checks is to set a high failure threshold so we detect nodes with bad networking but don't flake on bad endpoints. With this approach, if all endpoints evacuate a node, it'll take e.g. 10 health checks × 10 seconds per health check ≈ 100 seconds to mark that node unhealthy, but the node will start DROPing packets for the NodePort immediately. If we pointed the LB health check at the healthcheck-nodeport (a nodePort that's managed by kube-proxy), it would fail in < 10s even with the high thresholds described above.

@thockin

Copied from original issue: kubernetes/ingress-nginx#19

[GLBC] Surface event when front-end not created

From @pijusn on May 4, 2017 11:2

I recently ran the following shell script with the given Kubernetes definitions; after some time the load balancer was created, but no front-end was created for it.

set -ex

read -p "Press any key to start..." -sn1

gcloud config set project echo-test-166405
gcloud container clusters create echo-cluster --machine-type="g1-small" --num-nodes=1

echo "Cluster created"

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls.key -out /tmp/tls.crt -subj "/C=LT"
kubectl create secret tls echoserver-tls --key /tmp/tls.key --cert /tmp/tls.crt

# Note: At this point I just waited a minute to make sure it's all OK.
echo "Self-generated certificate created"
read -p "Press any key to continue... " -sn1

kubectl apply -f echoserver/00-namespace.yaml
kubectl apply -f echoserver/deployment.yaml
kubectl apply -f echoserver/service.yaml
kubectl apply -f echoserver/ingress-tls.yaml

Here are all YAML files merged into one:

apiVersion: v1
kind: Namespace
metadata:
  name: echoserver
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: echoserver
  namespace: echoserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
      - image: gcr.io/google_containers/echoserver:1.0
        imagePullPolicy: Always
        name: echoserver
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  namespace: echoserver
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  type: NodePort
  selector:
    app: echoserver
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echoserver
  namespace: echoserver
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - hosts:
    - echo.pijusn.eu
    secretName: echoserver-tls
  rules:
  - host: echo.pijusn.eu
    http:
      paths:
      - backend:
          serviceName: echoserver
          servicePort: 80

I expected an HTTPS front-end to be created. It was a fresh project and a fresh cluster, so no quotas were kicking in.

After I removed kubernetes.io/ingress.allow-http: "false", it did create an HTTP front-end but still did not create an HTTPS one.

This seems like an issue. Also, if you have ideas where to look for an error message or something (why it failed to create it) - please share.

Copied from original issue: kubernetes/ingress-nginx#686

Wrong health check with end-to-end https scheme

Hi,
I'm trying to set up HTTPS end-to-end (GCP LB to K8S Pod). On the Pod I have gunicorn, which doesn't support HTTP and HTTPS at the same time, only one. So I must use HTTPS everywhere.

I set this in the deployment object:

        readinessProbe:
          httpGet:
            scheme: HTTPS
            path: /heartbeat/
            port: 443
          initialDelaySeconds: 5
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 10

I add this to the service object:

  annotations:
    service.alpha.kubernetes.io/app-protocols: '{"my-https-port":"HTTPS"}'

Unfortunately, the health check is set to check / on HTTP, and the backend service is on HTTP.

Note: The same configuration with HTTP works well (the LB is up and running).

Did I miss something?

Regards.
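
For completeness, a hedged sketch of how that annotation is usually paired with a named Service port; the port name in the annotation must match the port name in the spec. The alpha annotation key is the one from this report (later replaced by cloud.google.com/app-protocols), and all names are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: my-https-service
  annotations:
    # "my-https-port" must match the named port defined below.
    service.alpha.kubernetes.io/app-protocols: '{"my-https-port":"HTTPS"}'
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - name: my-https-port
    port: 443
    targetPort: 443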

High glbc CPU usage

From @jmn on January 27, 2017 1:53

I noticed in Stackdriver monitoring that glbc is constantly using about 30% CPU in my cluster. @thockin said on Slack that this might be related to a misconfigured GCE ingress. However, I looked through all my Ingresses and they are all class nginx. Anyone got a clue or a suggestion on how to troubleshoot further?

One thing I noticed is that in my Ingress "kube-lego", which is configured automatically by kube-lego, there is a "status" section which looks like this:

status:
  loadBalancer:
    ingress:
    - ip: 130.211....
    - ip: 146.148....

The second IP address is my nginx load balancer; however, the first IP address is unknown to me, and I currently do not know where it comes from.

Copied from original issue: kubernetes/ingress-nginx#183

Specify IP addresses the Ingress controller is listening on

From @cluk33 on January 17, 2017 13:47

We are running k8s on bare metal. It would be great to specify the IP addresses the nginx ingress controller is listening on.

This would enable us

  1. to route traffic for specific IPV4 IPs to k8s
  2. also route (http) traffic for IPV6 addresses to k8s.

AFAIK 1) can be achieved by using a service in front of the ingress controller with external (IPv4) addresses, but currently we do not see any way to achieve 2).

Might be related to #131 .

Thanks a lot!

Copied from original issue: kubernetes/ingress-nginx#137

support for proper health checks

The GCE load balancer performs health checks to see if backends are fit to take traffic. These health checks seem to be derived from the readinessProbe defined on the pod that the service points to, where the service is taken from the ingress.

However, a pod can expose multiple ports, each of which can be exposed by a service, but there can be only one readinessProbe on a pod. Also, the semantics of each exposed service do not have to match the semantics of the readinessProbe.

It would make more sense if the LB health check could be defined either in the ingress or in the service.
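
As a forward-looking note, later ingress-gce releases do let the load-balancer health check be defined alongside the Service via a BackendConfig, which is close to what this issue asks for. A rough sketch, assuming a release that supports the cloud.google.com/v1 BackendConfig healthCheck block (names and values are illustrative):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-healthcheck-config
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz
    port: 8080
    checkIntervalSec: 15
    timeoutSec: 5
    healthyThreshold: 1
    unhealthyThreshold: 2
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    cloud.google.com/backend-config: '{"default": "my-healthcheck-config"}'
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080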

e2e test leaves garbage around

From @porridge on December 22, 2016 9:54

e2e-down.sh explicitly removes some containers, and at least one more exits after some time, but there are plenty of others which stay around apparently indefinitely.

I'm not sure whether this is a known deficiency of hyperkube or whether we're just using it improperly.

porridge@beczulka:~/Desktop/coding/go/src/k8s.io/ingress$ docker ps 
CONTAINER ID        IMAGE                                                    COMMAND                  CREATED             STATUS              PORTS               NAMES
ac39a0a5730e        gcr.io/google_containers/kube-dnsmasq-amd64:1.4          "/usr/sbin/dnsmasq --"   9 minutes ago       Up 9 minutes                            k8s_dnsmasq.bee611d9_kube-dns-v20-6caok_kube-system_8ff29f9e-c82a-11e6-9097-24770389c384_107a8b1a
4b26865f46da        gcr.io/google_containers/exechealthz-amd64:1.2           "/exechealthz '--cmd="   13 minutes ago      Up 13 minutes                           k8s_healthz.3613f95_kube-dns-v20-6caok_kube-system_8ff29f9e-c82a-11e6-9097-24770389c384_5ee00e74
224be6a6f7bc        gcr.io/google_containers/pause-amd64:3.0                 "/pause"                 13 minutes ago      Up 13 minutes                           k8s_POD.a6b39ba7_kube-dns-v20-6caok_kube-system_8ff29f9e-c82a-11e6-9097-24770389c384_3e36dd7a
e85eab303994        gcr.io/google_containers/pause-amd64:3.0                 "/pause"                 13 minutes ago      Up 13 minutes                           k8s_POD.2225036b_kubernetes-dashboard-v1.4.0-96im4_kube-system_8ff1e7e5-c82a-11e6-9097-24770389c384_ff54a5b2
dd4bc152a110        gcr.io/google_containers/hyperkube-amd64:v1.4.5          "/hyperkube apiserver"   6 days ago          Up 6 days                               k8s_apiserver.213c742_k8s-master-0.0.0.0_kube-system_501bec47043160feec61f2839ec6a4c5_29321e20
cfbb7e04222e        gcr.io/google_containers/hyperkube-amd64:v1.4.5          "/setup-files.sh IP:1"   6 days ago          Up 6 days                               k8s_setup.2cde3c3c_k8s-master-0.0.0.0_kube-system_501bec47043160feec61f2839ec6a4c5_baf0cb65
219ad2ef7f1c        gcr.io/google_containers/hyperkube-amd64:v1.4.5          "/copy-addons.sh mult"   6 days ago          Up 6 days                               k8s_kube-addon-manager-data.270e200b_kube-addon-manager-0.0.0.0_kube-system_c3c035106a9df5bd5f54b3e87143ddbf_636a1308
a2fac3257ece        gcr.io/google_containers/kube-addon-manager-amd64:v5.1   "/opt/kube-addons.sh"    6 days ago          Up 6 days                               k8s_kube-addon-manager.ed858faf_kube-addon-manager-0.0.0.0_kube-system_c3c035106a9df5bd5f54b3e87143ddbf_2ccd5348
189bb0cf3d59        gcr.io/google_containers/pause-amd64:3.0                 "/pause"                 6 days ago          Up 6 days                               k8s_POD.d8dbe16c_k8s-master-0.0.0.0_kube-system_501bec47043160feec61f2839ec6a4c5_45247fc2
d971a4ae6c4a        gcr.io/google_containers/pause-amd64:3.0                 "/pause"                 6 days ago          Up 6 days                               k8s_POD.d8dbe16c_kube-addon-manager-0.0.0.0_kube-system_c3c035106a9df5bd5f54b3e87143ddbf_286fe6e0
porridge@beczulka:~/Desktop/coding/go/src/k8s.io/ingress$ 

Copied from original issue: kubernetes/ingress-nginx#80

GCE: WebSocket: connection is broken CloseEvent {isTrusted: true, wasClean: false, code: 1006, reason: "", type: "close", …}

From @unfii on August 29, 2017 13:28

Latest GCE ingress.

In the ingress events:

Events:
  FirstSeen     LastSeen        Count   From                    SubObjectPath   Type            Reason  Message
  ---------     --------        -----   ----                    -------------   --------        ------  -------
  58m           58m             1       loadbalancer-controller                 Normal          ADD     default/idecisiongames
  58m           58m             1       loadbalancer-controller                 Normal          ADD     default/idecisiongames
  57m           57m             1       loadbalancer-controller                 Warning         GCE     googleapi: Error 409: The resource 'projects/dm-apps-beta/global/sslCertificates/k8s-ssl-default-idecisiongames
--be0f1d8aa9717d3c' already exists, alreadyExists
  57m           57m             1       loadbalancer-controller                 Normal          CREATE  ip: 130.211.32.230
  57m           57m             1       loadbalancer-controller                 Normal          CREATE  ip: 130.211.32.230
  57m           4m              17      loadbalancer-controller                 Warning         GCE     No node tags supplied and also failed to parse the given lists of hosts for tags. Abort creating firewall rule.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  labels:
    app: idg-server
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "test"
spec:
  tls:
  - secretName: test
  backend:
    serviceName: idg-server
    servicePort: 3000

We also use the default deployment from
https://github.com/kubernetes/ingress/tree/master/examples/deployment/gce

In the console we see:

WebSocket: connection is broken CloseEvent {isTrusted: true, wasClean: false, code: 1006, reason: "", type: "close", …}

How to fix it?

Copied from original issue: kubernetes/ingress-nginx#1262

GCE: improve default backend handling

From @bprashanth on January 10, 2017 10:43

Today the lb pool manages the default backend. This makes for some confusing situations.

  1. Don't get the default backend in main, instead pipe the --default-backend arg through the cluster-manager down into 3 pools: firewall, backend, lb
  2. Create the default backend in Sync() of backend pool, when there are nodePorts, so users that don't care about ingress don't get a default backend
  3. Delete the default backend in GC() of the backend pool, when there are 0 nodePorts
  4. When the lb pool wants to know the default backend, it re-gets the service passed through --default-backend, and reconstructs its name through the namer. If the backend pool hasn't created it, throw an error event.
  5. Do the same thing from the firewall pool

Copied from original issue: kubernetes/ingress-nginx#120
