
k8s-multicluster-ingress's Introduction

kubemci

GoReportCard Widget Coveralls Widget GoDoc Widget Slack Widget

[DEPRECATED] kubemci has been deprecated in favor of Ingress for Anthos, which is the recommended way to deploy multi-cluster ingress.

kubemci is a tool to configure Kubernetes ingress to load balance traffic across multiple Kubernetes clusters.

This is a Google Cloud Platform beta tool, suitable for limited production use cases: https://cloud.google.com/kubernetes-engine/docs/how-to/setup-multi-cluster-ingress

Getting started

You can try out kubemci using the zone printer example.

Follow the instructions as detailed here.

To create an HTTPS ingress, follow the instructions here.

More information

We have a video explaining what kubemci is intended for. It also shows a demo of setting up a multicluster ingress.

We also have an FAQ for common questions.

Contributing

See CONTRIBUTING.md for instructions on how to contribute.

You can also check out existing issues for ways to contribute.

Feedback

If you are using kubemci, we would love to hear from you! Tell us how you are using it and what works and what does not: #117

Caveats

  • Users need to specify a unique NodePort for their multicluster services (one that is available across all clusters). This is an onerous requirement, needed because the health checks must be the same across all clusters.

  • This only works for clusters in the same GCP project. In the future, we can integrate with Shared VPC to enable cross-project load balancing.

  • Load balancing across clusters in the same region happens in proportion to the number of nodes in each cluster, not the number of containers.

  • Since ILBs and ingress share the same instance groups (IGs), there is a race condition where deleting ILBs can delete the IG that is supposed to be used for multicluster ingress. This is fixed by the next ingress controller forced sync (every 10 minutes). The same race condition exists in single-cluster ingress as well.

  • Users need to explicitly update all their existing multicluster ingresses (by running kubemci create ingress) if they add nodes from a new zone to a cluster. This is required so that the tool can update the backend service and add the new instance group to it.
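The first caveat can be illustrated with a Service manifest. The names and port values below are only examples; the point is that the same explicit nodePort must be set (and be free) in every cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: zone-printer   # example name; apply the same manifest in every cluster
spec:
  type: NodePort
  selector:
    app: zone-printer
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30061    # must be identical (and available) in all clusters
```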

k8s-multicluster-ingress's People

Contributors

ahmetb, anneal, cgrant, csbell, g-harmon, glindstedt, kuang-byte, madhusudancs, makocchi-git, nikhiljindal, normanjoyner, perotinus, rramkumar1, ryane


k8s-multicluster-ingress's Issues

handle default GCP resource attribute values uniformly

Kubemci doesn't care about all the fields of all the GCP resources that it creates. Some fields are not specified, so the default value is used. This works great for initial creation. When someone runs kubemci again, we check the settings of the resources and determine whether the existing one "matches" the desired one. The question is: how do we best handle fields that we did not explicitly specify?
Sometimes I added these extra fields to the "desired" resource (e.g. HealthCheck); sometimes (e.g. BackendService) I only compared the fields I was interested in (bad).

Here's the breakdown of how each resource is handled.

Explicitly compares fields (not desirable; we should fix this):

  • BackendService: https://github.com/GoogleCloudPlatform/k8s-multicluster-ingress/blob/master/app/kubemci/pkg/gcp/backendservice/backendservicesyncer.go

Added extra fields to "desired":

  • ForwardingRule: fine as is (ed433cf)
  • HealthCheck: needs fixing. Remove "kind"; "proxyHeader" TBD.

No extra fields needed:

  • TargetProxy
  • UrlMap

Firewall rules: in progress (PR #77; see next paragraph).

In the end, @nikhiljindal and I thought we should:

  • Add fields to the "desired" resource where it makes sense to require the default value (e.g. a firewall rule must be an INGRESS rule, not EGRESS, to make sense).
  • Ignore fields where we decided it was okay to do so.

With this scheme, when a new field is added, by default we will not ignore its value. The tool will refuse to overwrite the resource without --force, until we make a code update.
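As a rough sketch of this scheme, the comparison might look like the following Go. The firewallRule type and the matches helper are illustrative stand-ins, not the actual kubemci code:

```go
package main

import "fmt"

// firewallRule holds only the fields this sketch cares about; it is an
// illustrative stand-in, not the real GCP resource type.
type firewallRule struct {
	Direction   string // we require the default value: "INGRESS"
	Network     string
	Description string // deliberately ignored when comparing
}

// matches reports whether an existing rule satisfies the desired one.
// Fields we care about are compared explicitly; Description is ignored,
// per the scheme described above.
func matches(desired, existing firewallRule) bool {
	return desired.Direction == existing.Direction &&
		desired.Network == existing.Network
}

func main() {
	desired := firewallRule{Direction: "INGRESS", Network: "default"}
	existing := firewallRule{Direction: "INGRESS", Network: "default", Description: "managed by kubemci"}
	// A differing Description does not force an update.
	fmt.Println(matches(desired, existing)) // true
}
```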

[Help] Required 'compute.zones.list' permission

Hi,

When running the following command (see zone-printer example for more info):

kubemci create zone-printer --ingress=ingress/nginx.yaml --gcp-project=$PROJECT --kubeconfig=./zpkubeconfig

I get:

Error: Error in creating load balancer: error in creating cloud interface: unexpected response listing zones: googleapi: Error 403: Required 'compute.zones.list' permission for 'projects/$PROJECT', forbidden

Could someone help me? I added the scopes listed at https://cloud.google.com/compute/docs/reference/beta/zones/list when creating the instances, but it does not seem to work.

Only using a default network named 'default' works

The firewall rule fails to be created because our default network is not named 'default'. This could be fixed by either of these two options:

  1. Adding an argument to allow specifying the network
  2. Retrieving the default network from GCE

Delegate creating firewall rule to ingress-gce

Similar to instance groups, we can let the ingress-gce controller running in each cluster manage the firewall rules in that cluster.

Firewall rules are independent in each cluster and do not require any shared logic between controllers in those clusters.

The advantage is that the ingress-gce controllers running in the clusters have information about the default network name and whether XPN is enabled, and hence can make a more informed decision about creating the firewall rule.

cc @nicksardo @bowei @csbell @madhusudancs @G-Harmon thoughts?

Need better error message when a named port is missing from an instance group

Currently, we get this output:

$ go run cmd/kubemci/kubemci.go create mci-lb-1 --ingress=base-ingress.yaml --kubeconfig=config --gcp-project=gharm-kubernetes --force
Ingress already exists; moving on.
Ingress already exists; moving on.
Ensuring health checks
Ensuring health check for port: {30476 HTTP ns0/nginx0 {0 80 }}
Health check mci1-hc-30476--mci-lb-1 exists already. Checking if it matches our desired health check
Updating existing health check mci1-hc-30476--mci-lb-1 to match the desired state
Updating existing health check mci1-hc-30476--mci-lb-1 to match the desired state
Health check mci1-hc-30476--mci-lb-1 updated successfully
Determining instance groups for cluster gharm-kubernetes_e2e-test-gharm
Determining instance groups for cluster gke_gharm-kubernetes_us-central1-a_cluster-2
Fetching instance group: us-central1-a k8s-ig--d1ef47f6a30c57e5
Fetched instance group: us-central1-a/k8s-ig--d1ef47f6a30c57e5 got named ports: port: &{port31115 31115 [] []} port: &{port31751 31751 [] []}
Ensuring backend services
Ensuring backend service for port: {30476 HTTP ns0/nginx0 {0 80 }}
Ensuring url map
Ensuring http target proxy
Ensuring target http proxy
Target proxy mci1-tp--mci-lb-1 exists already. Checking if it matches our desired target proxy
Desired target proxy exists already
Ensuring http forwarding rule
forwarding rule mci1-fw--mci-lb-1 exists already. Checking if it matches our desired forwarding rule mci1-fw--mci-lb-1
Desired forwarding rule exists already
Ensuring firewall rule
Firewall rule mci1-fr--mci-lb-1 exists already. Checking if it matches our desired firewall rule
Desired firewall rule exists already
Error in creating load balancer: 2 errors occurred:

  • Error missing corresponding named port on the instance group in ensuring backend service for port {30476 HTTP ns0/nginx0 {0 80 }}

  • error 1 error occurred:

  • unexpected: No backend service found for service: nginx0, must have been an error in ensuring backend services in computing desired url map


It should at least say which instance group has the problem.

Supporting multiple TLS certs

As far as I can tell, this does not support having multiple TLS certs. My coworker @fastest963 and I are willing to put some time in to add support for this. I see there is some discussion on this topic here kubernetes/ingress-gce#46. Are there any known hurdles for this or any suggestions on getting started?

Handle update targetproxy for non-urlmap link changes

@G-Harmon rightly pointed out in #92 that we do not handle the situation where the difference between the existing and desired target proxies is something other than the urlmap link (for example, the description).

We assume that if they differ, it must be the urlmap link, and hence update only that.
We need to fix this.

Add a timeout flag

The tool keeps trying indefinitely. We should add a timeout flag so that it gives up after some time.

If kubemci changes a service, ingress-gce won't notice that

@nicksardo reported a potential bug in the interaction between kubemci and ingress-gce:
"When a service change is observed, it normally scans through all ingresses to determine whether it should enqueue the ingress for sync.
It only checks GCEIngress, not GCEMultiClusterIngress"
In this code:
https://github.com/kubernetes/ingress-gce/blob/master/pkg/controller/controller.go#L192-L205

I don't think we change the service, but I wanted @nikhiljindal or @csbell to verify this.

go vet is failing

go vet is failing with the current code:

$ go vet
src/github.com/GoogleCloudPlatform/k8s-multicluster-ingress/app/kubemci/cmd/create.go:204: wrong number of args for format in Errorf call: 1 needed but 2 args
src/github.com/GoogleCloudPlatform/k8s-multicluster-ingress/app/kubemci/cmd/delete.go:156: wrong number of args for format in Errorf call: 1 needed but 2 args
exit status 1
src/github.com/GoogleCloudPlatform/k8s-multicluster-ingress/app/kubemci/pkg/gcp/backendservice/backendservicesyncer_test.go:99: missing argument for Errorf("%v"): format reads arg 2, have only 1 args
src/github.com/GoogleCloudPlatform/k8s-multicluster-ingress/app/kubemci/pkg/gcp/backendservice/backendservicesyncer_test.go:119: arg port for printf verb %s of wrong type: int64
exit status 1
src/github.com/GoogleCloudPlatform/k8s-multicluster-ingress/app/kubemci/pkg/gcp/utils/utils.go:59: missing argument for Errorf("%s"): format reads arg 1, have only 0 args
exit status 1

Better input validation

We should do better validation upfront.
We also assume a few things: it is up to the user to ensure that there is a default backend service in the ingress spec, that the services exist in all clusters, and that they have the same node ports in all clusters.
We should check for these things.
We can always perform the checks or hide them behind a --validate flag.

Concrete list:

  • Ingress spec should have a default backend service
  • Services mentioned in the ingress spec should exist in all clusters in the same namespace
  • A particular service should have the same nodeport in all clusters
  • All clusters should be of version > 1.8.1
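The nodeport check in the list above could be sketched like this. sameNodePort and the cluster names are hypothetical, not part of the tool:

```go
package main

import "fmt"

// sameNodePort verifies that a service uses the same NodePort in every
// cluster. nodePorts maps cluster name -> the service's NodePort there.
// This is an illustrative sketch of the validation described above.
func sameNodePort(nodePorts map[string]int64) error {
	var first int64
	var firstCluster string
	for cluster, port := range nodePorts {
		if firstCluster == "" {
			first, firstCluster = port, cluster
			continue
		}
		if port != first {
			return fmt.Errorf("nodeport mismatch: %s has %d, %s has %d",
				firstCluster, first, cluster, port)
		}
	}
	return nil
}

func main() {
	// Mismatched ports across clusters should be reported before any
	// GCP resources are touched.
	err := sameNodePort(map[string]int64{"cluster-a": 30061, "cluster-b": 30062})
	fmt.Println(err)
}
```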

Allow creation of Ingresses in default namespace

As it stands, one must include a namespace in the ingress yaml file. We should detect the case where no namespace is provided and use the default namespace.

The tool currently gives this error if no namespace is specified:
Error in creating load balancer: error in creating ingress in cluster my-cluster-name: an empty namespace may not be set during creation
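A minimal sketch of the proposed defaulting, with defaultNamespace as a hypothetical helper (the real fix would apply this to the parsed ingress object before creation):

```go
package main

import "fmt"

// defaultNamespace returns "default" when the ingress spec omits a
// namespace, mirroring kubectl's behavior.
func defaultNamespace(ns string) string {
	if ns == "" {
		return "default"
	}
	return ns
}

func main() {
	fmt.Println(defaultNamespace(""))        // default
	fmt.Println(defaultNamespace("zp-prod")) // zp-prod
}
```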

Support dry-run mode

We should add a --dry-run flag that when set outputs what will happen if the command is run without that flag. It should not make any changes at all.

Provide binary release

Hello,

In order to ease testing of kubemci and increase the amount of feedback, could you please provide a binary release?

Thanks in advance,
Cheers

Create GCE resources in parallel

There is room for optimization: it can take a few seconds to create certain GCP resources, and we could improve the end-to-end load balancer creation time by creating them in parallel.

`kubemci create` info text for named ports should print formatted named ports

kubemci create info text prints Go struct that contains pointers for named ports.

Fetched instance group: us-east4-a/k8s-ig--72ba3ab13eeabf69, got named ports: []*compute.NamedPort{(*compute.NamedPort)(0xc4203d57c0), (*compute.NamedPort)(0xc4203d5810)}
Ensuring backend services

We should instead print a formatted text.
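The formatting could be as simple as the following sketch; namedPort is an illustrative stand-in holding the two compute.NamedPort fields that matter for logging:

```go
package main

import (
	"fmt"
	"strings"
)

// namedPort mirrors the name and port of a GCE named port for logging
// purposes; it is a stand-in for the real compute.NamedPort type.
type namedPort struct {
	Name string
	Port int64
}

// formatNamedPorts renders named ports as "name:port" pairs instead of
// printing raw pointer values.
func formatNamedPorts(ports []*namedPort) string {
	parts := make([]string, 0, len(ports))
	for _, p := range ports {
		parts = append(parts, fmt.Sprintf("%s:%d", p.Name, p.Port))
	}
	return strings.Join(parts, ", ")
}

func main() {
	ports := []*namedPort{{"port31115", 31115}, {"port31751", 31751}}
	fmt.Println(formatNamedPorts(ports)) // port31115:31115, port31751:31751
}
```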

Do you use kubemci command line tool? Tell us!

This is not an issue so much as a lightweight way of gathering information on who is using the kubemci command line tool. This is mostly to satisfy our curiosity, but might also help us decide how to evolve the project.

So, if you use kubemci for something, please chime in here and tell us more!

Cleanup resources if create fails

Forked from #7 (comment)

An example scenario: the user runs the create command, the health checks are created successfully, but creating the backend services fails. The user is then left with dangling health checks that are not useful.
The user can re-run the create command to try again, or run the delete command, which will clean up all these dangling resources.

We can also introduce a --cleanup-on-failure flag that, if set to true, will clean up resources when create fails, without the user having to run delete explicitly. If there is enough demand, we can also explore setting it to true by default.

Filing this issue to keep track.

Reserve a static ip if user has not specified one

An ingress spec needs the mc-ingress class in order to invoke the correct functionality in glbc. Currently the docs don't reflect this, and the tool just proceeds with the default class. I think we should set the class as part of the tool, much like we will want to set the namespace to default rather than leaving it empty.

Allow users to remove a cluster from an existing multicluster ingress

Scenario:
A user creates a multicluster ingress in clusters A, B, and C. Now the user wants to remove cluster C so that the multicluster ingress is restricted to clusters A and B only.
If the user runs the kubemci create command again with a clusters.yaml containing only clusters A and B, then the GCLB will be updated correctly to send traffic only to clusters A and B, but the ingress resource will still exist in cluster C.

Current solution:
Delete the multicluster ingress altogether (from all clusters) and recreate it in clusters A and B.

This is bad since it leads to downtime: the service will be unavailable while the multicluster ingress is being deleted and recreated.
We need a better solution.

cc @csbell @G-Harmon @madhusudancs @mdelio

Error checking ingress yaml should occur before LB creation

We should do all the (local) error checking of the user's yaml before attempting to do anything. Failing late is a poor experience latency-wise, and it leaves the user wondering whether anything was actually created in the background before the error was reached.

Allow replacing the ingress resource with --force

If a change is made to the ingress definition and then create is run again with the --force flag, the ingress resource in the clusters is not updated. A specific issue that arises is that if you add a new backend service with a different nodeport, the instance groups (that are managed by the ingress-gce controller) are not updated with the required named ports.

I have a local fix that I might polish into a PR but I figured it's good to track the issue in case I don't have time for that.

get-status improvements

Some more information we can add to kubemci get-status:

  • name and namespace of ingress in each cluster
  • create and delete errors associated with that multicluster load balancer. When the create command ends, before returning to the user, it will store all the errors, which the user can then retrieve using kubemci get-status

kubemci -h doesn't print any help information

There's kubemci {delete,create} -h, but simply 'kubemci -h' prints the following, which is not useful to the user:

Usage of ./kubemci:
  -alsologtostderr
        log to standard error as well as files
  -cloud-provider-gce-lb-src-cidrs value
        CIDRs opened in GCE firewall for LB traffic proxy & health checks (default 130.211.0.0/22,35.191.0.0/16,209.85.152.0/22,209.85.204.0/22)
  -log_backtrace_at value
        when logging hits line file:N, emit a stack trace
  -log_dir string
        If non-empty, write log files in this directory
  -logtostderr
        log to standard error instead of files
  -stderrthreshold value
        logs at or above this threshold go to stderr
  -test.bench regexp
        run only benchmarks matching regexp
  -test.benchmem
        print memory allocations for benchmarks
  -test.benchtime d
        run each benchmark for duration d (default 1s)
  -test.blockprofile file
        write a goroutine blocking profile to file
  -test.blockprofilerate rate
        set blocking profile rate (see runtime.SetBlockProfileRate) (default 1)
  -test.count n
        run tests and benchmarks n times (default 1)
  -test.coverprofile file
        write a coverage profile to file
  -test.cpu list
        comma-separated list of cpu counts to run each test with
  -test.cpuprofile file
        write a cpu profile to file
  -test.memprofile file
        write a memory profile to file
  -test.memprofilerate rate
        set memory profiling rate (see runtime.MemProfileRate)
  -test.mutexprofile string
        write a mutex contention profile to the named file after execution
  -test.mutexprofilefraction int
        if >= 0, calls runtime.SetMutexProfileFraction() (default 1)
  -test.outputdir dir
        write profiles to dir
  -test.parallel n
        run at most n tests in parallel (default 32)
  -test.run regexp
        run only tests and examples matching regexp
  -test.short
        run smaller test suite to save time
  -test.timeout d
        fail test binary execution after duration d (0 means unlimited)
  -test.trace file
        write an execution trace to file
  -test.v
        verbose: print additional output
  -v value
        log level for V logs
  -vmodule value
        comma-separated list of pattern=N settings for file-filtered logging

Testing improvements

List of tasks:

  • More extensive unit tests, especially for failure cases.
  • Code coverage: measure code coverage for the repo and comment on open PRs with how much they change code coverage.
  • e2e tests: add a script to run e2e tests, then run it periodically on test-infra.
  • Automated tests: unit and e2e tests that run on each PR in review.

cc @csbell @G-Harmon

kubemci [create|delete|getstatus] -h doesn't mention lbname

The lbname argument is not listed in the help for the subcommands. It prints out:

Create a multicluster ingress.

        Takes an ingress spec and a list of clusters and creates a multicluster ingress targetting those clusters.

Usage:
  kubemci create [flags]

But the last line should say kubemci create <lbname> [flags] or something similar.

Same for all the other commands.

Fix kubemci list format

Example output:

List of multicluster ingresses created:
Name; IP; Clusters
Name: drqzufjltj	IP: 35.190.81.146	Clusters: kube-gcp-project

We shouldn't be repeating the headings twice.

cc @G-Harmon

go fmt is failing

go fmt is failing with the current code:

$ go fmt
diff /usr/local/google/home/nikhiljindal/.gvm/pkgsets/go1.8/global/src/github.com/GoogleCloudPlatform/k8s-multicluster-ingress/app/kubemci/cmd/create_test.go gofmt//usr/local/google/home/nikhiljindal/.gvm/pkgsets/go1.8/global/src/github.com/GoogleCloudPlatform/k8s-multicluster-ingress/app/kubemci/cmd/create_test.go
--- /tmp/gofmt506445242 2017-11-08 22:17:56.192319894 -0800
+++ /tmp/gofmt743323089 2017-11-08 22:17:56.192319894 -0800
@@ -150,7 +150,7 @@
        if len(clients) != len(expectedClients) {
                t.Errorf("unexpected set of clients, expected: %v, got: %v", expectedClients, clients)
        }
-       for k, _ := range expectedClients {
+       for k := range expectedClients {
                if clients[k] == nil {
                        t.Errorf("unexpected set of clients, expected: %v, got: %v", expectedClients, clients)
                }
diff /usr/local/google/home/nikhiljindal/.gvm/pkgsets/go1.8/global/src/github.com/GoogleCloudPlatform/k8s-multicluster-ingress/app/kubemci/pkg/gcp/firewallrule/firewallrulesyncer_test.go gofmt//usr/local/google/home/nikhiljindal/.gvm/pkgsets/go1.8/global/src/github.com/GoogleCloudPlatform/k8s-multicluster-ingress/app/kubemci/pkg/gcp/firewallrule/firewallrulesyncer_test.go
--- /tmp/gofmt227174888 2017-11-08 22:17:56.212320027 -0800
+++ /tmp/gofmt783677991 2017-11-08 22:17:56.212320027 -0800
@@ -49,7 +49,7 @@
                        Protocol: "HTTP",
                        SvcName:  types.NamespacedName{Name: kubeSvcName},
                },
-       }, map[string][]string{"cluster1": []string{igLink}})
+       }, map[string][]string{"cluster1": {igLink}})
        if err != nil {
                t.Fatalf("expected no error in ensuring firewall rule, actual: %v", err)
        }
@@ -93,7 +93,7 @@
                        Protocol: "HTTP",
                        SvcName:  types.NamespacedName{Name: kubeSvcName},
                },
-       }, map[string][]string{"cluster1": []string{igLink}})
+       }, map[string][]string{"cluster1": {igLink}})
        if err != nil {
                t.Fatalf("expected no error in ensuring firewall rule, actual: %v", err)
        }
diff /usr/local/google/home/nikhiljindal/.gvm/pkgsets/go1.8/global/src/github.com/GoogleCloudPlatform/k8s-multicluster-ingress/app/kubemci/pkg/gcp/loadbalancer/loadbalancersyncer.go gofmt//usr/local/google/home/nikhiljindal/.gvm/pkgsets/go1.8/global/src/github.com/GoogleCloudPlatform/k8s-multicluster-ingress/app/kubemci/pkg/gcp/loadbalancer/loadbalancersyncer.go
--- /tmp/gofmt948726802 2017-11-08 22:17:56.232320161 -0800
+++ /tmp/gofmt997676617 2017-11-08 22:17:56.232320161 -0800
@@ -410,7 +410,7 @@
                return nil, fmt.Errorf("could not find client to send requests to kubernetes cluster")
        }
        // Return the client for any cluster.
-       for k, _ := range clients {
+       for k := range clients {
                return clients[k], nil
        }
        return nil, nil

Per-cluster ingress creation should be idempotent

I created ingresses in my clusters as part of service creation: I had all my manifest files in a directory and ran kubectl create -f manifests/. Now when I run kubemci as:

./kubemci create hello --ingress=manifests/ingress.yaml --gcp-project=myproject --kubeconfig=mcikubeconfig 

I get the following error:

Error in creating load balancer: 3 errors occurred:

* Error in creating ingress in cluster cluster1: ingresses.extensions "nginx" already exists
* Error in creating ingress in cluster cluster2: ingresses.extensions "nginx" already exists
* Error in creating ingress in cluster cluster3: ingresses.extensions "nginx" already exists

kubemci create is supposed to be idempotent, so this error is unexpected.

[Help wanted] Documentation for newbies

Please add some brief instructions on the dependencies and requirements needed to compile kubemci (golang, GOPATH, GOBIN, etc.).

That would be very helpful for those who are not familiar with golang. Thanks in advance.

go lint is failing

go lint is failing with the current code:

$ go lint
/k8s-multicluster-ingress/app/kubemci/cmd/cmd.go:30:1: exported function NewCommand should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/cmd/create.go:72:6: exported type CreateOptions should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/cmd/create.go:92:1: exported function NewCmdCreate should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/cmd/create.go:128:21: error strings should not end with punctuation
/k8s-multicluster-ingress/app/kubemci/cmd/create.go:132:21: error strings should not end with punctuation
/k8s-multicluster-ingress/app/kubemci/cmd/create.go:135:21: error strings should not end with punctuation
/k8s-multicluster-ingress/app/kubemci/cmd/create.go:138:21: error strings should not end with punctuation
/k8s-multicluster-ingress/app/kubemci/cmd/create_test.go:153:9: should omit 2nd value from range; this loop is equivalent to `for k := range ...`
/k8s-multicluster-ingress/app/kubemci/cmd/delete.go:41:6: exported type DeleteOptions should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/cmd/delete.go:57:1: exported function NewCmdDelete should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/cmd/delete.go:91:21: error strings should not end with punctuation
/k8s-multicluster-ingress/app/kubemci/cmd/delete.go:95:21: error strings should not end with punctuation
/k8s-multicluster-ingress/app/kubemci/cmd/delete.go:98:21: error strings should not end with punctuation
/k8s-multicluster-ingress/app/kubemci/cmd/delete.go:101:21: error strings should not end with punctuation
/k8s-multicluster-ingress/app/kubemci/cmd/getstatus.go:35:6: exported type GetStatusOptions should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/cmd/getstatus.go:45:1: exported function NewCmdGetStatus should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/cmd/getstatus.go:76:21: error strings should not end with punctuation
/k8s-multicluster-ingress/app/kubemci/cmd/getstatus.go:80:21: error strings should not end with punctuation
/k8s-multicluster-ingress/app/kubemci/cmd/version.go:39:1: exported function NewCmdGetVersion should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/cmd/version.go:60:21: error strings should not end with punctuation
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/backendservice/backendservicesyncer.go:33:6: type name will be used as backendservice.BackendServiceSyncer by other packages, and that stutters; consider calling this Syncer
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/backendservice/backendservicesyncer.go:39:1: exported function NewBackendServiceSyncer should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/backendservice/backendservicesyncer.go:49:1: comment on exported method BackendServiceSyncer.EnsureBackendService should be of the form "EnsureBackendService ..."
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/backendservice/backendservicesyncer.go:70:1: exported method BackendServiceSyncer.DeleteBackendServices should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/backendservice/backendservicesyncer.go:139:10: if block ends with a return statement, so drop this else and outdent its block
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/backendservice/fake_backendservicesyncer.go:24:6: exported type FakeBackendService should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/backendservice/fake_backendservicesyncer.go:32:6: exported type FakeBackendServiceSyncer should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/backendservice/fake_backendservicesyncer.go:37:1: comment on exported function NewFakeBackendServiceSyncer should be of the form "NewFakeBackendServiceSyncer ..."
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/backendservice/fake_backendservicesyncer.go:45:1: exported method FakeBackendServiceSyncer.EnsureBackendService should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/backendservice/fake_backendservicesyncer.go:60:1: exported method FakeBackendServiceSyncer.DeleteBackendServices should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/backendservice/interfaces.go:31:6: type name will be used as backendservice.BackendServiceSyncerInterface by other packages, and that stutters; consider calling this SyncerInterface
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/firewallrule/fake_firewallrulesyncer.go:21:6: exported type FakeFirewallRule should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/firewallrule/fake_firewallrulesyncer.go:27:6: exported type FakeFirewallRuleSyncer should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/firewallrule/fake_firewallrulesyncer.go:32:1: comment on exported function NewFakeFirewallRuleSyncer should be of the form "NewFakeFirewallRuleSyncer ..."
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/firewallrule/fake_firewallrulesyncer.go:40:1: exported method FakeFirewallRuleSyncer.EnsureFirewallRule should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/firewallrule/fake_firewallrulesyncer.go:49:1: exported method FakeFirewallRuleSyncer.DeleteFirewallRules should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/firewallrule/firewallrulesyncer.go:35:6: type name will be used as firewallrule.FirewallRuleSyncer by other packages, and that stutters; consider calling this Syncer
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/firewallrule/firewallrulesyncer.go:43:1: exported function NewFirewallRuleSyncer should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/firewallrule/firewallrulesyncer.go:67:1: exported method FirewallRuleSyncer.DeleteFirewallRules should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/firewallrule/interfaces.go:22:6: type name will be used as firewallrule.FirewallRuleSyncerInterface by other packages, and that stutters; consider calling this SyncerInterface
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/forwardingrule/fake_forwardingrulesyncer.go:23:6: exported type FakeForwardingRule should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/forwardingrule/fake_forwardingrulesyncer.go:30:6: exported type FakeForwardingRuleSyncer should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/forwardingrule/fake_forwardingrulesyncer.go:35:1: comment on exported function NewFakeForwardingRuleSyncer should be of the form "NewFakeForwardingRuleSyncer ..."
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/forwardingrule/fake_forwardingrulesyncer.go:43:1: exported method FakeForwardingRuleSyncer.EnsureHttpForwardingRule should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/forwardingrule/fake_forwardingrulesyncer.go:43:36: method EnsureHttpForwardingRule should be EnsureHTTPForwardingRule
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/forwardingrule/fake_forwardingrulesyncer.go:53:1: exported method FakeForwardingRuleSyncer.DeleteForwardingRules should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/forwardingrule/fake_forwardingrulesyncer.go:58:1: exported method FakeForwardingRuleSyncer.GetLoadBalancerStatus should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/forwardingrule/forwardingrulesyncer.go:36:6: type name will be used as forwardingrule.ForwardingRuleSyncer by other packages, and that stutters; consider calling this Syncer
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/forwardingrule/forwardingrulesyncer.go:43:1: exported function NewForwardingRuleSyncer should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/forwardingrule/forwardingrulesyncer.go:56:32: method EnsureHttpForwardingRule should be EnsureHTTPForwardingRule
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/forwardingrule/forwardingrulesyncer.go:83:1: exported method ForwardingRuleSyncer.DeleteForwardingRules should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/forwardingrule/forwardingrulesyncer.go:96:1: exported method ForwardingRuleSyncer.GetLoadBalancerStatus should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/forwardingrule/forwardingrulesyncer.go:106:26: error strings should not end with punctuation
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/forwardingrule/forwardingrulesyncer.go:110:26: error strings should not end with punctuation
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/forwardingrule/interfaces.go:22:6: type name will be used as forwardingrule.ForwardingRuleSyncerInterface by other packages, and that stutters; consider calling this SyncerInterface
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/healthcheck/fake_healthchecksyncer.go:22:6: exported type FakeHealthCheck should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/healthcheck/fake_healthchecksyncer.go:27:6: exported type FakeHealthCheckSyncer should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/healthcheck/fake_healthchecksyncer.go:32:1: comment on exported function NewFakeHealthCheckSyncer should be of the form "NewFakeHealthCheckSyncer ..."
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/healthcheck/fake_healthchecksyncer.go:40:1: exported method FakeHealthCheckSyncer.EnsureHealthCheck should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/healthcheck/fake_healthchecksyncer.go:52:1: exported method FakeHealthCheckSyncer.DeleteHealthChecks should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/healthcheck/healthchecksyncer.go:33:2: comment on exported const DefaultHealthCheckInterval should be of the form "DefaultHealthCheckInterval ..."
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/healthcheck/healthchecksyncer.go:49:6: type name will be used as healthcheck.HealthCheckSyncer by other packages, and that stutters; consider calling this Syncer
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/healthcheck/healthchecksyncer.go:54:1: exported function NewHealthCheckSyncer should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/healthcheck/healthchecksyncer.go:84:1: exported method HealthCheckSyncer.DeleteHealthChecks should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/healthcheck/healthchecksyncer.go:111:6: func getJsonIgnoreErr should be getJSONIgnoreErr
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/healthcheck/healthchecksyncer.go:137:10: if block ends with a return statement, so drop this else and outdent its block
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/healthcheck/interfaces.go:26:6: type name will be used as healthcheck.HealthCheckSyncerInterface by other packages, and that stutters; consider calling this SyncerInterface
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/loadbalancer/loadbalancersyncer.go:56:6: type name will be used as loadbalancer.LoadBalancerSyncer by other packages, and that stutters; consider calling this Syncer
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/loadbalancer/loadbalancersyncer.go:80:1: exported function NewLoadBalancerSyncer should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/loadbalancer/loadbalancersyncer.go:80:105: func parameter gcpProjectId should be gcpProjectID
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/loadbalancer/loadbalancersyncer.go:307:44: method parameter igUrl should be igURL
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/loadbalancer/loadbalancersyncer.go:413:9: should omit 2nd value from range; this loop is equivalent to `for k := range ...`
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/fake_targetproxysyncer.go:18:2: exported const FakeTargetProxySelfLink should have comment (or a comment on this block) or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/fake_targetproxysyncer.go:21:6: exported type FakeTargetProxy should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/fake_targetproxysyncer.go:26:6: exported type FakeTargetProxySyncer should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/fake_targetproxysyncer.go:31:1: comment on exported function NewFakeTargetProxySyncer should be of the form "NewFakeTargetProxySyncer ..."
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/fake_targetproxysyncer.go:39:1: exported method FakeTargetProxySyncer.EnsureHttpTargetProxy should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/fake_targetproxysyncer.go:39:33: method EnsureHttpTargetProxy should be EnsureHTTPTargetProxy
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/fake_targetproxysyncer.go:47:1: exported method FakeTargetProxySyncer.DeleteTargetProxies should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/interfaces.go:18:6: type name will be used as targetproxy.TargetProxySyncerInterface by other packages, and that stutters; consider calling this SyncerInterface
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/targetproxysyncer.go:28:6: exported type TargetProxySyncer should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/targetproxysyncer.go:28:6: type name will be used as targetproxy.TargetProxySyncer by other packages, and that stutters; consider calling this Syncer
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/targetproxysyncer.go:34:1: exported function NewTargetProxySyncer should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/targetproxysyncer.go:46:29: method EnsureHttpTargetProxy should be EnsureHTTPTargetProxy
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/targetproxysyncer.go:58:1: exported method TargetProxySyncer.DeleteTargetProxies should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/targetproxysyncer.go:74:29: method ensureHttpProxy should be ensureHTTPProxy
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/targetproxysyncer.go:76:2: var desiredHttpProxy should be desiredHTTPProxy
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/targetproxysyncer.go:79:2: var existingHttpProxy should be existingHTTPProxy
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/targetproxysyncer.go:101:29: method updateHttpTargetProxy should be updateHTTPTargetProxy
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/targetproxysyncer.go:101:51: method parameter desiredHttpProxy should be desiredHTTPProxy
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/targetproxysyncer.go:118:29: method createHttpTargetProxy should be createHTTPTargetProxy
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/targetproxysyncer.go:118:51: method parameter desiredHttpProxy should be desiredHTTPProxy
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/targetproxysyncer.go:134:6: func targetHttpProxyMatches should be targetHTTPProxyMatches
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/targetproxysyncer.go:134:29: func parameter desiredHttpProxy should be desiredHTTPProxy
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/targetproxysyncer.go:134:47: func parameter existingHttpProxy should be existingHTTPProxy
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/targetproxy/targetproxysyncer.go:140:29: method desiredHttpTargetProxy should be desiredHTTPTargetProxy
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/urlmap/fake_urlmapsyncer.go:24:2: const FakeUrlSelfLink should be FakeURLSelfLink
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/urlmap/fake_urlmapsyncer.go:24:2: exported const FakeUrlSelfLink should have comment (or a comment on this block) or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/urlmap/fake_urlmapsyncer.go:27:6: exported type FakeURLMap should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/urlmap/fake_urlmapsyncer.go:33:6: exported type FakeURLMapSyncer should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/urlmap/fake_urlmapsyncer.go:38:1: comment on exported function NewFakeURLMapSyncer should be of the form "NewFakeURLMapSyncer ..."
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/urlmap/fake_urlmapsyncer.go:46:1: exported method FakeURLMapSyncer.EnsureURLMap should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/urlmap/fake_urlmapsyncer.go:56:1: exported method FakeURLMapSyncer.DeleteURLMap should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/urlmap/interfaces.go:24:6: type name will be used as urlmap.URLMapSyncerInterface by other packages, and that stutters; consider calling this SyncerInterface
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/urlmap/urlmapsyncer.go:40:6: type name will be used as urlmap.URLMapSyncer by other packages, and that stutters; consider calling this Syncer
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/urlmap/urlmapsyncer.go:47:1: exported function NewURLMapSyncer should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/urlmap/urlmapsyncer.go:87:1: exported method URLMapSyncer.DeleteURLMap should have comment or be unexported
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/utils/utils.go:22:1: comment on exported function GetZoneAndNameFromIGUrl should be of the form "GetZoneAndNameFromIGUrl ..."
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/utils/utils.go:23:30: func parameter igUrl should be igURL
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/utils/utils.go:29:2: var zoneUrl should be zoneURL
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/utils/utils.go:38:1: comment on exported function GetZoneAndNameFromInstanceUrl should be of the form "GetZoneAndNameFromInstanceUrl ..."
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/utils/utils.go:39:6: func GetZoneAndNameFromInstanceUrl should be GetZoneAndNameFromInstanceURL
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/utils/utils.go:39:36: func parameter instanceUrl should be instanceURL
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/utils/utils.go:45:2: var zoneUrl should be zoneURL
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/utils/utils.go:54:1: comment on exported function GetNameFromUrl should be of the form "GetNameFromUrl ..."
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/utils/utils.go:55:6: func GetNameFromUrl should be GetNameFromURL
/k8s-multicluster-ingress/app/kubemci/pkg/gcp/utils/utils_test.go:23:3: struct field igUrl should be igURL
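The warnings above fall into a few recurring golint categories: missing doc comments on exported identifiers, stuttering type names (e.g. `firewallrule.FirewallRuleSyncer` instead of `firewallrule.Syncer`), non-standard initialisms (`Http`/`Url` instead of `HTTP`/`URL`), and `else` blocks after a `return`. A minimal sketch of what the fixed code might look like — the type and method names here are hypothetical illustrations, not the actual kubemci declarations:

```go
package main

import "fmt"

// Syncer syncs forwarding rules. Naming it "Syncer" (not
// "ForwardingRuleSyncer") avoids stutter when callers write
// forwardingrule.Syncer. Exported identifiers get a doc comment
// that starts with the identifier's name.
type Syncer struct {
	lbName string
}

// NewSyncer returns a Syncer for the given load balancer name.
func NewSyncer(lbName string) *Syncer {
	return &Syncer{lbName: lbName}
}

// EnsureHTTPForwardingRule spells the initialism HTTP in caps,
// as golint requires (not EnsureHttpForwardingRule). Likewise
// the parameter is igURL, not igUrl.
func (s *Syncer) EnsureHTTPForwardingRule(igURL string) string {
	// golint also flags `else` after a return statement:
	// drop the else and outdent its block.
	if igURL == "" {
		return fmt.Sprintf("no instance group for %s", s.lbName)
	}
	return fmt.Sprintf("rule for %s -> %s", s.lbName, igURL)
}

func main() {
	s := NewSyncer("lb-1")
	fmt.Println(s.EnsureHTTPForwardingRule("zones/us-central1-a/instanceGroups/ig-1"))
}
```

Most of these warnings are mechanical renames and added comments, but the stutter renames (`ForwardingRuleSyncer` → `Syncer`) change exported API names, so callers outside the package need to be updated in the same change.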

kubemci commands have a litany of testing flags

This is because k8s.io/ingress-gce/pkg/loadbalancers depends on the testing package. That dependency needs to be removed and the package revendored.

  • Remove k8s.io/ingress-gce/pkg/loadbalancers dependency on testing package.
  • Revendor k8s.io/ingress-gce/pkg/loadbalancers

e2e tests should deploy to a separate namespace

To make it easier to clean up a botched run and to keep separation from existing deployments, the e2e tests should deploy to their own namespace, or possibly allow configuring which namespace to use.
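One way to do this is to have the test harness create a dedicated namespace up front and delete it at the end, so cleanup is a single `kubectl delete namespace`. A minimal manifest sketch — the namespace name and label are hypothetical, not something the repo currently defines:

```yaml
# Hypothetical namespace for kubemci e2e runs; deleting it
# removes all test deployments and services in one step.
apiVersion: v1
kind: Namespace
metadata:
  name: kubemci-e2e
  labels:
    purpose: kubemci-e2e-testing
```

Allowing the namespace name to be passed as a flag would also let concurrent test runs on the same cluster stay isolated from each other.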
