
kube-vip-cloud-provider's Introduction

kube-vip

High Availability and Load-Balancing


Overview

Kubernetes Virtual IP and Load-Balancer for both control plane and Kubernetes services

The idea behind kube-vip is a small, self-contained, highly available option for all environments, especially:

  • Bare-Metal
  • Edge (arm / Raspberry PI)
  • Virtualisation
  • Pretty much anywhere else :)

NOTE: All documentation, covering both usage and architecture, is now available at https://kube-vip.io.

Features

Kube-Vip was originally created to provide an HA solution for the Kubernetes control plane; over time it has evolved to provide that same functionality for Kubernetes Services of type LoadBalancer.

  • VIP addresses can be either IPv4 or IPv6
  • Control Plane with ARP (Layer 2) or BGP (Layer 3)
  • Control Plane using either leader election or raft
  • Control Plane HA with kubeadm (static Pods)
  • Control Plane HA with K3s and others (DaemonSets)
  • Service LoadBalancer using leader election for ARP (Layer 2)
  • Service LoadBalancer using multiple nodes with BGP
  • Service LoadBalancer address pools per namespace or global (see the example ConfigMap below)
  • Service LoadBalancer addresses via DHCP on the existing network
  • Service LoadBalancer address exposure to the gateway via UPnP
  • ... manifest generation, vendor API integrations and many more...
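Service address pools are configured through a ConfigMap named kubevip that kube-vip-cloud-provider reads; the key names cidr-global and cidr-<namespace> appear in the provider's own logs further down this page. A minimal sketch, in which the range-global key and all addresses are illustrative assumptions rather than values from this repository:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip
  namespace: kube-system
data:
  # global pool used when no per-namespace pool matches
  cidr-global: 192.168.0.220/29
  # pool used only for Services in the "development" namespace (hypothetical namespace)
  cidr-development: 192.168.0.200/29
  # range syntax, if supported by the release in use
  range-global: 192.168.0.230-192.168.0.239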

Why?

The purpose of kube-vip is to simplify the building of HA Kubernetes clusters, which at this time can involve a few components and configurations that all need to be managed. This was blogged about in detail by thebsdbox here -> https://thebsdbox.co.uk/2020/01/02/Designing-Building-HA-bare-metal-Kubernetes-cluster/#Networking-load-balancing.

Alternative HA Options

kube-vip provides both a floating (virtual) IP address for your Kubernetes cluster and load-balancing of incoming traffic across the control-plane replicas. At the current time, replicating this functionality would require a minimum of two pieces of tooling:

VIP:

  • Keepalived
  • UCARP
  • Hardware Load-balancer (functionality differs per vendor)

LoadBalancing:

  • HAProxy
  • Nginx
  • Hardware Load-balancer (functionality differs per vendor)

All of these require separate configuration and, in some infrastructures, multiple teams to implement. When considering the software components, they may also need packaging into containers, and if they're pre-packaged then security and transparency may be an issue. Finally, in edge environments there may be limited room for hardware (no hardware load-balancer), or packaged solutions for the required architectures (e.g. ARM) might not exist. Luckily, because kube-vip is written in Go, it's small(ish) and easy to build for multiple architectures, with the added security benefit of being the only thing needed in the container.

Troubleshooting and Feedback

Please raise issues on the GitHub repository and as mentioned check the documentation at https://kube-vip.io.

Contributing

Thanks for taking the time to join our community and start contributing! We welcome pull requests. Feel free to dig through the issues and jump in.

⚠️ This project has issues compiling on macOS; please compile it on a Linux distribution.
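One way to work around this, if only macOS is available locally, is to run the build inside a Linux container. A rough sketch, assuming the Go image tag is compatible with the version pinned in go.mod; the project's own Makefile or CI targets, if present, are the more reliable reference for an actual release build:

docker run --rm -v "$(pwd)":/src -w /src golang:1.22 go build ./...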

Star History

Star History Chart

kube-vip-cloud-provider's People

Contributors

ashleydumaine, daper, dependabot[bot], flawedmatrix, jinxcappa, lubronzhan, m198799, pnxs, stevesloka, thebsdbox, timosluis, wusendong, xprt64, yaocw2020


kube-vip-cloud-provider's Issues

IPv6 VIP cannot be used from outside the Kubernetes cluster

k8s: 1.21.10
docker: 20.10.12
kernel: 5.4.182-1.el7.elrepo.x86_64
image: kube-vip-cloud-provider:v0.0.3

This is the master machine:

[root@hybxvuca01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.152.194.149 harbor.uat.crpcg.com
10.153.184.109 api.k8s.local
10.153.184.98 devops-db.bx.crpcg.com
10.153.184.98 devops-db.bx.crpharm.com
10.153.184.106 hybxvuca01.crpcg.com
2406:440:5400:80:0:70:0:1005 hybxvuca01.crpcg.com
2406:440:5400:80:0:70:0:1004 hybxvuca02.crpcg.com
10.153.184.107 hybxvuca02.crpcg.com
10.153.184.102 apollo.uat.crpcg.com
10.153.184.98 devops-db.bx.crpcg.com
10.153.184.98 devops-db.bx.crpharm.com
2406:440:5400:80:0:70:0:1062 demo.k8s.com
2406:440:5400:80:0:70:0:1062 nginx-demo.uat.crpcg.com
[root@hybxvuca01 ~]# ip a s ens224
3: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:50:56:81:42:ec brd ff:ff:ff:ff:ff:ff
inet6 2406:440:5400:80:0:70:0:1005/116 scope global noprefixroute
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe81:42ec/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@hybxvuca01 ~]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.101.164.119 10.153.184.104 80:31517/TCP,443:31726/TCP 21d
ingress-nginx-controller-admission ClusterIP 10.106.121.95 443/TCP 21d
ingress-nginx-controller-ipv6 LoadBalancer fd00::ecf5 2406:440:5400:80:0:70:0:1062 80:30290/TCP,443:30168/TCP 3d20h
[root@hybxvuca01 ~]# curl -6 demo.k8s.com

<title>Welcome to nginx!</title> <style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style>

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

This is another machine on the same network:

10.153.184.102 apollo.uat.crpcg.com
10.153.184.102 apollo-config.uat.crpcg.com
2406:440:5400:80:0:70:0:1062 demo.k8s.com
[root@hybxvuca01 ~]# ip a s ens224
48: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:81:b3:32 brd ff:ff:ff:ff:ff:ff
inet6 2406:440:5400:80:0:70:0:1050/116 scope global noprefixroute
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe81:b332/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@hybxvuca01 ~]# ping6 2406:440:5400:80:0:70:0:1062

Cannot ping this IP.

Panic for version parsing after upgrade to v0.0.9

panic: version string "" doesn't match expected regular expression: "^v(\d+\.\d+\.\d+)"

goroutine 1 [running]:
k8s.io/component-base/metrics.parseVersion({{0x0, 0x0}, {0x0, 0x0}, {0x1f44b17, 0x0}, {0x1c97daf, 0xb}, {0x0, 0x0}, ...})
        /go/pkg/mod/k8s.io/[email protected]/metrics/version_parser.go:47 +0x274
k8s.io/component-base/metrics.newKubeRegistry({{0x0, 0x0}, {0x0, 0x0}, {0x1f44b17, 0x0}, {0x1c97daf, 0xb}, {0x0, 0x0}, ...})
        /go/pkg/mod/k8s.io/[email protected]/metrics/registry.go:320 +0x119
k8s.io/component-base/metrics.NewKubeRegistry()
        /go/pkg/mod/k8s.io/[email protected]/metrics/registry.go:335 +0x78
k8s.io/component-base/metrics/legacyregistry.init()
        /go/pkg/mod/k8s.io/[email protected]/metrics/legacyregistry/registry.go:29 +0x1d

issue with missing version in the kube-vip-cloud-provider:v0.0.9 image

Hello, I'm using a tool to set up my K8s cluster which relies on kube-vip-cloud-provider.
We have an Ansible task running the following command:

kubectl apply -f https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/main/manifest/kube-vip-cloud-controller.yaml

Until last week it was working fine, but this morning I noticed that the pod was failing to reach the "Running" state.

Here is the pod's error:

panic: version string "" doesn't match expected regular expression: "^v(\d+\.\d+\.\d+)"

goroutine 1 [running]:
k8s.io/component-base/metrics.parseVersion({{0x0, 0x0}, {0x0, 0x0}, {0x1f44b17, 0x0}, {0x1c97daf, 0xb}, {0x0, 0x0}, ...})
    /go/pkg/mod/k8s.io/[email protected]/metrics/version_parser.go:47 +0x274
k8s.io/component-base/metrics.newKubeRegistry({{0x0, 0x0}, {0x0, 0x0}, {0x1f44b17, 0x0}, {0x1c97daf, 0xb}, {0x0, 0x0}, ...})
    /go/pkg/mod/k8s.io/[email protected]/metrics/registry.go:320 +0x119
k8s.io/component-base/metrics.NewKubeRegistry()
    /go/pkg/mod/k8s.io/[email protected]/metrics/registry.go:335 +0x78
k8s.io/component-base/metrics/legacyregistry.init()
    /go/pkg/mod/k8s.io/[email protected]/metrics/legacyregistry/registry.go:29 +0x1d
Stream closed EOF for kube-system/kube-vip-cloud-provider-578d9b7bf7-z6t4f (kube-vip-cloud-provider)

When I used the previous version of the kube-vip-cloud-provider manifest (v0.0.8) in my kubectl apply command:

https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/v0.0.8/manifest/kube-vip-cloud-controller.yaml

Then I was able to get a running pod. It seems to me that the issue comes from this recent commit: 71f730d

I could be wrong, but the error looks like the "version" ARG was not passed when the new Docker image was built, which caused the error above when trying to run the image defined in the kube-vip-cloud-controller.yaml manifest.

Thanks

Granular RBAC

The current CCM RBAC permissions are a little too wide open.

This is the modified RBAC I use with kube-vip-ccm (I have only tested the Kubernetes Service VIP functionality, using ARP):

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-vip-cloud-controller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: kube-vip-cloud-controller-role
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "create"]
  - apiGroups: [""]
    resources: ["endpoints"]
    resourceNames: ["kube-vip-cloud-controller"]
    verbs: ["update"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "create"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    resourceNames: ["kube-vip-cloud-controller"]
    verbs: ["update"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:kube-vip-cloud-controller-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kube-vip-cloud-controller-role
subjects:
- kind: ServiceAccount
  name: kube-vip-cloud-controller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: system:kube-vip-cloud-controller-role
rules:
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["watch", "list", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
  - apiGroups: [""]
    resources: ["services/status"]
    verbs: ["patch"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubevip"]
    verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:kube-vip-cloud-controller-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-vip-cloud-controller-role
subjects:
- kind: ServiceAccount
  name: kube-vip-cloud-controller
  namespace: kube-system

Happy to do a PR if this is reasonable.

A few questions

Hi, thank you for maintaining this project.
I've just implemented kube-vip + cloud-provider instead of MetalLB to try creating an HA frontend for my cluster.
I have a few questions, sort of an FAQ, as I wasn't able to find clear answers to them in the documentation. I want to understand kube-vip well, since it is going to be the entry point to the cluster control plane, and as such I need to be able to troubleshoot it with a deeper understanding.

  1. When running a command with -w, it works fine, but after some time of watching I get the following error (192.168.88.0 is the VIP):
an error on the server ("unable to decode an event from the watch stream: read tcp 192.168.0.147:53040->192.168.88.0:6443: wsarecv: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.") has prevented the request from succeeding
  2. Kubernetes exposes a default service called kubernetes whose endpoints point to the nodes hosting the control plane. I'm confused why we can't just use that service and change it to type LoadBalancer instead of having to use kube-vip. Is that even possible (I haven't tried yet)? If so, what would the benefits of kube-vip be over that approach?
  3. Is it possible to expose a service on an IP outside of the provided range/CIDR, provided that loadBalancerIP: "x.x.x.x" is set in the service spec? I tried that and it doesn't seem to work.
  4. MetalLB has the ability to 'request' an IP from a specific range by specifying it in the spec; do you think this will be possible in kube-vip anytime soon?
  5. Why are the cloud controller and the --services command argument in kube-vip both needed? Do I understand right that the cloud controller just assigns the IP, and kube-vip then picks it up from the service and starts listening on it?
  6. In BGP mode, I don't think the UPnP and DHCP features can be utilized. Is it worth updating the docs to explicitly state that?
  7. Regarding point 6, Kubernetes now supports loadBalancerClass (https://kubernetes.io/docs/concepts/services-networking/_print/#load-balancer-class), meaning we should be able to run two instances of kube-vip; theoretically one could be in ARP mode to allow usage of the DHCP/UPnP features. Will this be supported? (See the sketch after this list.)
  8. If UPnP were to be used, would all services get published through it, or can this be controlled per service?
  9. Lastly, in UPnP mode, would the external IP be discovered and written back to the service spec?
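Regarding question 7, a hedged sketch of what a class-scoped Service could look like. The class name below is purely illustrative and not an identifier the project has published, and a second kube-vip instance would need to be configured to watch for it:

apiVersion: v1
kind: Service
metadata:
  name: example-arp
spec:
  type: LoadBalancer
  # hypothetical class name for a dedicated ARP-mode kube-vip instance
  loadBalancerClass: example.com/kube-vip-arp
  selector:
    app: example
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP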

Thanks for your time
Mateusz

Loadbalancer IP won't change after deployment

If you deploy a LoadBalancer Service without a loadBalancerIP, it properly assigns an IP from the defined range. But if you later add a loadBalancerIP, it registers an event that looks like the IP is changing, but it never actually does.

Vice versa, if you deploy with a loadBalancerIP and then later remove it, it again appears to register an event that the IP is changing, but it never actually changes.

For example, here I try deploying the coredns chart set to use a LoadBalancer service type, while adding and removing the loadBalancerIP option (extra data cleaned for brevity):

❯ helm upgrade --install -f values.yaml coredns coredns/coredns
☸ starfleet (default) in homelab/k8s/coredns on  main [?]
❯ k get svc
NAME              TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)        AGE
coredns-coredns   LoadBalancer   10.43.33.1   192.168.2.100   53:31509/UDP   4s
kubernetes        ClusterIP      10.43.0.1    <none>          443/TCP        27h
☸ starfleet (default) in homelab/k8s/coredns on  main [?]
❯ helm upgrade --install -f values.yaml coredns coredns/coredns
☸ starfleet (default) in homelab/k8s/coredns on  main [?]
❯ k get svc
NAME              TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)        AGE
coredns-coredns   LoadBalancer   10.43.33.1   192.168.2.100   53:31509/UDP   46s
kubernetes        ClusterIP      10.43.0.1    <none>          443/TCP        27h
☸ starfleet (default) in homelab/k8s/coredns on  main [?]
❯ k describe svc coredns-coredns
Name:                     coredns-coredns
Namespace:                default
Labels:                   app.kubernetes.io/instance=coredns
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=coredns
                          helm.sh/chart=coredns-1.16.7
                          implementation=kube-vip
                          ipam-address=192.168.2.100
Annotations:              meta.helm.sh/release-name: coredns
                          meta.helm.sh/release-namespace: default
Selector:                 app.kubernetes.io/instance=coredns,app.kubernetes.io/name=coredns
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.43.33.1
IPs:                      10.43.33.1
IP:                       192.168.2.2
LoadBalancer Ingress:     192.168.2.100
Port:                     udp-53  53/UDP
TargetPort:               53/UDP
NodePort:                 udp-53  31509/UDP
Endpoints:                10.42.0.16:53,10.42.1.73:53,10.42.2.146:53
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type     Reason                   Age               From                Message
  ----     ------                   ----              ----                -------
  Normal   LoadbalancerIP           49s               service-controller  -> 192.168.2.100
  Normal   EnsuringLoadBalancer     5s (x3 over 49s)  service-controller  Ensuring load balancer
  Warning  UnAvailableLoadBalancer  5s (x3 over 49s)  service-controller  There are no available nodes for LoadBalancer
  Normal   EnsuredLoadBalancer      5s (x3 over 49s)  service-controller  Ensured load balancer
  Normal   LoadbalancerIP           5s                service-controller  192.168.2.100 -> 192.168.2.2
☸ starfleet (default) in homelab/k8s/coredns on  main [?]
❯ helm upgrade --install -f values.yaml coredns coredns/coredns
☸ starfleet (default) in homelab/k8s/coredns on  main [?]
❯ k describe svc coredns-coredns
Name:                     coredns-coredns
Namespace:                default
Labels:                   app.kubernetes.io/instance=coredns
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=coredns
                          helm.sh/chart=coredns-1.16.7
                          implementation=kube-vip
                          ipam-address=192.168.2.101
Annotations:              meta.helm.sh/release-name: coredns
                          meta.helm.sh/release-namespace: default
Selector:                 app.kubernetes.io/instance=coredns,app.kubernetes.io/name=coredns
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.43.33.1
IPs:                      10.43.33.1
IP:                       192.168.2.101
LoadBalancer Ingress:     192.168.2.100
Port:                     udp-53  53/UDP
TargetPort:               53/UDP
NodePort:                 udp-53  31509/UDP
Endpoints:                10.42.0.16:53,10.42.1.73:53,10.42.2.146:53
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type     Reason                   Age                 From                Message
  ----     ------                   ----                ----                -------
  Normal   LoadbalancerIP           2m23s               service-controller  -> 192.168.2.100
  Normal   LoadbalancerIP           99s                 service-controller  192.168.2.100 -> 192.168.2.2
  Normal   EnsuringLoadBalancer     0s (x5 over 2m23s)  service-controller  Ensuring load balancer
  Warning  UnAvailableLoadBalancer  0s (x5 over 2m23s)  service-controller  There are no available nodes for LoadBalancer
  Normal   EnsuredLoadBalancer      0s (x5 over 2m23s)  service-controller  Ensured load balancer
  Normal   LoadbalancerIP           0s                  service-controller  192.168.2.2 ->
  Normal   LoadbalancerIP           0s                  service-controller  -> 192.168.2.101
☸ starfleet (default) in homelab/k8s/coredns on  main [?]
❯ k get svc
NAME              TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)        AGE
coredns-coredns   LoadBalancer   10.43.33.1   192.168.2.100   53:31509/UDP   2m39s
kubernetes        ClusterIP      10.43.0.1    <none>          443/TCP        27h

Throughout all of that, only 192.168.2.100 ever responded.

First and last IPs from CIDR block are ignored

I have a block of 4 IPs as cidr-global, but kube-vip-cloud-provider throws an error after using the 2nd and 3rd addresses:
Error syncing load balancer: failed to ensure load balancer: no addresses available in [envoy-gateway-system] cidr [{IP}/30]
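For context, a /30 contains four addresses, and the behaviour described suggests the pool logic excludes the first (network) and last (broadcast) addresses, leaving only two usable VIPs. If the release in use supports range-based pools, listing the addresses explicitly may be a workaround; a hypothetical sketch with placeholder addresses:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip
  namespace: kube-system
data:
  # hypothetical range covering all four addresses of a /30
  range-global: 192.168.1.40-192.168.1.43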

Feature request: IPv6 address support

Kubernetes has added full support for dual-stack services; it would be nice if kube-vip-cloud-provider could handle and assign IPv6 addresses as well as IPv4.

see https://kubernetes.io/docs/concepts/services-networking/dual-stack/

There are a couple of ways you could approach this; I think the easiest would be to look at the clusterIPs array and assign one address for each IP family in the list. I was able to get a semi-hacked version of the MetalLB controller working this way (from metallb/metallb#727), but they don't have it in the mainline tree yet and it hasn't been updated recently.

Here is an example service from that, with ip addresses tweaked:

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: external
    meta.helm.sh/release-namespace: ingress-haproxy
    metallb.universe.tf/address-pool: external
  creationTimestamp: "2021-09-08T23:58:49Z"
  labels:
    app.kubernetes.io/instance: external
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: haproxy-ingress
    app.kubernetes.io/version: v0.13.3
    helm.sh/chart: haproxy-ingress-0.13.3
  name: external-haproxy-ingress
  namespace: ingress-haproxy
  resourceVersion: "223919524"
  uid: 7ef512e2-9a16-405b-a5a3-9d9892df5d2b
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.96.10.221
  clusterIPs:
  - 10.96.10.221
  - xxxx:xxxx:1000:21::20:f86f
  externalTrafficPolicy: Local
  healthCheckNodePort: 31319
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  - IPv6
  ipFamilyPolicy: PreferDualStack
  ports:
  - name: http-80
    nodePort: 32117
    port: 80
    protocol: TCP
    targetPort: http
  - name: https-443
    nodePort: 31303
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/instance: external
    app.kubernetes.io/name: haproxy-ingress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: xx.xx.xx.160
    - ip: xxxx:xxxx:1000:21::40:0

It's worth noting that the loadBalancerIP field does not support an array of addresses, so there is no way in the spec to request more than one specific IP address. Some people get around that by creating two services (one IPv4, one IPv6), or by using a custom annotation, e.g. the MetalLB PR I referenced uses metallb.universe.tf/load-balancer-ips: "10.0.0.20,1000::20"
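A hedged sketch of the two-service workaround mentioned above, using the standard ipFamilyPolicy/ipFamilies Service fields; names, selectors and ports are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: ingress-v4
spec:
  type: LoadBalancer
  ipFamilyPolicy: SingleStack
  ipFamilies: [IPv4]
  selector:
    app.kubernetes.io/name: haproxy-ingress
  ports:
    - name: http-80
      port: 80
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-v6
spec:
  type: LoadBalancer
  ipFamilyPolicy: SingleStack
  ipFamilies: [IPv6]
  selector:
    app.kubernetes.io/name: haproxy-ingress
  ports:
    - name: http-80
      port: 80
      protocol: TCP

Each single-stack Service can then carry its own loadBalancerIP (or equivalent annotation) for its address family.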

I am available for discussion in the kubernetes slack in the #kube-vip room -- happy to help as I'm able, though I'm super swamped with a bunch of things right now and I'm not super familiar with golang.

Enhancement request - shared IP for TCP and UDP load balancer services

Hi,

I'm currently using kube-vip and MetalLB. I'd like to switch over to using kube-vip for my load balancers (love the DHCP option), but I need to be able to have TCP and UDP services sharing an IP address (with MetalLB I can annotate with metallb.universe.tf/allow-shared-ip: sharedservicename).

Is there a plan to introduce something similar, or is this not a possibility?
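For reference, the MetalLB pattern being described is roughly the following: two Services that carry the same allow-shared-ip annotation value and request the same address, one TCP and one UDP. This is MetalLB behaviour quoted from the request, not a documented kube-vip feature; names and the address are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: dns-tcp
  annotations:
    metallb.universe.tf/allow-shared-ip: sharedservicename
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.50
  selector:
    app: dns
  ports:
    - port: 53
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: dns-udp
  annotations:
    metallb.universe.tf/allow-shared-ip: sharedservicename
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.50
  selector:
    app: dns
  ports:
    - port: 53
      protocol: UDP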

Tag v0.1 is only built for amd64

Hello,

The container tagged 0.1 is only built for amd64 and not armv7/arm64. Can we please restore the multi-arch builds? Thank you!
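For reference, a multi-arch image can be produced with docker buildx along these lines; this is a generic sketch, not the project's actual release pipeline:

docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  -t kubevip/kube-vip-cloud-provider:0.1 \
  --push .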

Bump golangci-lint to 1.56.2

This is required to support golang 1.22

https://github.com/kube-vip/kube-vip-cloud-provider/actions/runs/8073030699/job/22055939869?pr=110

run golangci-lint
  Running [/home/runner/golangci-lint-1.55.2-linux-amd64/golangci-lint run --out-format=colored-line-number,github-actions] in [] ...
  level=warning msg="Failed to discover go env: failed to run 'go env': exit status 1"
  level=error msg="Running error: context loading failed: failed to load packages: failed to load with go/packages: err: exit status 1: stderr: go: downloading go1.22 (linux/amd64)\ngo: download go1.22 for linux/amd64: toolchain not available\n"

Statefulset instead of Deployment

Other cloud providers are deployed as either a DaemonSet (e.g. AWS, vSphere) or a Deployment (e.g. Azure). kube-vip-cloud-provider doesn't seem to need to be a StatefulSet; it just allocates IPs from a ConfigMap and doesn't need to hold any specific state.
So I think we should follow the convention that the other cloud providers use.

Pod kube-vip-cloud-provider-0 fails on K3S

Hi,
I am running the latest K3s three-node cluster on DietPi ARM64 OS; all nodes are RPi 4Bs with 8GB RAM. All nodes are masters in HA with embedded storage.
kube-vip was deployed with the ARP, control plane and services options. It runs fine, and I can reach the cluster via the VIP from my laptop.

I have installed kube-vip-cloud-provider as instructed, but the pod fails:

NAMESPACE     NAME                                     READY   STATUS             RESTARTS        AGE
kube-system   kube-vip-cloud-provider-0                0/1     CrashLoopBackOff   7 (3m28s ago)   14m

Content of the log:
standard_init_linux.go:228: exec user process caused: exec format error

It looks like a compatibility issue with the ARM64 platform, which is puzzling as kube-vip itself runs absolutely fine.
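"exec format error" usually means the image architecture does not match the node's. One way to check which platforms a given tag was published for (the tag below is a placeholder):

docker manifest inspect kubevip/kube-vip-cloud-provider:<tag>

The output lists an os/architecture platform entry per manifest; if linux/arm64 is missing, that tag cannot run on the RPi nodes.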

Incorrect IP Address Reported in Logs with annotations / v0.0.5

kube-vip-cloud-provider: v0.0.5 on K3s v1.26.4+k3s1.

I noticed the deprecation message for service.spec.loadBalancerIP and am testing the change to the annotation.

Before starting, I confirmed I had the deprecation warning:

I0514 17:46:49.075861       1 loadBalancer.go:89] syncing service 'apt-cacher-ng' (07d8d6e5-a608-40e5-bfbd-5097c02e4d10)
W0514 17:46:49.075881       1 loadBalancer.go:94] service.Spec.LoadBalancerIP is defined but annotations 'kube-vip.io/loadbalancerIPs' is not, assume it's a legacy service, updates its annotations
I0514 17:46:49.075971       1 event.go:294] "Event occurred" object="apt-cacher-ng/apt-cacher-ng" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I0514 17:46:50.267391       1 loadBalancer.go:89] syncing service 'apt-cacher-ng' (07d8d6e5-a608-40e5-bfbd-5097c02e4d10)

Within the Helm chart for this application I changed to the annotation:

            annotations:
              kube-vip.io/loadbalancerIPs: 192.168.10.243
            type: LoadBalancer
            # loadBalancerIP: 192.168.10.243

Watched the Kube-VIP CloudProvider logs:

I0514 17:50:51.463391       1 loadBalancer.go:89] syncing service 'apt-cacher-ng' (07d8d6e5-a608-40e5-bfbd-5097c02e4d10)
I0514 17:50:51.463437       1 event.go:294] "Event occurred" object="apt-cacher-ng/apt-cacher-ng" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="LoadbalancerIP" message="192.168.10.243 -> "
I0514 17:50:51.463487       1 event.go:294] "Event occurred" object="apt-cacher-ng/apt-cacher-ng" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I0514 17:50:51.479789       1 loadBalancer.go:208] no cidr config for namespace [apt-cacher-ng] exists in key [cidr-apt-cacher-ng] configmap [kubevip]
I0514 17:50:51.479802       1 loadBalancer.go:213] Taking address from [cidr-global] pool
I0514 17:50:51.491008       1 loadBalancer.go:170] Updating service [apt-cacher-ng], with load balancer IPAM address [192.168.10.241]
I0514 17:50:51.512616       1 event.go:294] "Event occurred" object="apt-cacher-ng/apt-cacher-ng" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="LoadbalancerIP" message=" -> 192.168.10.241"
I0514 17:50:51.515449       1 loadBalancer.go:89] syncing service 'apt-cacher-ng' (07d8d6e5-a608-40e5-bfbd-5097c02e4d10)

Within the logs above I see two references to an incorrect IP address, 192.168.10.241, which belongs to a different LoadBalancer service in the CIDR pool:

$ kubectl get svc mosquitto-mqtt -n mosquitto

NAME             TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)          AGE
mosquitto-mqtt   LoadBalancer   10.43.55.109   192.168.10.241   1883:30214/TCP   276d

Its respective logs:

I0514 17:53:36.964549       1 loadBalancer.go:213] Taking address from [cidr-global] pool
I0514 17:53:36.984608       1 loadBalancer.go:170] Updating service [mosquitto-mqtt], with load balancer IPAM address [192.168.10.241]
I0514 17:53:37.002935       1 event.go:294] "Event occurred" object="mosquitto/mosquitto-mqtt" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="LoadbalancerIP" message=" -> 192.168.10.241"
I0514 17:53:37.003038       1 loadBalancer.go:89] syncing service 'mosquitto-mqtt' (cab480f2-8721-46c4-866a-99764b17a3cc)

In the end the service was assigned the correct IP address. Not sure if this is just a logging issue?

$ kubectl get svc -n apt-cacher-ng

NAME            TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
apt-cacher-ng   LoadBalancer   10.43.171.146   192.168.10.243   3142:32669/TCP   150d

I observed this happening more than once, for example:

I0514 18:51:31.749840       1 loadBalancer.go:208] no cidr config for namespace [syncthing] exists in key [cidr-syncthing] configmap [kubevip]
I0514 18:51:31.749862       1 loadBalancer.go:213] Taking address from [cidr-global] pool
I0514 18:51:31.777835       1 loadBalancer.go:170] Updating service [syncthing-listen], with load balancer IPAM address [192.168.10.242]
I0514 18:51:31.796257       1 event.go:294] "Event occurred" object="syncthing/syncthing-listen" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="LoadbalancerIP" message=" -> 192.168.10.242"
I0514 18:51:31.796674       1 event.go:294] "Event occurred" object="syncthing/syncthing-listen" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
I0514 18:51:31.796683       1 loadBalancer.go:89] syncing service 'syncthing-listen' (d756dca6-d0cd-4145-a569-4715c6b9fbae)
I0514 18:51:31.796730       1 event.go:294] "Event occurred" object="syncthing/syncthing-listen" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I0514 18:51:31.796747       1 event.go:294] "Event occurred" object="syncthing/syncthing-listen" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
I0514 18:51:36.331556       1 loadBalancer.go:89] syncing service 'syncthing-listen' (d756dca6-d0cd-4145-a569-4715c6b9fbae)
I0514 18:51:36.331691       1 event.go:294] "Event occurred" object="syncthing/syncthing-listen" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I0514 18:51:36.331754       1 event.go:294] "Event occurred" object="syncthing/syncthing-listen" fieldPath="" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"

This service was defined to use 192.168.10.244 but the logs above show 192.168.10.242:

            annotations:
              kube-vip.io/loadbalancerIPs: 192.168.10.244
...
            type: LoadBalancer
            # loadBalancerIP: 192.168.10.244

The service is updated correctly:

metadata:
  annotations:
    kube-vip.io/loadbalancerIPs: 192.168.10.244
$ kubectl get svc syncthing-listen -n syncthing

NAME               TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                           AGE
syncthing-listen   LoadBalancer   10.43.224.251   192.168.10.244   21027:32291/UDP,22000:30639/TCP   140d

Seems to just be a logging issue as far as I can tell.

kube-vip.io docs point to old manifest?

Documentation at https://kube-vip.io/hybrid/services/#using-the-plunder-cloud-provider-ccm points to using https://kube-vip.io/manifests/controller.yaml to deploy the Plunder Cloud Provider.

That manifest points to image: plndr/plndr-cloud-provider:0.1.5 which only has an amd64 image. I was able to deploy on arm using plndr/plndr-cloud-provider:9eb45d26, which is the current nightly build.

However, as the provider has moved to this repository, it looks like I should be using this manifest - https://github.com/kube-vip/kube-vip-cloud-provider/blob/main/manifest/kube-vip-cloud-controller.yaml, which points to kubevip/kube-vip-cloud-provider:0.1 and uses the new name.

kubevip/kube-vip-cloud-provider:0.1 doesn't have an arm build either - however the latest tag does.

  • It looks like the documentation needs updating to point to the latest kube-vip-cloud-provider manifest?
  • It also looks like the manifest needs to point to a multi-arch image, or the 0.1 image needs to be built for multi-arch?

Happy to do a PR for both, if my guesses are correct!

Issue with vip assignment using cidr-global

I noticed a couple of issues when creating a loadbalancer service using kube-vip.
Could you please take a look at it? Thank you.

Issue number 1: cannot change the VIP for a service that did not have a hardcoded loadBalancerIP.

My setup:

kubectl apply -f https://kube-vip.io/manifests/rbac.yaml
KVVERSION=$(curl -sL https://api.github.com/repos/kube-vip/kube-vip/releases | jq -r ".[0].name")
alias kube-vip="docker run --network host --rm ghcr.io/kube-vip/kube-vip:$KVVERSION"
kube-vip manifest daemonset --services --inCluster --arp --interface eth0 | kubectl apply -f -
kubectl apply -f https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/main/manifest/kube-vip-cloud-controller.yaml
echo $KVVERSION
v0.4.0

The service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  # not present initially
  #loadBalancerIP: XXX.YY.77.148
  selector:
    app: nginx
  ports:
      # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30080
      protocol: TCP

The kubevip configmap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip
  namespace: kube-system
data:
  cidr-global: XXX.YY.77.148/32,XXX.YY.77.149/32

Initial setup logs for the above loadbalancer service.
Log from kube-vip-cloud-provider-0

I1117 01:43:38.982374       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I1117 01:43:39.133704       1 loadBalancer.go:149] syncing service 'nginx-service' (035b0b4c-48ac-42bf-90bb-6b90406b7ed5)
I1117 01:43:39.134248       1 loadBalancer.go:229] No cidr config for namespace [default] exists in key [cidr-default] configmap [kubevip]
I1117 01:43:39.134263       1 loadBalancer.go:234] Taking address from [cidr-global] pool
I1117 01:43:39.134321       1 loadBalancer.go:190] Updating service [nginx-service], with load balancer IPAM address [XXX.YY.77.148]
E1117 01:43:39.201644       1 controller.go:275] error processing service default/nginx-service (will retry): failed to ensure load balancer: Error updating Service Spec [nginx-service] : Operation cannot be fulfilled on services "nginx-service": the object has been modified; please apply your changes to the latest version and try again
I1117 01:43:39.201788       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message="Error syncing load balancer: failed to ensure load balancer: Error updating Service Spec [nginx-service] : Operation cannot be fulfilled on services \"nginx-service\": the object has been modified; please apply your changes to the latest version and try again"
I1117 01:43:44.204320       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I1117 01:43:44.271326       1 loadBalancer.go:149] syncing service 'nginx-service' (035b0b4c-48ac-42bf-90bb-6b90406b7ed5)
I1117 01:43:44.271372       1 loadBalancer.go:229] No cidr config for namespace [default] exists in key [cidr-default] configmap [kubevip]
I1117 01:43:44.271387       1 loadBalancer.go:234] Taking address from [cidr-global] pool
I1117 01:43:44.271401       1 loadBalancer.go:190] Updating service [nginx-service], with load balancer IPAM address [XXX.YY.77.148]
I1117 01:43:44.428180       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"

Log from kube-vip-ds

[kube-vip-ds-vhkxp] time="2021-11-17T01:43:38Z" level=info msg="Service [nginx-service] has been addded/modified, it has no assigned external addresses"
[kube-vip-ds-vhkxp] time="2021-11-17T01:43:39Z" level=info msg="Service [nginx-service] has been addded/modified, it has no assigned external addresses"
[kube-vip-ds-vhkxp] time="2021-11-17T01:43:44Z" level=info msg="Service [nginx-service] has been addded/modified, it has an assigned external addresses [XXX.YY.77.148]"
[kube-vip-ds-vhkxp] time="2021-11-17T01:43:44Z" level=info msg="New VIP [XXX.YY.77.148] for [nginx-service/035b0b4c-48ac-42bf-90bb-6b90406b7ed5] "

OK, that worked

$ k get svc
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
kubernetes      ClusterIP      10.43.0.1       <none>          443/TCP        6d10h
nginx-service   LoadBalancer   10.43.157.128   XXX.YY.77.148   80:30080/TCP   19s

However, if I try to update the service now with a loadBalancerIP, it fails.
I set loadBalancerIP to XXX.YY.77.149 in the LoadBalancer service, and apply it with k apply -f file.

Log from kube-vip-cloud-provider-0

I1117 01:49:27.706107       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="LoadbalancerIP" message="XXX.YY.77.148 -> XXX.YY.77.149"
I1117 01:49:27.706484       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I1117 01:49:27.800194       1 loadBalancer.go:149] syncing service 'nginx-service' (035b0b4c-48ac-42bf-90bb-6b90406b7ed5)
I1117 01:49:27.800583       1 loadBalancer.go:164] found existing service 'nginx-service' (035b0b4c-48ac-42bf-90bb-6b90406b7ed5) with vip XXX.YY.77.148
I1117 01:49:27.801227       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"

Log from kube-vip-ds

[kube-vip-ds-vhkxp] time="2021-11-17T01:49:27Z" level=info msg="Service [nginx-service] has been addded/modified, it has an assigned external addresses [XXX.YY.77.149]"

kubectl: the issue here is that the VIP is not updated to XXX.YY.77.149

$ k get svc
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
kubernetes      ClusterIP      10.43.0.1       <none>          443/TCP        6d10h
nginx-service   LoadBalancer   10.43.157.128   XXX.YY.77.148   80:30080/TCP   7m40s`

So, I delete the service now.

$ k delete -f  ../tmp/test-loadbalancer.yaml
deployment.apps "nginx-deployment" deleted
service "nginx-service" deleted

Log from kube-vip-cloud-provider-0

I1117 01:54:27.307074       1 loadBalancer.go:96] deleting service 'nginx-service' (035b0b4c-48ac-42bf-90bb-6b90406b7ed5)
I1117 01:54:27.307543       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="DeletingLoadBalancer" message="Deleting load balancer"
I1117 01:54:27.477461       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="DeletedLoadBalancer" message="Deleted load balancer"

Log from kube-vip-ds

[kube-vip-ds-vhkxp] time="2021-11-17T01:54:27Z" level=info msg="Service [nginx-service] has been addded/modified, it has an assigned external addresses [XXX.YY.77.149]"
[kube-vip-ds-vhkxp] time="2021-11-17T01:54:27Z" level=info msg="[LOADBALANCER] Stopping load balancers"
[kube-vip-ds-vhkxp] time="2021-11-17T01:54:27Z" level=info msg="[VIP] Releasing the Virtual IP [XXX.YY.77.148]"
[kube-vip-ds-vhkxp] time="2021-11-17T01:54:27Z" level=info msg=Stopped
[kube-vip-ds-vhkxp] time="2021-11-17T01:54:27Z" level=info msg="Removed [035b0b4c-48ac-42bf-90bb-6b90406b7ed5] from manager, [0] advertised services remain"
[kube-vip-ds-vhkxp] time="2021-11-17T01:54:27Z" level=info msg="Service [nginx-service] has been deleted

Now I add the service back in, still with XXX.YY.77.149 hardcoded, and that works, but keep going ...

$ k apply -f  ../tmp/test-loadbalancer.yaml
deployment.apps/nginx-deployment created
service/nginx-service created

Log from kube-vip-cloud-provider-0

I1117 01:58:06.109447       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I1117 01:58:06.612550       1 loadBalancer.go:149] syncing service 'nginx-service' (28dec8dc-d5dc-4d70-92b7-a1da283f3940)
I1117 01:58:06.704250       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"

Log from kube-vip-ds

[kube-vip-ds-vhkxp] time="2021-11-17T01:58:06Z" level=info msg="Service [nginx-service] has been addded/modified, it has an assigned external addresses [XXX.YY.77.149]"
[kube-vip-ds-vhkxp] time="2021-11-17T01:58:06Z" level=info msg="New VIP [XXX.YY.77.149] for [nginx-service/28dec8dc-d5dc-4d70-92b7-a1da283f3940] "
[kube-vip-ds-vhkxp] time="2021-11-17T01:58:06Z" level=info msg="Starting advertising address [XXX.YY.77.149] with kube-vip"
[kube-vip-ds-vhkxp] time="2021-11-17T01:58:06Z" level=info msg="Started Load Balancer and Virtual IP"
[kube-vip-ds-vhkxp] time="2021-11-17T01:58:06Z" level=info msg="Service [nginx-service] has been addded/modified, it has an assigned external addresses [XXX.YY.77.149]"
[kube-vip-ds-vhkxp] time="2021-11-17T01:58:06Z" level=info msg="Service [nginx-service] has been addded/modified, it has an assigned external addresses [XXX.YY.77.149]"`

However, this is where issue number 2 comes in.
If I now delete the LoadBalancer service and add it back in with a new hardcoded loadBalancerIP of XXX.YY.77.148, it fails to assign the desired VIP.

Log from kube-vip-cloud-provider-0 (delete followed by add)
Delete

I1117 02:10:27.676977       1 loadBalancer.go:96] deleting service 'nginx-service' (dfa8a94a-f5b0-443a-9228-a5b984c778fe)
I1117 02:10:27.678238       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="DeletingLoadBalancer" message="Deleting load balancer"
I1117 02:10:27.822216       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="DeletedLoadBalancer" message="Deleted load balancer"

Add with loadbalancerip XXX.YY.77.148

I1117 02:11:52.265166       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I1117 02:11:52.388895       1 loadBalancer.go:149] syncing service 'nginx-service' (ce7a213f-cece-41a5-88e4-e5cd8f8552e2)
I1117 02:11:52.425914       1 event.go:291] "Event occurred" object="default/nginx-service" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"

Log from kube-vip-ds (delete followed by add)
Delete

[kube-vip-ds-vhkxp] time="2021-11-17T02:10:27Z" level=info msg="[VIP] Releasing the Virtual IP [XXX.YY.77.149]"
[kube-vip-ds-vhkxp] time="2021-11-17T02:10:27Z" level=info msg=Stopped
[kube-vip-ds-vhkxp] time="2021-11-17T02:10:27Z" level=info msg="Removed [dfa8a94a-f5b0-443a-9228-a5b984c778fe] from manager, [0] advertised services remain"
[kube-vip-ds-vhkxp] time="2021-11-17T02:10:27Z" level=info msg="Service [nginx-service] has been deleted"

Add with loadbalancer ip XXX.YY.77.148

[kube-vip-ds-vhkxp] time="2021-11-17T02:11:52Z" level=info msg="Service [nginx-service] has been addded/modified, it has an assigned external addresses [XXX.YY.77.148]"
[kube-vip-ds-vhkxp] time="2021-11-17T02:11:52Z" level=info msg="New VIP [XXX.YY.77.148] for [nginx-service/ce7a213f-cece-41a5-88e4-e5cd8f8552e2] "
[kube-vip-ds-vhkxp] time="2021-11-17T02:11:52Z" level=info msg="Starting advertising address [XXX.YY.77.148] with kube-vip"
[kube-vip-ds-vhkxp] time="2021-11-17T02:11:52Z" level=info msg="Started Load Balancer and Virtual IP"
[kube-vip-ds-vhkxp] time="2021-11-17T02:11:52Z" level=error msg="Error updating Service [nginx-service] Status: Operation cannot be fulfilled on services \"nginx-service\": the object has been modified; please apply your changes to the latest version and try again"
[kube-vip-ds-vhkxp] time="2021-11-17T02:11:52Z" level=info msg="Service [nginx-service] has been addded/modified, it has an assigned external addresses [XXX.YY.77.148]"
[kube-vip-ds-vhkxp] time="2021-11-17T02:12:45Z" level=info msg="Service [rke2-coredns-rke2-coredns] has been addded/modified, it has no assigned external addresses"
[kube-vip-ds-vhkxp] time="2021-11-17T02:12:45Z" level=info msg="Service [rke2-metrics-server] has been addded/modified, it has no assigned external addresses"
[kube-vip-ds-vhkxp] time="2021-11-17T02:12:45Z" level=info msg="Service [nginx-service] has been addded/modified, it has an assigned external addresses [XXX.YY.77.148]"
[kube-vip-ds-vhkxp] time="2021-11-17T02:12:45Z" level=info msg="Service [kubernetes] has been addded/modified, it has no assigned external addresses"

kubectl: the external IP is shown as pending, whereas it should have been assigned the expected XXX.YY.77.148

$ k get svc
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP      10.43.0.1       <none>        443/TCP        6d10h
nginx-service   LoadBalancer   10.43.234.151   <pending>     80:30080/TCP   21m

vip is not updated in service.status.loadBalancer.

I installed the kube-vip CCM in my local environment.

When creating a Service with type: LoadBalancer, spec.loadBalancerIP is set, but status.loadBalancer is not updated.

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"nginx","namespace":"default"},"spec":{"ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":80}],"selector":{"name":"nginx"},"type":"LoadBalancer"}}
  creationTimestamp: "2021-08-10T02:39:37Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  name: nginx
  namespace: default
  resourceVersion: "4364132"
  uid: 63134136-0804-4ea9-a548-bc57891385be
spec:
  clusterIP: 10.103.121.46
  clusterIPs:
  - 10.103.121.46
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerIP: 172.28.128.190
  ports:
  - name: http
    nodePort: 31878
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    name: nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}

EXTERNAL-IP is still output as <pending>.

$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP        8d
nginx        LoadBalancer   10.103.121.46   <pending>     80:31878/TCP   66s

This is the ccm log.

I0810 02:39:37.233343       1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I0810 02:39:37.252303       1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Warning" reason="UnAvailableLoadBalancer" message="There are no available nodes for LoadBalancer"
I0810 02:39:37.270534       1 loadBalancer.go:149] syncing service 'nginx' (63134136-0804-4ea9-a548-bc57891385be)
I0810 02:39:37.270572       1 loadBalancer.go:229] No cidr config for namespace [default] exists in key [cidr-default] configmap [kubevip]
I0810 02:39:37.270587       1 loadBalancer.go:234] Taking address from [cidr-global] pool
I0810 02:39:37.270599       1 loadBalancer.go:190] Updating service [nginx], with load balancer IPAM address [172.28.128.190]
E0810 02:39:37.285650       1 controller.go:275] error processing service default/nginx (will retry): failed to ensure load balancer: Error updating Service Spec [nginx] : Operation cannot be fulfilled on services "nginx": the object has been modified; please apply your changes to the latest version and try again
I0810 02:39:37.286119       1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message="Error syncing load balancer: failed to ensure load balancer: Error updating Service Spec [nginx] : Operation cannot be fulfilled on services \"nginx\": the object has been modified; please apply your changes to the latest version and try again"
I0810 02:39:42.286970       1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Warning" reason="UnAvailableLoadBalancer" message="There are no available nodes for LoadBalancer"
I0810 02:39:42.286992       1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I0810 02:39:42.300116       1 loadBalancer.go:149] syncing service 'nginx' (63134136-0804-4ea9-a548-bc57891385be)
I0810 02:39:42.300161       1 loadBalancer.go:229] No cidr config for namespace [default] exists in key [cidr-default] configmap [kubevip]
I0810 02:39:42.300178       1 loadBalancer.go:234] Taking address from [cidr-global] pool
I0810 02:39:42.300191       1 loadBalancer.go:190] Updating service [nginx], with load balancer IPAM address [172.28.128.190]
I0810 02:39:42.317369       1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"

Multiple Services Use the Same IP

I followed the instructions for kube-vip, including the instructions for kube-vip-cloud-provider, and followed the instructions for Rancher.

I used 192.168.99.100 in my ARP manifest, and I used 192.168.99.100-192.168.99.199 in my ConfigMap.

I found that after I deployed a workload (nginx) with service type LoadBalancer, it used the same IP as the Traefik included with k3s. This continued even after I redeployed the workload, deleted and deployed the workload again, removed the loadBalancerIP, specified a different loadBalancerIP, and finally deleted and deployed a different workload entirely (alpine):

$ kubectl get services --all-namespaces
NAMESPACE             NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
cattle-fleet-system   gitjob                 ClusterIP      10.43.64.93     <none>           80/TCP                       68m
cattle-system         rancher                ClusterIP      10.43.148.21    <none>           80/TCP,443/TCP               73m
cattle-system         rancher-webhook        ClusterIP      10.43.220.243   <none>           443/TCP                      67m
cattle-system         webhook-service        ClusterIP      10.43.252.16    <none>           443/TCP                      67m
cert-manager          cert-manager           ClusterIP      10.43.143.203   <none>           9402/TCP                     74m
cert-manager          cert-manager-webhook   ClusterIP      10.43.52.73     <none>           443/TCP                      74m
default               alpine                 ClusterIP      10.43.23.22     <none>           80/TCP                       17m
default               alpine-loadbalancer    LoadBalancer   10.43.101.2     192.168.99.111   80:31615/TCP                 17m
default               httpd                  ClusterIP      10.43.71.242    <none>           80/TCP                       43m
default               httpd-loadbalancer     LoadBalancer   10.43.156.159   192.168.99.112   80:30906/TCP                 43m
default               kubernetes             ClusterIP      10.43.0.1       <none>           443/TCP                      88m
kube-system           kube-dns               ClusterIP      10.43.0.10      <none>           53/UDP,53/TCP,9153/TCP       88m
kube-system           metrics-server         ClusterIP      10.43.222.119   <none>           443/TCP                      88m
kube-system           traefik                LoadBalancer   10.43.31.248    192.168.99.111   80:31948/TCP,443:30174/TCP   88m

Deploying another workload (httpd) while this one was running resulted in that workload getting the next IP in the range.

kube-vip-cloud-provider appears to understand that this configuration is not valid:

$ kubectl logs -n kube-system kube-vip-cloud-provider-0
[...]
I0410 23:32:57.719659       1 event.go:291] "Event occurred" object="kube-system/traefik" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
I0410 23:32:57.719692       1 event.go:291] "Event occurred" object="default/alpine-loadbalancer" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I0410 23:32:57.719766       1 event.go:291] "Event occurred" object="default/alpine-loadbalancer" kind="Service" apiVersion="v1" type="Warning" reason="UnAvailableLoadBalancer" message="There are no available nodes for LoadBalancer"
I0410 23:32:57.719774       1 event.go:291] "Event occurred" object="default/alpine-loadbalancer" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
I0410 23:32:57.719784       1 event.go:291] "Event occurred" object="default/httpd-loadbalancer" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I0410 23:32:57.719790       1 event.go:291] "Event occurred" object="default/httpd-loadbalancer" kind="Service" apiVersion="v1" type="Warning" reason="UnAvailableLoadBalancer" message="There are no available nodes for LoadBalancer"
I0410 23:32:57.719798       1 event.go:291] "Event occurred" object="default/httpd-loadbalancer" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
I0410 23:32:57.719808       1 event.go:291] "Event occurred" object="kube-system/traefik" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I0410 23:32:57.719813       1 event.go:291] "Event occurred" object="kube-system/traefik" kind="Service" apiVersion="v1" type="Warning" reason="UnAvailableLoadBalancer" message="There are no available nodes for LoadBalancer"

192.168.99.111 appears to be pointing to Traefik, not to alpine-loadbalancer, based on the HTTP response I get when visiting that address.

I believe kube-vip-cloud-provider should be assigning the next available IP to alpine-loadbalancer, not 192.168.99.111.

Would anyone be willing to help me understand this behavior?

Multi cluster loadbalancing

Hi,

I'm looking for an on-premises solution for managing services of type LoadBalancer across multiple Kubernetes clusters. This is currently not supported and may be out of scope for this project. I realized that this is possible by configuring kube-vip to use WireGuard.

Would it be an acceptable feature if kube-vip-cloud-provider could create services in an upstream cluster and map node ports of downstream cluster nodes to the upstream cluster service?

Are there big disadvantages compared to the WireGuard solution with kube-vip?

Or should it be a separate provider for that because the logic would be completely decoupled from the current kube-vip-cloud-provider implementation?

I have already put some time into an attempt to implement this. You can take a look at it in my fork. I've tested it successfully with an ingress-nginx controller, but not yet with TLS enabled. If it is acceptable to you, I would like to open a pull request.

There are still some limitations. Endpoints for upstream Services are limited to one subset, and sessionAffinity is hardcoded. I've added an example configuration to the examples directory.
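To make the idea more concrete, here is a rough sketch of what the upstream cluster objects could look like; all names, addresses, and ports below are hypothetical, and the manually managed Endpoints carry the downstream node-port mapping described above:

apiVersion: v1
kind: Service
metadata:
  name: downstream-ingress          # hypothetical; lives in the upstream cluster
spec:
  type: LoadBalancer                # gets its external IP from kube-vip-cloud-provider as usual
  ports:                            # no selector, so the Endpoints below are used as-is
    - name: http
      port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: downstream-ingress          # must match the Service name
subsets:
  - addresses:
      - ip: 10.0.1.10               # downstream node IPs (hypothetical)
      - ip: 10.0.1.11
    ports:
      - name: http
        port: 30080                 # nodePort exposed on the downstream nodes (assumed)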

Question: Will there be a new release to support the new annotation method?

Sorry, but since "Discussions" are not enabled, I am opening this issue.

I have seen that kube-vip v0.5.12 was released and now supports the new annotation parameter for explicitly setting a VIP, but there is still no new kube-vip-cloud-provider release. There has already been some code change, though; see #52.

In the meantime, until the cloud provider is updated, do I have to set both (spec and annotation) in the Service configuration?
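For clarity, this is what I mean by setting both, as a sketch; I am assuming the annotation key documented for kube-vip v0.5.12, kube-vip.io/loadbalancerIPs:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    kube-vip.io/loadbalancerIPs: 192.168.0.210   # new annotation method (assumed key)
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.0.210                  # legacy spec field
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx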

StatefulSet config of the env var KUBEVIP_NAMESPACE does not take effect in the pod

I installed kube-vip-cloud-provider via the Helm chart into a namespace called "kube-vip", and also applied a ConfigMap named kubevip into that namespace; for details, check here:
install-kube-vip-cloud-provider-for-service-loadbalancer
But the pod log still shows that it retrieves the ConfigMap from the kube-system namespace:

Inf-Alpine01:~$ kubectl describe statefulset kube-vip-cloud-provider -n kube-vip
Name:               kube-vip-cloud-provider
Namespace:          kube-vip
CreationTimestamp:  Sat, 14 May 2022 15:18:35 +0800
  Containers:
   kube-vip-cloud-provider:
    Image:      kubevip/kube-vip-cloud-provider:v0.0.2
    Port:       <none>
    Host Port:  <none>
    Command:
      /kube-vip-cloud-provider
      --leader-elect-resource-name=kube-vip-cloud-controller
    Environment:
      KUBEVIP_NAMESPACE:   kube-vip
      KUBEVIP_CONFIG_MAP:  kubevip
    Mounts:                <none>
  Volumes:                 <none>
Volume Claims:             <none>
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  32m   statefulset-controller  create Pod kube-vip-cloud-provider-0 in StatefulSet kube-vip-cloud-provider successful
Inf-Alpine01:~$ kubectl logs kube-vip-cloud-provider-0 -n kube-vip | grep configMap
E0514 07:23:18.807292       1 loadBalancer.go:94] Unable to retrieve kube-vip ipam config from configMap [kubevip] in kube-system
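A quick way to confirm whether the env vars were actually propagated from the StatefulSet to the pod spec (generic kubectl; the pod and namespace names are the ones above):

kubectl get pod kube-vip-cloud-provider-0 -n kube-vip \
  -o jsonpath='{.spec.containers[0].env}'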

Feature request: optionally allow sharing of existing LoadBalancer IPs across services with different ports, to reduce IP usage

Optionally allow intelligent sharing of LoadBalancer addresses using this feature:

https://kube-vip.io/docs/usage/kubernetes-services/#multiple-services-on-the-same-ip

Example:

When this feature is enabled, if two or more LoadBalancer services requesting non-overlapping ports are configured, the cloud provider would allocate one IP shared between the services instead of one per service.

Assuming different ports, this can decrease the number of IPs in use across a cluster.
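For illustration, the end result could look like two Services carrying the same allocated address on non-overlapping ports (all names and the address below are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: web-http                      # hypothetical
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.0.220       # shared address allocated by the provider (hypothetical)
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: web
---
apiVersion: v1
kind: Service
metadata:
  name: web-https                     # hypothetical
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.0.220       # same address, different port
  ports:
    - port: 443
      targetPort: 8443
  selector:
    app: web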

Creating a LoadBalancer using the CCM

I created a custom RKE2 cluster on Linode using rancher. I then had to manually install the CCM driver as explained here: https://www.linode.com/docs/guides/install-the-linode-ccm-on-unmanaged-kubernetes.

Once that was all configured, I tried installing Traefik, which deploys a LoadBalancer; however, I get the following error in the CCM:

Error syncing load balancer: failed to ensure load balancer: [400]
[configs[0].nodes[0].address] Must be in address:port format;
[configs[0].nodes[1].address] Must be in address:port format;
[configs[0].nodes[2].address] Must be in address:port format;
[configs[1].nodes[0].address] Must be in address:port format;
[configs[1].nodes[1].address] Must be in address:port format;
[configs[1].nodes[2].address] Must be in address:port format

Below is the generated service from the traefik helm chart

apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: traefik
  uid: c2e99808-93ff-4fa2-a537-b61cb0abc352
  resourceVersion: '22974'
  creationTimestamp: '2024-01-03T04:09:48Z'
  labels:
    app.kubernetes.io/instance: traefik-traefik
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: traefik
    helm.sh/chart: traefik-25.0.0
  annotations:
    meta.helm.sh/release-name: traefik
    meta.helm.sh/release-namespace: traefik
  finalizers:
    - service.kubernetes.io/load-balancer-cleanup
  managedFields:
    - manager: helm
      operation: Update
      apiVersion: v1
      time: '2024-01-03T04:09:48Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:meta.helm.sh/release-name: {}
            f:meta.helm.sh/release-namespace: {}
          f:labels:
            .: {}
            f:app.kubernetes.io/instance: {}
            f:app.kubernetes.io/managed-by: {}
            f:app.kubernetes.io/name: {}
            f:helm.sh/chart: {}
        f:spec:
          f:allocateLoadBalancerNodePorts: {}
          f:externalTrafficPolicy: {}
          f:internalTrafficPolicy: {}
          f:ports:
            .: {}
            k:{"port":80,"protocol":"TCP"}:
              .: {}
              f:name: {}
              f:port: {}
              f:protocol: {}
              f:targetPort: {}
            k:{"port":443,"protocol":"TCP"}:
              .: {}
              f:name: {}
              f:port: {}
              f:protocol: {}
              f:targetPort: {}
          f:selector: {}
          f:sessionAffinity: {}
          f:type: {}
    - manager: linode-cloud-controller-manager-linux-amd64
      operation: Update
      apiVersion: v1
      time: '2024-01-03T04:09:48Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"service.kubernetes.io/load-balancer-cleanup": {}
      subresource: status
  selfLink: /api/v1/namespaces/traefik/services/traefik
status:
  loadBalancer: {}
spec:
  ports:
    - name: web
      protocol: TCP
      port: 80
      targetPort: web
      nodePort: 30328
    - name: websecure
      protocol: TCP
      port: 443
      targetPort: websecure
      nodePort: 32498
  selector:
    app.kubernetes.io/instance: traefik-traefik
    app.kubernetes.io/name: traefik
  clusterIP: 10.43.58.100
  clusterIPs:
    - 10.43.58.100
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  allocateLoadBalancerNodePorts: true
  internalTrafficPolicy: Cluster

I am not sure if this is a misconfiguration on my side or an issue with Linode's API. Any help would be appreciated.

How to upgrade?

Hi there, I just noticed a new version of kube-vip-cloud-provider was released. How can I upgrade my existing deployment?
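For what it is worth, one generic way to bump the image on an existing install, assuming the StatefulSet and container are both named kube-vip-cloud-provider and live in kube-system as in the default manifest:

kubectl -n kube-system set image statefulset/kube-vip-cloud-provider \
  kube-vip-cloud-provider=kubevip/kube-vip-cloud-provider:<new-tag>   # replace <new-tag> with the release you want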

Feature request: support for loadBalancerClass

Would it be possible to have the provider check the "loadBalancerClass", so that it only assigns IP addresses to matching services? For example, by setting an environment variable such as "KUBEVIP_LB_CLASS".
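For illustration, a Service carrying the standard loadBalancerClass field that such a filter could match on (the class name below is hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  type: LoadBalancer
  loadBalancerClass: kube-vip.io/kube-vip-class   # hypothetical value the provider could be told to match
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: example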

Not writing status to ConfigMap

The kube-vip CP in 0.1 will update the kubevip ConfigMap with its status when reconciling a Service of type LoadBalancer. It writes this to the .data.kubevip-services key, storing its value as JSON. This is going to be problematic for GitOps approaches because, unless an exclusion is applied, the CP and the GitOps controller will fight over reconciliation of the ConfigMap. It may be worth considering another approach here, such as writing to a separate CR which the CP can own.
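As an interim workaround, the kind of exclusion mentioned above could look roughly like this in an Argo CD Application (a sketch only; the Application name, repo, and path are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kube-vip                        # hypothetical Application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/cluster-config.git   # placeholder
    path: kube-vip
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: kube-system
  ignoreDifferences:
    - kind: ConfigMap
      name: kubevip
      jsonPointers:
        - /data/kubevip-services        # the key the CP rewrites
  syncPolicy:
    syncOptions:
      - RespectIgnoreDifferences=true   # so syncs do not overwrite the ignored field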

Does tag 0.0.1 support arm64?

Hi. I am getting an exec format error when spinning up the image kubevip/kube-vip-cloud-provider:0.0.1. According to Docker Hub, that tag does support arm64. I am running k3s on arm64 and the VIP is working, but I cannot get the cloud provider to deploy.
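A generic way to double-check which architectures a tag actually ships (this needs a docker CLI with manifest support):

docker manifest inspect kubevip/kube-vip-cloud-provider:0.0.1 | grep -A2 '"architecture"'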

Thanks!

ConfigMap IP Range Bug!

kubectl create configmap --namespace kube-system kubevip --from-literal range-global=192.168.0.200-192.168.0.202

If the range IP pool spans more than a /24 prefix, kube-vip-cloud-provider will crash.

Example:
cidr-global: 192.168.0.0/20 -- OK
range-global: 192.168.0.1-192.168.0.255 -- OK
range-global: 192.168.0.1-192.168.1.255 -- kube-vip-cloud-provider will crash with no logs
range-global: 192.168.0.1-192.168.0.255,192.168.1.0-192.168.1.255 -- OK

ConfigMap seems hardcoded to the kube-system namespace

If I create the configmap in the namespace where I have installed kube-vip-cloud-provider, the configuration is not detected:

I0720 21:17:33.111202       1 event.go:291] "Event occurred" object="default/kubernetes" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I0720 21:17:33.111178       1 loadBalancer.go:79] syncing service 'kubernetes' (ec4af361-dfc0-42c3-836a-c5b4c5e2ec9b)
I0720 21:17:33.111326       1 event.go:291] "Event occurred" object="default/kubernetes" kind="Service" apiVersion="v1" type="Warning" reason="UnAvailableLoadBalancer" message="There are no available nodes for LoadBalancer"
I0720 21:17:33.122677       1 loadBalancer.go:169] no cidr config for namespace [default] exists in key [cidr-default] configmap [kubevip]
I0720 21:17:33.122689       1 loadBalancer.go:172] no global cidr config exists [cidr-global]
I0720 21:17:33.122692       1 loadBalancer.go:186] no range config for namespace [default] exists in key [range-default] configmap [kubevip]
I0720 21:17:33.122694       1 loadBalancer.go:189] no global range config exists [range-global]
E0720 21:17:33.122706       1 controller.go:275] error processing service default/kubernetes (will retry): failed to ensure load balancer: no address pools could be found
I0720 21:17:33.122730       1 event.go:291] "Event occurred" object="default/kubernetes" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message="Error syncing load balancer: failed to ensure load balancer: no address pools could be found"

It works only if I create it in the kube-system namespace.

Failed to obtain the LoadBalancer IP address

1. Environment:

v1.24.3+k3s1
kube-vip:v0.4.4
kube-vip-cloud-provider: v0.0.2
arp

2. Problem:
After kubectl expose deploy nginx --port=80 --type=LoadBalancer is executed, the EXTERNAL-IP stays in the Pending state (a quick way to re-check the service state is sketched after the log below).

3. Log:
[root@ah-ap-01 ~]# kubectl logs kube-vip-cloud-provider-0 -n kube-system
I0719 09:47:29.406828 1 serving.go:331] Generated self-signed cert in-memory
W0719 09:47:30.554329 1 client_config.go:608] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0719 09:47:30.559632 1 controllermanager.go:127] Version: v0.0.0-master+$Format:%h$
W0719 09:47:30.561301 1 controllermanager.go:139] detected a cluster without a ClusterID. A ClusterID will be required in the future. Please tag your cluster to avoid any future issues
I0719 09:47:30.564234 1 secure_serving.go:197] Serving securely on [::]:10258
I0719 09:47:30.564309 1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-vip-cloud-controller...
I0719 09:47:30.565157 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0719 09:47:30.611134 1 leaderelection.go:253] successfully acquired lease kube-system/kube-vip-cloud-controller
I0719 09:47:30.611288 1 event.go:291] "Event occurred" object="kube-system/kube-vip-cloud-controller" kind="Endpoints" apiVersion="v1" type="Normal" reason="LeaderElection" message="kube-vip-cloud-provider-0_ec957898-20c7-44de-940a-d97c31304f4a became leader"
I0719 09:47:30.611633 1 event.go:291] "Event occurred" object="kube-system/kube-vip-cloud-controller" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="kube-vip-cloud-provider-0_ec957898-20c7-44de-940a-d97c31304f4a became leader"
I0719 09:47:30.614564 1 node_controller.go:108] Sending events to api server.
W0719 09:47:30.614660 1 core.go:57] failed to start cloud node controller: cloud provider does not support instances
W0719 09:47:30.614685 1 controllermanager.go:251] Skipping "cloud-node"
I0719 09:47:30.617399 1 node_lifecycle_controller.go:77] Sending events to api server
W0719 09:47:30.617470 1 core.go:76] failed to start cloud node lifecycle controller: cloud provider does not support instances
W0719 09:47:30.617487 1 controllermanager.go:251] Skipping "cloud-node-lifecycle"
I0719 09:47:30.620560 1 controllermanager.go:254] Started "service"
I0719 09:47:30.620606 1 core.go:108] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0719 09:47:30.620714 1 controllermanager.go:251] Skipping "route"
I0719 09:47:30.620737 1 controller.go:239] Starting service controller
I0719 09:47:30.620771 1 shared_informer.go:240] Waiting for caches to sync for service
I0719 09:47:30.724097 1 shared_informer.go:247] Caches are synced for service
I0719 09:50:09.504447 1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I0719 09:50:09.553169 1 loadBalancer.go:78] syncing service 'nginx' (d05f7293-bd30-4315-948f-fe4a1794c6c4)
I0719 09:50:09.554412 1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Warning" reason="UnAvailableLoadBalancer" message="There are no available nodes for LoadBalancer"
I0719 09:50:09.626388 1 loadBalancer.go:153] no cidr config for namespace [default] exists in key [cidr-default] configmap [kubevip]
I0719 09:50:09.626423 1 loadBalancer.go:156] no global cidr config exists [cidr-global]
I0719 09:50:09.626437 1 loadBalancer.go:175] no range config for namespace [default] exists in key [range-default] configmap [kubevip]
I0719 09:50:09.626446 1 loadBalancer.go:180] Taking address from [range-global] pool
I0719 09:50:09.626557 1 addressbuilder.go:95] Rebuilding addresse cache, [6] addresses exist
I0719 09:50:09.639181 1 loadBalancer.go:121] Updating service [nginx], with load balancer IPAM address [192.168.60.25]
I0719 09:50:09.669527 1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Normal" reason="LoadbalancerIP" message=" -> 192.168.60.25"
I0719 09:50:09.669853 1 loadBalancer.go:78] syncing service 'nginx' (d05f7293-bd30-4315-948f-fe4a1794c6c4)
I0719 09:50:09.669884 1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
I0719 09:50:09.669899 1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
I0719 09:50:09.669912 1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I0719 09:50:09.669924 1 event.go:291] "Event occurred" object="default/nginx" kind="Service" apiVersion="v1" type="Warning" reason="UnAvailableLoadBalancer" message="There are no available nodes for LoadBalancer"
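A quick way to re-check what the API server reports for the service and its events (generic kubectl; service name as above):

kubectl get service nginx -o wide
kubectl describe service nginx        # the Events section at the end often explains a Pending EXTERNAL-IP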
