
cloud-provider-kubevirt


The KubeVirt cloud-provider allows you to use KubeVirt and Kubernetes as a "cloud" to run Kubernetes clusters on top. This project contains the kubevirt-cloud-controller-manager, an implementation of the cloud controller manager (see Concepts Underlying the Cloud Controller Manager for more details).

Introduction

The KubeVirt cloud-provider allows a Kubernetes cluster running in KubeVirt VMs (tenant cluster) to interact with KubeVirt and Kubernetes (infrastructure cluster) to provision, manage and clean up resources. For example, the cloud-provider ensures that the zone and region labels of nodes in the tenant cluster are set based on the zone and region of the KubeVirt VMs in the infrastructure cluster. The cloud-provider also ensures that tenant cluster services of type LoadBalancer are properly exposed through services in the infrastructure cluster.

Prerequisites

In order to have the LoadBalancer logic working in the tenant KubeVirt cluster, the user must make sure that the KubeVirt VMs used as tenant cluster nodes are created with the following labels:

cluster.x-k8s.io/cluster-name: <tenant-cluster-name>
cluster.x-k8s.io/role: worker

These labels are used by the infrastructure cluster services as a node selector: traffic arriving at the infrastructure cluster services created for the tenant cluster is redirected to the VMs carrying these labels.
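For illustration, this is roughly what the metadata of a tenant-node VM could look like with these labels set (the VM name and cluster name below are hypothetical placeholders):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-tenant-cluster-worker-0                   # hypothetical VM name
  labels:
    cluster.x-k8s.io/cluster-name: my-tenant-cluster # <tenant-cluster-name>
    cluster.x-k8s.io/role: worker
```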

How to run kubevirt-cloud-controller-manager

See Running cloud-controller-manager for general information on how to configure your tenant cluster to run kubevirt-cloud-controller-manager. You can find example manifests for kubevirt-cloud-controller-manager in the manifests directory for static Pod, Deployment and DaemonSet configurations.

To get it to run, you'll need to provide a kubeconfig for the infrastructure cluster in the kubevirt-cloud-controller-manager configuration. The configuration must contain a kubeconfig key, as in the following example:

cat /etc/kubernetes/cloud/config

Output:

kubeconfig: <infraKubeConfigPath>
loadBalancer:
  creationPollInterval: 5
  creationPollTimeout: 60

How to build a Docker image

With make image you can build a Docker image containing kubevirt-cloud-controller-manager.

Development

Create a cloud config

First create a cloud config file in the project directory

touch dev/cloud-config

Next add a kubeconfig path to the cloud-config file. The kubeconfig must point to the infrastructure cluster where KubeVirt is installed.

kubeconfig: <infraKubeConfigPath>
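The two steps above can be combined into one command; a minimal sketch, assuming the placeholder kubeconfig path is replaced with a real one (the loadBalancer values mirror the example earlier in this README):

```shell
# Create dev/cloud-config in one go; replace the kubeconfig path
# with the path to your infrastructure cluster kubeconfig.
mkdir -p dev
cat > dev/cloud-config <<'EOF'
kubeconfig: /path/to/infra-kubeconfig
loadBalancer:
  creationPollInterval: 5
  creationPollTimeout: 60
EOF
```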

For more configuration options, see the cloud configuration.

Build KubeVirt CCM

Build kubevirt-cloud-controller-manager using make build. It will put the finished binary in bin/kubevirt-cloud-controller-manager.

Run KubeVirt CCM

Run the following command:

bin/kubevirt-cloud-controller-manager --kubeconfig <path-to-tenant-cluster-kubeconfig> --cloud-config dev/cloud-config 

cloud-provider-kubevirt's People

Contributors

afritzler, brianmcarey, briantopping, davidvossel, dependabot[bot], dhiller, gonzolino, kubevirt-bot, mfranczy, nirarg, nunnatsa, qinqon, sankalp-r, stoyanr


cloud-provider-kubevirt's Issues

Node labels remain the same on a tenant cluster after KubeVirt VM is migrated to another zone

As described in the README, our setup uses an infrastructure cluster that hosts KubeVirt VMs (previously known as the UnderKube) and a tenant cluster that uses those VMs as Kubernetes nodes (the OverKube).

Nodes that host the UnderKube VMs are located in two different availability zones and labeled accordingly.

Recently we were testing VM migration scenarios and found that labels (such as zone) on OverKube nodes always remain the same, even though the VMs were migrated between racks, i.e. between different zones.

What we expect: labels change dynamically when KubeVirt VMs in the UnderKube cluster are migrated between availability zones and are correctly propagated to the Kubernetes nodes of the tenant cluster.

I can provide more info about the migration steps etc. if needed.

Error adding new node to cluster: Node is invalid with 'spec.providerID' forbidden error

We encountered the following error when attempting to add a new node to a tenant cluster:

Node "xxx" is invalid: spec.providerID: Forbidden: node updates may not change providerID except from "" to valid, requeuing

Initially, we used another cloud provider to add nodes to the cluster. However, when we attempted to re-add them, we received the above error message. It appears that when updating the node manifest, cloud-provider-kubevirt attempts to set the providerID field to providerID: kubevirt://nodeName.

Is it possible to disable updates to this field if it already exists?

Support Local ExternalTrafficPolicy

As mentioned in this ticket: #15, EnsureLoadBalancer can only support an ExternalTrafficPolicy=Cluster.
It would also require:

  • Do not label the proxy pod/VMI where the actual pod does not exist
  • React to endpoint updates (if the actual pod is rescheduled on a different node)

Is there a plan to fix this issue so that ExternalTrafficPolicy=Local works?
For example, we have the ticket kubermatic/kubermatic#9022, opened by a customer who would like to use the Local externalTrafficPolicy.

Use distroless base image rather than alpine

Lookup for VMI by hostname

The current implementation doesn't work as expected; it fails with:

that match label selector "", field selector "spec.Hostname=testmachine": field label not supported: spec.Hostname

Create E2E tests

In order to gain more code maturity, we need to add integration tests.

These tests would be run against cluster-api clusters.

For more info about cluster-api, see:

Add e2e tests

Add integration tests covering all the use cases; one solution would be to copy the mechanism from capk.

Revendor to k/k 1.17.x or newer

In order to support adding the new topology labels to overkube nodes, as opposed to the deprecated failure-domain labels (see also #11), the CCM should be revendored to a version of kubernetes newer than 1.16. Unfortunately, this is hard to do since kubevirt.io/client-go is still based on 1.16 and there are conflicts, see also #10.

There are two ways this issue could be resolved:

  1. Wait until kubevirt.io/client-go is revendored to a newer kubernetes (or actively contribute to this).
  2. Remove the dependency on kubevirt.io/client-go/kubecli and use a different client for reading / writing KubeVirt resources. A good alternative could be sigs.k8s.io/controller-runtime/pkg/client. In this case, only the dependency on kubevirt.io/client-go/api/v1 would remain, which should not be an issue.

The second approach requires more changes to the CCM itself, but can be done without depending on any changes to kubevirt.io/client-go. As an additional bonus, this would remove the dependency on glog described in #10 and would enable vendoring of kubevirt.io/client-go newer than 0.26.5.

@afritzler @gonzolino What do you think?

Will there be a new cloud-provider-kubevirt release this year?

Hi Team,

The last release of this repository was in October 2022. Will there be a new release of cloud-provider-kubevirt in the near future?

Is there a document that describes the release timelines for this repository?

Thanks
Sharath

Support Local ExternalTrafficPolicy

As it stands, EnsureLoadBalancer can only support an ExternalTrafficPolicy of Cluster. This is unusable when the client IP address of external traffic must be the actual client address and not the masquerade address of the CNI.

Supporting this will require:

  • Copy the ExternalTrafficPolicy to the proxy service
  • Do not label the proxy pod/VMI where the actual pod does not exist
  • Test updates

Open to suggestions on what information is available to the CCM to more selectively label the proxies.

Need workaround for MetalLB service backend mismatch

#16 copies annotations from proxied service to proxy. It was primarily created for load balancers that use service annotations to provide metadata for endpoint wiring, for instance an external IP address.

This works in general, but MetalLB's allow-shared-ip annotation is a special case, and there are likely to be many of this nature. The value provided in the annotation is a lookup key for other services with which an IP address may be shared. If two services request the same IP address but do not carry the same sharing key, they will not be allowed to share the address.

A nuance of this check is that the services must also be managed by the same load balancer, which only makes sense.

When deployed in Gardener, the load balancer of a shoot is not the same as the external load balancer of the cluster. So blindly copying allow-shared-ip will fail, because two different load balancers end up using the same key.
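For context, MetalLB's sharing key works roughly like this (the service names, key, and IP below are hypothetical); both services must carry the same key and be handled by the same load balancer to share the address:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-a                                             # hypothetical
  annotations:
    metallb.universe.tf/allow-shared-ip: "my-shared-key"
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10
  ports:
    - port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc-b                                             # hypothetical
  annotations:
    metallb.universe.tf/allow-shared-ip: "my-shared-key"  # same key, sharing allowed
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10                            # same IP request
  ports:
    - port: 443
```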

There are several ways to make the key unique; what I am concerned with is how to do this without adding special cases for unrelated projects to the code.

Any thoughts, @gonzolino or @stoyanr?

Add external IP update mechanism for LBs

The kubevirt-cloud-provider currently tries to set the external IP of a LoadBalancer service only at creation time (see https://github.com/kubevirt/cloud-provider-kubevirt/blob/master/pkg/cloudprovider/kubevirt/loadbalancer.go#L94,L108). If that fails, or if the external IP changes later, the cloud-provider will not reconcile the service.
A mechanism in the EnsureLoadBalancer method (https://github.com/kubevirt/cloud-provider-kubevirt/blob/master/pkg/cloudprovider/kubevirt/loadbalancer.go#L81,L87) is needed to update the external IP.
