kubernetes-sigs / cluster-api-ipam-provider-in-cluster
An IPAM provider for Cluster API that manages pools of IP addresses using Kubernetes resources.
License: Apache License 2.0
Now that clusterctl supports IPAM providers via kubernetes-sigs/cluster-api#7288, some work needs to be done in this provider to integrate with the clusterctl changes.
The provider-contract doc covers everything that needs to be done, but a summary of what I think needs to happen is:
There may be more that needs to be done, but this should be a good starting point.
/kind feature
Changes were recently made to the IPAddressClaim controller to prevent reconciliation when the Cluster object is paused. The Cluster object has a spec.paused property and can also carry a paused annotation. The code copied from CAPV into the IPAddressClaim controller looks at the Cluster's spec.paused property during updates, and at the Cluster's paused annotation during creates. It seems the Update and Create functions should both look at both the annotation and the property.
We thought this was odd and @srm09 agreed. We opened this issue on CAPV as a result of our discussion:
kubernetes-sigs/cluster-api-provider-vsphere#1890
Sagar suggests this paused-detection predicate should be a shared library function owned by either CAPV or CAPI, so that all controllers detect the paused state in the same way. This issue is a reminder to update the IPAddressClaim controller when the CAPV issue is resolved.
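For reference, a minimal sketch of the two ways a Cluster can be marked paused (object names are illustrative); a shared predicate should skip reconciliation in either case:
---
# Paused via the spec property.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: paused-by-spec        # illustrative name
  namespace: default
spec:
  paused: true
---
# Paused via the annotation; its presence alone marks the object paused.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: paused-by-annotation  # illustrative name
  namespace: default
  annotations:
    cluster.x-k8s.io/paused: ""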
It would be useful if we could get some usage statistics from the pool without having to query the IPAddresses and count them ourselves.
We are thinking of adding total, used, and free counts to the pool status, e.g.:
---
apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: pool-name
  namespace: pool-ns
spec:
  subnet: 10.0.0.0/24
status:
  ipAddresses:
    # The total number of IPs the pool has.
    total: 255
    # The number of IPs that have already been allocated.
    used: 9
    # The number of IPs that are still available to allocate.
    free: 246
We see that pools can be updated without regard for which IPs are in use. This can lead to situations where allocated IPs fall out of range of the pool's configuration.
We are thinking of adding validation in the webhook that checks whether an already-allocated IP address would be removed from the pool before allowing the update. This continues to allow configuration of the pool, but prevents an IP address that is in use from being removed from it.
We are also thinking of adding an outOfRange status count on the pool (similar to #112) to expose potential issues if a pool already happens to have out-of-range IPs. This mostly covers updates to the pool that may have occurred without the webhook validation.
In addition to the outOfRange status count on the pool, we could also add an outOfRange status condition on the IPAddress.
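A sketch of how the proposed count might sit alongside the counters from #112 (field names follow the proposal above and are not final):
status:
  ipAddresses:
    total: 254
    used: 9
    free: 245
    # Proposed: addresses already allocated from this pool that no
    # longer fall within the pool's current configuration.
    outOfRange: 2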
The provider URL is no longer under telekom but has moved to kubernetes-sigs, so it should be:
providers:
  - name: in-cluster
    url: https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster/releases/latest/ipam-components.yaml
    type: IPAMProvider
otherwise clusterctl init errors with:
Fetching providers
Error: failed to get provider components for the "in-cluster" provider: target namespace can't be defaulted. Please specify a target namespace
This will unblock kubernetes-sigs/cluster-api#8811, so we can make this provider accessible via clusterctl.
Currently, it is possible to create or update a pool whose gateway address is not within the pool's inferred subnet. It seems the pool webhook should validate that the gateway is within the pool's subnet when the pool's IP family is detected to be IPv4.
Gateway is an IPv4 concept, so it doesn't make sense to apply the suggested validation when the pool's IP family is IPv6.
When using Velero to do a backup/restore, we see that claims get reconciled while the paused cluster has yet to be restored. The cluster is always restored last, due to the dependency tree.
Our use case is Velero-specific, but we think this is a more general bug too: if a claim is linked to a cluster and that cluster can't be retrieved, reconciliation should be skipped until the cluster can be evaluated.
As of writing this issue, the provider has no validation for pools that overlap. This may cause the provider to give out duplicate IPs, which could result in serious issues. We'd like to start a discussion about whether such validation is desirable.
We're interested in cases where rejecting a pool that overlaps another pool would be incorrect behavior.
clusterctl move does not take GlobalInClusterIPPools into consideration.
When support for spec.Addresses was added, providing spec.Subnet was not allowed. Users are forced to specify spec.Prefix when specifying spec.Addresses.
Pools should allow specifying spec.Subnet, and if it is not provided, it should be derived and defaulted from spec.Addresses and spec.Prefix.
Also, when specifying spec.Subnet, spec.Prefix should be allowed and should match spec.Subnet. The spec.Prefix can be defaulted from spec.Subnet.
We think this is useful when running kubectl get inclusterippool. Including the subnet in the output makes matching the pool's configuration with the IaaS's configuration easier.
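A sketch of the proposed defaulting (values illustrative): with addresses and prefix set, subnet could be derived as below, or supplied explicitly as long as it agrees with prefix.
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: derived-subnet-sample   # illustrative name
spec:
  addresses:
    - 10.0.0.10-10.0.0.20
  prefix: 24
  gateway: 10.0.0.1
  # Proposed: defaulted from addresses and prefix when omitted;
  # when set, prefix can in turn be defaulted from it.
  subnet: 10.0.0.0/24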
Example: here are two pools that are identical except that they live in different namespaces:
---
apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: common-name
  namespace: ns-a
spec:
  first: 192.168.0.2
  last: 192.168.0.3
  prefix: 24
  gateway: 192.168.0.1
---
apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: common-name
  namespace: ns-b
spec:
  first: 192.168.0.2
  last: 192.168.0.3
  prefix: 24
  gateway: 192.168.0.1
If Pool A gives out 192.168.0.2, then Pool B will not give out that same address, even though the pools are in different namespaces. The pools should give out IPs independently of one another; IPs in use by one pool should not affect any other pool.
reproduction steps:
1) k3d cluster create --no-lb --k3s-arg "--disable=traefik,servicelb,metrics-server,local-storage@server:*"
2) clusterctl init
3) from root of this repo: make install && make deploy
4)
cat <<EOF | kubectl apply -f -
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: bar
  namespace: giantswarm
spec:
  addresses:
    - 10.10.225.232-10.10.225.238
  gateway: 10.10.225.0
  prefix: 24
EOF
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "default.globalinclusterippool.ipam.cluster.x-k8s.io": failed to call webhook: the server could not find the requested resource
workaround:
kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io caip-in-cluster-mutating-webhook-configuration
kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io caip-in-cluster-validating-webhook-configuration
cat <<EOF | kubectl apply -f -
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: bar
  namespace: giantswarm
spec:
  addresses:
    - 10.10.225.232-10.10.225.238
  gateway: 10.10.225.0
  prefix: 24
EOF
globalinclusterippool.ipam.cluster.x-k8s.io/baasd created
Currently an InClusterIPPool is per namespace. We would like to have the ability to define a single pool of IPs that can fulfill claims for clusters in various namespaces.
Right now it is possible to define the same pool in multiple namespaces and get the behavior of a shared pool. This has the overhead of maintaining that same resource in multiple namespaces. When changing the size of the pool all instances would need to be updated or risk configuration drift.
We propose adding a reference field to the InClusterIPPool resource called poolPointer, which would be an ObjectReference.
Using CAPV as an example, users would create a template in the same way as before:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: VSphereMachineTemplate
metadata:
  name: example
  namespace: cluster-ns
spec:
  template:
    spec:
      cloneMode: FullClone
      numCPUs: 8
      memoryMiB: 8192
      diskGiB: 45
      network:
        devices:
          - dhcp4: false
            fromPool:
              group: ipam.cluster.x-k8s.io/v1alpha1
              kind: InClusterIPPool
              name: my-pool-pointer
Create the pool pointer:
apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: my-pool-pointer
  namespace: cluster-ns
spec:
  poolPointer:
    - name: my-pool
      namespace: pool-ns
      kind: InClusterIPPool
Create the pool:
apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: my-pool
  namespace: pool-ns
spec:
  pools:
    - subnet: 10.10.10.0/24
      start: 10.10.10.100
      end: 10.10.10.200
For pools that have a poolPointer, the controller will need to be updated to resolve the IPPool data from the pool at the provided namespace/name.
The webhook for InClusterIPPool should reject any resource that provides both pools and poolPointer.
IPAddresses are already created in the namespace of the IPAddressClaim, and the controller currently lists IPAddresses from all namespaces when determining which IP to assign based on the kind/name of the pool.
We're happy to implement this and are looking for feedback on our proposed approach.
As per the current CRD spec, an InClusterIPPool or GlobalInClusterIPPool supports addresses from one subnet only.
For example, a single InClusterIPPool or GlobalInClusterIPPool cannot support the two subnets below.
We request support for both subnets under a single IPPool. We were able to use this feature with the capm3 IPPool:
apiVersion: ipam.metal3.io/v1alpha1
kind: IPPool
metadata:
  name: vlan
spec:
  pools:
It would be good to have an image for arm64.
We are trying to integrate this provider as part of our infrastructure in Proxmox, but we couldn't find any way of handling reserved IPs.
We use this IPAM provider to get IP addresses to assign to our machines, but there's no way to account for an already-existing IP, which means we would end up with conflicts.
For example, we have the IP range 10.10.10.1/24 with gateway 10.10.10.1. In this range we have some machines already running, e.g. jumphosts, CI, a DHCP server, or VMs used for maintenance.
So if a machine has been manually provisioned on this network with, say, IP 10.10.10.3, how do we tell the IPAM provider not to use this IP?
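One possible answer, assuming a version that serves the v1alpha2 excludedAddresses field mentioned elsewhere in this tracker, is to exclude the manually provisioned IPs from the pool; a sketch using the values from this example:
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: proxmox-pool            # illustrative name
spec:
  addresses:
    - 10.10.10.0/24
  prefix: 24
  gateway: 10.10.10.1
  # Keep manually provisioned hosts (jumphosts, CI, DHCP server, ...)
  # out of the allocatable set.
  excludedAddresses:
    - 10.10.10.3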
In a scenario we've encountered, specifically backup/restore, we see that the finalizer and owner references are not restored. When they're missing, the controller should restore these owner references and the finalizer. Currently, the finalizer is only added when the controller creates the objects.
I noticed an issue recently when creating multiple clusters with the same pool, having forgotten that my pool was too small for the number of machines I created. The observed behavior is that when the pool runs out of IPs, the cluster never comes up because it's waiting for a node to obtain an IP address. Even deleting an unused cluster does not resolve the issue; I have to manually delete the claim and its owner resources to get the deployment to progress.
We should add a watch for when IPAddressClaims are deleted to trigger reconciles on other IPAddressClaims.
Feature request:
We have a case where we're not able to obtain a contiguous block of IP addresses. We can, however, get a set of IPs on the same network. We'd like the ability to configure the pool with this list of IPs.
We propose being able to set a list of IPs in place of a CIDR or start and end:
apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: inclusterippool-sample
spec:
  prefix: 24
  gateway: 10.0.0.1
  addresses:
    - 10.0.0.5
    - 10.0.0.7
    - 10.0.0.9
    - 10.0.0.14
    - 10.0.0.6
It seems some validations would be prudent too, to ensure the IPs are within the gateway/prefix range.
IPAddressClaim now has a spec.clusterName field, which should be preferred over the cluster.x-k8s.io/cluster-name annotation.
The IPAM CRDs supplied by CAPI suggest that Gateway is not a required field. CAPV is written so that the gateway can be supplied on the VSphereMachine, removing the need to configure it on the pool. CAIP validation allows an empty gateway, but later fails to parse the empty gateway here:
CAIP should honor its validations and not error on an empty gateway.
I noticed that a pool can be deleted even if IPAddresses have been allocated from it. This seems undesirable.
Option 1: Add a finalizer to the pool to ensure it isn't removed until all IPAddresses are deleted. The drawback is that the pool is marked for deletion, which may be an undesirable state if a user decides they can't clean up the IPAddresses and wants to continue using the pool.
Option 2: Guard deletion of the pool with the webhook. More complex than a finalizer, but it prevents users from potentially entering a bad state.
I'm leaning towards implementing option 2 if we do this, but I'm open to hearing opinions here.
Ref kubernetes-sigs/cluster-api#9478
Object already exists, updating IPAddress="mgmt-cluster-control-plane-g6ppb-net0-inet6" Namespace="default"
Retrying with backoff Cause="error updating \"ipam.cluster.x-k8s.io/v1alpha1, Kind=IPAddress\" default/mgmt-cluster-control-plane-g6ppb-net0-inet6: admission webhook \"validation.ipaddress.ipam.cluster.x-k8s.io\" denied the request: spec: Forbidden: the spec of IPAddress is immutable"
Hi. Awesome SIG!
There seems to be a bug in ipam.cluster.x-k8s.io/v1alpha2 that is poorly documented: it is no longer spec.exclude, but apparently spec.excludedAddresses that defines the excluded addresses.
However, neither of them works now. I'm using the following configuration:
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: ${CLUSTER_CLASS_NAME}
  namespace: ${NAMESPACE}
spec:
  prefix: 25
  addresses:
    - 10.1.157.0/25
  excludedAddresses:
    - 10.1.157.1-10.1.157.20
  gateway: 10.1.157.1
The full error:
Error from server (BadRequest): error when creating "STDIN": InClusterIPPool in version "v1alpha2" cannot be handled as a InClusterIPPool: strict decoding error: unknown field "spec.excludedAddresses"
If cert-manager is already installed in the Cluster API management cluster (before it is initialized) but is not the version expected by clusterctl, then clusterctl init --ipam incluster --config clusterctl-IPAM.config.yaml, with clusterctl-IPAM.config.yaml being:
---
providers:
  - name: incluster
    url: https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster/releases/latest/ipam-components.yaml
    type: IPAMProvider
fails with a strange error:
Error: failed to get provider components for the "incluster" provider: failed to read "ipam-components.yaml" from provider's repository "ipam-incluster": release not found for version download,
please retry later or set "GOPROXY=off" to get the current stable release: 404 Not Found
If I specify an IPAM version, e.g. v0.1.0-alpha.2 (clusterctl init --ipam incluster:v0.1.0-alpha.2 --config clusterctl-IPAM.config.yaml), the error is:
Error: failed to get provider components for the "incluster:v0.1.0-alpha.2" provider: failed to read "ipam-components.yaml" from provider's repository "ipam-incluster": failed to download files
from GitHub release v0.1.0-alpha.2: failed to get file "ipam-components.yaml" from "v0.1.0-alpha.2" release
After I upgraded cert-manager to the version expected by clusterctl, the init went smoothly.
When I launch either clusterctl upgrade plan or clusterctl upgrade apply ..., I get this error:
Error: invalid provider metadata: version v1.0.0-rc.0 (one of the available versions) for the provider capi-ipam-in-cluster-system/ipam-incluster does not match any release series
I guess this error is caused by the name of the release tag.
We've discussed in k8s slack and in the cluster-lifecycle-sigs meeting an intention to move this repo into kubernetes-sigs.
Assuming all interested parties approve this change there is some work we need to do to make it happen.
The k8s community lists out some rules for donated repositories that we need to ensure we satisfy. Some things must be done after a move, such as getting integrated with k8s CI.
I'll do my best to outline those requirements below so we can start to check them off. I'm fairly sure we already meet several of these requirements. I'll check those off as I validate them.
If there are any things that we may have missed, please let me know and I'll ensure we keep track of them.
I apologize for my lack of knowledge, but I am trying to use your cluster-api-ipam-provider-in-cluster and have encountered some issues.
I created a cluster using the instructions on this page about Docker:
https://cluster-api.sigs.k8s.io/user/quick-start.html
Then I ran the following commands and confirmed that caip-in-cluster-system is running:
git clone https://github.com/telekom/cluster-api-ipam-provider-in-cluster
cd cluster-api-ipam-provider-in-cluster
kubectl apply -k config/default
Based on the README.md you provided, I created an inclusterippool-sample. Here are the specific details I used:
apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: inclusterippool-sample
spec:
  subnet: 10.0.0.0/24
  gateway: 10.0.0.1
Next, I tried to create a pod to verify that its IP address is within the inclusterippool, but it is clear that the pod's IP is not within the inclusterippool. Here is an example of the pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: nginx
I have reviewed the information you provided and did not see any indication that it can only be used in specific environments, such as AWS or vSphere.
So, how can I verify that IPAM is functioning correctly? If there is any misunderstanding in the above content, please correct me. Alternatively, could you provide an example of how to use it so I can verify it? Thank you for your help.
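For what it's worth, this provider fulfills IPAddressClaim resources rather than assigning pod IPs, so one way to check allocation is to create a claim against the pool directly; a minimal sketch (the claim name is illustrative):
apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: IPAddressClaim
metadata:
  name: test-claim              # illustrative name
spec:
  poolRef:
    apiGroup: ipam.cluster.x-k8s.io
    kind: InClusterIPPool
    name: inclusterippool-sample
If the controller is working, a matching IPAddress should appear in the same namespace.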
Until now we don't have working e2e tests; we can use the CAPI test framework to add them.
If an infrastructure provider is needed, we can probably use one of the existing ones, e.g. vSphere.
/help
An ipam-components.yaml file should be provided for use with clusterctl to install CAIP as an IPAMProvider.
The clusterctl documentation suggests as part of the provider contract:
The provider is required to generate a components YAML file and publish it to the provider's repository. This file is a single YAML with all the components required for installing the provider itself (CRDs, Controller, RBAC etc.).
In the case of an IPAM provider, the file should be called ipam-components.yaml.
https://cluster-api.sigs.k8s.io/clusterctl/provider-contract.html
It looks like we are trying to solve the same issues related to IP address management on vSphere using the Kubernetes Cluster API.
Is this component working with CAPV v1.5.1 and Cluster API 1.3.1?
Given the following template with reference to the cluster IP pool using the new feature "addressesFromPools" in CAPV 1.5.0:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate
metadata:
  name: vmware-test-controlplane
spec:
  template:
    spec:
      cloneMode: linkedClone
      datacenter: dc1
      datastore: datastore5
      diskGiB: 10
      folder: vmware-test
      memoryMiB: 8192
      network:
        devices:
          - dhcp4: false
            dhcp6: false
            gateway4: 192.168.0.1
            addressesFromPools:
              - apiGroup: ipam.cluster.x-k8s.io
                kind: InClusterIPPool
                name: vmware-test-controlplane
            networkName: VM Network
...
and the IPAM pool:
apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: vmware-test-controlplane
spec:
  start: 192.168.5.10
  end: 192.168.5.14
  prefix: 24
  gateway: 192.168.0.1
With this configuration we do get IPAddresses and IPAddressClaims as expected; however, the IPs are not propagated to the virtual machine.
I wonder if cluster-api-ipam-provider-in-cluster was written before the release of CAPV 1.5.0 and is thereby not compatible with the new CAPV feature.
Note: we are using Talos for bootstrap and control plane; however, that should not affect IPAM.
When a cluster is moved from one CAPI manager to another with clusterctl move, the IPAM resources (GlobalInClusterIPPool, InClusterIPPool, IPAddressClaim, and IPAddress) are not migrated to the destination CAPI management cluster.
The IPPool needs to be created beforehand on the destination, and the IPAddresses end up being different than on the source (luckily the IPAM controller does not try to change the VMs).
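One hedged workaround, assuming clusterctl honors its opt-in move label on these resources (unverified, and cluster-scoped global pools may behave differently):
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: source-pool             # illustrative name
  namespace: cluster-ns
  labels:
    # Marks the object for inclusion in clusterctl move.
    clusterctl.cluster.x-k8s.io/move: ""
spec:
  addresses:
    - 10.0.0.10-10.0.0.20
  prefix: 24
  gateway: 10.0.0.1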
The current log timestamps are in integer format, which is helpful for log aggregators but makes them hard for humans to decipher.
Proposal: logs should be changed to ISO format, e.g. 2023-02-03T18:46:39Z, or should include both integer and ISO format timestamps.
additionalPrinterColumns is currently:
- additionalPrinterColumns:
    - description: Subnet to allocate IPs from
      jsonPath: .spec.subnet
      name: Subnet
      type: string
    - description: First address of the range to allocate from
      jsonPath: .spec.first
      name: First
      type: string
    - description: Last address of the range to allocate from
      jsonPath: .spec.last
      name: Last
      type: string
but should be:
- additionalPrinterColumns:
    - description: Subnet to allocate IPs from
      jsonPath: .spec.subnet
      name: Subnet
      type: string
    - description: First address of the range to allocate from
      jsonPath: .spec.start
      name: Start
      type: string
    - description: Last address of the range to allocate from
      jsonPath: .spec.end
      name: End
      type: string
spec is currently:
spec:
  description: InClusterIPPoolSpec defines the desired state of InClusterIPPool.
  properties:
    addresses:
      description: Addresses is a list of IP addresses that can be assigned.
        This set of addresses can be non-contiguous. Can be omitted if subnet,
        or first and last is set.
      items:
        type: string
      type: array
    end:
      description: Last is the last address that can be assigned. Must come
        after first and needs to fit into a common subnet. If unset, the
        second last address of subnet will be used.
      type: string
    gateway:
      description: Gateway
      type: string
    prefix:
      description: Prefix is the network prefix to use. If unset the prefix
        from the subnet will be used.
      maximum: 128
      type: integer
    start:
      description: First is the first address that can be assigned. If unset,
        the second address of subnet will be used.
      type: string
    subnet:
      description: Subnet is the subnet to assign IP addresses from. Can
        be omitted if addresses or first, last and prefix are set.
      type: string
but needs to be:
spec:
  description: InClusterIPPoolSpec defines the desired state of InClusterIPPool.
  properties:
    addresses:
      description: Addresses is a list of IP addresses that can be assigned.
        This set of addresses can be non-contiguous. Can be omitted if subnet,
        or start and end are set.
      items:
        type: string
      type: array
    end:
      description: End is the last address that can be assigned. Must come
        after start and needs to fit into a common subnet. If unset, the
        second to last address of subnet will be used.
      type: string
    gateway:
      description: Gateway
      type: string
    prefix:
      description: Prefix is the network prefix to use. If unset the prefix
        from the subnet will be used.
      maximum: 128
      type: integer
    start:
      description: Start is the first address that can be assigned. If unset,
        the second address of subnet will be used.
      type: string
    subnet:
      description: Subnet is the subnet to assign IP addresses from. Can
        be omitted if addresses or start, end and prefix are set.
      type: string
At the time of writing, the reconciler gathers the in-use IPs, but incorrectly only gathers them from the claim's namespace.
This means a claim from a different namespace will lead to an incorrect determination of the set of in-use IPs.
As it is written now, CAPV assumes that a gateway will be present on IPAddress objects, and will fail to reconcile a VSphereVM if it's not present.
The webhook should validate the presence of the gateway and reject pools that do not have a valid one. The gateway should be required to be in range of the provided subnet.
Alternatively, the gateway could be defaulted to the first IP in the subnet range.
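A sketch of the alternative defaulting (values illustrative):
apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: gateway-default-sample  # illustrative name
spec:
  subnet: 10.0.0.0/24
  # Proposed default when omitted: the first IP in the subnet range,
  # i.e. 10.0.0.1 here.
  gateway: 10.0.0.1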
Broken out of the conversation started in #123 (comment).
Some IP addresses in a subnet are reserved for special cases and shouldn't be given out: typically the network address, the broadcast address, and the gateway address.
If I defined the following pool:
Gateway: 192.168.0.1
Prefix: 16
Addresses:
  - 192.168.0.0/16
The controller should not allocate: 192.168.0.0 (network address), 192.168.0.1 (gateway address), 192.168.255.255 (broadcast address). It would therefore give out 192.168.0.2-192.168.255.254.
If a user wants to allocate reserved addresses, i.e. ignore the above behavior except for the gateway, they can set a new flag called allocatedReservedAddresses, which would allocate 192.168.0.0-192.168.255.255 (except for 192.168.0.1). If the user doesn't want to reserve the gateway, they should not set the gateway.
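A sketch of the proposal, using the flag name from the paragraph above (not an existing field):
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: reserved-sample         # illustrative name
spec:
  gateway: 192.168.0.1
  prefix: 16
  addresses:
    - 192.168.0.0/16
  # Proposed: when true, the network and broadcast addresses are also
  # allocatable; the gateway stays reserved while it is set.
  allocatedReservedAddresses: true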
Hello there,
I accidentally created two GlobalInClusterIPPool CRs with unique names and the same IP address ranges. No complaints from the admission webhook or the IPAM operator; Cluster API is happily rolling out new nodes with the same IP addresses 😯
Cluster nodes with the same IPs! Shouldn't this behaviour be somehow checked and prohibited?
When using the GlobalInClusterIPPool (the CRD is not namespace-scoped) and referencing it from IPAddressClaims in various namespaces, the generated IPAddress CRs contain IPs that are not unique: the allocation sequence basically starts again for each namespace.
Pools fail to show their first/last (start/end) values in the printer columns.
There is also some mixing of first/last and start/end; perhaps one pair should be eliminated.
$ cat pool.yaml
---
apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: inclusterippool-sample
  namespace: default
spec:
  subnet: 10.0.0.0/24
  gateway: 10.0.0.1
  start: 10.0.0.22
  end: 10.0.0.40
$ kubectl apply -f pool.yaml
inclusterippool.ipam.cluster.x-k8s.io/inclusterippool-sample created
$ kubectl get inclusterippool
NAME SUBNET FIRST LAST
inclusterippool-sample 10.0.0.0/24
The example
apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: inclusterippool-sample
spec:
  first: 10.0.0.10
  last: 10.10.0.42
  prefix: 24
  gateway: 10.0.0.1
should be
apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: InClusterIPPool
metadata:
  name: inclusterippool-sample
spec:
  start: 10.0.0.10
  end: 10.10.0.42
  prefix: 24
  gateway: 10.0.0.1
The IPAM provider does not support CAPI v1.6.0, apparently because CAPI stopped serving the alpha API versions.
Is it possible to update the IPAM provider to support new CAPI releases within half a year?
When clusterctl move runs, it attempts to delete everything off the source cluster. This currently fails because it may attempt to delete the pool before it deletes all the IPAddresses, and there doesn't seem to be a way to control delete order. So it errors on the validating delete webhook, since #124 added a check that there are no in-use IPs before allowing a pool to be deleted.
For future cluster-api releases (1.5+), an annotation was added that a validating delete webhook can look for to allow the validation to be skipped: clusterctl.cluster.x-k8s.io/delete-for-move (via kubernetes-sigs/cluster-api#8322).
For the current cluster-api release and below (1.4.x), there doesn't seem to be a good way to do this, so I think we need to add our own annotation that allows the delete validation to be skipped, which a user could put on their pool before a move, e.g. ipam.cluster.x-k8s.io/skip-validate-delete-webhook.
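A sketch of what opting a pool out of the delete validation before a move could look like, using the proposed annotation (not an existing API):
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: movable-pool            # illustrative name
  namespace: cluster-ns
  annotations:
    # Proposed opt-out for cluster-api <= 1.4.x; cluster-api 1.5+
    # can rely on clusterctl.cluster.x-k8s.io/delete-for-move instead.
    ipam.cluster.x-k8s.io/skip-validate-delete-webhook: ""
spec:
  addresses:
    - 10.0.0.10-10.0.0.20
  prefix: 24
  gateway: 10.0.0.1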
We're currently working on our own provider: https://github.com/ionos-cloud/cluster-api-provider-ionoscloud
We would like to assign static public IP addresses to our nodes. The problem we have is that reserving IP addresses in IONOS Cloud is quite random: we might reserve two IP addresses where one starts with 85.x.x.x and the other with 224.x.x.x.
Currently the validation only allows a prefix with a value of at least 1. For public IP addresses this would mean we could only make use of a subnet of either 0.0.0.0/1 or 128.0.0.0/1.
Using two IPPools doesn't really make sense here; the only thing blocking the use of a 0 prefix is the validation in the webhook.
Is there a reason why /0 is not allowed here, or can this be updated?
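A sketch of the pool this would enable (addresses illustrative, based on the ranges mentioned above):
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: public-ips              # illustrative name
spec:
  # Currently rejected: the webhook requires prefix >= 1.
  prefix: 0
  gateway: 85.215.0.1           # illustrative
  addresses:
    - 85.215.0.10               # illustrative reserved IPs that share
    - 224.3.7.42                # no common subnet other than /0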
In the case that an IPAddressClaim references an InClusterIPPool that does not exist, the claim should be marked ready: false with reason "Pool not found".
This also leaves the opportunity for other reasons, such as "Pool depleted".
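A sketch of the proposed status shape on the claim (condition and reason names follow the suggestion above and are not final):
status:
  conditions:
    - type: Ready
      status: "False"
      reason: PoolNotFound
      message: the referenced InClusterIPPool does not exist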
We need to adopt the following bots after the repository move is complete.
Hello IPAM provider community,
Is there a simple way of migrating machines from one range to another within the same GlobalInClusterIPPool, and then releasing the original range?
E.g. we want to migrate machines from 10.129.241.30-10.129.241.40 to 10.129.241.90-10.129.241.100. After migration, we want to free up the formerly used range 10.129.241.30-10.129.241.40.
Original object:
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: cluster-nclusterippool
spec:
  addresses:
    - 10.129.241.30-10.129.241.40
  gateway: 10.129.241.254
  prefix: 23
I would expect adding another range:
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: cluster-nclusterippool
spec:
  addresses:
    - 10.129.241.30-10.129.241.40
    - 10.129.241.90-10.129.241.100
  gateway: 10.129.241.254
  prefix: 23
and then removing the original range:
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: cluster-nclusterippool
spec:
  addresses:
    - 10.129.241.90-10.129.241.100
  gateway: 10.129.241.254
  prefix: 23
But that workflow is forbidden:
error: globalinclusterippools.ipam.cluster.x-k8s.io "cluster-inclusterippool" could not be patched: admission webhook "validation.globalinclusterippool.ipam.cluster.x-k8s.io" denied the request: pool addresses do not contain allocated addresses: [10.129.241.32-10.129.241.32 10.129.241.34-10.129.241.34]
IP addresses are reserved; I understand. I would expect IPAM to inform Cluster API to roll out new machines with new IP addresses/claims using the newly added range, but it looks like the only possibility is to create a completely new GlobalInClusterIPPool object.