
provider-cloudscale's Introduction

provider-cloudscale


Crossplane provider for managing resources on cloudscale.ch.

Documentation: https://vshn.github.io/provider-cloudscale/

Local Development

ℹī¸ Some architecture design notes are also available in the documentation

Requirements

  • docker
  • go
  • helm
  • kubectl
  • yq
  • sed (or gsed for Mac)

Some other requirements (e.g. kind) will be compiled on-the-fly and put in the local cache dir .kind as needed.

Common make targets

  • make build to build the binary and docker image
  • make generate to (re)generate additional code artifacts
  • make test to run the test suite
  • make local-install to install the operator in the local cluster
  • make install-samples to run the provider in the local cluster and apply sample manifests
  • make run-operator to run the code in operator mode against your current kubecontext
  • make test-e2e to run e2e tests with kuttl

See all targets with make help

QuickStart Demonstration

  1. Get an API token from cloudscale.ch
  2. export CLOUDSCALE_API_TOKEN=<the-token>
  3. make local-install install-samples

Kubernetes Webhook Troubleshooting

The provider comes with a mutating and a validating admission webhook server.

To test and troubleshoot the webhooks on the cluster, simply apply your changes with kubectl.

  1. To debug the webhook in an IDE, we need to generate certificates:

    make webhook-cert
  2. Start the operator in your IDE with the WEBHOOK_TLS_CERT_DIR environment variable set to .kind.

  3. Send a sample admission request to the running webhook:

    # send an admission request
    curl -k -v -H "Content-Type: application/json" --data @samples/admission.k8s.io_admissionreview.json https://localhost:9443/validate-cloudscale-crossplane-io-v1-bucket

Crossplane Provider Mechanics

For detailed information on how a Crossplane provider works from a development perspective, check the provider mechanics documentation page.

e2e testing with kuttl

Some scenarios are tested with the Kubernetes e2e testing tool Kuttl. Kuttl applies the install manifests (usually files named ##-install*.yaml) and compares the observed objects against the desired state (files named ##-assert*.yaml).

To execute tests, run make test-e2e from the root dir.

If a test fails, kuttl leaves the resources in the kind cluster intact, so you can inspect the resources and events if necessary. Please note that Kubernetes Events from cluster-scoped resources appear in the default namespace only, but kubectl describe ... should show you the events.

If tests succeed, the relevant resources are deleted to avoid incurring costs on the cloud provider.

Cleaning up e2e tests

Usually make clean ensures that buckets and users are deleted before the kind cluster is torn down, provided the operator is running in the kind cluster. Alternatively, make .e2e-test-clean also removes all buckets and objectsusers.

To clean up manually on control.cloudscale.ch, search for resources whose names begin with or contain e2e.

provider-cloudscale's People

Contributors

ccremer, davidgubler, kidswiss, renovate[bot], zugao


provider-cloudscale's Issues

Create object user on cloudscale.ch

Summary

As user
I want to create an objects user on cloudscale.ch
So that I can create buckets using the new credentials

Context

Proposed CRD API:

apiVersion: cloudscale.s3.appcat.io/v1
kind: ObjectsUser
metadata:
  name: backup-user
  namespace: default # namespace scoped CR
spec:
  secretRef: backup-user-credentials
status:
  userID: uf8wrngf23hiudbskdjf2znyalh1 # from cloudscale.ch API
  conditions: []

Out of Scope

  • Supporting more than 1 cloudscale.ch project (API token)

Further links

Acceptance Criteria

Given a resource using spec:
  apiVersion: cloudscale.s3.appcat.io/v1
  kind: ObjectsUser
  metadata:
    name: backup-user
    namespace: default
  spec:
    secretRef: backup-user-credentials
And the operator pod has a "CLOUDSCALE_API_TOKEN" environment variable defined
When reconciling the resource
Then an objects user is created in cloudscale.ch through API/SDK using the constructed name "default.backup-user" *
And a resource is created using spec:
  apiVersion: v1
  kind: Secret
  metadata:
    name: backup-user-credentials
    namespace: default
    finalizers:
      - s3.appcat.vshn.io/user-protection
  data:
    AWS_ACCESS_KEY_ID: <base64: generated access key from cloudscale.ch API>
    AWS_SECRET_ACCESS_KEY: <base64: generated secret key from cloudscale.ch API>
And the resource is updated to:
  apiVersion: cloudscale.s3.appcat.io/v1
  kind: ObjectsUser
  metadata:
    namespace: default
    finalizers:
      - s3.appcat.vshn.io/user-protection
  spec:
    secretRef: backup-user-credentials
  status:
    userID: <generated from cloudscale.ch API>
    conditions:
      - message: "user created"
        reason: Available
        status: "True"
        type: Ready
Given a resource using spec:
  apiVersion: cloudscale.s3.appcat.io/v1
  kind: ObjectsUser
  metadata:
    namespace: default
  spec:
    secretRef: backup-user-credentials
When reconciling the resource encounters any error
Then the resource is updated to:
  apiVersion: cloudscale.s3.appcat.io/v1
  kind: ObjectsUser
  metadata:
    name: backup-user
    namespace: default
  spec:
    secretRef: backup-user-credentials
  status:
    conditions:
      - message: "<error message>"
        reason: ProvisioningFailed
        status: "True"
        type: Failed

*The name is generated with the following rule: <metadata.namespace>.<metadata.name> (this scheme is supported by cloudscale.ch)
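As a quick sketch of the naming rule, using the values from the example resource above:

```shell
# objects user name on cloudscale.ch: <metadata.namespace>.<metadata.name>
namespace=default    # metadata.namespace
name=backup-user     # metadata.name
echo "${namespace}.${name}"   # → default.backup-user
```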

Implementation Ideas

Bucket can get stuck in deletion when ObjectsUser is deleted first

Description

A Bucket resource requires valid S3 credentials to make any operations using the S3 client.
When trying to delete a Bucket where the S3 credentials are missing, deletion may become stuck because we don't know whether the bucket is actually deleted or not.

This is inherent to how a Crossplane provider works. Crossplane-runtime performs the deletion as follows:

  1. kubectl delete is issued, which sets a metadata.deletionTimestamp. Since there is a finalizer present managed by the provider, Kubernetes doesn't immediately delete the resource.
  2. Runtime attempts to reconcile the Bucket resource (Reconcile())
  3. Runtime calls Connect() from the managed.ExternalConnecter interface
  4. Runtime calls Observe() from the managed.ExternalClient interface
  5. Runtime calls Delete() from the managed.ExternalClient interface
  6. Runtime attempts another reconciliation of the Bucket resource (Reconcile())
  7. Runtime calls Connect() from the managed.ExternalConnecter interface
  8. Runtime calls Observe() from the managed.ExternalClient interface

Now, in steps 3 and 7 we fetch the Secret with the S3 credentials to construct an S3 client. If the Secret is missing or invalid, we can't know the state in cloudscale's API.

Basically, we need to deal with a missing Secret during deletions, while still respecting the following scenarios:

  • The Secret is missing because it hasn't been provisioned yet. For example, Bucket and ObjectsUser are created at the same time, and the Bucket is reconciled before we have valid credentials. A few seconds later we have the credentials and can create the bucket -> this is Kubernetes' eventual consistency in action (think of a Helm chart rollout where every resource is applied at the same time).
  • The Secret is accidentally deleted or modified by the (human) user while the ObjectsUser resource still exists -> the controller for ObjectsUser should eventually restore the Secret upon the next reconciliation.

The following cases are human errors, which we don't (or can't) deal with:

  • A human user deletes all buckets manually and then the ObjectsUser. The ObjectsUser can be deleted successfully, which also deletes the Secret. The Bucket can then get stuck because the provider cannot even attempt to delete it (step 3 fails).

Additional Context

The cloudscale.ch API won't allow us to delete an ObjectsUser if there are buckets still present. So this is a race condition that happens only in Kubernetes, when both resources are deleted at roughly the same time, e.g. in a Crossplane Composition. Also, Crossplane's runtime is deliberately cautious and tries to verify that the resource has really been deleted even when we don't return an error in step 5 (i.e. the Bucket was successfully deleted in cloudscale.ch).

I'm assuming that the 2nd observation (steps 7 & 8) exists to support asynchronous deletions and/or provisioning. There may be providers that report a successful deletion but delete the actual resources in the background; subsequent observations are required to confirm when the resource is actually gone.
However, deleting a bucket is a synchronous operation (at least when the bucket is empty), so subsequent observations get in our way: we have already deleted the bucket, the user is gone as well, and there are no credentials left to observe with.

Logs

No response

Expected Behavior

The Bucket resource should eventually be deleted in future reconciliations once the bucket has been deleted on cloudscale.ch.

Steps To Reproduce

Preparation:

Pull latest commit of #21

export CLOUDSCALE_API_TOKEN=<token>
make crossplane-composition
kubectl apply -f package/samples/*providerconfig.yaml

Test:

kubectl apply -f package/samples/claim.*.yaml
# Now verify all resources are ready

kubectl -n default delete ObjectBucket object-bucket-claim
# Now verify that the Bucket may become stuck in deletion, while ObjectsUser is already gone

You may need to perform these steps multiple times to trigger the race condition.

Versions

Code: a214991
Crossplane-runtime: v0.17.0

Delete buckets using S3 object user credentials

Summary

As user
I want to delete an existing bucket by deleting a Kubernetes resource
So that I can get rid of unneeded data using an operator

Context

In #3 we created buckets using an S3 user. This issue is about cleaning them up when a Bucket CRD gets deleted.

Out of Scope

  • Deleting the S3 user
  • Any further deletion protection

Further links

No response

Acceptance Criteria

Given the resource with the spec:
  apiVersion: s3.appcat.io/v1
  kind: Bucket
  metadata:
    name: backup-bucket
    namespace: default
    deletionTimestamp: <deletion-time>
    finalizers:
      - s3.appcat.vshn.io/bucket-protection
  spec:
    secretRef: backup-user-credentials
    region: lpg
    endpoint: objects.lpg.cloudscale.ch
And the bucket exists in cloudscale.ch project
And there is no more data in the bucket
When reconciling the resource
Then the bucket is deleted
And the finalizer `s3.appcat.vshn.io/bucket-protection` in the resource is removed
Given the resource with the spec:
  apiVersion: s3.appcat.io/v1
  kind: Bucket
  metadata:
    name: backup
    namespace: default
    deletionTimestamp: <deletion-time>
    finalizers:
      - s3.appcat.vshn.io/bucket-protection
  spec:
    secretRef: backup-user-credentials
    region: lpg
    endpoint: objects.lpg.cloudscale.ch
When reconciling the resource encounters an error
Then the resource is updated to: (added condition)
  apiVersion: s3.appcat.io/v1
  kind: Bucket
  metadata:
    name: backup
    namespace: default
    deletionTimestamp: <deletion-time>
    finalizers:
      - s3.appcat.vshn.io/bucket-protection
  spec:
    secretRef: backup-user-credentials
    region: lpg
    endpoint: objects.lpg.cloudscale.ch
  status:
    conditions:
      - message: "<error message>"
        reason: DeletionFailed
        status: "True"
        type: Failed

Implementation Ideas

No response

Create buckets using S3 object user credentials

Summary

As user
I want to create a Kubernetes resource describing a Bucket
So that I can start using a bucket after it's been provisioned by the operator

Context

In #1 we create S3 users on cloudscale.ch. The result is a Secret containing the credentials for this user.
Using these credentials, we can create buckets on cloudscale.ch.

Since creating buckets is generic and follows standard S3 protocol, we can create a CRD that is generic for all cloud providers.

CRD Proposal

apiVersion: s3.appcat.vshn.io/v1
kind: Bucket
metadata:
  name: my-bucket
spec:
  credentialsSecretRef: backup-user-credentials
  region: lpg
  endpoint: objects.lpg.cloudscale.ch
  bucketName: my-bucket # optional, defaults to metadata.name
status:
  conditions: []
  bucketName: ""

Notes:

  • Using a secret ref instead of referencing other CRD-based objects allows referencing any valid Secret with the correct keys in it, making integration of other S3 users quite generic.
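For illustration, the credential values land base64-encoded under the Secret's data fields, following the usual Kubernetes convention (the access key below is a made-up example, not a real credential):

```shell
# Kubernetes stores Secret data base64-encoded; decode to get the raw key
printf 'QUtJQUVYQU1QTEU=' | base64 -d   # → AKIAEXAMPLE
```

Against a real cluster, the same value can be read with kubectl get secret backup-user-credentials -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d.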

Out of Scope

  • Deleting buckets
  • Updating metadata of buckets
  • Creating/Deleting users

Further links

Acceptance Criteria

Given a Secret using the following spec:
  apiVersion: v1
  kind: Secret
  metadata:
    name: backup-user-credentials
    finalizers:
      - s3.appcat.vshn.io/user-protection
  data:
    AWS_ACCESS_KEY_ID: <base64: access key>
    AWS_SECRET_ACCESS_KEY: <base64: secret key>
And a Bucket resource using the following spec:
  apiVersion: s3.appcat.vshn.io/v1
  kind: Bucket
  metadata:
    name: my-bucket
  spec:
    credentialsSecretRef: backup-user-credentials
    region: lpg
    endpoint: objects.lpg.cloudscale.ch
    bucketName: ""
When reconciling the Bucket `my-bucket`
Then a new S3 bucket `my-bucket` (defaulting from `metadata.name`) is created in the region `lpg` in cloudscale using the credentials as given in the Secret
And the resource `my-bucket` is updated to the following spec:
  apiVersion: s3.appcat.vshn.io/v1
  kind: Bucket
  metadata:
    name: my-bucket
    finalizers:
      - s3.appcat.vshn.io/bucket-protection
  spec:
    credentialsSecretRef: backup-user-credentials
    region: lpg
    endpoint: objects.lpg.cloudscale.ch
    bucketName: my-bucket
  status:
    bucketName: my-bucket
    conditions:
      - type: Ready
        message: "Bucket ready"
        reason: Available
        status: true
Given a Secret using the following spec:
  apiVersion: v1
  kind: Secret
  metadata:
    name: backup-user-credentials
    finalizers:
      - s3.appcat.vshn.io/user-protection
  data:
    AWS_ACCESS_KEY_ID: <base64: access key>
    AWS_SECRET_ACCESS_KEY: <base64: secret key>
And a Bucket resource using the following spec:
  apiVersion: s3.appcat.vshn.io/v1
  kind: Bucket
  metadata:
    name: my-bucket
  spec:
    credentialsSecretRef: backup-user-credentials
    region: lpg
    endpoint: objects.lpg.cloudscale.ch
    bucketName: some-bucket-name
When reconciling the Bucket `my-bucket`
Then a new S3 bucket `some-bucket-name` is created in the region `lpg` in cloudscale using the credentials as given in the Secret
And the resource `my-bucket` is updated to the following spec:
  apiVersion: s3.appcat.vshn.io/v1
  kind: Bucket
  metadata:
    name: my-bucket
    finalizers:
      - s3.appcat.vshn.io/bucket-protection
  spec:
    credentialsSecretRef: backup-user-credentials
    region: lpg
    endpoint: objects.lpg.cloudscale.ch
    bucketName: some-bucket-name
  status:
    bucketName: some-bucket-name
    conditions:
      - type: Ready
        message: "Bucket ready"
        reason: Available
        status: true
Given a user changes a Bucket resource to the following spec (change of spec.bucketName):
  apiVersion: s3.appcat.vshn.io/v1
  kind: Bucket
  metadata:
    name: my-bucket
  spec:
    credentialsSecretRef: backup-user-credentials
    region: lpg
    endpoint: objects.lpg.cloudscale.ch
    bucketName: some-bucket-name
  status:
    bucketName: other-bucket
When reconciling the Bucket `my-bucket`
Then the resource `my-bucket` is updated to the following spec:
  apiVersion: s3.appcat.vshn.io/v1
  kind: Bucket
  metadata:
    name: my-bucket
    finalizers:
      - s3.appcat.vshn.io/bucket-protection
  spec:
    credentialsSecretRef: backup-user-credentials
    region: lpg
    endpoint: objects.lpg.cloudscale.ch
    bucketName: some-bucket-name
  status:
    bucketName: other-bucket
    conditions:
      - type: Ready
        message: "Bucket ready"
        reason: Available
        status: false
      - type: Failed
        message: "Bucket name cannot be changed"
        reason: ProvisioningFailed
        status: true
Given a Bucket resource using the following spec:
  apiVersion: s3.appcat.vshn.io/v1
  kind: Bucket
  metadata:
    name: my-bucket
    finalizers:
      - s3.appcat.vshn.io/bucket-protection
  spec:
    credentialsSecretRef: backup-user-credentials
    region: lpg
    endpoint: objects.lpg.cloudscale.ch
When reconciling the Bucket `my-bucket` encounters any errors
Then the resource `my-bucket` is updated to the following spec:
  apiVersion: s3.appcat.vshn.io/v1
  kind: Bucket
  metadata:
    name: my-bucket
    finalizers:
      - s3.appcat.vshn.io/bucket-protection
  spec:
    credentialsSecretRef: backup-user-credentials
    region: lpg
    endpoint: objects.lpg.cloudscale.ch
  status:
    conditions:
      - type: Ready
        message: "Bucket provisioning failed"
        reason: Available
        status: false
      - type: Failed
        message: <error message>
        reason: ProvisioningFailed
        status: true

Implementation Ideas

  1. Create the CRD spec as proposed
  2. Set up a controller for this CRD
  3. In the controller, read the content of the referenced Secret
  4. Use the Minio S3 client to create buckets; it seems fairly easy to use: https://github.com/minio/minio-go
  5. Update the status of the CRD

Buckets Can Be Owned Multiple Times

Description

Currently the provider does not register any error when a second Bucket resource with the same bucket name is created under the same objects user. The provider adopts the existing bucket, which can result in two Crossplane managed resources pointing to the same bucket.

This is basically the same issue as vshn/provider-exoscale#21

Additional Context

Currently, the code simply "adopts" the bucket if it already exists during the first reconciliation after the object was created. We need some logic to differentiate this case from subsequent reconciliations.

Logs

No response

Expected Behavior

The status conditions of the duplicated CRD must register the error.

Steps To Reproduce

  1. make local-install install-samples
  2. kubectl get bucket bucket -oyaml > bucket.yaml
  3. update metadata.name to a different Kubernetes name while leaving spec.bucketName the same.
  4. kubectl apply -f bucket.yaml
  5. check status conditions.
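Step 3 can be sketched with sed (a listed requirement). The manifest below is a hypothetical stand-in for the real dump from step 2, and bucket-2 is an arbitrary new name:

```shell
# minimal stand-in for the bucket.yaml dumped in step 2 (values are hypothetical)
cat > bucket.yaml <<'EOF'
apiVersion: cloudscale.crossplane.io/v1
kind: Bucket
metadata:
  name: bucket
spec:
  bucketName: bucket
EOF

# step 3: change only the Kubernetes object name; spec.bucketName stays identical
sed 's/^  name: bucket$/  name: bucket-2/' bucket.yaml > bucket-2.yaml
grep 'name:' bucket-2.yaml   # prints 'name: bucket-2' and the unchanged 'bucketName: bucket'
```

Applying bucket-2.yaml then creates a second managed resource pointing at the same bucketName, which reproduces the adoption described above.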

Versions

master (5b1aef8)

Delete object user on cloudscale.ch

Summary

As user
I want to delete an objects user on cloudscale.ch
So that I can remove unneeded users

Context

In #1 we created object users. This issue is about cleaning them up.

Out of Scope

  • Deleting buckets that belong to the user (#4)

Further links

Acceptance Criteria

Given a resource exists using spec:
  apiVersion: cloudscale.s3.appcat.io/v1
  kind: ObjectsUser
  metadata:
    name: backup-user
    namespace: default
    finalizers:
      - s3.appcat.vshn.io/user-protection
  spec:
    secretRef: backup-user-credentials
  status:
    userID: awefiu0204
And the operator pod has a "CLOUDSCALE_API_TOKEN" environment variable defined
And there are no S3 buckets belonging to `awefiu0204`
When a user deletes the resource
Then the objects user having the ID `awefiu0204` is deleted in cloudscale.ch through API/SDK
And the finalizer `s3.appcat.vshn.io/user-protection` is removed from the resource `ObjectsUser/backup-user`
And the finalizer `s3.appcat.vshn.io/user-protection` is removed from the resource `Secret/backup-user-credentials`
Given a resource exists using spec:
  apiVersion: cloudscale.s3.appcat.io/v1
  kind: ObjectsUser
  metadata:
    name: backup-user
    namespace: default
  spec:
    secretRef: backup-user-credentials
  status:
    userID: awefiu0204
And there are existing S3 buckets ["bucket1", "bucket2"] belonging to `awefiu0204`
Then the resource is updated to:
  apiVersion: cloudscale.s3.appcat.io/v1
  kind: ObjectsUser
  metadata:
    name: backup-user
    namespace: default
  spec:
    secretRef: backup-user-credentials
  status:
    userID: awefiu0204
    conditions:
      - type: Failed
        reason: DeletionPending
        message: "Existing buckets that prevent removal of user: [bucket1, bucket2]"
        status: true
      - type: Ready
        reason: Available
        message: Resource is being deprovisioned
        status: false

Implementation Ideas

No response
