kubernetes-operator's Introduction

Doppler Kubernetes Operator

Automatically sync secrets from Doppler to Kubernetes and auto-reload deployments when secrets change.

Doppler Kubernetes Operator Diagram

Overview

  • The Doppler Kubernetes Operator is a controller which runs inside a deployment on your Kubernetes cluster
  • It manages custom resources called DopplerSecrets, each of which contains a reference to a Kubernetes secret containing your Doppler Service Token and a reference to the Kubernetes secret where Doppler secrets should be synced
  • The operator continuously monitors the Doppler API for changes to your Doppler config and updates the managed Kubernetes secret automatically
  • If the secrets have changed, the operator can also reload deployments using the Kubernetes secret. See below for details on configuring auto-reload.

Step 0: Enable Kubernetes Secret Encryption at Rest

The Doppler Kubernetes Operator uses Kubernetes Secrets to store sensitive data.

Kubernetes Secrets are, by default, stored as unencrypted base64-encoded strings: anyone with API access, or with access to Kubernetes' underlying data store (etcd), can retrieve them in plain text. Kubernetes therefore recommends enabling encryption at rest to secure this data.
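This is easy to verify locally; the secret value below is a placeholder:

```shell
# A Secret value as Kubernetes stores it: base64-encoded, not encrypted.
echo -n 'supersecret' | base64
# c3VwZXJzZWNyZXQ=

# Anyone who can read the Secret can decode it back to plain text.
echo 'c3VwZXJzZWNyZXQ=' | base64 --decode
# supersecret
```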

Step 1: Deploy the Operator

Using Helm

You can install the latest Helm chart with:

helm repo add doppler https://helm.doppler.com
helm install --generate-name doppler/doppler-kubernetes-operator

Updates can be performed with helm upgrade.

One caveat is that Helm cannot update custom resource definitions (CRDs). To simplify this, Doppler guarantees that CRDs will remain backwards compatible. CRDs can be updated directly from the Helm chart manifest with:

helm repo update
helm pull doppler/doppler-kubernetes-operator --untar
kubectl apply -f doppler-kubernetes-operator/crds/all.yaml

Using kubectl

You can also deploy the operator by applying the latest installation YAML directly:

kubectl apply -f https://github.com/DopplerHQ/kubernetes-operator/releases/latest/download/recommended.yaml

Regardless of the installation method, this will use your locally-configured kubectl to:

  • Create a doppler-operator-system namespace
  • Create the resource definition for a DopplerSecret
  • Setup a service account and RBAC role for the operator
  • Create a deployment for the operator inside of the cluster

You can verify that the operator is running successfully in your cluster with ./tools/operator-logs.sh. This waits for the deployment to roll out and then tails the log. You can leave this command running to keep monitoring the logs or quit safely with Ctrl-C.

Step 2: Create a DopplerSecret

A DopplerSecret is a custom Kubernetes resource with references to two secrets:

  • A Kubernetes secret where your Doppler Service Token is stored (AKA "Doppler Token Secret"). This token will be used to fetch secrets from your Doppler config. The operator will be looking for the token in the serviceToken field of this secret.
  • A Kubernetes secret where your synced Doppler secrets will be stored (AKA "Managed Secret"). This secret will be created by the operator if it does not already exist.

Note: While these resources can be created in any namespace, it is recommended that you create your Doppler Token Secret and DopplerSecret inside the doppler-operator-system namespace to prevent unauthorized access. The managed secret should be namespaced with the deployments which will use the secret.

Generate a Doppler Service Token and use it in this command to create your Doppler token secret:

kubectl create secret generic doppler-token-secret -n doppler-operator-system --from-literal=serviceToken=dp.st.dev.XXXX

If you have the Doppler CLI installed, you can generate a Doppler Service Token from the CLI and create the Doppler token secret in one step:

kubectl create secret generic doppler-token-secret -n doppler-operator-system --from-literal=serviceToken=$(doppler configs tokens create doppler-kubernetes-operator --plain)

Next, we'll create a DopplerSecret that references your Doppler token secret and defines the location of the managed secret.

apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: dopplersecret-test # DopplerSecret Name
  namespace: doppler-operator-system
spec:
  tokenSecret: # Kubernetes service token secret (namespace defaults to doppler-operator-system)
    name: doppler-token-secret
  managedSecret: # Kubernetes managed secret (will be created if does not exist)
    name: doppler-test-secret
    namespace: default # Should match the namespace of deployments that will use the secret

If you're following along with these example names, you can apply this sample directly:

kubectl apply -f config/samples/secrets_v1alpha1_dopplersecret.yaml

Check that the associated Kubernetes secret has been created:

# List all Kubernetes secrets created by the Doppler operator
kubectl describe secrets --selector=secrets.doppler.com/subtype=dopplerSecret

The operator continuously watches for secret updates from Doppler and, when a change is detected, automatically updates the associated Kubernetes secret.

Next, we'll cover how to configure a deployment to use the Kubernetes secret and enable auto-reloading for Deployments.

Step 3: Configuring a Deployment

Using the Secret in a Deployment

To use the secret created by the operator, we can use the managed secret in one of three ways. These methods are also covered in greater detail in the Kubernetes Secrets documentation.

envFrom

The envFrom field will populate a container's environment variables using the secret's Key-Value pairs:

envFrom:
  - secretRef:
      name: doppler-test-secret # Kubernetes secret name

valueFrom

The valueFrom field will inject a specific environment variable from the Kubernetes secret:

env:
  - name: MY_APP_SECRET # The name of the environment variable exposed in the container
    valueFrom:
      secretKeyRef:
        name: doppler-test-secret # Kubernetes secret name
        key: MY_APP_SECRET # The name of the key in the Kubernetes secret

volume

The volume field will create a volume that is populated with files containing the Kubernetes secret:

volumes:
  - name: secret-volume
    secret:
      secretName: doppler-test-secret # Kubernetes secret name

Your deployment can use this volume by mounting it to the container's filesystem:

volumeMounts:
  - name: secret-volume
    mountPath: /etc/secrets
    readOnly: true

Automatic Redeployments

In order for the operator to reload a deployment, three things must be true:

  • The deployment is in the same namespace as the managed secret
  • The deployment has the secrets.doppler.com/reload annotation set to 'true' (string)
  • The deployment uses the managed secret

Here's an example of the reload annotation:

annotations:
  secrets.doppler.com/reload: 'true'
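Putting the three conditions together, a minimal Deployment satisfying all of them could look like this (the name, labels, and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # placeholder
  namespace: default        # must match the managed secret's namespace
  annotations:
    secrets.doppler.com/reload: 'true'  # opt in to auto-reload
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest  # placeholder
          envFrom:
            - secretRef:
                name: doppler-test-secret  # the managed secret
```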

The Doppler Kubernetes operator reloads deployments by updating an annotation with the name secrets.doppler.com/secretsupdate.<KUBERNETES_SECRET_NAME>. When this update is made, Kubernetes will automatically redeploy your pods according to the deployment's configured strategy.

Full Examples

Complete examples of these different deployment configurations can be found below:

If you've named your managed Kubernetes secret doppler-test-secret in the previous step, you can apply any of these examples directly:

kubectl apply -f config/samples/deployment-envfrom.yaml
kubectl rollout status -w deployment/doppler-test-deployment-envfrom

Once the Deployment has completed, you can view the logs of the test container:

kubectl logs -lapp=doppler-test --tail=-1

Setup is complete! To test the sync behavior, modify a secret in the Doppler dashboard and wait 60 seconds. Run the logs command again (or use the watch command) to see the pods automatically restart with the new secret data.

Name Transformers

Name Transformers enable secret names to be transformed from Doppler's UPPER_SNAKE_CASE format into any of the following environment variable compatible formats:

Type         Default            Transform
camel        API_KEY            apiKey
upper-camel  API_KEY            ApiKey
lower-snake  API_KEY            api_key
tf-var       API_KEY            TF_VAR_api_key
dotnet-env   SMTP__USER_NAME    Smtp__UserName
lower-kebab  API_KEY            api-key

Simply add the nameTransformer field with any of the above types:

apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: dopplersecret-test
  namespace: doppler-operator-system
spec:
  tokenSecret:
    name: doppler-token-secret
  managedSecret:
    name: doppler-test-secret
    namespace: default
  nameTransformer: dotnet-env

The nameTransformer values are also validated prior to admission to prevent transformation failures.

Download Formats

Instead of the standard Key / Value pairs, you can download secrets as a single file in the following formats:

  • json
  • dotnet-json
  • env
  • env-no-quotes
  • yaml

When format is specified, a single DOPPLER_SECRETS_FILE key is set in the created secret with the string contents of the downloaded file.

Simply add the format field:

apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: dotnet-webapp-appsettings
  namespace: doppler-operator-system
spec:
  tokenSecret:
    name: doppler-token-dotnet-webapp
    namespace: doppler-operator-system
  managedSecret:
    name: dotnet-webapp-appsettings
    namespace: default
  format: dotnet-json

You can then configure your deployment spec to mount the file at the desired path:

...
    spec:
      containers:
        - name: dotnet-webapp
          volumeMounts:
            - name: doppler
              mountPath: /usr/src/app/secrets 
              readOnly: true
      volumes:
        - name: doppler
          secret:
            secretName: dotnet-webapp-appsettings  # Managed secret name
            optional: false
            items:
              - key: DOPPLER_SECRETS_FILE # Hard-coded by Operator when format specified
                path: appsettings.json # Name or path to file name appended to container mountPath
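With this configuration, the file appears inside the container at /usr/src/app/secrets/appsettings.json. For reference, the managed secret created by the operator would be shaped roughly like this (the contents are illustrative, and a real secret stores base64-encoded data rather than stringData):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: dotnet-webapp-appsettings
  namespace: default
type: Opaque
stringData:
  DOPPLER_SECRETS_FILE: |    # single key holding the entire downloaded file
    {
      "Smtp": {
        "UserName": "example-user"
      }
    }
```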

Specifying Secret Subsets to Sync

You can have the operator only sync a subset of secrets in a Doppler config. To do this, specify them in the secrets spec property:

apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: dopplersecret-test
  namespace: doppler-operator-system
spec:
  tokenSecret:
    name: doppler-token-secret
  secrets:
    - HOSTNAME
    - PORT
  managedSecret:
    name: doppler-test-secret
    namespace: default

If this property is omitted, all secrets are synced.

Kubernetes Secret Types and Value Encoding

By default, the operator syncs secret values as they are in Doppler to an Opaque Kubernetes secret as Key / Value pairs.

In some cases, the secret name or value stored in Doppler is not the format required for your Kubernetes deployment. For example, you might have Base64-encoded TLS data that you want to copy to a native Kubernetes TLS secret (kubernetes.io/tls).

You can use custom types and processors to achieve this.
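As a sketch of what such a DopplerSecret could look like (the processors entries below are assumptions based on the feature described above; consult the operator documentation for the exact schema):

```yaml
apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: tls-example               # hypothetical name
  namespace: doppler-operator-system
spec:
  tokenSecret:
    name: doppler-token-secret
  managedSecret:
    name: tls-secret              # hypothetical managed secret
    namespace: default
    type: kubernetes.io/tls       # native Kubernetes TLS secret type
  processors:                     # assumed field name
    TLS_CRT:                      # Doppler secret holding Base64-encoded cert data
      type: base64                # assumed processor: decode the value before syncing
      asName: tls.crt             # assumed: rename to the key the TLS type requires
    TLS_KEY:
      type: base64
      asName: tls.key
```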

Failure Strategy and Troubleshooting

Inspecting Status

If the operator fails to fetch secrets from the Doppler API (e.g. a connection problem or invalid service token), no changes are made to the managed Kubernetes secret or your deployments. The operator will continue to attempt to reconnect to the Doppler API indefinitely.

The DopplerSecret uses status.conditions to report its current state and any errors that may have occurred.

In this example, our Doppler service token has been revoked and the operator is reporting an error condition:

$ kubectl describe dopplersecrets -n doppler-operator-system
Name:         dopplersecret-test
Namespace:    doppler-operator-system
Labels:       <none>
Annotations:  <none>
API Version:  secrets.doppler.com/v1alpha1
Kind:         DopplerSecret
Metadata:
  ...
Spec:
  ...
Status:
  Conditions:
    Last Transition Time:  2021-06-02T15:46:57Z
    Message:               Secret update failed: Doppler Error: Invalid Service token
    Reason:                Error
    Status:                False
    Type:                  secrets.doppler.com/SecretSyncReady
    Last Transition Time:  2021-06-02T15:46:57Z
    Message:               Deployment reload has been stopped due to secrets sync failure
    Reason:                Stopped
    Status:                False
    Type:                  secrets.doppler.com/DeploymentReloadReady
Events:                    <none>

You can safely modify your token Kubernetes secret or DopplerSecret at any time. For example, to rotate your Doppler service token, update the token secret directly; the change takes effect immediately.

The DopplerSecret resource manages the managed Kubernetes secret but does not officially own it. Therefore, deleting a DopplerSecret will not automatically delete the managed secret.

Included Tools

Uninstalling

To uninstall the operator, first delete any DopplerSecret resources and any referenced Kubernetes secrets that are no longer needed.

kubectl delete dopplersecrets --all --all-namespaces
kubectl delete secret doppler-token-secret -n doppler-operator-system

If you installed the operator with Helm, you can use helm uninstall to remove the installation resources. Otherwise, run the following command:

kubectl delete -f https://github.com/DopplerHQ/kubernetes-operator/releases/latest/download/recommended.yaml

Development

This project uses the Operator SDK.

When developing locally, you can run the operator using:

make install run

See the Operator SDK Go Tutorial for more information.

Release

This project is released with GitHub Actions. Creating a GitHub Release starts an action which builds the operator image and publishes it to DockerHub. Tag names should match the pattern vX.X.X.

kubernetes-operator's People

Contributors

nmanoogian, ryan-blunden, sgudbrandsson, watsonian

kubernetes-operator's Issues

Set Loglevel

Dear Team,

How can I set the log level of the operator so my Kibana is not spammed with info logs that do nothing for me?

Thanks in advance.

To put this into perspective, here are the last 24h for an active Doppler system:


Manage CRDs via Helm

Request

Consider supporting management of CRDs via Helm.

Reasoning

We manage our charts, including doppler-kubernetes-operator, using ArgoCD, and we use Renovatebot to keep dependencies up to date.

Seeing in a Renovatebot PR that there is a new property in the DopplerSecret made me think that I might have to update my CRDs. But this might not always be the case, and I do not want to do this manually.

How to achieve

Helm cannot upgrade custom resource definitions in the <chart>/crds folder by design.

However, by moving the CRDs inside the <chart>/templates directory, it is possible to keep them in-sync, together with the rest of the templates when there's a new version.

See for example how ArgoCD project is doing the same: https://github.com/argoproj/argo-helm/tree/main/charts/argo-cd#custom-resource-definitions

[Kubernetes] imagePullSecrets: unable to deploy

Thanks again for a nice solution to our problems ;)

When deploying custom images to Kubernetes from a private image repository, an access configuration is required.
Link to Docs

This secret is defined by a key called ".dockerconfigjson". You can probably see where I'm going with this:

There is currently no way to deploy it with the kubernetes operator. https://docs.doppler.com/docs/kubernetes-operator

We would really appreciate a solution to this, as the only secrets "not dopplered" are our image pull secrets =)
The best solution is probably a Name Transformer.

Thank you!

Strange double-deployment when doing helm deploy

Hi,

We've been facing some very strange issues in production lately in GKE Autopilot.
When we deploy a new revision of our application, the helm deploy action runs, and then, for some reason, Doppler triggers another deployment very soon after the helm deployment starts.
We are using HPA to automatically scale to N pods and expect helm to just update the deployment, leaving the pod count as is.

When Doppler triggers a deployment immediately after the helm deployment starts, the workload scales down to 1 pod.


We deploy multiple times per day, so this is causing a major headache in production.

We're using Doppler 1.2.5 currently, and I don't know if upgrading will do anything.
Have you seen this behavior?
What do you recommend?

Configure resources for all containers

It would be great if this chart either directly configured resources for the rbac-proxy container or allowed consumers of the chart to manually specify resources. Additionally, it would be great if both pods had requests and limits for ephemeral-storage, which is a schedulable resource as of Kubernetes 1.25. I'm finding that my nodes often get killed because the rbac-proxy pod uses more storage than it requests (with a request of 0); this causes the node as a whole to run out of storage, and all pods on the node suddenly get evicted.

The operator should allow arbitrary string->string mappings for secrets

Problem: if the desired secret key cannot be produced via any existing nameTransformer, then a DopplerSecret cannot be used to sync the secrets to Kubernetes. Example: the secret key tls.cert cannot be produced by any nameTransformer from a Doppler secret name.

It should be possible to provide a string-string mapping of <doppler_upper_snake_case_name> to <arbitrary_string> so that any secret can be populated.

Example of what I'd suggest:

apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: dopplersecret-test
  namespace: doppler-operator-system
spec:
  tokenSecret:
    name: doppler-token-secret
  # doppler-side secret names cannot contain ":" so it could be used to segment the list entries:
  secrets:
    # directly map "VAR1" from doppler to key "something.totally.different" in the kubernetes secret object
    - 'VAR1:something.totally.different'
  managedSecret:
    name: doppler-test-secret
    namespace: default

Getting started from Readme not working

Hi,
I try to follow the getting started instructions from this readme, but running into issues at step 2

I'm using the option with doppler CLI:
kubectl create secret generic doppler-token-secret -n doppler-operator-system --from-literal=serviceToken=$(doppler configs tokens create doppler-kubernetes-operator --plain)

This results in the following error: Doppler Error: You must specify a project

When I add a project (and, after the second error message, the "config"), I receive the following error message:

Unable to create service token Doppler Error: Please provide a valid config. error: failed to create secret secrets "doppler-token-secret" already exists

Can anyone help here please?

Request for change of behavior introduced in 1.2.0 which breaks prior use cases

In c55ad5e, a change was introduced which only allows DopplerSecret objects in the operator namespace. This breaks self-service within multi-tenancy.

Instead, I would like to propose a change that takes into account existing RBAC in the cluster while providing the original flexibility.

  1. Add a ValidatingWebhookConfiguration that registers a webhook for all DopplerSecret objects.
  2. Reject any DopplerSecret objects that attempt to overwrite any existing managedSecrets
  3. Reject any Doppler Secret objects that violate the user's RBAC. Combining SelfSubjectAccessReview and impersonation with the userInfo data will allow the operator to determine authorization of the action at time of creation of the DopplerSecret

For those looking for a solution to maintain the previous behavior

You should be able to specify the previous chart version of 1.1.1 with helm. That is our temporary solution.

allow custom namespace

Currently helm install doppler-kubernetes-operator doppler/doppler-kubernetes-operator --namespace doppler-operator-system --create-namespace will fail

Reconcile algorithm overuses the Doppler API

We have been swapping out an in-house k8s reloading/secret handling solution to this official operator solution. Previously we used Doppler Webhooks for reloading, and an init script at container startup to load the current secrets.

What we have noticed since migrating is that we get lots of these errors in the pod logs:

Doppler Error: Exceeded rate limit of 240 secret read requests within 60 seconds. Retry in 1 seconds. Upgrade to the Enterprise plan to increase your limit

We have many DopplerSecret custom resources (but many of them reference the same Doppler Config actually). Despite there being many, they rarely ever change (on the frequency level of 1-2 changes per week), so we should not exceed the API rate limits.

It doesn't make any sense to me that the operator needs to HTTP-GET all secrets every N seconds from the Doppler API. It should be able to use a functionality similar to the Webhooks to do push-based reconciliation instead of polling Doppler. Doing so would drastically reduce the API load on Doppler.

I would propose one of two solutions:

  1. Stick to polling, rely on ETag and If-None-Match headers to decrease load on the Doppler API (it looks like ETag is already implemented), but increase the API rate limit specifically for HTTP 304 responses since they should be less expensive for Doppler than HTTP 200 responses.
  2. Use a functionality similar to the Webhooks to do push-based reconciliation instead of polling. For example, I would be okay with exposing the operator via an ingress so it could receive notifications from Doppler. Then the polling-based solution could be kept as a fallback with its frequency significantly decreased.

Random failure publishing new secrets on changes

We've successfully deployed the Doppler Operator to our environments a little over 2 months ago.

We're noticing, however, that while everything works as it should post-deployment (we can change a secret in our config, run kubectl describe secrets --selector=secrets.doppler.com/subtype=dopplerSecret, and see the changes almost immediately), after a while (days) the operator doesn't seem to pick up new secrets anymore.

In order to fix this, we have to delete the CRD for that environment and recreate it.

We dug through the logs, but didn't notice anything out of the ordinary.

Happy to provide more info/details/logs that would help in diagnosing this. We are on GKE, k8s 1.23.

"Cannot change existing managed secret type from Opaque to ." after upgrading to 1.4.0

Hi, doppler-operator is not updating secrets anymore and has been logging the error Cannot change existing managed secret type from Opaque to . since upgrading to 1.4.0 (with Helm).

This probably has something to do with the new support for Kubernetes secret types in this version but I'm not sure exactly how it's causing the issue with our existing secrets?

GCP GKE INFO logs are showing ERROR

I followed the README to install the operator using kubectl and it seems to be running normally, but every minute I get a chunk of log entries whose messages say INFO but which are reported with severity ERROR.

Is there something I need to change to correct the output?


...
logName: "projects/project_id/logs/stderr",
severity: "ERROR"
textPayload: "2023-03-10T14:26:47.498Z	INFO	..."
...

cannot find Service Account

I have this problem: I haven't touched anything Doppler-related in the last 20 days. It just stopped updating, and I found this in the logs:
Cannot find Service Account in pod to build in-cluster rest config: open /var/run/secrets/kubernetes.io/serviceaccount/token: permission denied
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc0000d4001, 0xc000172000, 0xbb, 0x10f)
/home/travis/gopath/pkg/mod/k8s.io/klog/[email protected]/klog.go:996 +0xb8
k8s.io/klog/v2.(*loggingT).output(0x251bc80, 0xc000000003, 0x0, 0x0, 0xc0001de150, 0x2472f85, 0x7, 0x18e, 0x0)
/home/travis/gopath/pkg/mod/k8s.io/klog/[email protected]/klog.go:945 +0x19d
k8s.io/klog/v2.(*loggingT).printf(0x251bc80, 0x3, 0x0, 0x0, 0x17bca5f, 0x46, 0xc00059d990, 0x1, 0x1)
/home/travis/gopath/pkg/mod/k8s.io/klog/[email protected]/klog.go:733 +0x17a
k8s.io/klog/v2.Fatalf(...)
/home/travis/gopath/pkg/mod/k8s.io/klog/[email protected]/klog.go:1463
main.initKubeConfig(0x0, 0x0, 0x4)
/home/travis/gopath/src/github.com/brancz/kube-rbac-proxy/main.go:398 +0x18f
main.main()
/home/travis/gopath/src/github.com/brancz/kube-rbac-proxy/main.go:151 +0xd5f

goroutine 18 [syscall]:
os/signal.signal_recv(0x0)
/home/travis/.gimme/versions/go1.13.15.linux.amd64/src/runtime/sigqueue.go:147 +0x9c
os/signal.loop()
/home/travis/.gimme/versions/go1.13.15.linux.amd64/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.init.0
/home/travis/.gimme/versions/go1.13.15.linux.amd64/src/os/signal/signal_unix.go:29 +0x41

goroutine 19 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x251bc80)
/home/travis/gopath/pkg/mod/k8s.io/klog/[email protected]/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
/home/travis/gopath/pkg/mod/k8s.io/klog/[email protected]/klog.go:416 +0xd6

Can't create managed secret for a project's root config

Logs say:
ERROR controllers.DopplerSecret Unable to update dopplersecret {"dopplersecret": "namespace/dopplersecret-root", "error": "Cannot change existing managed secret type from Opaque to . Delete the managed secret and re-apply the DopplerSecret."}

DopplerSecret manifest:
apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/resource-policy: keep
    meta.helm.sh/release-name: namespace
  labels:
    app.kubernetes.io/name: app
  name: dopplersecret-root
  namespace: namespace
spec:
  config: root
  managedSecret:
    name: dopplersecrets-root
    namespace: namespace
  project: project
  tokenSecret:
    name: dopplertoken-root

Status of DopplerSecret object:
status:
  conditions:
    - lastTransitionTime: "2024-02-27T21:24:09Z"
      message: 'Secret update failed: Cannot change existing managed secret type from
        Opaque to . Delete the managed secret and re-apply the DopplerSecret.'
      reason: Error
      status: "False"
      type: secrets.doppler.com/SecretSyncReady
    - lastTransitionTime: "2024-02-27T21:24:09Z"
      message: Deployment reload has been stopped due to secrets sync failure
      reason: Stopped
      status: "False"
      type: secrets.doppler.com/DeploymentReloadReady

I am not sure why it states that the managed secret exists, as it is the DopplerSecret itself that creates it and then complains about an incorrect secret type (which is not being expanded correctly, since the message says 'from Opaque to .').
I tried recreating the DopplerSecret multiple times, but it did not help.

Pod/Deployment doesn't restart although recognized by the operator

Versions

Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.17-gke.1901", GitCommit:"b5bc948aea9982cd8b1e89df8d50e30ffabdd368", GitTreeState:"clean", BuildDate:"2021-05-27T19:56:12Z", GoVersion:"go1.13.15b4", Compiler:"gc", Platform:"linux/amd64"}
Operator: v0.1.0

Problem

Pod does not restart even though the secret was updated

Expected result

The pod should restart as soon as the secret is updated; instead, only the secret gets updated.

Logs

│ 2021-07-15T19:28:44.164Z    INFO    controllers.DopplerSecret    [/] Secrets have been modified    {"dopplersecret": "external-secrets/dopplersecret-test", "verifyTLS": true, "host": "https://api.doppler.com", "oldVersion": "W/\"70d6dcadc0177a11c86e856195e8be2c1078975aaa2fb7ab37ae1db4b5aa03ec\"", "newVersion": "W/\"f37c20815bb0f7c177425f50e14e8051588f0c011e5 │
│ 2021-07-15T19:28:44.170Z    INFO    controllers.DopplerSecret    [/] Successfully updated existing Kubernetes secret                                                                                                                                                                                                                                                     │
│ 2021-07-15T19:28:44.178Z    INFO    controllers.DopplerSecret    Finished reconciling deployments    {"dopplersecret": "external-secrets/dopplersecret-test", "numDeployments": 1}

Configs

apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: dopplersecret-test # DopplerSecret Name
  namespace: external-secrets
spec:
  tokenSecret: # Kubernetes service token secret (namespace defaults to doppler-operator-system)
    name: doppler-token-secret
    namespace: doppler-operator-system
  managedSecret: # Kubernetes managed secret (will be created if does not exist)
    name: doppler-test-secret
    namespace: external-secrets # Should match the namespace of deployments that will use the secret
---
apiVersion: v1
kind: Pod
metadata:
  name: doppler-busybox
  namespace: external-secrets
  annotations:
    secrets.doppler.com/reload: 'true'
spec:
  containers:
  - name: busybox
    image: busybox:glibc
    command:
      - sleep
      - "3600"
    envFrom:
      - secretRef:
          name: doppler-test-secret

How to configure a "master token secret"

Hey guys,
First of all thank you for the operator, we use it constantly.

Now there has come a point where we are spending more time deploying "token secrets" into the cluster than anything else, which is especially true if you do feature deployments with Kubernetes.

Is there a simple way to deploy a "token secret" with multiple DOPPLER tokens inside and use that to create other secrets?

Use case:

  1. Have a project that contains all the Doppler tokens for the different configurations you deploy in Kubernetes. At this point I'm deploying around 10 token secrets and it's getting cumbersome to deploy them by hand.
  2. Deploy these tokens into the cluster using Doppler.
  3. Use the deployed secret of that operation to feed other Doppler configurations inside the cluster.

Basically I'm looking for this:

apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: dopplersecret-backups
  namespace: doppler-operator-system
spec:
  host: https://api.doppler.com
  managedSecret:
    name: dopplersecret-staging-postgres
    namespace: postgres-staging
    type: Opaque
  resyncSeconds: 60
  tokenSecret:
    name: doppler-cluster-tokens
    **KEY**: token-backups
  verifyTLS: true

But maybe there is a better way which I simply can't see?

thanks

Helm Chart dependency `kube-rbac-proxy` deprecation warning

The Helm chart's kube-rbac-proxy container outputs these logs during start-up.

==== Deprecation Warning ======================
Insecure listen address will be removed.
Using --insecure-listen-address won't be possible!
The ability to run kube-rbac-proxy without TLS certificates will be removed.
Not using --tls-cert-file and --tls-private-key-file won't be possible!
For more information, please go to https://github.com/brancz/kube-rbac-proxy/issues/187
===============================================

v1.5.0 recommended.yaml

      containers:
      - args:
        - --secure-listen-address=0.0.0.0:8443
        - --upstream=http://127.0.0.1:8080/
        - --logtostderr=true
        - --v=10
        image: gcr.io/kubebuilder/kube-rbac-proxy:v0.14.1
        name: kube-rbac-proxy
        ports:
        - containerPort: 8443
          name: https

Snippet from the Github Issue brancz/kube-rbac-proxy#187

What

We are removing the option to run kube-rbac-proxy without configured TLS certificates.
This means that:

using --insecure-listen-address won't work any more.
not setting --tls-cert-file and --tls-private-key-file won't work any more.

Upstream H2C should still work, but we might remove verified claims about an identity that are sent upstream in the future.

Why

We are aware that we create obstacles to running kube-rbac-proxy for testing or debugging purposes.
But we reduce the probability of an insecure setup of kube-rbac-proxy, which is a security-relevant component.

Running kube-rbac-proxy without TLS certificates makes it possible to impersonate kube-rbac-proxy.

The reason we are removing this capability is that it is a pre-acceptance requirement before the project can be donated to sig-auth of Kubernetes.
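To address the warning ahead of the removal, the chart's kube-rbac-proxy container would need a mounted certificate plus the two TLS flags named in the warning. A hedged sketch based on the v1.5.0 snippet above (the `kube-rbac-proxy-tls` secret name and `/etc/tls` mount path are assumptions; the certificate would have to be provisioned separately, e.g. via cert-manager):

```yaml
      containers:
      - args:
        - --secure-listen-address=0.0.0.0:8443
        - --upstream=http://127.0.0.1:8080/
        - --tls-cert-file=/etc/tls/tls.crt        # serves TLS instead of the deprecated insecure mode
        - --tls-private-key-file=/etc/tls/tls.key
        - --logtostderr=true
        image: gcr.io/kubebuilder/kube-rbac-proxy:v0.14.1
        name: kube-rbac-proxy
        ports:
        - containerPort: 8443
          name: https
        volumeMounts:
        - name: tls
          mountPath: /etc/tls
          readOnly: true
      volumes:
      - name: tls
        secret:
          secretName: kube-rbac-proxy-tls  # assumed secret; must exist before the pod starts
```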

Support for ARM64-based CPUs

Hey Doppler Team!

Our team is planning to explore using this Kubernetes operator on ARM64-based Devices (AWS Graviton2 + Raspberry Pi). Is it possible to provide support for this architecture?

Looking into your GitHub Actions script, it looks like this can be done with a one-liner. Within this job, you can add platforms: linux/amd64,linux/arm64 after line 32 to easily provide support. You can refer to this document for more information.

The only caveat with this approach is that builds will take longer due to ARM64 emulation. From my testing, it took roughly 40 minutes to build the Docker container on a self-hosted GitHub Actions runner.

Hoping for your team's support for this though!
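For reference, a multi-arch build along the lines suggested above would look roughly like this with the standard Docker actions (the step names, context, and image tag are assumptions; only the platforms line is the actual proposal):

```yaml
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2   # enables ARM64 emulation on the amd64 runner
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          platforms: linux/amd64,linux/arm64  # the suggested one-liner
          tags: dopplerhq/kubernetes-operator:latest  # assumed tag
```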

Feature request: Service Account support

As I understand it, Doppler currently works by having a service token give access to a single branch config; tokens and branch config locations are thus tightly coupled, without any need for the user to specify where the branch config is located. That is how the operator knows where to fetch the secrets.

Service accounts, however, can be used to fetch secrets from many configs. I suspect we would need to specify, through the DopplerSecret, where to fetch the config/secrets from, but this would require changes. Is this assumption correct?

Is there an option to automatically create a new config if it's not found?

Issue Description:
I'm using Doppler for production, staging, and local development. To streamline the onboarding process for new developers, it would be beneficial if the Kubernetes operator could automatically generate new configurations if they are not already present. Currently, this requires an additional step, either using the CLI or the interface.

Suggested Approach:
I'm currently using the following doppler.yaml configuration:

apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: "{{ .Values.name }}-api-doppler"
  namespace: default
spec:
  tokenSecret:
    name: "doppler-api-token-secret"
  project: api
  config:
    name: "{{ .Values.dopplerEnv }}"
    auto: true # Automatically creates the config if not found.
  managedSecret:
    name: "{{ .Values.name }}-api-secrets"
    namespace: default

Question:
Is there an alternative approach using Kubernetes YAML files to achieve this desired behavior, without relying on the CLI or interface?

Feature: Support kubernetes.io/tls instead of only Opaque

As I understand from the Kubernetes documentation, the only difference with kubernetes.io/tls is that it enforces DER standards and that the key/cert are present. So consider this a nice-to-have feature request.

Current code only supports Opaque Kubernetes secrets.


Having the operator create kubernetes.io/tls secrets when a certificate is present would be nice!
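For context, a kubernetes.io/tls secret differs from Opaque only in its type and in requiring the tls.crt and tls.key keys, so the synced Doppler values would need to map onto those key names. A sketch of the desired output (the secret name is an assumption and the base64 data is elided):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-tls-secret  # assumed name
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTi...  # base64-encoded certificate (elided)
  tls.key: LS0tLS1CRUdJTi...  # base64-encoded private key (elided)
```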

Forcing DopplerSecret objects to be created in operator namespace breaks namespace isolation

Edit: This is partially my bad. I had a stale sealed secret obfuscating the real issue (see #45 (comment))

Original message (not overly relevant):

The following simple DopplerSecret definition leads to a Kubernetes secret being created, but the keys in that secret are in snake_case rather than the documented default SCREAMING_SNAKE_CASE.

apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: oauth2-secrets
spec:
  tokenSecret:
    name: doppler-service-token
  managedSecret:
    name: oauth2-secrets

I tried adding a nameTransformer with a value of upper-snake (which is not a documented supported value), but that leads to an error saying it's not supported, listing the supported values: "upper-camel", "camel", "lower-snake", "tf-var", "dotnet-env", "lower-kebab". None of these will get the keys back to the format they need to be in.

Unfortunately, this means that we can't make use of envFrom as our applications all expect env vars to be in SCREAMING_SNAKE_CASE. The workaround is to specify every env var with a valueFrom or change the implementation to use non-idiomatic env var styles.

Using the same project locally (outside of Kubernetes) does not have this issue.
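The valueFrom workaround mentioned above looks like this (the container name, image, and the specific env var mapping are illustrative; the secret name matches the example above):

```yaml
    spec:
      containers:
      - name: app  # assumed container name
        image: example/app:latest  # assumed image
        env:
        - name: OAUTH2_CLIENT_ID          # the SCREAMING_SNAKE_CASE name the app expects
          valueFrom:
            secretKeyRef:
              name: oauth2-secrets
              key: oauth2_client_id       # the snake_case key actually present in the secret
```

This works but must be repeated for every variable, which is exactly what envFrom was meant to avoid.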

Allow DopplerSecret to be deployed to other namespaces

@nmanoogian I saw:

and would have preferred having the option to limit DopplerSecret to a specific namespace or even only reading tokens from the same namespace

It seems counterintuitive to Kubernetes namespacing. The DopplerSecret should be able to be deployed to, and reconciled in, other namespaces. It was the cross-namespace access that was problematic, since it allowed non-operators to enumerate or access secrets they did not have access to.

I have a Doppler token, I own it, and I am an application owner, yet I have to coordinate with the team that deploys the Doppler Operator just so that my namespace can have a secret. It seems we are artificially limiting who can manage DopplerSecrets.

External Secrets Operator would also allow me to do this, using a SecretStore and Secret in the same namespace, so I would suggest having the Doppler Operator mimic that ability.

Thank you for your time :)

Default namespace for TokenSecret not applied correctly

I tried following the sample configuration provided, and the only way it worked for me was by adding a namespace of doppler-operator-system to the tokenSecret in the DopplerSecret file.

secrets_v1alpha1_dopplersecret.yaml

apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: dopplersecret-test # DopplerSecret Name
  namespace: doppler-operator-system
spec:
  tokenSecret: # Kubernetes service token secret
    name: doppler-token-secret
    # HAD TO ADD THIS FOR IT TO WORK
    namespace: doppler-operator-system
  managedSecret: # Kubernetes managed secret (will be created if does not exist)
    name: doppler-test-secret
    namespace: default # Should match the namespace of deployments that will use the secret

After adding that namespace, the operator was able to find the token secret and generate the managed secret in the desired namespace.
