secrets-store-csi-driver-provider-aws's Issues

Helm 2.17.0

For internal reasons, we are still using the Helm 2.17.0 client together with Tiller. Do you have any instructions for deploying with Helm 2.17.0? Currently I am getting an error that the expected API version is 1 but version 2 was found.

Thanks

Question: I want each secret key to have its own environment variable value.

I have one AWS Secrets Manager secret. It contains five values: username, password, engine, host, and port. I want each of these to show up as its own environment variable when I connect to the inside of the pod (exec -it) using the env field. For example, after accessing the pod with the 'kubectl exec -it -- /bin/bash' command, I would like the environment to show USERNAME=admin, PASSWORD=;g{}., ENGINE=mysql, HOST=rds01.mysql, PORT=3306. But right now the whole secret is displayed as a single line in JSON format, such as USERNAME={"username": "admin", "password": ";g{}.", "engine": "mysqldb", "host": "rds01.mysql", "port": 3306}. Is there any way to solve this?
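
A hedged sketch of one way this is often handled, assuming the provider's jmesPath support and the driver's secretObjects sync are available in the installed versions (all names below are illustrative, not taken from the reporter's setup):

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: db-env-example                # illustrative name
spec:
  provider: aws
  secretObjects:                      # sync selected fields into a Kubernetes Secret
    - secretName: db-env              # illustrative Kubernetes Secret name
      type: Opaque
      data:
        - objectName: username        # refers to the jmesPath objectAlias below
          key: username
        - objectName: password
          key: password
  parameters:
    objects: |
      - objectName: "my-rds-secret"   # illustrative Secrets Manager secret name
        objectType: "secretsmanager"
        jmesPath:
          - path: "username"
            objectAlias: "username"
          - path: "password"
            objectAlias: "password"

The pod can then map each key of the synced Kubernetes Secret to an environment variable via env.valueFrom.secretKeyRef, as several of the issues below do.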

Question: Is hostNetwork required?

The csi-secrets-store-provider-aws DaemonSet YAML file is set up with hostNetwork access. This is obviously not ideal from a security perspective (and from a port-clash perspective).

Does this workload actually need hostNetwork?

Allow to split objectName to set envvar name

Currently, the only option is to use the full parameter name to set the variable name, optionally defining a separator when the parameter name contains '/'.

The suggestion is to allow splitting the parameter name and using the result to build the variable name, as sketched after the list below.

Eg:

  • basename - anything after the last '/' is the variable name (e.g. with path /service, parameter /service/app/param becomes PARAM)
  • relative - anything after the path is the variable name (e.g. with path /service, parameter /service/app/param becomes APP_PARAM)
  • absolute - the full path with parameter name is the variable name (e.g. with path /service, parameter /service/app/param becomes SERVICE_APP_PARAM)
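
A purely hypothetical sketch of what such an option could look like (the path and nameTransform parameters below do not exist in the provider; they only illustrate the proposal):

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: split-name-example             # illustrative name
spec:
  provider: aws
  parameters:
    path: "/service"                   # hypothetical: common prefix used by the relative mode
    objects: |
      - objectName: "/service/app/param"
        objectType: "ssmparameter"
        nameTransform: "relative"      # hypothetical option: basename | relative | absolute
        # "relative" would yield the variable name APP_PARAM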

provider-volume: default socket location under /etc is suboptimal

This driver uses a local Unix socket (aws.sock) which by default is located under /etc/kubernetes/secrets-store-csi-providers:

endpointDir = flag.String("provider-volume", "/etc/kubernetes/secrets-store-csi-providers", "Rendezvous directory for provider socket")

This default location is somewhat strange and suboptimal: the filesystem entry is neither a regular file (which could be backed up via the usual backup tools) nor persistent configuration (which is meant to persist across reboots).

This is particularly impactful on ostree-based systems, where the content of /etc is versioned and can be atomically rolled back (see docs).

As the socket is a local ephemeral entry in the filesystem, it could be moved to a better non-persistent hierarchy like /run.
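
For illustration, a hedged sketch of what relocating the socket could look like, assuming both the driver and this provider are pointed at the same rendezvous directory (the container name and paths are illustrative, not the project's current defaults):

# excerpt from a provider DaemonSet spec (illustrative only)
containers:
  - name: provider-aws-installer
    args:
      - --provider-volume=/run/secrets-store-csi-providers
    volumeMounts:
      - name: providervol
        mountPath: /run/secrets-store-csi-providers
volumes:
  - name: providervol
    hostPath:
      path: /run/secrets-store-csi-providers
      type: DirectoryOrCreate

The driver's own provider directory setting would need to point at the same path for the rendezvous to work.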

Discussion: Best practice to restrict permissions

What is best practice to manage the access of Pods to Secrets?
There are different approaches to restricting access.
I think that without a self-imposed policy on how access restriction is implemented, managing the permissions will quickly become unmanageable.
In my setting I have ~70 pods and ~10 secrets.
In the following, I discuss four approaches.

In AWS Secrets Manager you can store JSON in a Secret.
What I mean by Key in the following is an attribute of a JSON object stored in a Secret.

Approach 1

Every Pod has a Volume for managing its Secrets.
This Volume has a SecretProviderClass.
Every SecretProviderClass references many Secrets.
Every Secret contains exactly one Key.
Every Pod has its own ServiceAccount.
Every ServiceAccount has its own IAM Role.
For every Secret an IAM Role needs access to, it references an IAM Policy.
An IAM Policy allows access to exactly one Secret.
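
As an aside, a minimal sketch of how Approach 1 could look on the Kubernetes side, assuming one Key per Secret (all names are illustrative):

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: app-a-secrets                    # one SecretProviderClass per Pod/app
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "app-a/db-password"  # illustrative Secret holding exactly one Key
        objectType: "secretsmanager"
      - objectName: "app-a/api-token"
        objectType: "secretsmanager"

The matching IAM Role (bound to the Pod's ServiceAccount via IRSA) would attach one IAM Policy per referenced Secret.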

New Secrets can be added to a Pod by doing the following:

  1. Add the Secret to the Pod's SecretProviderClass
  2. Add an IAM Policy to the IAM Role that the Pod's ServiceAccount points to.

Pros

  • ✔️ Pods only have access to the Secrets they need (least privilege)
  • ✔️ Allows restricting Secret access at the AWS level
  • ✔️ Allows using different types of Secrets (i.e. credentials for an Amazon RDS database, Amazon DocumentDB database, Amazon Redshift cluster, another database, or Other)
  • ✔️ Allows rotation logic per Secret

Cons

  • ❌ SecretProviderClass specification may become cluttered with a lot of Secrets

Approach 2

Description

Every Pod has a Volume per Secret it needs access to.
Every SecretProviderClass references exactly one Secret.
Every Secret contains exactly one key.
Every Pod has its own ServiceAccount.
Every ServiceAccount has its own IAM Role.
For every secret an IAM Role needs access to, it references the respective IAM Policy.
An IAM Policy allows access to exactly one Secret.

Usage

New Secrets can be added to a Pod by doing the following:

  1. Add a new Volume to the Pod pointing to the SecretProviderClass that points to the Secret
  2. Add an IAM Policy to the IAM Role that the Pod's ServiceAccount points to.

Pros

  • ✔️ Pods only have access to the Secrets they need (least privilege)
  • ✔️ Allows restricting Secret access at the AWS level
  • ✔️ Allows using different types of Secrets
  • ✔️ Allows rotation logic per Secret

Cons

  • ❌ Pod specification may become cluttered with a lot of Volumes

Approach 3

Every Pod has one Volume that references the single SecretProviderClass.
This SecretProviderClass references the single Secret.
There is one Secret that contains all Keys.
All Pods use the same ServiceAccount.
There is only one ServiceAccount, one IAM Role, one IAM Policy.
This IAM Role references the IAM Policy.
This IAM Policy allows access to exactly one Secret.

New Secrets can be added to a Pod by doing the following:

  1. Add a new Key to the Secret

Pros

  • ✔️ Easy to manage

Cons

  • ❌ Insecure: Pods have access to all Secrets
  • ❌ Does not allow restricting Secret access at the AWS level
  • ❌ Does not allow using different types of Secrets
  • ❌ Does not allow rotation logic per Secret
  • ❌ Cannot use different Secret types

Approach 4

Allow all combinations of approaches 1-3 at once.

Pros

  • ✔️ Flexible

Cons

  • ❌ If all approaches are mixed, it is hard to trace permissions; this is error-prone.

My personal preference currently is Approach 1.
I would like to hear experiences from others.
Are there even better approaches?
Are there any pros/cons that I have missed?

Informational messages are logged as errors

Our logs show the below messages logged frequently and at error level:

I0322 06:00:50.813426       1 server.go:107] Servicing mount request for pod [...] in namespace default using service account [...] with region eu-west-1
I0322 06:00:51.010475       1 auth.go:123] Role ARN for [...] is arn:aws:iam::[...]

These are informational messages that should not be treated like errors.

`helm repo` command in README.md shows 404 error

I tried to install the Secrets Store CSI Driver using the helm repo command written in README.md, but I got a 404 error (see below).

$ helm repo add secrets-store-csi-driver https://raw.githubusercontent.com/kubernetes-sigs/secrets-store-csi-driver/master/charts
Error: looks like "https://raw.githubusercontent.com/kubernetes-sigs/secrets-store-csi-driver/master/charts" is not a valid chart repository or cannot be reached: failed to fetch https://raw.githubusercontent.com/kubernetes-sigs/secrets-store-csi-driver/master/charts/index.yaml : 404 Not Found

According to the official site of Secrets Store CSI Driver, the correct command is below:

$ helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts

Feature request: Cross account fetching of secrets without changing the pod IRSA role

I would like to fetch secrets from another account without changing the IRSA role of the pods. I would specify the role on the other account in the CRD like this:

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: <NAME>
spec:
  provider: aws
  role: <ROLE OF THE OTHER ACCOUNT>
  parameters:
    objects: |
        - objectName: "MySecret"
          objectType: "secretsmanager"

Or maybe per secret, like this:

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: <NAME>
spec:
  provider: aws
  parameters:
    objects: |
        - objectName: "MySecret"
          role: <ROLE OF THE OTHER ACCOUNT>
          objectType: "secretsmanager"

I can't change the IRSA role of the pods as that role is used to access other local resources in the same account.

Cannot mount secrets to use as ImagePullSecrets

If I define the secret type as "kubernetes.io/dockerconfigjson", the secret is not created in Kubernetes and it cannot be used as imagePullSecrets.

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: aws-secrets-env
spec:
  provider: aws
  secretObjects:
    - secretName: secretdocker
      type: kubernetes.io/dockerconfigjson
      data:
        - objectName: "docker_config"
          key: .dockerconfigjson
  parameters:
    objects: |
      - objectName: "/secret/docker_config"
        objectType: "ssmparameter"
        objectAlias: "docker_config"

and Pod definition

kind: Pod
apiVersion: v1
metadata:
  name: env-example
spec:
  serviceAccountName: sa-k8s-secret-poc
  imagePullSecrets:
    - name: secretdocker
  containers:
    - image: miki79/testimage:1
      name: testimage
      imagePullPolicy: Always
      volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
  volumes:
    - name: secrets-store-inline
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "aws-secrets-env"

I get the error Warning FailedToCreateSecret 3s (x10 over 5s) csi-secrets-store-controller timed out waiting for the condition

If I modify the secretObjects type to "Opaque" the secret is created correctly.

I'm not sure if I'm doing something wrong, or kubernetes.io/dockerconfigjson is not supported, as I can't find any information in the documentation.

Sync all key/value pairs from AWS secret to K8S secret

I am currently experimenting with using secretObjects to sync AWS Secrets into K8S secrets, and while I can use the jmesPath functionality to get a kubernetes secret that mirrors the AWS Secret, I have to list out every key in the secret manually. For a secret named MySecret with the below data:

{
  "FOO": "bar",
  "BIN": "baz"
}

I would need to create the following SecretProviderClass in order to fully mirror that into a Kubernetes secret:

    apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
    kind: SecretProviderClass
    metadata:
      name: secret-access-test
      namespace: default
    spec:
      provider: aws
      secretObjects:
      - data:
        - key: FOO
          objectName: FOOAlias
        - key: BIN
          objectName: BINAlias
        secretName: test-sync-secret
        type: Opaque
      parameters:
        objects: |
          - objectName: "MySecret"
            objectType: "secretsmanager"
            jmesPath:
            - path: "FOO"
              objectAlias: "FOOAlias"
            - path: "BIN"
              objectAlias: "BINAlias"

If I added a new value to the AWS Secret, I would then need to also update the secretObjects and parameters adding it there as well. It would be nice if the provider could take a key/value formatted secret and automatically sync all key/value pairs into the kubernetes secret.

Perhaps something like this:

    apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
    kind: SecretProviderClass
    metadata:
      name: secret-access-test
      namespace: default
    spec:
      provider: aws
      secretObjects:
      - data:
          objectName: MySecret
          syncAllKeys: true
        secretName: test-sync-secret
        type: Opaque
      parameters:
        objects: |
          - objectName: "MySecret"
            objectType: "secretsmanager"

Under secretObjects, for any given objectName, rather than specifying a key value, you could instead set syncAllKeys to true. Assuming that the object could be decoded as key/value pairs, each of those key/value pairs would be entered into the Kubernetes secret, leaving you with a secret such as this:

apiVersion: v1
data:
  BIN: YmF6
  FOO: YmFy
kind: Secret
metadata:
  name: test-sync-secret
  namespace: default
type: Opaque

Error with Mounting SecretproviderClass with AWS Provider

Hello,

I'm receiving an error when I launch a new deployment. These are the steps I'm using.

  1. Deploy SecretProviderClass: kubectl apply -f test-secrets-array.yaml
  2. Create a deploy using helm

When I go to deploy pods, I receive an error:

"Warning FailedMount 23s (x7 over 55s) kubelet, ip-10-6-19-185.ec2.internal MountVolume.SetUp failed for volume "secrets" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod test-service/deployment-test-data-services-8646cdc588-vbz2p, err: rpc error: code = Unknown desc = Failed to load SecretProviderClass: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal object into Go value of type []*provider.SecretDescriptor"

Here's how my manifest is set up for the SecretProviderClass:

kind: SecretProviderClass
metadata:
  name: eks-test-secrets
  namespace: test-services
spec:
  provider: aws
  parameters:
    objects: |
      array:
        - |
          objectName: "arn:aws:secretsmanager:us-east-1:174596742332:secret:test1-qlL3Np"
        - |
          objectName: "arn:aws:secretsmanager:us-east-1:174596742332:secret:testsecret-uDiDIO"

I also added the parameter:

--set grpcSupportedProviders="aws"

Just wanted to see if anyone's having the same issue. Thanks.
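
For comparison, a hedged sketch of the shape the unmarshal error seems to expect: the AWS provider appears to decode objects directly into a list of secret descriptors, so the array-of-strings wrapper used by some other providers would not parse (ARNs below are taken from the report; this is an untested guess, not a confirmed fix):

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: eks-test-secrets
  namespace: test-services
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "arn:aws:secretsmanager:us-east-1:174596742332:secret:test1-qlL3Np"
        objectType: "secretsmanager"
      - objectName: "arn:aws:secretsmanager:us-east-1:174596742332:secret:testsecret-uDiDIO"
        objectType: "secretsmanager"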

How would I limit who can mount what Secrets into what namespaces?

Hey Guys,
how would I prevent folks in one namespace from also mounting Secrets Manager secrets that are already mounted in some other namespace?
What would be the access control pattern to use here to make sure a specific namespace only gets access to specific Secrets Manager secrets? Normally that would be handled via separate IAM roles, but in this case does the cluster-wide CSI driver use one and the same role for all namespaces?
Apologies if I misunderstood something; I'm new to this and testing it out now.
Cheers, Tomasz

Not able to run on EKS Fargate-only

I'm running EKS on Fargate only and it looks like it cannot connect to Secrets Manager. Is it because DaemonSets are not supported on Fargate? I do not see any pods of csi-secrets-store or csi-secrets-store-provider-aws.

I can provide more details if my assumption isn't correct.

Unable to create a valid json representation of the SecretProviderClass object array

The readme notes that the object parameter is "most easily written using a YAML multi-line string or pipe character". Unfortunately cdk-eks will only accept json (https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-eks.KubernetesManifestProps.html).

I have tried a dozen ways to produce an acceptable JSON representation but can't seem to get it to work. Could you provide an example that would produce the equivalent of the following?

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: my-secrets
  namespace: my-namespace
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "arn:aws:blahblah"
        objectType: "secretsmanager"

Many thanks!
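
A hedged sketch of a JSON equivalent (an assumption about what cdk-eks needs, not a verified answer): the key point is that objects must remain a single string containing the YAML list, with embedded newlines, rather than a nested JSON array.

{
  "apiVersion": "secrets-store.csi.x-k8s.io/v1alpha1",
  "kind": "SecretProviderClass",
  "metadata": { "name": "my-secrets", "namespace": "my-namespace" },
  "spec": {
    "provider": "aws",
    "parameters": {
      "objects": "- objectName: \"arn:aws:blahblah\"\n  objectType: \"secretsmanager\"\n"
    }
  }
}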

Not able to refer aws parameter store values in my pods.

Using a SecretProviderClass object we can specify the key for secret objects and then use them in a pod by referring to the same key. But if I want to use Parameter Store values, how can I refer to them in my pods? I can't define any key for Parameter Store values, and the '/' character is not supported (Parameter Store keys start with '/').

I need a solution for how to refer to my Parameter Store values in my pods.

Error: I am getting the error below while installing the Helm charts.
Error: INSTALLATION FAILED: DaemonSet.apps "fluentbit-daemonset" is invalid: [spec.template.spec.containers[0].env[1].valueFrom.secretKeyRef.key: Invalid value: "/dev/eks/endpoint": a valid config key must consist of alphanumeric characters, '-', '_' or '.' (e.g. 'key.name', or 'KEY_NAME', or 'key-name', regex used for validation is '[-._a-zA-Z0-9]+'), spec.template.spec.containers[0].env[2].valueFrom.secretKeyRef.key: Invalid value: "/dev/eks/ca": a valid config key must consist of alphanumeric characters, '-', '_' or '.'
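
A hedged sketch of the usual workaround, assuming the provider's objectAlias field can give the parameter a flat name that is also a valid Kubernetes Secret key (the Secret and alias names below are illustrative):

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: fluentbit-params               # illustrative name
spec:
  provider: aws
  secretObjects:
    - secretName: fluentbit-params     # illustrative Kubernetes Secret to sync into
      type: Opaque
      data:
        - objectName: dev-eks-endpoint # refers to the alias below
          key: dev-eks-endpoint        # valid key: no leading '/'
  parameters:
    objects: |
      - objectName: "/dev/eks/endpoint"
        objectType: "ssmparameter"
        objectAlias: "dev-eks-endpoint"

The pod can then reference key dev-eks-endpoint via secretKeyRef, avoiding the '/' validation error.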

When using jmesPath i get this error in the driver log and secret not created

What steps did you take and what happened:
When using jmesPath I get this error in the driver log:

failed to get data in spc for secret" err="file matching objectName username not found in the pod" spc="flux-system/csi-gitlab-secrets" pod="flux-system/csi-secrets-sync-6b6fb94978-2tsg4" secret="flux-system/dummy-consumer-gitlab" spcps="flux-system/csi-secrets-sync-6b6fb94978-2tsg4-flux-system-csi-gitlab-secrets 
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: csi-gitlab-secrets
  namespace: flux-system
spec:
  provider: aws
  secretObjects:
    - secretName: dummy-consumer-gitlab
      type: Opaque
      labels:
        provider: "csi-driver"
      data:
        - objectName: username
          key: username
        - objectName: password
          key: password

  parameters:
    objects: |
      - objectName: "arn:aws:secretsmanager:eu-west-2:XXXXXXXXXX:secret:/my/path/to/dummy-secret-xyz123"
        objectType: "secretsmanager"
        objectAlias: flux-dummy-consumer
        jmesPath:
            - path: "username"
              objectAlias: "username"
            - path: "password"
              objectAlias: "password"

The AWS secret is populated:

{
"username":"abc",
"password":"abc123"
}

Pod deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: csi-secrets-sync
  namespace: flux-system
  labels:
    app: csi-secrets-sync
spec:
  replicas: 1
  selector:
    matchLabels:
      app: csi-secrets-sync
  template:
    metadata:
      labels:
        app: csi-secrets-sync
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
      serviceAccountName: flux-secrets
      containers:
      - name: csi-secrets-sync
        image: kubernetes/pause:latest
        resources:
          limits:
            cpu: 50m
            memory: 32Mi
        volumeMounts:
          - name: secrets-store
            mountPath: "/mnt/secrets-store"
            readOnly: true
        env:
          - name: username
            valueFrom:
              secretKeyRef:
                name: dummy-consumer-gitlab
                key: username
          - name: password
            valueFrom:
              secretKeyRef:
                name: dummy-consumer-gitlab
                key: password
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
      volumes:
      - name: secrets-store
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass:  "csi-gitlab-secrets"

What did you expect to happen:

I would expect the secret to be mounted with 2 data elements, one for user and one for password.

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

When the jmesPath is commented out and the objectName is set to the objectAlias, I can retrieve the entire JSON string.

Which provider are you using:
[e.g. Azure Key Vault, HashiCorp Vault, etc. Have you checked out the provider's repo for more help?]

aws

Environment:

Secrets Store CSI Driver version: (use the image tag): 0.2.0 (helm deploy)
Kubernetes version: (use kubectl version): 1.21.5

Unknown desc = Name already in use for objectName

Hello,
When using the ASCP (AWS Secrets and Configuration Provider) to retrieve secrets from AWS Secrets Manager, I receive the error in the title of this issue when trying to have more than one secret object in Kubernetes fetched from the same secret path/name in AWS Secrets Manager. Here is an example:
secret1 = secret_name_in_aws_secret_manager
secret2 = secret_name_in_aws_secret_manager
Then using secret1 and secret2 in the same, or even different, SecretProviderClass produces an error.

I would like to ask whether this is expected behaviour and whether it could be fixed, or whether there is a proper workaround. To avoid this, after trying different SecretProviderClasses and different mounts, I've just created two more secrets with different names, but I am looking for a better solution. I didn't find anything related in the issues here, in the issues page of the csi-driver, or in the documentation.
Thank you in advance!
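
One hedged workaround to try (an assumption, not a confirmed fix): if the conflict comes from both entries resolving to the same mounted file name, giving each entry its own objectAlias may disambiguate them.

  parameters:
    objects: |
      - objectName: "secret_name_in_aws_secret_manager"
        objectType: "secretsmanager"
        objectAlias: "secret1"    # mounted file name for the first copy
      - objectName: "secret_name_in_aws_secret_manager"
        objectType: "secretsmanager"
        objectAlias: "secret2"    # mounted file name for the second copy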

Error connecting to provider "aws"

Hello, I'm using the latest Helm chart from https://raw.githubusercontent.com/kubernetes-sigs/secrets-store-csi-driver/master/charts and got this error:

GRPC error: failed to mount secrets store objects for pod default/nginx-deployment-75bfbbcf99-bwpqd, err: error connecting to provider "aws": provider not found: provider "aws"

when creating a new SecretProviderClass.
I followed all the instructions but unfortunately it doesn't seem to work. Have any of you had the same problem?

Failed to listen on unix socket. error: listen unix /etc/kubernetes/secrets-store-csi-providers/aws.sock: bind: permission denied

Related to #12, pods won't start on my cluster due to permissions for the hardcoded unix socket path.

I0521 18:18:44.622108       1 main.go:25] Starting secrets-store-csi-driver-provider-aws version 1.0.r1-2021.04.22.19.04
F0521 18:18:44.624781       1 main.go:44] Failed to listen on unix socket. error: listen unix /etc/kubernetes/secrets-store-csi-providers/aws.sock: bind: permission denied
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc00000e001, 0xc000496000, 0xac, 0xce)
        /home/ec2-user/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:996 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x25bb0e0, 0xc000000003, 0x0, 0x0, 0xc00048c150, 0x251de21, 0x7, 0x2c, 0x0)
        /home/ec2-user/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:945 +0x191
k8s.io/klog/v2.(*loggingT).printf(0x25bb0e0, 0x3, 0x0, 0x0, 0x195df6b, 0x2a, 0xc0003b3f18, 0x1, 0x1)
        /home/ec2-user/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:733 +0x17a
k8s.io/klog/v2.Fatalf(...)
        /home/ec2-user/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:1463
main.main()
        /home/ec2-user/secrets-store-csi-driver-provider-aws/main.go:44 +0x2df

goroutine 6 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x25bb0e0)
        /home/ec2-user/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
        /home/ec2-user/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:416 +0xd8

goroutine 8 [syscall]:
os/signal.signal_recv(0x0)
        /usr/lib/golang/src/runtime/sigqueue.go:147 +0x9d
os/signal.loop()
        /usr/lib/golang/src/os/signal/signal_unix.go:23 +0x25
created by os/signal.Notify.func1.1
        /usr/lib/golang/src/os/signal/signal.go:150 +0x45

goroutine 9 [chan receive]:
main.main.func1(0xc0003588a0, 0xc00021b180)
        /home/ec2-user/secrets-store-csi-driver-provider-aws/main.go:37 +0x45
created by main.main
        /home/ec2-user/secrets-store-csi-driver-provider-aws/main.go:36 +0x205

Permission Error

kubectl logs --follow -n kube-system csi-secrets-store-secrets-store-csi-driver-knkbc -c secrets-store

The Kubernetes secret sync does not seem to be working.

Ran into this error: E1103 15:04:34.473126 1 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.Secret: failed to list *v1.Secret: secrets is forbidden: User "system:serviceaccount:kube-system:secrets-store-csi-driver" cannot list resource "secrets" in API group "" at the cluster scope, and I am unable to figure out what exactly needs to be done.
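
For what it's worth, a hedged sketch of the chart setting usually involved in this RBAC error (an assumption, not a confirmed fix): the secrets-store-csi-driver Helm chart only creates the ClusterRole/ClusterRoleBinding that allows the driver to list and manage Secrets when its sync feature is enabled.

# values for the secrets-store-csi-driver Helm chart (assumption: Secret sync is opt-in)
syncSecret:
  enabled: true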

AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity

I have the csi driver and aws provider installed on my eks cluster. I am attempting to follow the instructions for usage, but my test pod is getting the following error trying to mount the secret:

  Warning  FailedMount  41m  kubelet  MountVolume.SetUp failed for volume "secrets-store-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod default/nginx-deployment-74d68b559b-lsfr9, err: rpc error: code = Unknown desc = Failed fetching parameters: WebIdentityErr: failed to retrieve credentials
caused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity
           status code: 403, request id: 74069505-0a96-4068-a6ea-4e37e1bf9f9d

Here is the corresponding event from Cloud Trail:

{
    "eventVersion": "1.08",
    "userIdentity": {
        "type": "WebIdentityUser",
        "principalId": "arn:aws:iam::000000000000:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/00000000000000000000000000000000:sts.amazonaws.com:system:serviceaccount:default:vitotest-sa",
        "userName": "system:serviceaccount:default:vitotest-sa",
        "identityProvider": "arn:aws:iam::000000000000:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/00000000000000000000000000000000"
    },
    "eventTime": "2021-05-24T20:16:19Z",
    "eventSource": "sts.amazonaws.com",
    "eventName": "AssumeRoleWithWebIdentity",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "x.x.x.x",
    "userAgent": "aws-sdk-go/1.37.0 (go1.15.8; linux; amd64)",
    "errorCode": "AccessDenied",
    "errorMessage": "An unknown error occurred",
    "requestParameters": {
        "roleArn": "arn:aws:iam::000000000000:role/eksctl-my-cluster-name-addon-iamserviceaccou-Role1-000000000000",
        "roleSessionName": "secrets-store-csi-driver-provider-aws"
    },
    "responseElements": null,
    "requestID": "74069505-0a96-4068-a6ea-4e37e1bf9f9d",
    "eventID": "e3f6f433-c6e5-45fa-a970-f629aca1d037",
    "readOnly": true,
    "resources": [
        {
            "accountId": "000000000000",
            "type": "AWS::IAM::Role",
            "ARN": "arn:aws:iam::000000000000:role/eksctl-my-cluster-name-addon-iamserviceaccou-Role1-000000000000"
        }
    ],
    "eventType": "AwsApiCall",
    "managementEvent": true,
    "eventCategory": "Management",
    "recipientAccountId": "000000000000",
    "tlsDetails": {
        "tlsVersion": "TLSv1.2",
        "cipherSuite": "ECDHE-RSA-AES128-SHA",
        "clientProvidedHostHeader": "sts.us-east-1.amazonaws.com"
    }
}

Steps I've taken:

  1. Associated the oidc provider with my cluster via eksctl:
eksctl utils associate-iam-oidc-provider --cluster my-cluster-name --region="us-east-1" --approve
  2. Created an IAM policy with SSM permissions.
POLICY_ARN=$(
    aws --region us-east-1 --query Policy.Arn --output text iam create-policy \
        --policy-name vitotest-policy \
        --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ssm:DescribeParameters", "ssm:GetParameters"],
            "Resource": ["arn:aws:ssm:us-east-1:xxx:parameter/vitotest/*"]
        },
        {
            "Effect": "Allow",
            "Action": [
              "kms:Describe*",
              "kms:List*",
              "kms:Encrypt",
              "kms:Decrypt",
              "kms:ReEncrypt*",
              "kms:GenerateDataKey*"
            ],
            "Resource": ["arn:aws:kms:us-east-1:xxx:key/xxx"]
        }
    ]}'
)
  3. Created an IAM service account via eksctl:
eksctl create iamserviceaccount \
    --name vitotest-sa \
    --region="us-east-1" \
    --cluster "my-cluster-name" \
    --attach-policy-arn "$POLICY_ARN" \
    --approve \
    --override-existing-serviceaccounts
  4. Applied the following manifest to my k8s cluster:
---
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: vitotest-aws-secrets
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "vitotest/mykey"
        objectType: "ssmparameter"
      - objectName: "vitotest/mysecret"
        objectType: "ssmparameter"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
    owner: vito
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        owner: vito
    spec:
      serviceAccountName: vitotest-sa
      volumes:
      - name: secrets-store-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "vitotest-aws-secrets"
      containers:
      - name: nginx-deployment
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true

Secret Set as ENV var didn't work

What steps did you take and what happened:
When I simply mount a secret using AWS as the provider, everything works fine:

kubectl exec -it secret-test -- cat /mnt/secrets-store/databasecredentials
{"database-name":"sampleapp","database-password":"securedatabasepassword","database-port":"5432","database-username":"sampleappuser"}

But if I try to sync the secret as a Kubernetes secret and use it as a variable, the container fails with: Error: secret "database-name" not found

What did you expect to happen:

Use the secrets values as ENV variables.

Anything else you would like to add:
The secret definition:

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
    name: sample-app-secret
spec:
  provider: aws
  secretObjects:
   - data:
     - key: database-name
       objectName: databasecredentials
     secretName: database-name
     type: Opaque
  parameters:
    objects: |
      - objectName: "sample-app"
        objectType: "secretsmanager"
        objectAlias: "databasecredentials"

The pod when it fails:

---
apiVersion: v1
kind: Pod
metadata:
  name: secret-test
spec:
  serviceAccountName: sample-app-role
  volumes:
    - name: api-secret
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "sample-app-secret"
  containers:
    - name: application
      image: busybox
      command:
        - "sleep"
        - "3600"
      env:
        - name: API_KEY
          valueFrom:
            secretKeyRef:
              name: database-name
              key: database-name
      volumeMounts:
        - name: api-secret
          mountPath: "/mnt/secrets-store"
          readOnly: true

The pod definition that works, but where I don't have the env variable, only the secret file:

---
apiVersion: v1
kind: Pod
metadata:
  name: secret-test
spec:
  serviceAccountName: sample-app-role
  volumes:
    - name: api-secret
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "sample-app-secret"
  containers:
    - name: application
      image: busybox
      command:
        - "sleep"
        - "3600"
      # env:
      #   - name: API_KEY
      #     valueFrom:
      #       secretKeyRef:
      #         name: database-name
      #         key: database-name
      volumeMounts:
        - name: api-secret
          mountPath: "/mnt/secrets-store"
          readOnly: true

Which provider are you using:
AWS Secrets Manager.
The installation was done on EKS 1.20 using Helm to install the driver:

resource "helm_release" "secrets-store-csi-driver" {
  name       = "secrets-store-csi-driver"
  chart      = "secrets-store-csi-driver"
  repository = "https://raw.githubusercontent.com/kubernetes-sigs/secrets-store-csi-driver/master/charts"
  namespace  = "kube-system"
}

and then I installed the AWS provider:
kubectl apply -f https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/deployment/aws-provider-installer.yaml

Environment:

  • Secrets Store CSI Driver version: (use the image tag):
    k8s.gcr.io/csi-secrets-store/driver:v0.0.23
    public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r1-10-g1942553-2021.06.04.00.07-linux-amd64
  • Kubernetes version: (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.4-eks-6b7464", GitCommit:"6b746440c04cb81db4426842b4ae65c3f7035e53", GitTreeState:"clean", BuildDate:"2021-03-19T19:33:03Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

GRPC error: failed to mount secrets store object

Hello:

We are currently experiencing an issue with the Secrets Store CSI Driver v0.0.23, where it is not able to mount the secret store object. We are seeing that the Pods are stuck in a "ContainerCreating" state:

pod/robtest-aws-secret-manager-deploy-77747544cf-826n5   0/1     ContainerCreating   0          74m
pod/robtest-aws-secret-manager-deploy-77747544cf-8m9cd   0/1     ContainerCreating   0          74m

Here is the SecretProviderClass YAML:

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: robtest-aws-secret-provider-class
spec:
  provider: aws
  parameters:
    region: us-east-2
    objects: |
        - objectName: "arn:aws:secretsmanager:us-east-2:<AWS-account-#>:secret:MySecret-3jq4EL"
          objectType: "secretsmanager"

Any ideas as to why we are seeing the errors below?

I0616 18:17:11.646404       1 nodeserver.go:300] "Using grpc client" provider="aws" 
  pod="robtest-aws-secret-manager-deploy-77747544cf-6pfvr"

I0616 18:19:11.643100       1 nodeserver.go:73] "unmounting target path as node publish volume failed" 
  targetPath="/var/lib/kubelet/pods/945bf1d1-a91b-4018-accd-1d79f5a1dbf9/volumes/kubernetes.io~csi/secrets-store-inline/mount"
  pod="cluster-addons/robtest-aws-secret-manager-deploy-77747544cf-6pfvr"

E0616 18:19:11.664341       1 utils.go:79] GRPC error: failed to mount secrets store objects for
  pod cluster-addons/robtest-aws-secret-manager-deploy-77747544cf-6pfvr,
  err: rpc error: code = Canceled desc = context canceled

SecretsManager plaintext support

I am required to mount some secrets as environment variables, but the ARNs point to plaintext secrets in AWS Secrets Manager.
The documentation examples mention only key/value secrets, so I guess plaintext secrets are not supported.
Can anyone confirm that is the case, and whether there is any plan to support plaintext secrets in addition to JSON key/value secrets?
Plaintext secrets are a valid option in AWS, so it's not unlikely for a developer to need to fetch them.

Question: Can I speed up the sync process between Amazon Secret Manager and the volume in the pod?

Hello.

I'm using secrets-store-csi-driver-provider-aws with IRSA like described here: https://aws.amazon.com/blogs/security/how-to-use-aws-secrets-configuration-provider-with-kubernetes-secrets-store-csi-driver/

Everything is working well, but if I manually change a secret in AWS Secrets Manager, it takes about 30-60 seconds for the change to be "propagated" to the pod with the mounted driver: secrets-store.csi.k8s.io volume.

Is there any way I can speed this up?

Thank you
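
A hedged sketch of the driver settings that usually govern this, assuming the chart's rotation feature (alpha at the time) is what performs the periodic re-fetch; the interval value is illustrative:

# values for the secrets-store-csi-driver Helm chart (assumption: rotation controls the refresh cadence)
enableSecretRotation: true
rotationPollInterval: 15s   # shorter intervals mean more frequent Secrets Manager API calls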

secretObjects not working as expected

Firstly, apologies if this is a misconfiguration on my end and not a bug.

I have the following Secrets Manager object:

{
  "username": "admin",
  "password": "abc123"
}

And the following SecretProviderClass:

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: database-secrets
spec:
  provider: aws   
  secretObjects:
  - secretName: db-creds
    type: Opaque
    data:
    - objectName: database-secrets
      key: username
    - objectName: database-secrets
      key: password
  parameters:
    objects: |
        - objectName: "database-secrets"
          objectType: "secretsmanager"
          jmesPath:
              - Path: "username"
                ObjectAlias: "username"
              - Path: "password"
                ObjectAlias: "password" 

This does create a secret with the correct fields, but the data inside them is always the full JSON object, and not the individual value (secret decoded for readability):

apiVersion: v1
data:
  password: {"username":"admin","password":"abc123"}
  username: {"username":"admin","password":"abc123"}
kind: Secret
metadata:
  name: db-creds
type: Opaque

I expected:

apiVersion: v1
data:
  password: "abc123"
  username: "admin"
kind: Secret
metadata:
  name: db-creds
type: Opaque

Versions:

  • k8s: 1.21
  • secrets-store-csi-driver-provider-aws: 1.0.r2-2021.08.13.20.34-linux-amd64
  • secrets-store-csi-driver: v0.2.0
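
For what it's worth, a hedged reading of the config above (an assumption, not a verified fix): both secretObjects.data entries reference the parent object database-secrets, so the full JSON is exactly what gets synced; referencing the jmesPath aliases instead should yield the individual values.

  secretObjects:
  - secretName: db-creds
    type: Opaque
    data:
    - objectName: username     # the jmesPath objectAlias, not the parent secret
      key: username
    - objectName: password     # likewise
      key: password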

Retrieve secrets from a list of regions

The provider currently supports only one region, configurable as a string.
Given that Secrets Manager has native support for secret replication, would it be possible to improve HA by retrieving secrets from a list of regions?

Feature Request: Option to read a value from inside the secret for key/value secerts

When a secret holds a JSON document, for example in the case of a key/value secret, it would be nice to be able to select the key you want from a particular secret, for Opaque secrets for example.

Looking at the Vault provider, they do something like this:
https://github.com/hashicorp/vault-csi-provider/blob/master/internal/provider/provider.go#L151

For example, if we have the secret arn:aws:secretsmanager:us-east-9:000000000000:secret:secret/key-value-secret

which contains
key1 = value1
key2 = value2

Currently, if we use that, we get a JSON document (either in the filesystem or in an environment variable):

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: aws-secrets
  namespace: chips
spec:
  provider: aws
  secretObjects:
  - secretName: aws-csi-test
    type: Opaque
    labels:
      environment: pro
    data:
    - objectName: values
      key: "vals"
  parameters:
    objects: |
        - objectName: "arn:aws:secretsmanager:us-east-9:000000000000:secret:secret/key-value-secret"
          objectAlias: values
          objectType: "secretsmanager"

It would be nice to have something where we can either access the key directly or somehow auto-populate the Opaque secret with key=value data:

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: aws-secrets
  namespace: chips
spec:
  provider: aws
  secretObjects:
  - secretName: aws-csi-test
    type: Opaque
    labels:
      environment: pro
    data:
    - objectName: values["key1"]
      key: "key1"
    - objectName: values["key2"]
      key: "key2"
  parameters:
    objects: |
        - objectName: "arn:aws:secretsmanager:us-east-9:000000000000:secret:secret/key-value-secret"
          objectAlias: values
          objectType: "secretsmanager"

Permission denied when policy restricts to current version of secret

In the spirit of "least privilege", it's arguably good practice to restrict access to only the current version of a secret. Unfortunately the secrets-store-csi-driver-provider-aws runs into permission issues when doing so.

Steps to reproduce:

  • Create a role with a policy attached that only allows reading the current version of a secret:
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": ["secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret"],
        "Resource": "arn:aws:secretsmanager:eu-west-1:388662706032:secret:k8s-secret_foo_bar",
        "Condition": {
            "ForAnyValue:StringEquals": {
                "secretsmanager:VersionStage": "AWSCURRENT"
            }
        }
    }
}
  • See that the pod is stuck in ContainerCreating and the events show PermissionDenied when reading the secret.
  • Remove the Condition from the policy, recreate the pod, watch it successfully read the secret.

jmespath parsing breaks on hyphens

Given the following payload in a SecretsManager Secret:

{
  "api-key": "xxxxx",
  "app-key": "yyyyy"
}

And the following secretproviderclass config:

spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "thissecret"
        objectType: secretsmanager
        jmesPath:
          - path: "api-key"
            objectAlias: "api-key"
          - path: "app-key"
            objectAlias: "app-key"

Pods fail to start with the following event message:

 Invalid JMES Path: api-key

Removing the hyphen from the keys "solves" the problem.

(provider version: 1.0.r2-2021.08.13.20.34-linux-amd64)
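
A hedged workaround to try (JMESPath itself requires hyphenated identifiers to be written as quoted strings; whether the provider's path validation accepts that form is an assumption, not something confirmed here):

    objects: |
      - objectName: "thissecret"
        objectType: secretsmanager
        jmesPath:
          - path: '"api-key"'      # JMESPath quoted identifier
            objectAlias: "api-key"
          - path: '"app-key"'
            objectAlias: "app-key"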

Selinux enabled and permission denied

Hello,
With the latest version of the provider and SELinux enabled on the worker nodes, the pods fall into an error state.
The logs show permission denied:
F0712 10:30:24.765098 1 main.go:52] Failed to listen on unix socket. error: listen unix /etc/kubernetes/secrets-store-csi-providers/aws.sock: bind: permission denied
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc000126001, 0xc0001662a0, 0xac, 0xda)
/home/ec2-user/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:996 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x25bc120, 0xc000000003, 0x0, 0x0, 0xc0004bc150, 0x251e471, 0x7, 0x34, 0x0)
/home/ec2-user/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:945 +0x191
k8s.io/klog/v2.(*loggingT).printf(0x25bc120, 0x3, 0x0, 0x0, 0x195dfad, 0x2a, 0xc00041fef8, 0x1, 0x1)
/home/ec2-user/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:733 +0x17a
k8s.io/klog/v2.Fatalf(...)
/home/ec2-user/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:1463
main.main()
/home/ec2-user/secrets-store-csi-driver-provider-aws/main.go:52 +0x3be

goroutine 18 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x25bc120)
/home/ec2-user/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
/home/ec2-user/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:416 +0xd8

goroutine 20 [syscall]:
os/signal.signal_recv(0x0)
/usr/lib/golang/src/runtime/sigqueue.go:147 +0x9d
os/signal.loop()
/usr/lib/golang/src/os/signal/signal_unix.go:23 +0x25
created by os/signal.Notify.func1.1
/usr/lib/golang/src/os/signal/signal.go:150 +0x45

By adding a security option to the pods that effectively disables SELinux confinement for them, it works fine:

securityContext:
  seLinuxOptions:
    type: spc_t

Do you have any ideas on how to make it work natively with SELinux, please?

Thanks

Unable to retrieve Env set at the SecretProviderClass

Hi, I am able to create the SecretProviderClass and pull down a secret to the mounted volume; however, every time I try to expose the value as an environment variable it fails.

I have been over and over the configuration and I cannot spot what I am doing wrong. Is anyone able to help 🙏🏻

Here is my secret-provider config.

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: aws-secret-application
spec:
  provider: aws
  secretObjects:
    - secretName: database-creds-csi 
      type: Opaque
      data:
      - objectName: database-creds
        key: password
  parameters:
    objects: |
      - objectName: "database-creds"
        objectType: "secretsmanager"

Here is my deployment YAML:

apiVersion: v1
kind: Pod
metadata:
  name: application
spec:
  serviceAccountName: db-secret-access
  containers:
    - name: application
      image: busybox
      imagePullPolicy: IfNotPresent
      command:
        - "sleep"
        - "3600"
      env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: database-creds-csi
              key: password
      volumeMounts:
        - name: db-secret
          mountPath: "/mnt/secrets-store"
          readOnly: true
  volumes:
    - name: db-secret
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "aws-secret-application"

I get the error below:

Warning  Failed     5s (x3 over 7s)  kubelet            Error: secret "database-creds-csi" not found

However, if I leave out the env section, I can access the value by doing this:

kubectl exec -it application -- cat /mnt/secrets-store/database-creds; echo
{"password":"NEWPA$$WORD!!"}

Any help is appreciated. Thank you in advance.

Bottlerocket support

When launching the AWS provider on a Bottlerocket node, you get the following error:

38f9d34f5435:~ mikestef$ k logs csi-secrets-store-provider-aws-46rkg -n kube-system
I0423 00:23:23.096328       1 main.go:25] Starting secrets-store-csi-driver-provider-aws version 1.0.r1-2021.04.22.19.04
F0423 00:23:23.096808       1 main.go:44] Failed to listen on unix socket. error: listen unix /etc/kubernetes/secrets-store-csi-providers/aws.sock: bind: permission denied
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc00000e001, 0xc0004a6000, 0xac, 0xce)
	/home/ec2-user/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:996 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x25bb0e0, 0xc000000003, 0x0, 0x0, 0xc00049c2a0, 0x251de21, 0x7, 0x2c, 0x0)
	/home/ec2-user/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:945 +0x191
k8s.io/klog/v2.(*loggingT).printf(0x25bb0e0, 0x3, 0x0, 0x0, 0x195df6b, 0x2a, 0xc0003bbf18, 0x1, 0x1)
	/home/ec2-user/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:733 +0x17a
k8s.io/klog/v2.Fatalf(...)
	/home/ec2-user/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:1463
main.main()
	/home/ec2-user/secrets-store-csi-driver-provider-aws/main.go:44 +0x2df
