
charts's Introduction

ChartMuseum Project Helm Charts

Artifact HUB

Add repository

helm repo add chartmuseum https://chartmuseum.github.io/charts

Install chart (Helm v3)

helm install my-chartmuseum chartmuseum/chartmuseum --version 2.15.0

Install chart (Helm v2)

helm install --name my-chartmuseum chartmuseum/chartmuseum --version 2.15.0

charts's People

Contributors

cbuto, cvila84, eddycharly, eviln1, gezb, guptaarvindk, jdolitsky, kd7lxl, medzin, mikesigs, mkilchhofer, out-of-band, pgdagenais, pytimer, ramneekgupta91, rfashwall, scbizu, scottrigby, sebidude, sermilrod, smana, tbickford, tijmenstor, tuannvm, vanto, vespian, williambrode, ymrsmns, yurrriq, yves-vogl


charts's Issues

Ability to source secrets from file path

Use case: Vault sidecar injection e.g. https://www.vaultproject.io/docs/platform/k8s/injector/examples#environment-variable-example

Since the entrypoint is /chartmuseum, it's not possible to source the file as in HashiCorp's examples.

I've also attempted the following, but Kubernetes only seems to interpolate for variable names that have already been defined.

extraArgs:
  - --basic-auth-user=$(cat /vault/secrets/config | awk '{print $1}')
  - --basic-auth-pass=$(cat /vault/secrets/config | awk '{print $2}')

Perhaps the optimal solution would be to allow an onStart script, or something similar that runs before /chartmuseum starts, so that the environment variables could be sourced?
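In the meantime, a wrapper of roughly this shape could source the Vault-rendered file before exec'ing the server. This is only a sketch: the chart does not currently expose a command override or onStart hook, so the container spec below is illustrative.

```yaml
# Hypothetical container command override -- not a current chart value.
# Assumes /vault/secrets/config holds "user pass" on a single line.
command: ["/bin/sh", "-c"]
args:
  - |
    export BASIC_AUTH_USER="$(awk '{print $1}' /vault/secrets/config)"
    export BASIC_AUTH_PASS="$(awk '{print $2}' /vault/secrets/config)"
    exec /chartmuseum
```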

Chart-museum chart doesn't comply with 'restricted' Pod Security Standard

The current 'restricted' Kubernetes Pod Security Standard (https://kubernetes.io/docs/concepts/security/pod-security-standards/) requires the following to be set:

spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault

The current Helm chart contains a setting for runAsNonRoot, but not for seccompProfile.

Suggestion:
chart-museum should contain an option to specify a non-default seccompProfile.
Ideally, a fully custom securityContext should be possible.

I can open a pull request.
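As a sketch, the values-level knob could look like this (the chart already exposes securityContext.enabled; the seccompProfile field is the proposed addition, so its exact name is hypothetical until a PR lands):

```yaml
securityContext:
  enabled: true
  runAsNonRoot: true
  seccompProfile:          # proposed addition, not a current chart value
    type: RuntimeDefault
```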

chartmuseum.com website is down

The chartmuseum.com website is down. It looks like there is an automated build failure on the website's CI. Consequently, all the documentation is offline, which I need in order to get my k8s persistent volume deployment working with the ChartMuseum pod.

I would really like to see more documentation on --storage-local-rootdir="./chartstorage" usage, since nothing indicates where this path is auto-created on the Linux file system. There are definitely some gaps in the docs around the container image running as the chartmuseum user and the permissions necessary to add a persistent volume.

It would be helpful if you baked a /storage directory with the correct permissions into the image, so that it could be used for storing the Helm chart data without having to hack the pod deployment configuration with initContainers and securityContexts.
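As a hedged workaround for the permissions part of this, setting a pod-level fsGroup can let the server write to the mounted volume without an initContainer chown. The group ID below is an assumption; verify the chartmuseum user's GID in the image you run:

```yaml
securityContext:
  enabled: true
  fsGroup: 1000   # assumed GID of the chartmuseum user; verify in the image
persistence:
  enabled: true
```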

Assume DISABLE_METRICS=false when serviceMonitor is enabled

Small quality-of-life enhancement suggestion:

If serviceMonitor.enabled is true, it stands to reason that the administrator intends for metrics to be scraped. In this case, env.open.DISABLE_METRICS should default to false unless explicitly overridden by user-supplied chart values.
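A sketch of how the deployment template could implement this default (hypothetical; the chart's actual env plumbing may differ):

```yaml
{{- if and .Values.serviceMonitor.enabled (not (hasKey .Values.env.open "DISABLE_METRICS")) }}
- name: DISABLE_METRICS
  value: "false"
{{- end }}
```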

Error service annotation processing

Error: YAML parse error on chartmuseum/templates/service.yaml: error converting YAML to JSON: yaml: line 6: did not find expected key

Use --debug flag to render out invalid YAML

Steps to reproduce

create values.yaml

service:
  annotations:
    foo: "1"
    bar: "2"

try to render template

helm template some-name chartmuseum/chartmuseum --version 3.1.0 -f values.yaml -s templates/service.yaml

Support annotations on Service

Google Cloud relies on service annotations to enable certain loadbalancer features (like Cloud CDN). It would be useful to have this exposed via the helm chart.

401 when configuring s3

Hi,

When deploying the chartmuseum Helm chart with the following configuration:

Helm chart values (using 3.9.3):

podAnnotations:
  eks.amazonaws.com/sts-regional-endpoints: "true"

extraArgs:
  - --cache-interval=1m

env:
  open:
    AWS_SDK_LOAD_CONFIG: true
    STORAGE: amazon
    STORAGE_AMAZON_BUCKET: <BUCKET>
    STORAGE_AMAZON_PREFIX:
    STORAGE_AMAZON_REGION: <REGION>
    CHART_POST_FORM_FIELD_NAME: chart
    PROV_POST_FORM_FIELD_NAME: prov
    DEPTH: 2
    DEBUG: true
    LOG_JSON: true
    DISABLE_STATEFILES: true
    ENABLE_METRICS: true
    DISABLE_API: false
    ALLOW_OVERWRITE: false
  existingSecret: chartmuseum-creds
  existingSecretMappings:
    BASIC_AUTH_USER: username
    BASIC_AUTH_PASS: password

service:
  type: NodePort

serviceMonitor:
  enabled: true

serviceAccount:
  create: false
  name: chartmuseum

ingress:
  enabled: true
  pathType: Prefix
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/certificate-arn: <ARN>
  hosts:
    - name: chartmuseum.<DOMAIN>
      path: /

Serviceaccount:

apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: <ROLE>
  name: chartmuseum
  namespace: default

Terraform resources:

resource "aws_s3_bucket" "chartmuseum" {
  bucket = <BUCKET>
}

// <CUSTOM_K8S_IAM_ROLE_MODULE>

data "aws_iam_policy_document" "chartmuseum_policy" {
  statement {
    actions = [
      "s3:ListBucket"
    ]
    resources = [
      "arn:aws:s3:::<BUCKET>"
    ]
  }
  statement {
    actions = [
      "s3:DeleteObject",
      "s3:GetObject",
      "s3:PutObject"
    ]
    resources = [
      "arn:aws:s3:::<BUCKET>/*"
    ]
  }
}

When the pod is running, I get a loop of 401 responses, such as:

{"L":"DEBUG","T":"2023-03-13T18:05:10.227Z","M":"[723] Incoming request: /","reqID":"6e8395d4-98b5-4254-9646-454a15ff1b50"}
{"L":"ERROR","T":"2023-03-13T18:05:10.228Z","M":"[723] Request served","path":"/","comment":"","clientIP":"10.4.51.192","method":"GET","statusCode":401,"latency":"21.228µs","reqID":"6e8395d4-98b5-4254-9646-454a15ff1b50"}

Any suggestion on what the problem could be?

Thank you,
André Nogueira
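One thing worth ruling out (a guess based on the values above, not a confirmed diagnosis): with BASIC_AUTH_USER/BASIC_AUTH_PASS mapped in, every request requires credentials unless anonymous GETs are explicitly allowed, so an ALB health check hitting / without credentials would produce exactly this 401 loop. The chart already exposes the relevant variable:

```yaml
env:
  open:
    AUTH_ANONYMOUS_GET: true   # allow unauthenticated GET requests
```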

Ingress ServiceBackendPort: "use-annotation" got "string", expected "integer" #581

From @cpboyd (helm/chartmuseum#581)

From https://artifacthub.io/packages/helm/chartmuseum/chartmuseum#extra-paths:

helm install my-chartmuseum chartmuseum/chartmuseum \
  --set ingress.enabled=true \
  --set ingress.hosts[0].name=chartmuseum.domain.com \
  --set ingress.extraPaths[0].service=ssl-redirect \
  --set ingress.extraPaths[0].port=use-annotation \

This, however, results in the following error:

Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Ingress.spec.rules[0].http.paths[0].backend.service.port.number): invalid type for io.k8s.api.networking.v1.ServiceBackendPort.number: got "string", expected "integer"

Replacing use-annotation with an integer works, but it would be nice to be able to simply refer to the port defined in the policy.
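A sketch of how the template could accept both forms: the networking.k8s.io/v1 ServiceBackendPort allows either a numeric number or a string name, so non-numeric ports could be emitted as names (the kindIs test is one possible approach, not the chart's current code):

```yaml
backend:
  service:
    name: {{ .service }}
    port:
      {{- if kindIs "string" .port }}
      name: {{ .port }}       # e.g. use-annotation
      {{- else }}
      number: {{ .port }}
      {{- end }}
```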

Is this a bug? initContainer not working when using securityContext and persistence

In deployment.yaml, the control structure looks like the code below:
{{- if .Values.securityContext.enabled }}
...
{{- else if .Values.persistence.enabled }}
...
{{- end}}

I think the correct version is:

{{- if .Values.securityContext.enabled }}
...
{{- if .Values.persistence.enabled }}
...
{{- end}}
{{- end}}

duplicate labels in pvc

When enabling persistence, the PVC template duplicates labels, which results in an invalid manifest; see the error below.

Error:

$ helm template myrelease chartmuseum/chartmuseum --set persistence.enabled=true
...
# Source: chartmuseum/templates/pvc.yaml   
kind: PersistentVolumeClaim                
apiVersion: v1                             
metadata:                                  
  name: myrelease-chartmuseum              
  labels:                                  
    helm.sh/chart: chartmuseum-3.9.2       
    app.kubernetes.io/name: chartmuseum    
    app.kubernetes.io/instance: myrelease  
    app.kubernetes.io/version: "0.15.0"    
    app.kubernetes.io/managed-by: Helm     
   # <---
    app.kubernetes.io/name: chartmuseum    
    app.kubernetes.io/instance: myrelease  
   # --->
...

Basic Authentication not working

Hi ,

We are using:
ChartMuseum Helm chart version: 3.7.1
ChartMuseum image version: 0.13.1

We tested Basic Authentication as described on this page, by creating a secret:
https://github.com/chartmuseum/charts/tree/main/src/chartmuseum#authentication

kubectl create secret generic chartmuseum-secret --from-literal="basic-auth-user=curator" --from-literal="basic-auth-pass=mypassword"

We see the following two problems when we open https://chartmuseum.apps.XXXX.com/api/charts in a browser:

  1. The first time, the browser asks for user/pass, but the second time, even after the browser is closed and re-opened, it does not ask for user/pass; the session seems to remember the credentials for a long duration.

  2. We changed the password, but the browser is still able to connect and show the data with the old password!

Let us know how to fix this issue.

Regards
Ramki

CHART_URL not working properly

Hey folks,

I've deployed chartmuseum using the chart, and I've set CHART_URL like this:

env:
  open:
    CHART_URL: "http://helm.mycompany.com"

But when I fetch a package I get a 404 not found error with this message:

Error: failed to fetch http://helm.mycompany.com/helm.mycompany.com/charts/mycompany-app-1.1.tgz : 404 Not Found

As you can see the base url is repeated.

Here is the index.yaml:

apiVersion: v1
entries:
  mycompany-app:
  - apiVersion: v2
    appVersion: 1.16.0
    created: "2022-08-12T10:50:24.966478774Z"
    description: A Helm chart for Kubernetes
    digest: **DIGEST**
    name: mycompany-app
    type: application
    urls:
    - helm.mycompany.com/charts/mycompany-app-1.1.tgz
    version: "1.1"
generated: "2022-08-12T10:50:25Z"
serverInfo: {}

I wonder why the http:// scheme is missing from the URLs in the index.
I initially deployed the chart with CHART_URL: "helm.mycompany.com", then updated it, and then uninstalled and reinstalled it with the proper CHART_URL, but I still can't install or fetch charts because of this base-domain repetition.

Any clues?

Install from chart to k8s with TLS

Hi,
Going through the installation instructions and chart templates, I find no option for enabling TLS in the application but only on ingress.
What are the steps required to enable TLS in the application? If it's unnecessary or impossible, how can I enable TLS for the ingress while the application is still only listening on 8080 (HTTP)?
Thanks
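For the second part of the question: TLS can terminate at the ingress controller while the application keeps listening on plain HTTP on 8080. The values shape below follows the chart's documented ingress options; host and secret names are placeholders:

```yaml
ingress:
  enabled: true
  hosts:
    - name: chartmuseum.example.com
      path: /
      tls: true
      tlsSecret: chartmuseum-example-tls   # Secret of type kubernetes.io/tls
```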

Chartmuseum pod runs but gives "application unavailable" from its routed URL

We transitioned to using this chart after the original Helm chart for ChartMuseum was deprecated. The new instance has been deployed; however, the application does not seem to actually be serving the endpoint. It is running in OCP4. Here are some logs from our instance in OCP3 vs. OCP4. Let us know if more information is needed for assistance.

OCP3 pod logs: [screenshot]

OCP4 pod logs: [screenshot]

can't POST anything

Hi guys,
I'm trying to set up my ChartMuseum, and it looks like I'm missing something.
The chart itself installed without problems; I see the Welcome to ChartMuseum start page in the browser,
I also see "health"

curl https://charts.my.page/health
{"healthy":true}

but I can't see info and can't POST anything:

curl https://charts.my.page/info
{"error":"not found"}

curl --data-binary "@odoo-17.0.2.tgz" https://charts.my.page/api/charts
{"error":"not found"}

I'm on Rancher 2.5.4, RKE k8s 1.19.4.
This is the YAML I used during the chart installation:

---
  env: 
    open: 
      DEBUG: "false"
  ingress:
    enabled: true
    hosts:
      - name: charts.my.page
        path: /
        tls: true
        tlsSecret: charts.my.page-tls
    certManager: true
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
      kubernetes.io/ingress.class: nginx
      kubernetes.io/tls-acme: "true"
  persistence: 
    accessMode: "ReadWriteMany"
    enabled: "true"
    size: "2Gi"
    storageClass: "rook-cephfs"
  replicaCount: "2"
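A hedged guess based on the symptoms: {"error":"not found"} on /info and /api/* matches the HTTP API being disabled, which the chart exposes via DISABLE_API. It is worth checking whether the deployed release has:

```yaml
env:
  open:
    DISABLE_API: "false"   # the API must be enabled for POST /api/charts
```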

AWS Load Balancer and Chart Museum issue

When trying to install the latest chart version with aws-load-balancer-controller, I get the following:

Diagnostics:
  eks:index:Cluster$aws:eks/cluster:Cluster (eks-cluster-eksCluster)
    Cluster is ready
 
  kubernetes:helm.sh/v3:Chart$kubernetes:apps/v1:Deployment (chartmuseum/chartmuseum)
    [1/2] Waiting for app ReplicaSet be marked available
    warning: [MinimumReplicasUnavailable] Deployment does not have minimum availability.
    warning: [ProgressDeadlineExceeded] ReplicaSet "chartmuseum-78cbfc496f" has timed out progressing.
    [1/2] Waiting for app ReplicaSet be marked available (0/1 Pods available)
    warning: [Pod chartmuseum/chartmuseum-78cbfc496f-8r9kt]: containers with unready status: [chartmuseum]
 
  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:Service (chartmuseum/chartmuseum)
    [1/3] Finding Pods to direct traffic to
 
  kubernetes:helm.sh/v3:Chart$kubernetes:networking.k8s.io/v1:Ingress (chartmuseum/chartmuseum)
    Retry #0; creation failed: Internal error occurred: failed calling webhook "vingress.elbv2.k8s.aws": failed to call webhook: Post "https://aws-load-balancer-webhook-service.aws-lb-controller-ns.svc:443/validate-networking-v1-ingress?timeout=10s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "aws-load-balancer-controller-ca")
    error: resource chartmuseum/chartmuseum was not successfully created by the Kubernetes API server : Internal error occurred: failed calling webhook "vingress.elbv2.k8s.aws": failed to call webhook: Post "https://aws-load-balancer-webhook-service.aws-lb-controller-ns.svc:443/validate-networking-v1-ingress?timeout=10s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "aws-load-balancer-controller-ca")
chartMuseum = Chart(
    'chartmuseum',
    ChartOpts(
        chart="chartmuseum",
        version="3.9.3",
        fetch_opts=FetchOpts(
            repo="https://chartmuseum.github.io/charts"
        ),
        namespace=name,
        values={
            "ingress": {
                "enabled": True,
                "ingressClassName": "alb",
                "pathType": "ImplementationSpecific",
                "annotations": {
                    "alb.ingress.kubernetes.io/backend-protocol": "HTTP",
                    "alb.ingress.kubernetes.io/listen-ports": '[{"HTTPS":443},{"HTTP":80}]',
                    "alb.ingress.kubernetes.io/load-balancer-attributes":"idle_timeout.timeout_seconds=300",
                    "alb.ingress.kubernetes.io/scheme": "internet-facing",
                    "alb.ingress.kubernetes.io/ssl-redirect": "443"
                },
                "hosts": [
                    {
                        "name": f"{chartHostname}.{zoneName}",
                        "path": "/",

                        "tls": False
                    },
                ],
            },
            "env": {
                "open": {
                    "STORAGE": "amazon",
                    "STORAGE_AMAZON_BUCKET": cm_bucket.bucket,
                    "STORAGE_AMAZON_REGION": cm_bucket.region,
                    "DEBUG": True,
                    "DISABLE_API": False,
                    "ALLOW_OVERWRITE": True,
                    "AUTH_ANONYMOUS_GET": False,
                    "DEPTH": 1,
                    "AWS_SDK_LOAD_CONFIG": True,
                },
                "secret": {
                    "BASIC_AUTH_USER": "*****",
                    "BASIC_AUTH_PASS": "********",
                }
            },
            "serviceAccount": {
                "create": True,
                "annotations": {
                    "eks.amazonaws.com/role-arn": cm_role.arn
                }
            }
        }
    ),
    opts=pulumi.ResourceOptions(provider=provider,
                                depends_on=[alb_chart, cm_bucket])
)

Deployment option to allow strategy Recreate

There is an issue when you use a PVC to store your charts and want to re-deploy.

In this case, whenever you re-deploy, because the default RollingUpdate strategy is set on the Deployment, the volume cannot be attached to the new pod while it is still attached to the old one. The new pod will never reach Ready status and the deployment will fail.

Maybe it's a good idea to set the deployment strategy to Recreate if a PVC is set for the release?
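A sketch of what the option could look like (the value name is hypothetical; the template would pass it through to the Deployment's .spec.strategy):

```yaml
# values.yaml -- proposed, not a current chart value
deployment:
  strategy:
    type: Recreate   # avoid two pods contending for a ReadWriteOnce PVC
```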

Missing config in documentation for service accounts and S3 backend

This is from the documentation at https://github.com/chartmuseum/charts/tree/main/src/chartmuseum#permissions-grant-with-iam-roles-for-service-accounts

permissions grant with IAM Roles for Service Accounts

For Amazon EKS clusters, access can be provided with a service account using IAM Roles for Service Accounts.

Specify custom.yaml with such values

env:
  open:
    STORAGE: amazon
    STORAGE_AMAZON_BUCKET: my-s3-bucket
    STORAGE_AMAZON_PREFIX:
    STORAGE_AMAZON_REGION: us-east-1
serviceAccount:
  create: true
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::{aws account ID}:role/{assumed role name}"

The provided values file doesn't work; using it as-is will use the node's role for S3 access instead of the service account's role, which resulted in access denied for me.
The following should be added to the environment variables: AWS_SDK_LOAD_CONFIG: true

env:
  open:
    AWS_SDK_LOAD_CONFIG: true
    STORAGE: amazon
    STORAGE_AMAZON_BUCKET: my-s3-bucket
    STORAGE_AMAZON_PREFIX:
    STORAGE_AMAZON_REGION: us-east-1
serviceAccount:
  create: true
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::{aws account ID}:role/{assumed role name}"

Enable Support for HPA in chartmuseum

Overview:

We deployed ChartMuseum using ArgoCD, but we are unable to use an HPA for scaling the ChartMuseum Deployment's ReplicaSets out and in.
This is because the default value replicas: 1 in the values.yaml file is always added to the Deployment object.

As per the Kubernetes documentation referenced below, we cannot both set replicas on the Deployment and enable a custom HPA.

Proposal:

  • Would it be possible to enable HPA for chartmuseum, and make replicas in the Deployment spec conditional on whether HPA is enabled?
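A sketch of the conditional in the Deployment template (hpa.enabled is an assumed value name, since the chart has no HPA support today):

```yaml
{{- if not .Values.hpa.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
```

Omitting .spec.replicas entirely when the HPA is enabled lets the HPA own the replica count without fighting the chart on every upgrade.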

Reference:

Separate Secrets for HTTP Basic Auth and Cloud Credentials

I will briefly describe my scenario:
On our cluster we run ArgoCD alongside Chartmuseum. They belong to the "core" services of the cluster that are needed for all further functionality. Chartmuseum hosts our private Charts and ArgoCD is responsible for CD.
We want to create a Secret chartmuseum-http-auth that contains username/password for HTTP Basic Auth. ArgoCD and Chartmuseum deployments should read from that Secret to get/set credentials.
This Secret would be created before Chartmuseum itself is deployed.
Additionally we have to ship Cloud Credentials with the Chartmuseum deployment to access GCS/S3/etc. Those would be deployed as part of the Chartmuseum deployment.

The issue: We can not read the credentials from different Secrets. chartmuseum/templates/secret is only created if Values.env.existingSecret is not set. See: https://github.com/chartmuseum/charts/blob/main/src/chartmuseum/templates/secret.yaml#L1
In the deployment however we can only pass one secret name. See: https://github.com/chartmuseum/charts/blob/main/src/chartmuseum/templates/deployment.yaml#L92

My ideal workflow: the deployment of Chartmuseum would create the Secret containing Cloud Credentials just as it does now. However, I want to be able to read HTTP BA credentials from a different Secret that was (somehow) created before the Chartmuseum deployment.

Does that make sense for anybody else as well? 😁
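One possible shape for this in the deployment template (a sketch: envFrom accepts multiple secretRefs; extraEnvFromSecret is a hypothetical value name, and the "chartmuseum.fullname" helper is assumed to match the chart's existing naming):

```yaml
envFrom:
  - secretRef:
      name: {{ include "chartmuseum.fullname" . }}   # chart-managed cloud credentials
  {{- with .Values.env.extraEnvFromSecret }}
  - secretRef:
      name: {{ . }}                                  # e.g. chartmuseum-http-auth
  {{- end }}
```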

Running Chartmuseum in HA mode?

Hello,

With Flux integration enabled and using spot instances, we observe quite a few issues when downloading charts. Could these be mitigated by running Chartmuseum in HA mode?

This issue is more or less to verify whether that is even possible with this chart.

Deprecated use of `targetPort` in ServiceMonitor

The Prometheus Operator has deprecated .spec.endpoints.targetPort in ServiceMonitors in favor of port.

At the moment, the Service in the Helm chart points directly to service.targetPort if set (https://github.com/chartmuseum/charts/blob/main/src/chartmuseum/templates/service.yaml#L38), otherwise directly to the named http port of the main Chartmuseum container.

If the ServiceMonitor should still point to the http port, an additional port should be opened when service.targetPort is used.
Such that the Service goes from

  ports:
  - port: 8080
    targetPort: sidecar-http
    name: sidecar-http
    protocol: TCP

to

  ports:
  - port: 8080
    targetPort: http
    name: http
    protocol: TCP
  - port: 8080 # opening two ports with the same port number is probably not allowed
    targetPort: sidecar-http
    name: sidecar-http
    protocol: TCP

The Ingress resource should be changed accordingly, such as https://github.com/chartmuseum/charts/blob/main/src/chartmuseum/templates/ingress.yaml#L43

If what I propose above is the preferred solution, I could take a look at it.

BUG - Invalid formatting for multiple service annotations

If you put multiple service annotations in the values.yaml file, the template will render an invalid manifest because the spacing will be off.

To reproduce, set up values.yaml to look like:

service:
  servicename: chartmuseum
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 1.2.3.4/5
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    external-dns.alpha.kubernetes.io/hostname: charts.my.domain

Attempt to render the chart

helm template chartmuseum/chartmuseum -f values.yaml         
Error: YAML parse error on chartmuseum/templates/service.yaml: error converting YAML to JSON: yaml: line 6: did not find expected key

with debug (working parts omitted)

---
# Source: chartmuseum/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: chartmuseum
  annotations:
        external-dns.alpha.kubernetes.io/hostname: charts.my.domain
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  labels:
    helm.sh/chart: chartmuseum-3.1.0
    app.kubernetes.io/name: chartmuseum
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "0.13.1"
    app.kubernetes.io/managed-by: Helm
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  loadBalancerSourceRanges:
  - 1.2.3.4/5
  ports:
  - port: 8080
    targetPort: http
    protocol: TCP
    name: http
  selector:
    app.kubernetes.io/name: chartmuseum
    app.kubernetes.io/instance: RELEASE-NAME
.....(other objects rendered properly) .....

Error: YAML parse error on chartmuseum/templates/service.yaml: error converting YAML to JSON: yaml: line 6: did not find expected key
helm.go:81: [debug] error converting YAML to JSON: yaml: line 6: did not find expected key
YAML parse error on chartmuseum/templates/service.yaml
helm.sh/helm/v3/pkg/releaseutil.(*manifestFile).sort
	helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:146
helm.sh/helm/v3/pkg/releaseutil.SortManifests
	helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:106
helm.sh/helm/v3/pkg/action.(*Configuration).renderResources
	helm.sh/helm/v3/pkg/action/action.go:165
helm.sh/helm/v3/pkg/action.(*Install).Run
	helm.sh/helm/v3/pkg/action/install.go:240
main.runInstall
	helm.sh/helm/v3/cmd/helm/install.go:242
main.newTemplateCmd.func2
	helm.sh/helm/v3/cmd/helm/template.go:73
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/[email protected]/command.go:850
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/[email protected]/command.go:958
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/[email protected]/command.go:895
main.main
	helm.sh/helm/v3/cmd/helm/helm.go:80
runtime.main
	runtime/proc.go:225
runtime.goexit
	runtime/asm_amd64.s:1371

Notice how the two annotations on the Service are indented differently; this throws off the parser.
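The usual fix for this class of bug is to render the annotations map with toYaml and a consistent nindent instead of hand-built indentation. A sketch (not the chart's exact template; the "chartmuseum.fullname" helper name is assumed):

```yaml
metadata:
  name: {{ include "chartmuseum.fullname" . }}
  {{- with .Values.service.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
```

Because nindent re-indents every line of the rendered map uniformly, all annotations end up at the same depth regardless of how many there are.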

ChartMuseum caching is not working with dynamic AWS credentials

I am deploying ChartMuseum using the Helm chart, and below is my configuration file:

spec:
  values:
    env:
      open:
        STORAGE: amazon
        STORAGE_AMAZON_BUCKET: xxxx-helm-charts
        STORAGE_AMAZON_PREFIX: xxxx-charts-s3
        STORAGE_AMAZON_REGION: eu-central-1
        AWS_SHARED_CREDENTIALS_FILE: /aws/credentials
        AWS_REGION: eu-central-1
    extraArgs:
      - --cache-interval=15m
    podAnnotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "vault-kubernetes"
        vault.hashicorp.com/agent-configmap: 'xxxx-charts-configmap'
        vault.hashicorp.com/agent-inject-containers: "chartmuseum"
        vault.hashicorp.com/secret-volume-path: "/aws"
    serviceAccount:
      create: false
      name: "default"
      automountServiceAccountToken: true

I am using the Vault AWS dynamic secrets engine to fetch credentials for connecting to S3. All is working fine, except I am getting the error below in my chartmuseum container logs. The secret is rotated successfully by the dynamic secrets engine, but somehow the ChartMuseum code that calls S3 on each cache interval is still using the old credentials. The problem resolves if we restart the pod, but we do not want to add this restart.

{"L":"INFO","T":"2023-04-08T19:35:17.293Z","M":"Rebuilding index for tenant","repo":""}
{"L":"ERROR","T":"2023-04-08T19:35:17.371Z","M":"InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.\n\tstatus code: 403, request id: XXXXXXXXXXXXX, host id: 9+****************************************************************************************=","repo":""}
