
podinfo's Introduction

podinfo


Podinfo is a tiny web application made with Go that showcases best practices of running microservices in Kubernetes. Podinfo is used by CNCF projects like Flux and Flagger for end-to-end testing and workshops.

Specifications:

  • Health checks (readiness and liveness)
  • Graceful shutdown on interrupt signals
  • File watcher for secrets and configmaps
  • Instrumented with Prometheus and OpenTelemetry
  • Structured logging with zap
  • 12-factor app with viper
  • Fault injection (random errors and latency)
  • Swagger docs
  • Timoni, Helm and Kustomize installers
  • End-to-End testing with Kubernetes Kind and Helm
  • Multi-arch container image with Docker buildx and GitHub Actions
  • Container image signing with Sigstore cosign
  • SBOMs and SLSA Provenance embedded in the container image
  • CVE scanning with Trivy

Web API:

  • GET / prints runtime information
  • GET /version prints podinfo version and git commit hash
  • GET /metrics returns HTTP request duration and Go runtime metrics
  • GET /healthz used by Kubernetes liveness probe
  • GET /readyz used by Kubernetes readiness probe
  • POST /readyz/enable signals the Kubernetes LB that this instance is ready to receive traffic
  • POST /readyz/disable signals the Kubernetes LB to stop sending requests to this instance
  • GET /status/{code} returns the status code
  • GET /panic crashes the process with exit code 255
  • POST /echo forwards the call to the backend service and echoes the posted content
  • GET /env returns the environment variables as a JSON array
  • GET /headers returns a JSON with the request HTTP headers
  • GET /delay/{seconds} waits for the specified period
  • POST /token issues a JWT token valid for one minute: JWT=$(curl -sd 'anon' podinfo:9898/token | jq -r .token)
  • GET /token/validate validates the JWT token: curl -H "Authorization: Bearer $JWT" podinfo:9898/token/validate
  • GET /configs returns a JSON with configmaps and/or secrets mounted in the config volume
  • POST/PUT /cache/{key} saves the posted content to Redis
  • GET /cache/{key} returns the content from Redis if the key exists
  • DELETE /cache/{key} deletes the key from Redis if it exists
  • POST /store writes the posted content to disk at /data/hash and returns the SHA1 hash of the content
  • GET /store/{hash} returns the content of the file /data/hash if it exists
  • GET /ws/echo echoes content via websockets: podcli ws ws://localhost:9898/ws/echo
  • GET /chunked/{seconds} uses transfer-encoding type chunked to give a partial response and then waits for the specified period
  • GET /swagger.json returns the API Swagger docs, used for Linkerd service profiling and Gloo routes discovery
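
A quick way to exercise a few of these endpoints, assuming podinfo is reachable at localhost:9898 (for example via kubectl port-forward or the Docker command further down); the cache key is illustrative:

# print version and git commit hash
curl -s localhost:9898/version
# inspect the request headers as seen by the server
curl -s localhost:9898/headers
# write to and read from the Redis-backed cache (requires Redis to be enabled)
curl -s -X POST -d 'hello' localhost:9898/cache/greeting
curl -s localhost:9898/cache/greeting
# hold the response open for two seconds
curl -s localhost:9898/delay/2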

gRPC API:

  • /grpc.health.v1.Health/Check health checking
  • /grpc.EchoService/Echo echoes the received content
  • /grpc.VersionService/Version returns podinfo version and Git commit hash
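
A minimal sketch of calling these with grpcurl, assuming the gRPC server is enabled and listening on an exposed port (9999 below is illustrative) and that server reflection is available; otherwise the proto files must be supplied to grpcurl:

# gRPC health check
grpcurl -plaintext localhost:9999 grpc.health.v1.Health/Check
# version and git commit hash over gRPC
grpcurl -plaintext localhost:9999 grpc.VersionService/Version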

Web UI:

(screenshot: podinfo web UI)

To access the Swagger UI open <podinfo-host>/swagger/index.html in a browser.

Guides

Install

To install podinfo on Kubernetes, the minimum required version is Kubernetes v1.23.

Timoni

Install with Timoni:

timoni -n default apply podinfo oci://ghcr.io/stefanprodan/modules/podinfo

Helm

Install from github.io:

helm repo add podinfo https://stefanprodan.github.io/podinfo

helm upgrade --install --wait frontend \
--namespace test \
--set replicaCount=2 \
--set backend=http://backend-podinfo:9898/echo \
podinfo/podinfo

helm test frontend --namespace test

helm upgrade --install --wait backend \
--namespace test \
--set redis.enabled=true \
podinfo/podinfo

Install from ghcr.io:

helm upgrade --install --wait podinfo --namespace default \
oci://ghcr.io/stefanprodan/charts/podinfo

Kustomize

kubectl apply -k github.com/stefanprodan/podinfo//kustomize
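
The same overlay can be pinned to a specific git ref by adding it to the URL (the tag below is illustrative):

kubectl apply -k "github.com/stefanprodan/podinfo//kustomize?ref=6.4.1"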

Docker

docker run -dp 9898:9898 stefanprodan/podinfo
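
A quick smoke test once the container is up:

# liveness endpoint should report OK
curl -s localhost:9898/healthz
# runtime version and git commit hash
curl -s localhost:9898/version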

Continuous Delivery

To install podinfo on a Kubernetes cluster and keep it up to date with the latest release automatically, you can use Flux.

Install the Flux CLI on macOS and Linux using Homebrew:

brew install fluxcd/tap/flux

Install the Flux controllers needed for Helm operations:

flux install \
--namespace=flux-system \
--network-policy=false \
--components=source-controller,helm-controller

Add podinfo's Helm repository to your cluster and configure Flux to check for new chart releases every ten minutes:

flux create source helm podinfo \
--namespace=default \
--url=https://stefanprodan.github.io/podinfo \
--interval=10m

Create a podinfo-values.yaml file locally:

cat > podinfo-values.yaml <<EOL
replicaCount: 2
resources:
  limits:
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 64Mi
EOL

Create a Helm release for deploying podinfo in the default namespace:

flux create helmrelease podinfo \
--namespace=default \
--source=HelmRepository/podinfo \
--release-name=podinfo \
--chart=podinfo \
--chart-version=">5.0.0" \
--values=podinfo-values.yaml

Based on the above definition, Flux will upgrade the release automatically when a new version of podinfo is released. If the upgrade fails, Flux can roll back to the previous working version.

You can check what version is currently deployed with:

flux get helmreleases -n default

To delete podinfo's Helm repository and release from your cluster run:

flux -n default delete source helm podinfo
flux -n default delete helmrelease podinfo

If you wish to manage the lifecycle of your applications in a GitOps manner, check out this workflow example for multi-env deployments with Flux, Kustomize and Helm.

podinfo's People

Contributors

commixon, cv65kr, dependabot[bot], dmccaffery, duxinxiao, errordeveloper, exfly, flomon, grampelberg, hiddeco, imduffy15, jasonthedeveloper, jaykaku, jjchambl, luxas, michaelkebe, monotek, mstiri, mumoshu, mustafakarci, phoban01, rajatvig, runyontr, seaneagan, stefanprodan, taylormonacelli, tdickman, the-technat, utkuozdemir, ytsarev


podinfo's Issues

suggest rebuild of 6.0.0 as 6.0.1 to address vulnerabilities

Here are the insecure items currently in the 6.0.0 Docker image. I rebuilt it myself without changes to the Dockerfile and these vulnerabilities were addressed.

CRITICAL Vulnerability found in os package type (APKG) - apk-tools (fixed in: 2.12.6-r0)(CVE-2021-36159 - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-36159)
CRITICAL Vulnerability found in os package type (APKG) - libssl1.1 (fixed in: 1.1.1l-r0)(CVE-2021-3711 - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3711)
CRITICAL Vulnerability found in os package type (APKG) - libcrypto1.1 (fixed in: 1.1.1l-r0)(CVE-2021-3711 - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3711)

6.3 version pod won't start

Stuck at ContainerCreating on version 6.3:
helm upgrade --install podinfo podinfo --repo https://stefanprodan.github.io/podinfo --namespace test --create-namespace --set service.enabled=false --set serviceMonitor.enabled=true --set ingress.enabled=true --set ingress.classname=nginx

This works using version 6.2.3:
helm upgrade --install podinfo podinfo --repo https://stefanprodan.github.io/podinfo --version 6.2.3 --namespace test --create-namespace --set service.enabled=false --set serviceMonitor.enabled=true --set ingress.enabled=true --set ingress.classname=nginx

servicemonitor required with prometheus-operator

Running podinfo with flux-operator, prometheus-operator fails to list podinfo as a target. It looks like we need a ServiceMonitor. I'm not sure what needs to be added here, as the following does not work for me (a possible fix is sketched after the manifest):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: podinfo
  labels:
    app: podinfo
spec:
  endpoints:
    - port: "9797"
  selector:
    matchLabels:
      app: podinfo
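
A possible fix, assuming the release was installed from the official chart: enable the ServiceMonitor that ships with the chart (the serviceMonitor.enabled value used elsewhere on this page) instead of hand-writing one:

helm upgrade --install podinfo podinfo/podinfo \
  --namespace test \
  --set serviceMonitor.enabled=true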

feat: update helm chart to support secure-port configuration

# enable tls on the podinfo service
tls:
  enabled: false
  # the name of the secret used to mount the certificate key pair
  secretName:
  # the path where the certificate key pair will be mounted
  certPath: /data/cert
  # the port used to host the tls endpoint on the service
  port: 9899
  # the port used to bind the tls port to the host
  # NOTE: requires privileged container with NET_BIND_SERVICE capability -- this is useful for testing
  # in local clusters such as kind without port forwarding
  hostPort:

# create a certificate manager certificate
certificate:
  create: false
  # the issuer used to issue the certificate
  issuerRef:
    kind: ClusterIssuer
    name: self-signed
  # the hostname / subject alternative names for the certificate
  dnsNames:
    - podinfo

Thoughts?
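
For trying out the proposed tls.secretName value without cert-manager, a self-signed certificate could be mounted via a standard kubernetes.io/tls secret; a minimal sketch (names are illustrative):

# generate a throwaway self-signed certificate key pair
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout tls.key -out tls.crt -subj "/CN=podinfo"
# store it as a TLS secret for the chart to mount
kubectl create secret tls podinfo-tls --cert=tls.crt --key=tls.key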

source controller not able to download repo podinfo

[openness@esi22-vm-26 ~]$ flux create source helm podinfo \
  --namespace=default \
  --url=https://github.com/stefanprodan/podinfo \
  --interval=10m

kubectl describe HelmRepository podinfo

Events:
  Type    Reason  Age  From               Message
  ----    ------  ---  ----               -------
  Normal  error   23s  source-controller  failed to download repository index: Get "https://github.com/stefanprodan/podinfo/index.yaml": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

I am running it in a corporate network which has a proxy enabled. I installed Flux using curl -s https://fluxcd.io/install.sh | sudo bash and then followed the steps in https://github.com/stefanprodan/podinfo#continuous-delivery. How do I start the Flux source controller with the proxy enabled?
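
One possible workaround on proxied clusters is to set the proxy environment variables on source-controller; a sketch, with the proxy address as a placeholder:

# inject proxy settings into the Flux source-controller
kubectl -n flux-system set env deployment/source-controller \
  HTTPS_PROXY=http://proxy.example.com:3128 \
  NO_PROXY=.cluster.local.,.cluster.local,.svc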

Cyan background

Not sure if that's normal, but I just tried podinfo for the first time and it displays a kind of aggressive cyan background instead of the darker blue.

I can see where it comes from; see the following code:

<div class="v-parallax" id="parallax-hero" style="height: 500px; background-color: cyan;">


Publish MAJOR and MAJOR.MINOR tags for Docker images

Currently when publishing Docker images, tags are pushed with the following format:

  • version tag "MAJOR.MINOR.PATCH" e.g. "6.4.1"
  • mutable tag latest

It would be nice if the following mutable tags were also pushed:

  • MAJOR e.g. "6" - points to the latest release for a given MAJOR version (6 -> 6.4.1)
  • MAJOR.MINOR e.g. "6.4" - points to the latest release for a given MAJOR.MINOR version (6.4 -> 6.4.1)

This way, users could easily pull in the latest changes for a given MAJOR or MAJOR.MINOR version.
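
With such tags published, pinning to a release train would look like this (tags are illustrative, assuming the proposal is implemented):

# track the latest 6.x.x release
docker pull ghcr.io/stefanprodan/podinfo:6
# track the latest 6.4.x release
docker pull ghcr.io/stefanprodan/podinfo:6.4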

no longer needed replace line in go.mod

It seems that stefanprodan/podinfo now indirectly depends on the newer, safe version v0.5.0 of golang.org/x/text.
Keeping the following replace line in go.mod makes no sense but limits updates of golang.org/x/text. Should it be dropped or commented out?

https://github.com/stefanprodan/podinfo/blob/master/go.mod#L40

// Fix CVE-2022-32149
replace golang.org/x/text => golang.org/x/text v0.4.0

https://github.com/stefanprodan/podinfo/blob/master/go.mod#L83

golang.org/x/text v0.5.0 // indirect
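
To confirm the directive is no longer needed, one can check what pulls in golang.org/x/text and re-tidy after removing the line; a sketch:

# show which packages depend on golang.org/x/text
go mod why golang.org/x/text
# show the version currently selected
go list -m golang.org/x/text
# after deleting the replace line from go.mod:
go mod tidy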

Behind proxy podinfo is unhealthy

I am using it for testing purposes, behind a proxy.
After deployment, podinfo shows as unhealthy. Any suggestions for getting it running, please?

Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  2m38s                 default-scheduler  Successfully assigned vyomsoft-dev/podinfo-9589d9bb-dqjbd to node2
  Normal   Killing    2m8s                  kubelet            Container podinfo failed liveness probe, will be restarted
  Normal   Pulled     2m5s (x2 over 2m38s)  kubelet            Container image "dl-vyomharbor.dev./proxy-cached/stefanprodan/podinfo:latest" already present on machine
  Normal   Created    2m5s (x2 over 2m37s)  kubelet            Created container podinfo
  Normal   Started    2m5s (x2 over 2m37s)  kubelet            Started container podinfo
  Warning  Unhealthy  98s (x12 over 2m35s)  kubelet            Readiness probe failed: Get "http://10.233.75.8:80/": dial tcp 10.233.75.8:80: connect: connection refused
  Warning  Unhealthy  98s (x6 over 2m28s)   kubelet            Liveness probe failed: Get "http://10.233.75.8:80/": dial tcp 10.233.75.8:80: connect: connection refused

Thank you

Helm charts/podinfo/templates/ingress.yaml can't evaluate field hosts

I'm trying to test my cluster with the podinfo chart deployed via fluxcd, but it seems to break on values.ingress.hosts.

Helm upgrade failed: template: podinfo/templates/ingress.yaml:20:18: executing "podinfo/templates/ingress.yaml" at <.hosts>: can't evaluate field hosts in type interface {} Last Helm logs: preparing upgrade for podinfo resetting values to the chart's original version

And here's my helmrelease:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: staging
spec:
  releaseName: podinfo
  chart:
    spec:
      chart: podinfo
      sourceRef:
        kind: HelmRepository
        name: podinfo
        namespace: flux-system
      version: ">=1.0.0-alpha"
  interval: 5m
  install:
    remediation:
      retries: 3
  # Default values
  # https://github.com/stefanprodan/podinfo/blob/master/charts/podinfo/values.yaml
  values:
    cache: redis-master.redis:6379
    ingress:
      className: "traefik"
      enabled: true
      hosts:
        - host: stage.mydomain.com
          paths:
          - path: /
            pathType: ImplementationSpecific
      tls:
        secretName: panel-secret
        domains:
          - main: stage.mydomain.com

@stefanprodan I'd appreciate any help ;)

localhost:8080

Hi,

if I want to access the project in a browser at localhost:8080/project, it shows the error message "not found".

aad-pod-identity demo?

Can I use the podinfo image for an aad-pod-identity demo? Will it work as expected if I add the identity details to the manifests?

Separate the endpoints under app and management ports

When we write an app in Spring Boot, we have a practice of exposing the actuator (Spring's health, readiness, version, etc. APIs) on a separate port and the actual API-layer endpoints on an entirely separate port.

This also allows us to create better network policies.

Support dynamic paths endpoint?

Would it be desirable to add a podinfo endpoint that accepts dynamic paths?

For example, pattern /echo/** would accept /echo, /echo/foo, /echo/foo/bar, etc. All paths would be handled the same way.

For now, I'm using k8s echoserver (registry.k8s.io/e2e-test-images/echoserver:2.5) to test this, as it always responds the same way for any URI path.

I use this feature to test arbitrary path patterns passed through a reverse proxy.

Describe the release process

I couldn't wrap my head around who actually calls make version-set; I think you should describe the release process in the readme. The project is used as an example in Flux tutorials, and this makes the CI part of it hard to understand.

exec user process caused "exec format error"

I'm following https://medium.com/@stefanprodan/running-kubernetes-on-scaleway-bare-metal-with-terraform-and-kubeadm-1cf18aae32d5
and I'm in the process of updating everything to the most recent versions of the involved software (terraform, metrics-server versions, kubernetes versions, etc.)

I'm almost done, but when trying out the podinfo deployment I get just an error message in the pod logs when starting the container this way:

$ kubectl --kubeconfig ./$(terraform output kubectl_config) \
>   apply -f https://raw.githubusercontent.com/stefanprodan/k8s-podinfo/master/deploy/auto-scaling/podinfo-svc-nodeport.yaml
service/podinfo-nodeport created

kubectl --kubeconfig ./$(terraform output kubectl_config) \
>   apply -f https://raw.githubusercontent.com/stefanprodan/k8s-podinfo/master/deploy/auto-scaling/podinfo-dep.yaml
deployment.apps/podinfo created
standard_init_linux.go:211: exec user process caused "exec format error"


Security Misconfiguration: HTTP Without TLS

Dear Colleague,

We are looking to find ways to help developers find security misconfigurations, i.e., Kubernetes manifest configurations that violate security best practices for Kubernetes manifests.

We have noticed an instance of HTTP without TLS/SSL in one of your Kubernetes manifests. The recommended practice is the use of secure HTTP for each team's development and production environments. Enabling TLS ensures secure communication between cluster components; otherwise, the communication could be susceptible to man-in-the-middle attacks.

Location of security misconfiguration:

- --backend-url=http://backend:9898/echo

Please use SSL/TLS to fix this misconfiguration. We would like to hear if you agree to fix this misconfiguration or have fixed the misconfiguration.

More labels for http_requests_total

Currently the metric called http_requests_total has only one label called status.
Adding the same labels as defined for the histogram metric called http_request_duration_seconds_bucket would be nice.
Namely:

  • method
  • path

I think it can be obtained from the http_request_duration_seconds_count but it's not straightforward.
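
For reference, a sketch of deriving that breakdown from the histogram count series with PromQL, assuming a Prometheus server at localhost:9090:

# per-method/per-path request rate derived from the histogram count
curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=sum by (method, path) (rate(http_request_duration_seconds_count[5m]))'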

Use service account in tests

I'd like to be able to run the Helm tests using a private repo to demonstrate how this is done. The service account can be added to the tests that pull images by adding this to the Pod specs:

spec:
  {{- if .Values.serviceAccount.enabled }}
  serviceAccountName: {{ template "podinfo.serviceAccountName" . }}
  {{- end }}

"error":"dial tcp: missing address","server":"podinfo-redis:6379"

Issue

Attempting to test some Redis monitoring; the podinfo main application can't seem to connect to the Redis pods.

values.yml

cache: "podinfo-redis:6379"
# Redis deployment
redis:
  enabled: true
  repository: redis
  tag: 6.0.8

deployment logs

❯ kubectl logs -n podinfo deployment/podinfo
Found 2 pods, using pod/podinfo-65d549885d-f8d2j
{"level":"info","ts":"2022-04-11T23:02:22.027Z","caller":"podinfo/main.go:150","msg":"Starting podinfo","version":"6.1.2","revision":"","port":"9898"}
{"level":"info","ts":"2022-04-11T23:02:22.032Z","caller":"api/server.go:265","msg":"Starting HTTP Server.","addr":":9898"}
{"level":"warn","ts":"2022-04-11T23:02:22.032Z","caller":"api/cache.go:159","msg":"cache server is offline","error":"dial tcp: missing address","server":"podinfo-redis:6379"}
{"level":"warn","ts":"2022-04-11T23:02:52.032Z","caller":"api/cache.go:159","msg":"cache server is offline","error":"dial tcp: missing address","server":"podinfo-redis:6379"}
{"level":"warn","ts":"2022-04-11T23:03:22.036Z","caller":"api/cache.go:159","msg":"cache server is offline","error":"dial tcp: missing address","server":"podinfo-redis:6379"}
{"level":"warn","ts":"2022-04-11T23:03:52.032Z","caller":"api/cache.go:159","msg":"cache server is offline","error":"dial tcp: missing address","server":"podinfo-redis:6379"}

I'm able to nslookup the podinfo-redis service which returns the correct address. I also spun up a test redis-cli container and was able to connect successfully.

redis-cli test

root@redis-cli:/data# redis-cli -h podinfo-redis -p 6379
podinfo-redis:6379> exit
root@redis-cli:/data# redis-cli -h podinfo-redis -p 6379 --stat
------- data ------ --------------------- load -------------------- - child -
keys       mem      clients blocked requests            connections          
1          866.85K  2       0       7556 (+0)           641         
1          866.85K  2       0       7569 (+13)          642         
1          866.85K  2       0       7573 (+4)           642         
1          866.85K  2       0       7580 (+7)           642         
1          866.85K  2       0       7584 (+4)           642         
1          866.85K  2       0       7588 (+4)           642         
1          866.85K  2       0       7592 (+4)           642         
1          866.85K  2       0       7596 (+4)           642  

Support Swagger

Generate Swagger definition for the podinfo API and self host the Swagger UI.

Confusion regarding comments on graceful shutdown

// wait for Kubernetes readiness probe to remove this instance from the load balancer

"wait for Kubernetes readiness probe to remove this instance from the load balancer"
I think this is not about waiting for the readiness probe, since the readiness probe takes a long time to take effect. It seems this is actually because the SIGTERM and the removal from service happen concurrently, and we need to wait a moment?

cosign verify command in .cosign/README.md is incorrect

While working on a POC with Sigstore's cosign and policy-controller, I attempted to add podinfo, but it was blocked by the policy-controller because the signature couldn't be validated.

I went to the .cosign/README document, and tried to run the command manually, but it failed because the path to the cosign.pub is no longer valid:

$ cosign verify -key https://raw.githubusercontent.com/stefanprodan/podinfo/master/cosign/cosign.pub ghcr.io/stefanprodan/podinfo-deploy:latest
WARNING: the flag -key is deprecated and will be removed in a future release. Please use the flag --key.
Error: loading public key: pem to public key: PEM decoding failed
main.go:46: error during command execution: loading public key: pem to public key: PEM decoding failed

After updating the path to the correct path .cosign/cosign.pub, it failed attempting to pull the Rekor public key:

$ cosign verify -key https://raw.githubusercontent.com/stefanprodan/podinfo/74c60a927c7588900c28960d37ff2e5118d0eedf/.cosign/cosign.pub ghcr.io/stefanprodan/podinfo-deploy:latest
WARNING: the flag -key is deprecated and will be removed in a future release. Please use the flag --key.
panic: error retrieving rekor public key
# stacktrace truncated

After some investigation, I found the cause to be that my version of the cosign executable was too old. Updating to the current version worked. I suspect any 2.x version would have succeeded.
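
For reference, with a current (2.x) cosign and the corrected key path, the verification from the issue looks like this:

cosign verify --key https://raw.githubusercontent.com/stefanprodan/podinfo/master/.cosign/cosign.pub \
  ghcr.io/stefanprodan/podinfo-deploy:latest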

Setting a port on the PodSpec results in a HTTP server crash

{"level":"fatal","ts":"2019-03-21T17:48:59.820Z","caller":"api/server.go:127","msg":"HTTP server crashed","error":"listen tcp: address :tcp://10.111.55.252:9898: too many colons in address","stacktrace":"github.com/stefanprodan/k8s-podinfo/pkg/api.(*Server).ListenAndServe.func1\n\t/go/src/github.com/stefanprodan/k8s-podinfo/pkg/api/server.go:127"}

support for arm / arm64

Thanks a lot for this great project here.
Is it possible to build multiarch images [arm / arm64] for it?

feat: add support for secure-port (tls) in addition to http

AWS recently released a new load balancer controller with nlb-ip load balancing support in EKS, which is useful for E2E TLS termination scenarios, especially in fargate.

Adding support for secure-port to this container is useful for test / demo scenarios using nlb-ip load balancing.

Thinking of the following design:

  • add secure-port argument - 0 as a default value (secure port disabled)
  • add cert-path argument (for loading tls certificate key pair as expected from a volume mount for a tls secret) - /data/cert as a default value
  • add function to start a tls listener using the same handler mux

I'm happy to implement this if the community is in agreement.
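
Under that proposal, exercising the TLS listener locally could look like this; a sketch where the flag names and port follow the design above (they are not yet implemented):

# start podinfo with the proposed secure port and cert path
podinfo --secure-port=9899 --cert-path=/data/cert
# -k skips verification, since the test certificate is self-signed
curl -k https://localhost:9899/healthz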

Add gRPC API

Implement a gRPC version of the API (on a different port) for podinfo and generate the client in podcli.

Adding gRPC APIs functionality

As there is already a gRPC server, gRPC APIs (replicating the HTTP ones) would be a valuable addition to the suite as a whole. I would like to discuss this further and contribute. Thanks.

Fix group for app user

The current Dockerfile has a typo in the adduser invocation, which leads to the app group not being used for the app user:

$ docker container run -ti --rm ghcr.io/stefanprodan/podinfo:5.1.2 id app
uid=100(app) gid=65533(nogroup) groups=65533(nogroup),65533(nogroup)
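
A likely fix is to create the group explicitly and pass it to adduser; a sketch of the corrected Dockerfile step (the exact flags in the actual Dockerfile may differ):

# create the app group and an app user that belongs to it (Alpine/BusyBox syntax)
RUN addgroup -S app && adduser -S -G app app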

Be able to set swagger "Base URL" via helm chart

When I expose podinfo via ingress, the swagger Base URL is still localhost:9898, and therefore each request has to be edited to work. Would it be possible to set it as a value in the Helm chart? As far as I can see, this is not possible at the moment.

Use kyverno apply

Consider migrating from conftest to kyverno. This would align tooling with flux v2, and then you can leverage the updated PSS tests from kyverno/policies.

Add imagePullSecrets to service account

To demonstrate using a service account for holding image pull secrets on a private repo, add serviceAccount.imagePullSecrets: [] to the values and

{{- with .Values.serviceAccount.imagePullSecrets }}
imagePullSecrets:
  {{- toYaml . | nindent 2 }}
{{- end }}

to serviceAccount.yaml
