
helm-charts-hardened's Introduction

SPIFFE Logo

Production Phase

The Secure Production Identity Framework For Everyone (SPIFFE) Project defines a framework and set of standards for identifying and securing communications between application services. At its core, SPIFFE is:

  • A standard defining how services identify themselves to each other. These are called SPIFFE IDs and are implemented as Uniform Resource Identifiers (URIs).

  • A standard for encoding SPIFFE IDs in a cryptographically verifiable document called a SPIFFE Verifiable Identity Document (SVID).

  • An API specification for issuing and/or retrieving SVIDs. This is the Workload API.
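
As a concrete illustration of the first point, a SPIFFE ID is a URI naming a trust domain and an optional workload path (the names below are illustrative):

```
spiffe://example.org/billing/payments
```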

The SPIFFE Project has a reference implementation, SPIRE (the SPIFFE Runtime Environment), which, in addition to the above:

  • Performs node and workload attestation.

  • Implements a signing framework for securely issuing and renewing SVIDs.

  • Provides an API for registering nodes and workloads, along with their designated SPIFFE IDs.

  • Provides and manages the rotation of keys and certs for mutual authentication and encryption between workloads.

  • Simplifies access from identified services to secret stores, databases, services meshes and cloud provider services.

  • Provides interoperability and federation with SPIFFE-compatible systems across heterogeneous environments and administrative trust boundaries.

SPIFFE is a graduated project of the Cloud Native Computing Foundation (CNCF). If you are an organization that wants to help shape the evolution of technologies that are container-packaged, dynamically-scheduled and microservices-oriented, consider joining the CNCF.

SPIFFE Standards

Getting Started

  • spiffe: This repository includes the SPIFFE ID, SVID and Workload API specifications, example code, and tests, as well as project governance, policies, and processes.
  • spire: This is a reference implementation of SPIFFE and the SPIFFE Workload API that can be run on and across varying hosting environments.
  • go-spiffe: Golang client libraries.
  • java-spiffe: Java client libraries.

Communications

Contribute

SIGs & Working Groups

Most community activity is organized into Special Interest Groups (SIGs), time-bounded working groups, and our monthly community-wide meetings. SIGs follow these guidelines, although each may operate differently depending on their needs and workflows. Each group's material can be found in the /community directory of this repository.

Name Lead Group Slack Channel Meetings
SIG-Community Umair Khan (HPE) Here Here Notes
SIG-Spec Evan Gilman (VMware) Here Here Notes
SIG-SPIRE Daniel Feldman (HPE) Here Here Notes

Follow the SPIFFE Project

You can find us on Github and Twitter.

SPIFFE SSC

The SPIFFE Steering Committee meets on a regular cadence to review project progress, address maintainer needs, and provide feedback on strategic direction and industry trends. Community members interested in joining this call can find details below.

To contact the SSC privately, please send an email to [email protected].

helm-charts-hardened's People

Contributors

anhpatel, cccsss01, dennisgove, dependabot[bot], developer-guy, dfeldman, drewwells, edwbuck, ericpfisher, faisal-memon, github-actions[bot], grameshtwilio, inverseintegral, jer8me, kenhuffmanatnice, kfox1111, krishnakv, laithlite, maia-iyer, marcofranssen, mattiasgees, mchurichi, mcrors, moritzschmitz-oviva, mrsabath, petercable, sabre1041, simonostendorf, spire-helm-version-checker[bot], truongnht


helm-charts-hardened's Issues

Provide a way to validate kubelet identity on AKS

Hello everyone,

We are encountering an issue with SPIRE while utilizing it via the Helm Chart on Azure AKS, specifically when the skipKubeletVerification flag is disabled.

The error we're encountering is as follows:

time="2024-02-06T17:28:29Z" level=error msg="Failed to collect all selectors for PID" error="workload attestor \"k8s\" failed: rpc error: code = Internal desc = workloadattestor(k8s): unable to perform request: Get \"https://127.0.0.1:10250/pods\": x509: certificate signed by unknown authority" pid=8747 subsystem_name=workload_attestor

It appears that Azure generates separate self-signed certificates for the kubelet and kube-api server. Here are some details regarding the certificates:

Kube API Certificate

> kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 --decode | openssl x509 -text -noout | head -n12

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            46:01:92:5f:0a:b4:5e:18:2a:3b:30:28:20:79:d0:24
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=ca
        Validity
            Not Before: Feb  1 13:35:35 2024 GMT
            Not After : Feb  1 13:45:35 2054 GMT
        Subject: CN=ca

Spire Agent Certificate

via Service Account and mounted on /var/run/secrets/kubernetes.io/serviceaccount/ca.crt

> kubectl get configmaps -n spire kube-root-ca.crt -o json | jq -r '.data[]' |  openssl x509 -text -noout | head -n12

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            46:01:92:5f:0a:b4:5e:18:2a:3b:30:28:20:79:d0:24
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=ca
        Validity
            Not Before: Feb  1 13:35:35 2024 GMT
            Not After : Feb  1 13:45:35 2054 GMT
        Subject: CN=ca
        Subject Public Key Info:

Kubelet Certificate

via Ephemeral Container and OpenSSL CMD

>> kubectl debug -it -n spire spire-agent-XYZ --image=ubuntu --target=spire-agent
>> openssl s_client -showcerts -connect 127.0.0.1:10250 2>/dev/null </dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | openssl x509 -text -noout | head -n12

Certificate:
	Data:
    	Version: 3 (0x2)
    	Serial Number:
        		36:3c:9e:af:fe:d3:55:b8:79:16:0a:50:5d:b8:75:1d:e9:12:5c:0b
    	Signature Algorithm: sha256WithRSAEncryption
    	Issuer: CN = aks-system-29976880-vmss000000
    	Validity
        	Not Before: Feb  7 09:07:17 2024 GMT
        	Not After : Feb  2 09:07:17 2044 GMT
    	Subject: CN = aks-system-29976880-vmss000000
    	Subject Public Key Info:

Has anyone encountered similar issues on Azure? If my assumption is correct and the spire-agent is unable to validate the kubelet server certificate due to a missing CA certificate, a possible solution could be for the Helm chart to provide the kubelet certificate to the spire-agent via a hostPath volume mount and set the path via the kubelet_ca_path parameter. This approach may allow the spire-agent to validate the identity of the kubelet.
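
A minimal sketch of what such a workaround could look like, assuming the spire-agent subchart exposes extraVolumes/extraVolumeMounts and some knob wiring kubelet_ca_path into the k8s workload attestor (the kubeletCaPath key and the host path below are hypothetical, not confirmed chart values):

```yaml
spire-agent:
  extraVolumes:
    # Hypothetical: mount the node's kubelet serving CA from the host
    - name: kubelet-ca
      hostPath:
        path: /etc/kubernetes/certs/kubeletserver.crt   # AKS location, may vary
        type: File
  extraVolumeMounts:
    - name: kubelet-ca
      mountPath: /run/spire/kubelet-ca
      readOnly: true
  # Hypothetical setting to feed kubelet_ca_path to the k8s workload attestor
  kubeletCaPath: /run/spire/kubelet-ca/kubeletserver.crt
```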

Read only containers

We should be able to further harden the configuration by setting the containers to read-only.

Upgrade Job always changes

When running helm upgrade with 0.15.1, the pre-upgrade Job and ClusterRole always show a diff:

spire-system, spire-server-pre-upgrade, ClusterRole (rbac.authorization.k8s.io) has changed:
...
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: spire-server-pre-upgrade
    annotations:
      "helm.sh/hook": pre-upgrade
      "helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded, hook-failed
  rules:
    - apiGroups: ["admissionregistration.k8s.io"]
      resources: ["validatingwebhookconfigurations"]
-     resourceNames: ["spire-system-spire-controller-manager-webhook"]
+     resourceNames: ["spire-controller-manager-webhook"]
      verbs: ["get", "patch"]
spire-system, spire-server-pre-upgrade, Job (batch) has changed:
...
        securityContext:
          {}
        containers:
        - name: post-install-job
          securityContext:
            {}
          image: docker.io/rancher/kubectl:v1.26.10
          args:
            - patch
            - validatingwebhookconfiguration
-           - spire-system-spire-controller-manager-webhook
+           - spire-controller-manager-webhook

Custom CA

Until Kubernetes has a native way of adding CAs to a container's chain of trust, we need a way to do it on the SPIRE containers ourselves.
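
One possible shape for such a knob, purely illustrative (nothing like this exists in the chart yet; the key names and values are hypothetical):

```yaml
global:
  spire:
    # Hypothetical: extra CA certificates appended to each container's trust bundle
    customCA:
      configMapName: my-org-ca-bundle
      key: ca.crt
```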

Backup job to s3

We should support a CronJob to back up the SQLite datastore to S3 periodically.

unsupportedBuiltInPlugins - nodeAttestor x509pop

# NOTE: This is unsupported and only to configure currently supported spire built in plugins but plugins unsupported by the chart.
# Upgrades wont be tested for anything under this config. If you need this, please let the chart developers know your needs so we
# can prioritize proper support.
## @skip unsupportedBuiltInPlugins

Feature request: add nodeAttestor support to the spire-agent chart.
x509pop is an easy way to get the spire-agent-upstream connected to a spire-server that sits outside Kubernetes. It would be nice to have official support for x509pop on the spire-agent.
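
A hedged sketch of configuring x509pop through the unsupported escape hatch quoted above (the exact nesting under unsupportedBuiltInPlugins is illustrative and should be checked against the chart's values; private_key_path and certificate_path are the x509pop agent plugin's documented settings):

```yaml
spire-agent:
  unsupportedBuiltInPlugins:
    nodeAttestor:
      x509pop:
        plugin_data:
          # Key and certificate used to prove possession to the server
          private_key_path: /run/spire/x509pop/agent.key
          certificate_path: /run/spire/x509pop/agent.crt
```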

Also, I've only used helm a few times in the past, but I'm open to learning and contributing to the project.

Better repo readme

It was brought up that when visiting github.com/spiffe/helm-charts-hardened, there are no instructions on how to install the chart.

what to do with examples/production

examples/production/ is not as useful as it was in the past. The vast majority of things it did are now in the chart via recommendations. The documentation facet of it is better handled via chart/README.md and the spiffe.io website. And the test functionality is a little counter to it being a good example.

So, we should figure out what to do with it.

Currently thinking of moving it to tests/integration/production

spire-agent: cli commands do not function as expected

The default setup of this container does not align with the defaults assumed by the CLI. As a result, most kubectl exec commands fail unless somebody has specific knowledge of the correct overrides. I think the best approach would be to place the config.yaml into a default location so other settings like socket-path can be derived from loading the config file. Here are some examples of commands that do not work.

I believe /run/spire/agent-socket/api.sock was in an example somewhere, but the actual socket name is spire-agent.sock, as specified in the default config file:

$ kubectl -n spire-server exec daemonset/spire-agent -- /opt/spire/bin/spire-agent api validate jwt -audience 'us-box-3' -socketPath /run/spire/agent-sockets/api.sock -svid '...'
Defaulted container "spire-agent" out of: spire-agent, init (init)
unable to validate JWT SVID: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /run/spire/agent-sockets/api.sock: connect: no such file or directory"
command terminated with exit code 1

Or if you don’t specify a socketPath, you get an error like this

$ kubectl -n spire-server exec daemonset/spire-agent -- /opt/spire/bin/spire-agent api validate jwt -audience 'us-box-3' -svid '...'
unable to validate JWT SVID: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /tmp/spire-agent/public/api.sock: connect: no such file or directory"

Then if I try to find the socket filename on the container filesystem, I'm stuck on the environment's limited CLI tools:

error: Internal error occurred: error executing command in container: Internal error occurred: error executing command in container: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "7870b8cd1679b1a01cdc0e26edace55010d7b20fbc815ea9f3f7d9337c936ee1": OCI runtime exec failed: exec failed: unable to start container process: exec: "ls": executable file not found in $PATH: unknown

A/C

  • spire-agent cli commands work without specifying the config or socket path locations
  • add spire-agent as the default entrypoint so the user doesn't have to figure out it's in /opt/spire/bin

Automated version update PRs not running tests

When creating automated PRs from the cron job that checks versions, the tests aren't being run automatically. We have to push a commit to the branch to force the tests to run.

The suggestion is to use a GitHub App.

Tornjak NOTES.txt require updates

Currently Tornjak (both backend and frontend) ignores the ingress information and always provides the following info:

Access Tornjak:

  kubectl -n spire-server port-forward service/spire-tornjak-frontend 3000:3000

Ensure you have port-forwarding for tornjak-backend as well.

Open browser to: http://localhost:3000

This should be updated to show the ingress details if ingress is selected.

Improvements to handling OpenShift Ingress

Functionality was added to configure Ingress resources for OpenShift, providing a streamlined way of integrating these charts into that ecosystem.

These opinionated configurations work great for the simple use case. However, challenges have been experienced when customizing for more complex configurations and improvements can be made to support additional integrations.

Challenges

  1. Annotations overwritten

Annotations that are set on Ingress resources are overwritten by opinionated configurations.

  2. Inability to fully define certain properties

Certain properties, such as path and pathPrefix, cannot be defined.

  3. Opinionated TLS

OpenShift manages certain configurations via annotations along with applying certain defaults. For example, if the route.openshift.io/termination annotation is set to edge, it is still possible to define the TLS certificate that will be applied to the ingress.

  4. Holistic alignment of Ingress configurations

These charts include several Ingress resources exposing varying capabilities. The configurations and options are currently not uniform, which causes challenges when looking to apply certain configurations.

OpenShift Logic

OpenShift enables the "upconverting" of native Ingress resources to OpenShift Routes. The logic that is employed is located here

The following are several valid examples that can be defined within OpenShift to expose access to resources using Ingress resources

  1. Passthrough Termination

No TLS termination occurs at the Ingress Controller as the backend pod is serving SSL certificates

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    meta.helm.sh/release-name: spire
    meta.helm.sh/release-namespace: spire-mgmt
    route.openshift.io/termination: passthrough
  labels:
    app.kubernetes.io/instance: spire
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: spiffe-oidc-discovery-provider
    app.kubernetes.io/version: 1.8.7
    helm.sh/chart: spiffe-oidc-discovery-provider-0.1.0
  name: spire-example-passthrough
  namespace: spire-server
spec:
  rules:
  - host: <hostname>
    http:
      paths:
      - backend:
          service:
            name: spire-spiffe-oidc-discovery-provider
            port:
              number: 443
        pathType: ImplementationSpecific

  2. Edge Termination using default Ingress Certificate

TLS termination occurs at the OpenShift Ingress Router using the default certificate defined within the Ingress Controller. Traffic to the backend pod is unencrypted.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    meta.helm.sh/release-name: spire
    meta.helm.sh/release-namespace: spire-mgmt
    route.openshift.io/termination: edge
  labels:
    app.kubernetes.io/instance: spire
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: spiffe-oidc-discovery-provider
    app.kubernetes.io/version: 1.8.7
    helm.sh/chart: spiffe-oidc-discovery-provider-0.1.0
  name: spire-example-edge
  namespace: spire-server
spec:
  rules:
  - host: <hostname>
    http:
      paths:
      - backend:
          service:
            name: spire-spiffe-oidc-discovery-provider
            port:
              number: 80
        path: /
        pathType: Prefix

  3. Edge Termination using a Specified Certificate

TLS termination occurs at the OpenShift Ingress Router using a provided certificate. Traffic to the backend pod is unencrypted.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    meta.helm.sh/release-name: spire
    meta.helm.sh/release-namespace: spire-mgmt
    route.openshift.io/termination: edge
  labels:
    app.kubernetes.io/instance: spire
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: spiffe-oidc-discovery-provider
    app.kubernetes.io/version: 1.8.7
    helm.sh/chart: spiffe-oidc-discovery-provider-0.1.0
  name: spire-example-edge
  namespace: spire-server
spec:
  rules:
  - host: <hostname>
    http:
      paths:
      - backend:
          service:
            name: spire-spiffe-oidc-discovery-provider
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - <hostname>
    secretName: <frontend_cert_secret>

  4. Reencrypt using default Ingress certificate and trusted destination certificate

TLS reencryption occurs using the default Ingress certificate and a destination certificate that is trusted within the cluster, most likely generated via Service Serving Certificates.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    meta.helm.sh/release-name: spire
    meta.helm.sh/release-namespace: spire-mgmt
    route.openshift.io/termination: reencrypt
  labels:
    app.kubernetes.io/instance: spire
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: spiffe-oidc-discovery-provider
    app.kubernetes.io/version: 1.8.7
    helm.sh/chart: spiffe-oidc-discovery-provider-0.1.0
  name: spire-example-reencrypt
  namespace: spire-server
spec:
  rules:
  - host: <hostname>
    http:
      paths:
      - backend:
          service:
            name: spire-spiffe-oidc-discovery-provider
            port:
              number: 443
        path: /
        pathType: Prefix

  5. Reencrypt using provided Ingress certificate and trusted destination certificate

TLS reencryption occurs using a provided Ingress certificate and a destination certificate that is trusted within the cluster, most likely generated via Service Serving Certificates.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    meta.helm.sh/release-name: spire
    meta.helm.sh/release-namespace: spire-mgmt
    route.openshift.io/termination: reencrypt
  labels:
    app.kubernetes.io/instance: spire
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: spiffe-oidc-discovery-provider
    app.kubernetes.io/version: 1.8.7
    helm.sh/chart: spiffe-oidc-discovery-provider-0.1.0
  name: spire-example-reencrypt-frontend-dest
  namespace: spire-server
spec:
  rules:
  - host: <hostname>
    http:
      paths:
      - backend:
          service:
            name: spire-spiffe-oidc-discovery-provider
            port:
              number: 443
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - <hostname>
    secretName: <frontend_cert_secret>

  6. Reencrypt using provided Ingress certificate and provided destination certificate

TLS reencryption occurs using a provided Ingress certificate along with a provided certificate served by the pod.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    meta.helm.sh/release-name: spire
    meta.helm.sh/release-namespace: spire-mgmt
    route.openshift.io/destination-ca-certificate-secret: <destination_cert_secret>
    route.openshift.io/termination: reencrypt
  labels:
    app.kubernetes.io/instance: spire
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: spiffe-oidc-discovery-provider
    app.kubernetes.io/version: 1.8.7
    helm.sh/chart: spiffe-oidc-discovery-provider-0.1.0
  name: spire-example-reencrypt-frontend-dest
  namespace: spire-server
spec:
  rules:
  - host: <hostname>
    http:
      paths:
      - backend:
          service:
            name: spire-spiffe-oidc-discovery-provider
            port:
              number: 443
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - <hostname>
    secretName: <frontend_cert_secret>

  7. Reencrypt using default Ingress certificate and provided destination certificate

TLS reencryption occurs using the default Ingress certificate along with a provided certificate served by the pod.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    meta.helm.sh/release-name: spire
    meta.helm.sh/release-namespace: spire-mgmt
    route.openshift.io/destination-ca-certificate-secret: <destination_cert_secret>
    route.openshift.io/termination: reencrypt
  labels:
    app.kubernetes.io/instance: spire
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: spiffe-oidc-discovery-provider
    app.kubernetes.io/version: 1.8.7
    helm.sh/chart: spiffe-oidc-discovery-provider-0.1.0
  name: spire-example-reencrypt-frontend-dest
  namespace: spire-server
spec:
  rules:
  - host: <hostname>
    http:
      paths:
      - backend:
          service:
            name: spire-spiffe-oidc-discovery-provider
            port:
              number: 443
        path: /
        pathType: Prefix

Default container setting

There are a couple of pods that users may want to exec into and run commands:

Currently:

  • spire-server
  • spire-agent

Currently you have to either use the -c flag or get a message back like:

Defaulted container "spire-server" out of: spire-server, spire-controller-manager

To get rid of it, the pod can be annotated to tell kubectl which container is the default.

Pod annotation:

kubectl.kubernetes.io/default-container: <name>
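
Assuming the subcharts expose a podAnnotations value (common in Helm charts, but worth verifying against this chart's values.yaml), a sketch of wiring that annotation in:

```yaml
spire-server:
  podAnnotations:
    kubectl.kubernetes.io/default-container: spire-server

spire-agent:
  podAnnotations:
    kubectl.kubernetes.io/default-container: spire-agent
```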

Revamp controller manager integration

The current controller manager integration has a number of issues:

  • The CRD and CRs are deployed together. This is problematic for CI/CD workflows.
  • Other projects ship the CRD as a separate chart, installed separately.
  • Helm won't automatically upgrade if the controller-manager CRD changes. This could possibly be related to not versioning the controller manager CRD when it changes.

Original issue: spiffe/helm-charts#427

Improve the securityContext recommendation to run each Pod using a unique UID.

To improve the security of the SPIFFE pods it would be great if we can adjust the securityContext recommendation to run with a unique UID.

Checking many out-of-the-box installations of other charts, they use runAsUser: 1000; therefore we should pick UIDs that are less commonly used, to reduce the chance of overlap with other pods in the cluster.

See below my setup to configure the podSecurityContext for each component.

global:
  spire:
    recommendations:
      enabled: true
      namespaceLayout: true
      namespacePSS: true
      priorityClassName: true
      strictMode: true
      securityContexts: true

spire-server:
  podSecurityContext:
    runAsUser: 20000
    runAsGroup: 20000
    fsGroup: 20000

spiffe-csi-driver:
  podSecurityContext:
    runAsUser: 20001
    runAsGroup: 20001
    fsGroup: 20001

spire-agent:
  podSecurityContext:
    runAsUser: 20002
    runAsGroup: 20002
    fsGroup: 20002

spiffe-oidc-discovery-provider:
  podSecurityContext:
    runAsUser: 20003
    runAsGroup: 20003
    fsGroup: 20003

Note

We should decide on a range of UIDs to use for SPIRE pods, so don't take the above example as leading.

I also noticed the spiffe-csi-driver doesn't have a podSecurityContext. I tried adding that:

      securityContext:
        {{- toYaml $podSecurityContext | nindent 8 }}

This unfortunately breaks the CSI driver. See:

2024-02-01T10:57:23.171Z	INFO	spiffe-csi-driver/main.go:45	Starting.	{"version": "0.2.3", "nodeID": "ip-10-2-13-145.eu-west-1.compute.internal", "workloadAPISocketDir": "/spire-agent-socket", "csiSocketPath": "/spiffe-csi/csi.sock"}
2024-02-01T10:57:23.171Z	ERROR	server/server.go:33	Unable to remove CSI socket	{"error": "remove /spiffe-csi/csi.sock: permission denied"}
github.com/spiffe/spiffe-csi/pkg/server.Run
	/code/pkg/server/server.go:33
main.main
	/code/cmd/spiffe-csi-driver/main.go:69
runtime.main
	/usr/local/go/src/runtime/proc.go:250
2024-02-01T10:57:23.172Z	ERROR	spiffe-csi-driver/main.go:70	Failed to serve	{"error": "unable to create CSI socket listener: listen unix /spiffe-csi/csi.sock: bind: address already in use"}
main.main
	/code/cmd/spiffe-csi-driver/main.go:70
runtime.main
	/usr/local/go/src/runtime/proc.go:250

Would it be possible to make this work with a podSecurityContext?

spire-agent nodeAttestor and keyManager hardcoded

You can't disable the k8sPsat nodeAttestor or the memory keyManager. So, while we have unsupportedBuiltInPlugins and customPlugins options for nodeAttestor and keyManager, it's not possible to use them.

error in NOTES.txt when spire-server isn't enabled

I think https://github.com/spiffe/helm-charts-hardened/blob/main/charts/spire/templates/NOTES.txt#L24 should be wrapped in a conditional. The include doesn't work if spire-server isn't enabled.

perhaps something like this?

{{- if (index .Values "spire-server").enabled }}
{{-   $className := include "spire-server.controller-manager-class-name" (dict "Values" (index .Values "spire-server") "Release" .Release) }}
{{-   if (index .Values "spire-server").controllerManager.enabled }}
{{-     if (index .Values "spire-server").controllerManager.watchClassless }}

Spire CR's will be handled if no className is specified or if className is set to "{{ $className }}"
{{-     else }}

Spire CR's will be handled only if className is set to "{{ $className }}"
{{-     end }}
{{-   end }}
{{- end }} 

socket rename not cleaned up

If the user renames the workload socket after previously deploying it, a dangling socket is left behind for workloads. This should be cleaned up.

discovery provider rework

It has several problems.

Without it enabled, JWTs can't be reliably verified, so it should be enabled by default to ensure the JWT functionality of SPIRE works out of the box.

When enabled, it defaults to ACME.

ACME support is broken. There is no way to route the HTTP traffic to the provider to perform the handshake to get its certs. It doesn't support DNS-based ACME either. It doesn't persist its certs, putting too much load on LetsEncrypt. It also doesn't share certs when replicas > 1, again putting too much load on LetsEncrypt.

The discovery provider only works if configured with insecureScheme.enabled, which works well at this point. It is secure in certain configurations, so it is poorly named, but it is hard to guarantee it is secure in all cases.

Providing your own certs is not supported.

It should be possible to use SPIRE itself to provide the certificates, but that is currently not supported. This probably should be the new default configuration.

socket aliases

It seems Istio expects the socket name to be 'socket' within the CSI driver, but other things expect the socket name to be spire-agent.sock. We should test whether relative symlinks work, and if so, allow the user to specify a list of aliases for the socket name to be created in the socket share directory, linked relatively to 'spire-agent.sock'.
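
A quick local sketch of the relative-symlink idea (a plain file stands in for the real Unix domain socket; directory and names are illustrative):

```shell
# Scratch directory standing in for the socket share directory.
dir=$(mktemp -d)
cd "$dir"
# Stand-in for the real spire-agent.sock Unix domain socket.
touch spire-agent.sock
# Relative symlink alias, e.g. the 'socket' name Istio expects.
ln -s spire-agent.sock socket
# Because the link target is relative, the alias keeps working even if
# the directory is mounted at a different path inside each consumer pod.
readlink socket
```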

Ensure that the agent admin socket can be enabled

The agent configuration supports an agent-level setting, agent_admin_socket, which describes the Unix Domain Socket for agent administration APIs.

Recent work on configurable log levels will place the post-log level adjustment commands on this socket. It would be nice to have this setting be supported.

Support includes:

  • having a pod-private volume to hold the socket
  • ensuring that this volume is mounted in a directory higher than the workload API Unix Domain Socket (to satisfy the configuration file sanity checks in SPIRE)
  • ensuring that the pod-private socket is exposed to the pod that holds the spire-agent executable, so the agent executable can be used as a client of the socket to adjust the already-running agent process.
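
For reference, a hedged agent.conf fragment showing the admin socket alongside the Workload API socket (admin_socket_path is the config key as I understand it from the SPIRE agent docs; paths are illustrative, and SPIRE sanity-checks the relationship between the two socket directories as noted above):

```hcl
agent {
    # Workload API Unix Domain Socket (chart default location, illustrative)
    socket_path = "/run/spire/agent-sockets/spire-agent.sock"
    # Hypothetical pod-private admin socket, backed by an emptyDir volume
    admin_socket_path = "/run/spire/admin/admin.sock"
}
```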

tornjak-backend ingress error

When using spire-server.tornjak.ingress.enabled = true helm wants to create an ingress with

spec:
  rules:
    - host: <>
      http:
        paths:
          - path: <>
            pathType: <>
            backend:
              service:
                name: <>
                port:
                  number: tornjak-srv-http

See tornjak ingress template: https://github.com/spiffe/helm-charts-hardened/blob/main/charts/spire/charts/spire-server/templates/tornjak-ingress.yaml#L26C1-L27C1
See ingress template in spire-lib: https://github.com/spiffe/helm-charts-hardened/blob/main/charts/spire/templates/_spire-lib.tpl#L125

This results in nginx ingress controller errors, because number has to be an integer, not a string.
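
Per the Kubernetes Ingress API, a backend service port takes either a numeric number or a string name, so a sketch of what the corrected template output would need to look like (service name illustrative):

```yaml
backend:
  service:
    name: spire-tornjak-backend   # illustrative service name
    port:
      name: tornjak-srv-http      # named port: use "name" (string)
      # number: 10000             # ...or a numeric port via "number"
```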

HA sync issue

There was a conversation on Slack about multiple instances of the spire-server making their own CAs when in HA mode; the usual practice is to wait a certain amount of time for a new instance's CA to sync to the agents before adding it to the LoadBalancer. We currently do not do this. We either need to make the server's initialDelaySeconds a larger number, like 60+ seconds, or make a dynamic readiness probe that waits only on new instances. If not done, agents may get valid certs that other agents don't trust for a while.
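
As a stopgap, assuming the spire-server subchart exposes readiness probe overrides (verify against the chart's values.yaml; the key below is illustrative), the delay could be raised like:

```yaml
spire-server:
  readinessProbe:
    initialDelaySeconds: 60   # give a new instance's CA time to sync to agents
```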

Allow audit_log_enabled to be configurable via values

I'd like to be able to switch audit_log_enabled via a ConfigMap. I know that this is possible via an environment variable as well. I was wondering if there's a reason why it's not available via the config map?
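
For context, this maps to the audit_log_enabled setting in the SPIRE server configuration; a minimal server.conf fragment the chart would need to render:

```hcl
server {
    # Emit audit logs alongside the regular server logs
    audit_log_enabled = true
}
```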

1.0 possible breakage list

We've tried hard to ensure backwards compatibility, a bit to the detriment of out-of-the-box usability. Before 1.0.0 we may want to have one release where we change the defaults to be easier to use. This issue will track possible changes.

SPIRE 1.9 support

These need to happen before 1.9 support in the chart is released:

  • spiffe/spire#4791 has a flag and an upgrade note to add to the chart.
    • The uniqueid CredentialComposer plugin, which adds the x509UniqueIdentifier attribute to workload X509-SVIDs (#4862), needs to be added, along with an upgrade note about it.

Also #209 is unblocked but shouldn't block a 1.9 based release.
