cert-manager / approver-policy

approver-policy is a cert-manager approver that allows users to define policies that restrict what certificates can be requested.

Home Page: https://cert-manager.io/docs/policy/approval/approver-policy/

License: Apache License 2.0

Makefile 15.85% Go 82.23% Shell 1.55% Smarty 0.37%
kubernetes authorization cert-manager

approver-policy's Introduction

cert-manager project logo

Build Status Go Report Card
Artifact Hub Scorecard score CLOMonitor

cert-manager

cert-manager adds certificates and certificate issuers as resource types in Kubernetes clusters, and simplifies the process of obtaining, renewing and using those certificates.

It supports issuing certificates from a variety of sources, including Let's Encrypt (ACME), HashiCorp Vault, and Venafi TPP / TLS Protect Cloud, as well as local in-cluster issuance.

cert-manager also ensures certificates remain valid and up to date, attempting to renew certificates at an appropriate time before expiry to reduce the risk of outages and remove toil.

cert-manager high level overview diagram

Documentation

Documentation for cert-manager can be found at cert-manager.io.

For the common use-case of automatically issuing TLS certificates for Ingress resources, see the cert-manager nginx-ingress quick start guide.

For a more comprehensive guide to issuing your first certificate, see our getting started guide.

Installation

Installation is documented on the website, with a variety of supported methods.

Troubleshooting

If you encounter any issues whilst using cert-manager, we have a number of ways to get help:

If you believe you've found a bug and cannot find an existing issue, feel free to open a new issue! Be sure to include as much information as you can about your environment.

Community

The cert-manager-dev Google Group is used for project-wide announcements and development coordination. Anybody can join the group by visiting the group page and clicking "Join Group". A Google account is required to join the group.

Meetings

We have several public meetings which any member of our Google Group is more than welcome to join!

Check out the details on our website. Feel free to drop in and ask questions, chat with us or just to say hi!

Contributing

We welcome pull requests with open arms! There's a lot of work to do here, and we're especially concerned with ensuring the longevity and reliability of the project. The contributing guide will help you get started.

Coding Conventions

Code style guidelines are documented on the coding conventions page of the cert-manager website. Please try to follow those guidelines if you're submitting a pull request for cert-manager.

Importing cert-manager as a Module

⚠️ Please note that cert-manager does not currently provide a Go module compatibility guarantee. That means that most code under pkg/ is subject to change in a breaking way, even between minor or patch releases and even if the code is currently publicly exported.

The lack of a Go module compatibility guarantee does not affect API version guarantees under the Kubernetes Deprecation Policy.

For more details see Importing cert-manager in Go on the cert-manager website.

The import path for cert-manager versions 1.8 and later is github.com/cert-manager/cert-manager.

For all versions of cert-manager before 1.8, including minor and patch releases, the import path is github.com/jetstack/cert-manager.
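For example, code targeting cert-manager 1.8 or later imports the v1 API types under the new module path (the `cmapi` alias is conventional but arbitrary; older code would use the `jetstack` path instead):

```go
package main

import (
	// cert-manager >= 1.8 module path
	cmapi "github.com/cert-manager/cert-manager/pkg/apis/certmanager/v1"
)

// Referring to a CertificateRequest type from the imported package:
var _ *cmapi.CertificateRequest
```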

Security Reporting

Security is the number one priority for cert-manager. If you think you've found a security vulnerability, we'd love to hear from you.

Follow the instructions in SECURITY.md to make a report.

Changelog

Every release on GitHub has a changelog, and we also publish release notes on the website.

History

cert-manager is loosely based upon the work of kube-lego and has borrowed some wisdom from other similar projects such as kube-cert-manager.

Logo design by Zoe Paterson

approver-policy's People

Contributors

aidy, bradfordwagner, charlieegan3, dependabot[bot], erikgb, inteon, irbekrm, jetstack-bot, joshvanl, maelvls, markmont, rickymulder, sgtcodfish, thatsmrtalbot, wallrj


approver-policy's Issues

Should initialize controller-runtime logging

When approver-policy starts, the following warning and stack trace are logged. Probably related to the controller-runtime 0.15.0 upgrade.

I0620 06:50:37.637698       1 controller.go:219] controller-manager "msg"="Starting workers" "controller"="certificaterequest" "controllerGroup"="cert-manager.io" "controllerKind"="CertificateRequest" "worker count"=1
[controller-runtime] log.SetLogger(...) was never called, logs will not be displayed:
goroutine 291 [running]:
runtime/debug.Stack()
	/usr/local/go/src/runtime/debug/stack.go:24 +0x65
sigs.k8s.io/controller-runtime/pkg/log.eventuallyFulfillRoot()
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/log/log.go:59 +0xbd
sigs.k8s.io/controller-runtime/pkg/log.(*delegatingLogSink).WithName(0xc0003e3a40, {0x1e9c6f9, 0x9})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/log/deleg.go:147 +0x4c
github.com/go-logr/logr.Logger.WithName(...)
	/go/pkg/mod/github.com/go-logr/[email protected]/logr.go:336
sigs.k8s.io/controller-runtime/pkg/webhook/admission.(*Webhook).getLogger.func1()
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/webhook/admission/webhook.go:182 +0x63
sync.(*Once).doSlow(0x0?, 0xc000101040?)
	/usr/local/go/src/sync/once.go:74 +0xc2
sync.(*Once).Do(...)
	/usr/local/go/src/sync/once.go:65
sigs.k8s.io/controller-runtime/pkg/webhook/admission.(*Webhook).getLogger(0xc00026d720?, 0xc0003b4400?)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/webhook/admission/webhook.go:180 +0x53
sigs.k8s.io/controller-runtime/pkg/webhook/admission.(*Webhook).ServeHTTP(0xc00026d720, {0x7fea55a33ef8?, 0xc0004735e0}, 0xc000517200)
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/webhook/admission/http.go:96 +0xc34
github.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerInFlight.func1({0x7fea55a33ef8, 0xc0004735e0}, 0x2173d00?)
	/go/pkg/mod/github.com/prometheus/[email protected]/prometheus/promhttp/instrument_server.go:60 +0xd4
net/http.HandlerFunc.ServeHTTP(0x2173da0?, {0x7fea55a33ef8?, 0xc0004735e0?}, 0xc0013fb828?)
	/usr/local/go/src/net/http/server.go:2122 +0x2f
github.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerCounter.func1({0x2173da0?, 0xc000170a80?}, 0xc000517200)
	/go/pkg/mod/github.com/prometheus/[email protected]/prometheus/promhttp/instrument_server.go:147 +0xc5
net/http.HandlerFunc.ServeHTTP(0x7d9365?, {0x2173da0?, 0xc000170a80?}, 0x40dd0a?)
	/usr/local/go/src/net/http/server.go:2122 +0x2f
github.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerDuration.func2({0x2173da0, 0xc000170a80}, 0xc000517200)
	/go/pkg/mod/github.com/prometheus/[email protected]/prometheus/promhttp/instrument_server.go:109 +0xc7
net/http.HandlerFunc.ServeHTTP(0xc000170a80?, {0x2173da0?, 0xc000170a80?}, 0x1ea32e6?)
	/usr/local/go/src/net/http/server.go:2122 +0x2f
net/http.(*ServeMux).ServeHTTP(0xc000538ee8?, {0x2173da0, 0xc000170a80}, 0xc000517200)
	/usr/local/go/src/net/http/server.go:2500 +0x149
net/http.serverHandler.ServeHTTP({0x2164928?}, {0x2173da0, 0xc000170a80}, 0xc000517200)
	/usr/local/go/src/net/http/server.go:2936 +0x316
net/http.(*conn).serve(0xc001129dd0, {0x2174b70, 0xc000568a50})
	/usr/local/go/src/net/http/server.go:1995 +0x612
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:3089 +0x5ed
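The warning suggests setting the controller-runtime global logger early in main, before any webhook or controller derives a logger from it. A minimal sketch, assuming the program already uses controller-runtime (the zap option shown is illustrative):

```go
package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	// Initialize the global logger before any controller or webhook
	// code derives loggers from it; otherwise controller-runtime
	// prints the "log.SetLogger(...) was never called" warning and
	// discards log output.
	ctrl.SetLogger(zap.New(zap.UseDevMode(false)))

	// ... existing manager setup and Start() ...
}
```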

Simplify configuration by creating RBAC by default

Currently, it is quite complex to configure approver-policy due to all the necessary additional RBAC.

The following RBAC could be added to the Helm chart to simplify usage (easy-mode):

  1. Allow approver-policy to approve all issuer types:

- apiGroups: ["cert-manager.io"]
  resources: ["signers"]
  verbs: ["approve"]
...
- kind: ServiceAccount
  name: {{ include "cert-manager-approver-policy.name" . }}
  namespace: {{ .Release.Namespace }}

  2. Make all policies applicable to the cert-manager SA by default (use a selector for filtering instead):

- apiGroups: ["policy.cert-manager.io"]
  resources: ["certificaterequestpolicies"]
  verbs: ["use"]
...
- kind: ServiceAccount
  name: cert-manager
  namespace: {{ .Release.Namespace }}

cc @wallrj @JoshVanL

failed to create subjectaccessreview

Certificate approval doesn't work with a custom approver-policy CertificateRequestPolicy on a certificate / certificate-request in the kube-system namespace, but works in other namespaces.

Here are the logs of the approver-policy pod:

On a custom namespace

I1107 09:56:03.021039       1 controller.go:315] controller-manager "msg"="Reconciling" "CertificateRequest"={"name":"xxx-tls-clxs6","namespace":"xxx"} "controller"="certificaterequest" "controllerGroup"="cert-manager.io" "controllerKind"="CertificateRequest" "name"="xxx-clxs6" "namespace"="xxx" "reconcileID"="745d0fb8-2e36-4004-9297-72404aed1c87"
I1107 09:56:03.022066       1 certificaterequests.go:172] controller/certificaterequests "msg"="syncing certificaterequest" "name"="xxx-clxs6" "namespace"="xxx"
I1107 09:56:03.026323       1 certificaterequests.go:195] controller/certificaterequests "msg"="approving request" "name"="xxx-clxs6" "namespace"="xxx"
I1107 09:56:03.026393       1 conditions.go:263] Setting lastTransitionTime for CertificateRequest "" condition "Approved" to 2023-11-07 09:56:03.026386343 +0000 UTC m=+1189.231032754
I1107 09:56:03.026795       1 recorder.go:104] controller-manager/events "msg"="Approved by CertificateRequestPolicy: \"ccc-policy\"" "object"={"kind":"CertificateRequest","namespace":"xxx","name":"xxx-clxs6","uid":"1e602535-53f9-4fa2-a81f-e3802b9f043a","apiVersion":"cert-manager.io/v1","resourceVersion":"23855088"} "reason"="Approved" "type"="Normal"
I1107 09:56:03.166309       1 controller.go:344] controller-manager "msg"="Reconcile successful" "CertificateRequest"={"name":"xxx-clxs6","namespace":"xxx"} "controller"="certificaterequest" "controllerGroup"="cert-manager.io" "controllerKind"="CertificateRequest" "name"="xxx-clxs6" "namespace"="xxx" "reconcileID"="745d0fb8-2e36-4004-9297-72404aed1c87"

On kube-system

I1107 10:03:19.879028       1 controller.go:315] controller-manager "msg"="Reconciling" "CertificateRequest"={"name":"yyy-tls-b7rbx","namespace":"kube-system"} "controller"="certificaterequest" "controllerGroup"="cert-manager.io" "controllerKind"="CertificateRequest" "name"="yyy-tls-b7rbx" "namespace"="kube-system" "reconcileID"="fb0c3690-106c-47fe-8db7-0d1d5b3d5894"
I1107 10:03:19.879075       1 certificaterequests.go:172] controller/certificaterequests "msg"="syncing certificaterequest" "name"="yyy-tls-b7rbx" "namespace"="kube-system"
I1107 10:03:19.879130       1 controller.go:344] controller-manager "msg"="Reconcile successful" "CertificateRequest"={"name":"yyy-tls-b7rbx","namespace":"kube-system"} "controller"="certificaterequest" "controllerGroup"="cert-manager.io" "controllerKind"="CertificateRequest" "name"="yyy-tls-b7rbx" "namespace"="kube-system" "reconcileID"="fb0c3690-106c-47fe-8db7-0d1d5b3d5894"
I1107 10:03:20.445465       1 controller.go:315] controller-manager "msg"="Reconciling" "CertificateRequest"={"name":"yyy-tls-v6z27","namespace":"kube-system"} "controller"="certificaterequest" "controllerGroup"="cert-manager.io" "controllerKind"="CertificateRequest" "name"="yyy-tls-v6z27" "namespace"="kube-system" "reconcileID"="533af2b5-72e5-448a-8aaf-581f6c8a2b1c"
I1107 10:03:20.445507       1 certificaterequests.go:172] controller/certificaterequests "msg"="syncing certificaterequest" "name"="yyy-tls-v6z27" "namespace"="kube-system"
I1107 10:03:20.449591       1 recorder.go:104] controller-manager/events "msg"="approver-policy failed to review the request and will retry" "object"={"kind":"CertificateRequest","namespace":"kube-system","name":"yyy-tls-v6z27","uid":"32d7ec7c-26d2-42e0-a9dc-813b5cb8670f","apiVersion":"cert-manager.io/v1","resourceVersion":"23857628"} "reason"="EvaluationError" "type"="Warning"
E1107 10:03:20.450255       1 controller.go:329] controller-manager "msg"="Reconciler error" "error"="failed to perform predicate on policies: failed to create subjectaccessreview: .authorization.k8s.io \"\" is invalid: spec.user: Invalid value: \"\": at least one of user or group must be specified" "CertificateRequest"={"name":"yyy-tls-v6z27","namespace":"kube-system"} "controller"="certificaterequest" "controllerGroup"="cert-manager.io" "controllerKind"="CertificateRequest" "name"="yyy-tls-v6z27" "namespace"="kube-system" "reconcileID"="533af2b5-72e5-448a-8aaf-581f6c8a2b1c"

Cert manager v1.12.2
approver policy v0.8.0

Add Custom Annotations

Allow custom annotations to be added to the cert-manager-approver deployment. Our metrics are gathered by a Prometheus scraper from the /metrics endpoint using auto discovery (annotations-based). Opening this up to custom annotations would additionally allow Vault webhook injector annotations, etc., to be added if necessary.

values.yaml to add:

# -- Optional allow custom annotations to be placed on cert-manager-approver pod
podAnnotations: {}
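A sketch of how the chart's deployment template could consume such a value (the excerpt is illustrative, not the chart's actual layout):

```yaml
spec:
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
```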

Improve CRD fields for specifying key requirements

We discussed curve parameters and checking for curves in standup earlier, and I said I'd look into approver-policy because checking curves is an important part of that. This issue documents what I found.

approver-policy uses just the field size when checking ECDSA curves, which isn't ideal since it opens the possibility of a cert using a bizarre non-standard curve over a field of the same size yet being accepted.

As a user of approver-policy, 99.999999% of the time I don't want to allow this. I want to ensure that certs use one of the standard curves.

More than that, though, the whole API for policy around key types isn't ideal.

It makes sense to allow "minSize" and "maxSize" for RSA, but those are largely meaningless for the other two key types.

For Ed25519 there's no meaningful other parameter to check, and for ECDSA it would be better to check for exact named curves for reasons given above.

Checking named curves obviously means that if a new curve is added we'd need to update approver-policy to support it, but that seems a worthwhile tradeoff.

This section of the CRD might be better if it had different properties for each curve type, e.g.:

RSA

    privateKey:
      algorithm: RSA # unchanged, nothing wrong with this
      minSize: 2048
      maxSize: 4096

ECDSA

    privateKey:
      algorithm: ECDSA
      allowedCurves: ["P-256", "P-384"] # don't allow P521 or any possible future curves

EdDSA

    privateKey:
      algorithm: Ed25519
      # nothing else makes sense here

Helm chart rendering error: converting YAML to JSON: yaml: line 61: did not find expected key

volumeMounts:
  - name: ca-cert-example-volume
    mountPath: "/etc/ssl/certs/ca-cert-example-ca.crt"
    subPath: ca.crt
    readOnly: true
volumes:
  - name: ca-cert-example-volume
    configMap:
      name: ca-cert-example
      optional: false
$ helm template cert-manager-approver-policy --repo https://charts.jetstack.io  --version v0.6.2 --values values.yaml
Error: YAML parse error on cert-manager-approver-policy/templates/deployment.yaml: error converting YAML to JSON: yaml: line 61: did not find expected key

Use the --debug flag to render the invalid YAML.

Probably caused by incorrect indentation of the included yaml, here:

volumeMounts:
{{- with .Values.volumeMounts }}
{{- toYaml . | nindent 10 }}
{{- end }}
- mountPath: /tmp
name: temp-dir
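One possible fix, assuming the container block sits at the usual deployment indent so that nindent 10 emits list items at 10 spaces, is to indent the literal /tmp mount to match (indent levels here are illustrative):

```yaml
        volumeMounts:
          - mountPath: /tmp
            name: temp-dir
          {{- with .Values.volumeMounts }}
          {{- toYaml . | nindent 10 }}
          {{- end }}
```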

Introduced in:

/kind bug

[CertificateRequestPolicy] `selector.issuerRef` incorrect example list instead of map

Ref https://cert-manager.io/docs/projects/approver-policy/#configuration — under Selector, the example issuerRef is formatted as a list. Correctly, it is a map.

Error

spec:
  ...
  selector:
    issuerRef:
    - name: "my-ca"
      kind: "*Issuer"
      group: "cert-manager.io"

Correction

spec:
  ...
  selector:
    issuerRef:
      name: "my-ca"
      kind: "*Issuer"
      group: "cert-manager.io"

Example error message on apply

CertificateRequestPolicy/selfsigned-issuer dry-run failed,
error: failed to create typed patch object (/selfsigned-issuer; policy.cert-manager.io/v1alpha1,
Kind=CertificateRequestPolicy): .spec.selector.issuerRef:
expected map, got &{[map[group:cert-manager.io kind:ClusterIssuer name:selfsigned-issuer]]}

Feature: Take control of approval for the whole cluster

Today, approver-policy can't explicitly deny any certs by default because it has to account for the possibility that there's another approver working in the cluster which might make an approval decision for that CR.

As a user who doesn't intend to ever install a separate approver, though, that might not be ideal - I'd maybe rather have approver-policy explicitly deny everything with a message like "CertificateRequest is denied because no CertificateRequestPolicy matched it" or "CertificateRequest is denied because it wasn't approved by any matching CertificateRequestPolicy resource".

Essentially, it would allow us to help users debug policy more accurately.

Open problems with this idea:

  • We probably can't default to it because it might be breaking in the case where another approver is already installed
  • In the case that another approver is installed, we will currently race with that other approver (which is why we don't deny-all in the first place)
    • This might not ever be solvable without some other mechanism such as a leader election for approval (or moving approval into core cert-manager, which I don't think is on the table)

No option for `privateKey.rotationPolicy` in crp

These look to be the only available options under spec.constraints.privateKey:

# k explain crp.spec.constraints.privateKey
KIND:     CertificateRequestPolicy
VERSION:  policy.cert-manager.io/v1alpha1

RESOURCE: privateKey <Object>

DESCRIPTION:
     PrivateKey defines the shape of permissible private keys that may be used
     for the request with this policy. An omitted field or value of `nil`
     permits the use of any private key by the requestor.

FIELDS:
   algorithm	<string>
     Algorithm defines the allowed crypto algorithm that is used by the
     requestor for their private key in their request. An omitted field or value
     of `nil` permits any Algorithm.

   maxSize	<integer>
     MaxSize defines the maximum key size a requestor may use for their private
     key. Values are inclusive (i.e. a min value of `2048` will accept a size of
     `2048`). MaxSize and MinSize may be the same value. An omitted field or
     value of `nil` permits any maximum size.

   minSize	<integer>
     MinSize defines the minimum key size a requestor may use for their private
     key. Values are inclusive (i.e. a min value of `2048` will accept a size of
     `2048`). MinSize and MaxSize may be the same value. An omitted field or
     value of `nil` permits any minimum size.

Also I couldn't see anything relevant in the spec.allowed section either.
I would like a crp to evaluate a cr based on this being set on the certificate resource:

spec:
  privateKey:
    rotationPolicy: Never # Or Always

Probably more commonly I'd enforce it to be "Always". The use case is generally to match TPP (maybe Vault), where the default is to enforce rotation of the private key.

Currently this seems like it can't be evaluated at policy and therefore fails later in the regular cert-manager flow.
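One possible shape for such a constraint; note this is a hypothetical extension, since rotationPolicy is not a field the v1alpha1 CertificateRequestPolicy currently supports:

```yaml
apiVersion: policy.cert-manager.io/v1alpha1
kind: CertificateRequestPolicy
spec:
  constraints:
    privateKey:
      rotationPolicy: Always  # hypothetical field, not yet in the API
```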

Attempt to update status.conditions denied by cert-manager webhook

We have been running cert-manager for a long time in multi-tenant clusters, with the default built-in approver. Yesterday we promoted approver-policy to one of our busiest clusters (TEST). While everything seems to work as it should (new certificate requests are approved by approver-policy), we notice significant log noise in the approver-policy logs. Here is an example:

E1031 08:40:15.331193 1 controller.go:329] controller-manager "msg"="Reconciler error" "error"="failed to apply connection patch: admission webhook \"webhook.cert-manager.io\" denied the request: status.conditions: Forbidden: 'Approved' condition may not be modified once set" "CertificateRequest"={"name":"switchstatus-adapter-tls-53","namespace":"int-switchstatus"} "controller"="certificaterequest" "controllerGroup"="cert-manager.io" "controllerKind"="CertificateRequest" "name"="switchstatus-adapter-tls-53" "namespace"="int-switchstatus" "reconcileID"="00c940e8-87d9-4ae9-ae46-5c258203e7af"

CertificateRequest (after the error is logged):

apiVersion: cert-manager.io/v1
kind: CertificateRequest
metadata:
  annotations:
    cert-manager.io/certificate-name: switchstatus-adapter-tls
    cert-manager.io/certificate-revision: '53'
    cert-manager.io/private-key-secret-name: switchstatus-adapter-tls-z7w74
  resourceVersion: '3060778645'
  name: switchstatus-adapter-tls-53
  uid: b93e2a35-89bb-4186-9332-3c5e971eba0a
  creationTimestamp: '2023-10-30T13:44:46Z'
  generation: 1
  managedFields:
    - apiVersion: cert-manager.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        'f:status':
          'f:conditions':
            'k:{"type":"Approved"}':
              .: {}
              'f:lastTransitionTime': {}
              'f:message': {}
              'f:reason': {}
              'f:status': {}
              'f:type': {}
      manager: approver-policy
      operation: Apply
      subresource: status
      time: '2023-10-30T13:45:09Z'
    - apiVersion: cert-manager.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        'f:status':
          'f:ca': {}
          'f:certificate': {}
          'f:conditions':
            'k:{"type":"Approved"}':
              .: {}
              'f:lastTransitionTime': {}
              'f:message': {}
              'f:reason': {}
              'f:status': {}
              'f:type': {}
            'k:{"type":"Ready"}':
              .: {}
              'f:lastTransitionTime': {}
              'f:message': {}
              'f:reason': {}
              'f:status': {}
              'f:type': {}
      manager: cert-manager-certificaterequests-issuer-vault
      operation: Apply
      subresource: status
      time: '2023-10-30T13:45:09Z'
    - apiVersion: cert-manager.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:cert-manager.io/certificate-name': {}
            'f:cert-manager.io/certificate-revision': {}
            'f:cert-manager.io/private-key-secret-name': {}
          'f:labels':
            .: {}
            'f:app.kubernetes.io/managed-by': {}
            'f:app.kubernetes.io/name': {}
          'f:ownerReferences':
            .: {}
            'k:{"uid":"286e32ea-6d28-49fa-85c0-eff3666537c1"}': {}
        'f:spec':
          .: {}
          'f:duration': {}
          'f:issuerRef':
            .: {}
            'f:group': {}
            'f:kind': {}
            'f:name': {}
          'f:request': {}
          'f:usages': {}
      manager: cert-manager-certificates-request-manager
      operation: Update
      time: '2023-10-30T13:44:46Z'
  namespace: int-switchstatus
  ownerReferences:
    - apiVersion: cert-manager.io/v1
      blockOwnerDeletion: true
      controller: true
      kind: Certificate
      name: switchstatus-adapter-tls
      uid: 286e32ea-6d28-49fa-85c0-eff3666537c1
  labels:
    app.kubernetes.io/managed-by: application-operator
    app.kubernetes.io/name: switchstatus-adapter
spec:
  duration: 336h0m0s
  extra:
    authentication.kubernetes.io/pod-name:
      - cert-manager-849dff9745-k5s6h
    authentication.kubernetes.io/pod-uid:
      - 075d6fef-7a32-457a-a5a9-9d3756680d72
  groups:
    - 'system:serviceaccounts'
    - 'system:serviceaccounts:cert-manager'
    - 'system:authenticated'
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: vault-spiffe-issuer
  request: >-
    <REDACTED>
  uid: 256a8935-1197-4f9a-915d-5dfd38d5f0c6
  usages:
    - digital signature
    - key encipherment
  username: 'system:serviceaccount:cert-manager:cert-manager'
status:
  ca: >-
    <REDACTED>
  certificate: >-
    <REDACTED>
  conditions:
    - lastTransitionTime: '2023-10-30T13:45:09Z'
      message: 'Approved by CertificateRequestPolicy: "vault-tls-cert"'
      reason: policy.cert-manager.io
      status: 'True'
      type: Approved
    - lastTransitionTime: '2023-10-30T13:45:09Z'
      message: Certificate fetched from issuer successfully
      reason: Issued
      status: 'True'
      type: Ready

I suspect this is a transitional problem that will disappear after all the certificates in our cluster are renewed, possibly related to approver-policy attempting to modify conditions previously added by the default built-in approver. As everything seems to work as it should, I will leave the setup as it is now. As most of our certificates are only valid for a relatively short period (a.t.m. 6 days), we should see whether the problems disappear by the end of this week.

But I still think this error logging is annoying and should be avoided. Please let me know if more details about our setup are needed.

Unable to expose webhook on hostnetwork

EKS with a custom CNI (see: not the vpc-cni) requires all webhooks to be exposed on the host network with a security group rule allowing the control plane access to said webhook port.

Currently the chart's templates/deployment.yaml doesn't expose options to put the approver-policy webhook pod on the host network, nor does it provide nodeSelector/affinity/tolerations.

'twould be quite simple to expose these like so at the bottom of the current template:

      hostNetwork: {{ .Values.app.webhook.hostNetwork | default "false" }}
      dnsPolicy: {{ .Values.app.webhook.dnsPolicy | default "ClusterFirst" }}
      {{- with .Values.app.webhook.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.app.webhook.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.app.webhook.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}

The risk of doing this in this form is that the metrics port would also be exposed on the host network. Unnecessary in 99% of instances, but possibly a non-issue given logical security groups.

And of course those default options can simply be set in the default values.yaml instead. I'm hacking on the chart in my environment so that I can do this prior to the MR that will fix this.
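A matching values.yaml default block might look like this (key names follow the template snippet in this issue and are otherwise illustrative):

```yaml
app:
  webhook:
    hostNetwork: false
    dnsPolicy: ClusterFirst
    nodeSelector: {}
    tolerations: []
    affinity: {}
```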

Constraints for allowed fields

I am not sure how this would best be handled, but we have a use case to only approve certificate requests if they contain many of the fields in the allowed spec (e.g. they must provide subject + email fields). However, if the fields are omitted, the requests are approved, as that is the current behaviour of the allowed fields. So we'd need these fields to also be supported in the constraints, or for allowed fields to be treatable as requirements.

/kind: feature

Helm chart JSON schema validation prevents it being used as a sub-chart

JSON schema validation was added to the Helm chart in #340

After releasing https://github.com/cert-manager/approver-policy/releases/tag/v0.13.0-alpha.1 to test this I discovered that it prevents me using the chart as a sub-chart in my approver-policy plugin.

The problem is that the schema validation in the approver-policy chart rejects the global chart values that are added by Helm.

The problem has been reported to the Helm maintainers by @howardjohn in helm/helm#10392 but they have dismissed it.

A workaround has been suggested by @ogarciacar in helm/helm#10392 (comment):

I am adding global to my sub chart json validation schema and keeping additionalProperties=false. WDYT?

{
  "additionalProperties": false,
  "properties": {
    "global": {
      "type": "object",
      "description": "Define global in subchart json schema",
      "default": {}
    }
  }
}

Istio used this fix in istio/istio#36896

CertificateRequest approved but stuck with empty status

I installed approver-policy 0.4.2 with cert-manager 1.10.0, and when issuing a certificate, it creates the CertificateRequest but it then gets stuck without any status (the whole status section is missing). I can see in the events of the CR that it was approved by the policy, but the status section is missing. It looks like approver-policy is not setting the status once approved? Find some details below.

$> kubectl describe cr istio-ca-lzjr8
...
Events:
  Type    Reason    Age                    From                    Message
  ----    ------    ----                   ----                    -------
  Normal  Approved  2m36s (x17 over 8m4s)  policy.cert-manager.io  Approved by CertificateRequestPolicy: "my-root"

$> kubectl get cr istio-ca-lzjr8
NAMESPACE      NAME             APPROVED   DENIED   READY   ISSUER   REQUESTOR                                         AGE
istio-system   istio-ca-lzjr8                               root     system:serviceaccount:cert-manager:cert-manager   9m7s

Here is the content of my policy (which works fine; I can see in the events that, depending on the certificate spec, the CR gets approved or not depending on whether it complies with the CertificateRequestPolicy).

apiVersion: policy.cert-manager.io/v1alpha1
kind: CertificateRequestPolicy
metadata:
  name: my-root
spec:
  allowed:
    isCA: true
    dnsNames:
      required: true
      values:
        - example.com
    subject:
      organizations:
        values:
          - cluster.local
          - cert-manager
  constraints:
    minDuration: 8760h # 1year
    maxDuration: 87600h # 10year
    privateKey:
      algorithm: RSA
      minSize: 2048
      maxSize: 2048
  selector:
    issuerRef:
      name: root
      kind: KMSIssuer
      group: cert-manager.skyscanner.net

Setting .Values.nameOverride makes the pod not have rights to update secret cert-manager-approver-policy-tls

Description

When setting the Helm parameter .Values.nameOverride to anything other than its default value cert-manager-approver-policy, the approver fails to generate its TLS certificate during startup.

The role allows access to one secret with a specific name, which (when .Values.nameOverride is set to smuda) would be smuda-tls. However, in pkg/internal/webhook/tls/tls.go the name of the secret seems hard-coded to cert-manager-approver-policy-tls.

To reproduce:

helm repo add jetstack https://charts.jetstack.io 
helm install cert-manager-approver jetstack/cert-manager-approver-policy --set nameOverride=smuda

Expected result

That the approver pod would startup and respond happily to the readiness-probe.

Result

The approver pod looks for and tries to update secret cert-manager-approver-policy-tls while the role allows smuda-tls. The pod is unhappy.

I0303 17:47:18.371313       1 webhook.go:67] webhook "msg"="running tls bootstrap process..." 
W0303 17:47:18.373066       1 reflector.go:424] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: failed to list *v1.Secret: secrets "cert-manager-approver-policy-tls" is forbidden: User "system:serviceaccount:addon-cert-manager:smuda" cannot list resource "secrets" in API group "" in the namespace "addon-cert-manager"
E0303 17:47:18.373122       1 reflector.go:140] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cert-manager-approver-policy-tls" is forbidden: User "system:serviceaccount:addon-cert-manager:smuda" cannot list resource "secrets" in API group "" in the namespace "addon-cert-manager"
E0303 17:47:19.378513       1 tls.go:130] webhook/tls "msg"="failed to generate initial serving certificate, retrying..." "error"="failed verifying CA keypair: tls: failed to find any PEM data in certificate input" "interval"="1s"
W0303 17:47:19.595334       1 reflector.go:424] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: failed to list *v1.Secret: secrets "cert-manager-approver-policy-tls" is forbidden: User "system:serviceaccount:addon-cert-manager:smuda" cannot list resource "secrets" in API group "" in the namespace "addon-cert-manager"
E0303 17:47:19.595408       1 reflector.go:140] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cert-manager-approver-policy-tls" is forbidden: User "system:serviceaccount:addon-cert-manager:smuda" cannot list resource "secrets" in API group "" in the namespace "addon-cert-manager"
E0303 17:47:20.372423       1 tls.go:130] webhook/tls "msg"="failed to generate initial serving certificate, retrying..." "error"="failed verifying CA keypair: tls: failed to find any PEM data in certificate input" "interval"="1s"
E0303 17:47:21.373552       1 tls.go:130] webhook/tls "msg"="failed to generate initial serving certificate, retrying..." "error"="failed verifying CA keypair: tls: failed to find any PEM data in certificate input" "interval"="1s"
E0303 17:47:22.372740       1 tls.go:130] webhook/tls "msg"="failed to generate initial serving certificate, retrying..." "error"="failed verifying CA keypair: tls: failed to find any PEM data in certificate input" "interval"="1s"
W0303 17:47:22.726563       1 reflector.go:424] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: failed to list *v1.Secret: secrets "cert-manager-approver-policy-tls" is forbidden: User "system:serviceaccount:addon-cert-manager:smuda" cannot list resource "secrets" in API group "" in the namespace "addon-cert-manager"
E0303 17:47:22.726622       1 reflector.go:140] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cert-manager-approver-policy-tls" is forbidden: User "system:serviceaccount:addon-cert-manager:smuda" cannot list resource "secrets" in API group "" in the namespace "addon-cert-manager"
E0303 17:47:23.373272       1 tls.go:130] webhook/tls "msg"="failed to generate initial serving certificate, retrying..." "error"="failed verifying CA keypair: tls: failed to find any PEM data in certificate input" "interval"="1s"
E0303 17:47:24.372112       1 tls.go:130] webhook/tls "msg"="failed to generate initial serving certificate, retrying..." "error"="failed verifying CA keypair: tls: failed to find any PEM data in certificate input" "interval"="1s"
E0303 17:47:25.373125       1 tls.go:130] webhook/tls "msg"="failed to generate initial serving certificate, retrying..." "error"="failed verifying CA keypair: tls: failed to find any PEM data in certificate input" "interval"="1s"
E0303 17:47:26.372917       1 tls.go:130] webhook/tls "msg"="failed to generate initial serving certificate, retrying..." "error"="failed verifying CA keypair: tls: failed to find any PEM data in certificate input" "interval"="1s"
W0303 17:47:26.407488       1 reflector.go:424] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: failed to list *v1.Secret: secrets "cert-manager-approver-policy-tls" is forbidden: User "system:serviceaccount:addon-cert-manager:smuda" cannot list resource "secrets" in API group "" in the namespace "addon-cert-manager"
E0303 17:47:26.407557       1 reflector.go:140] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cert-manager-approver-policy-tls" is forbidden: User "system:serviceaccount:addon-cert-manager:smuda" cannot list resource "secrets" in API group "" in the namespace "addon-cert-manager"
E0303 17:47:27.372600       1 tls.go:130] webhook/tls "msg"="failed to generate initial serving certificate, retrying..." "error"="failed verifying CA keypair: tls: failed to find any PEM data in certificate input" "interval"="1s"
E0303 17:47:28.372665       1 tls.go:130] webhook/tls "msg"="failed to generate initial serving certificate, retrying..." "error"="failed verifying CA keypair: tls: failed to find any PEM data in certificate input" "interval"="1s"
E0303 17:47:29.372708       1 tls.go:130] webhook/tls "msg"="failed to generate initial serving certificate, retrying..." "error"="failed verifying CA keypair: tls: failed to find any PEM data in certificate input" "interval"="1s"
E0303 17:47:30.373261       1 tls.go:130] webhook/tls "msg"="failed to generate initial serving certificate, retrying..." "error"="failed verifying CA keypair: tls: failed to find any PEM data in certificate input" "interval"="1s"
E0303 17:47:31.372485       1 tls.go:130] webhook/tls "msg"="failed to generate initial serving certificate, retrying..." "error"="failed verifying CA keypair: tls: failed to find any PEM data in certificate input" "interval"="1s"
E0303 17:47:32.372828       1 tls.go:130] webhook/tls "msg"="failed to generate initial serving certificate, retrying..." "error"="failed verifying CA keypair: tls: failed to find any PEM data in certificate input" "interval"="1s"
E0303 17:47:33.372578       1 tls.go:130] webhook/tls "msg"="failed to generate initial serving certificate, retrying..." "error"="failed verifying CA keypair: tls: failed to find any PEM data in certificate input" "interval"="1s"
E0303 17:47:34.372749       1 tls.go:130] webhook/tls "msg"="failed to generate initial serving certificate, retrying..." "error"="failed verifying CA keypair: tls: failed to find any PEM data in certificate input" "interval"="1s"
E0303 17:47:35.372694       1 tls.go:130] webhook/tls "msg"="failed to generate initial serving certificate, retrying..." "error"="failed verifying CA keypair: tls: failed to find any PEM data in certificate input" "interval"="1s"
E0303 17:47:36.372690       1 tls.go:130] webhook/tls "msg"="failed to generate initial serving certificate, retrying..." "error"="failed verifying CA keypair: tls: failed to find any PEM data in certificate input" "interval"="1s"

The created role smuda:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    meta.helm.sh/release-name: cert-manager-approver
    meta.helm.sh/release-namespace: addon-cert-manager
  creationTimestamp: "2023-03-03T17:47:14Z"
  labels:
    app.kubernetes.io/instance: cert-manager-approver
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: smuda
    app.kubernetes.io/version: v0.6.2
    helm.sh/chart: cert-manager-approver-policy-v0.6.2
  name: smuda
  namespace: addon-cert-manager
  resourceVersion: "1654"
  uid: 4e8f5114-4353-4c53-aa0d-cc174c58fe71
rules:
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
- apiGroups:
  - coordination.k8s.io
  resourceNames:
  - policy.cert-manager.io
  resources:
  - leases
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resourceNames:
  - smuda-tls
  resources:
  - secrets
  verbs:
  - get
  - list
  - watch
  - create
  - update
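
Until the chart and the binary agree on the secret name, one possible stopgap (untested; the namespace and service account are taken from the logs above, and the extra Role/RoleBinding names are made up) is to additionally grant the service account access to the hard-coded secret name:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: smuda-hardcoded-tls   # hypothetical name
  namespace: addon-cert-manager
rules:
- apiGroups:
  - ""
  resourceNames:
  - cert-manager-approver-policy-tls   # the name the binary actually uses
  resources:
  - secrets
  verbs:
  - get
  - list
  - watch
  - create
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: smuda-hardcoded-tls   # hypothetical name
  namespace: addon-cert-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: smuda-hardcoded-tls
subjects:
- kind: ServiceAccount
  name: smuda
  namespace: addon-cert-manager
```

This mirrors the secrets rule of the chart-generated Role above, only with the hard-coded resource name substituted.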

Approver-policy does not reconcile CRs on RBAC updates

I was manually deploying cert-manager and approver-policy resources and made a mistake in the RBAC, so the CertificateRequest ended up 'unprocessed' (no matching policy). I corrected the RBAC resources, but the CR was not reconciled; I had to delete it, after which the new one got correctly approved.

Would it make sense for the certificaterequests controller to reconcile unprocessed CertificateRequests on related (Cluster)Role/(Cluster)RoleBinding events similarly to how it's currently done on CertificateRequestPolicy events?

Steps to reproduce:

  1. Create an Issuer, Certificate and CertificateRequestPolicy matching the issuer (but no RBAC)
  2. Observe in approver-policy logs that no matching policy was found and the CR is neither approved nor denied
  3. Create the RBAC
  4. Observe that the CertificateRequest still does not get approved/denied
  5. Delete the CertificateRequest and observe the new one get correctly approved

Webhook Custom CA

Is there any way to inject a custom CA for the webhook? I can see these in the values.yaml:

...
# -- Optional extra volume mounts. Useful for mounting custom root CAs
volumeMounts: []
#- name: my-volume-mount
#  mountPath: /etc/approver-policy/secrets

# -- Optional extra volumes.
volumes: []
#- name: my-volume
#  secret:
#    secretName: my-secret
...

Then these volumes are mounted in the webhook container, here:

...
       {{- if .Values.volumeMounts }}
        volumeMounts:
{{ toYaml .Values.volumeMounts | indent 10 }}
        {{- end }}

        resources:
          {{- toYaml .Values.resources | indent 12 }}

      {{- if .Values.volumes }}
      volumes:
{{ toYaml .Values.volumes | indent 6 }}
      {{- end }}

But the mounted volumes are never taken into account. Hoping for an extra CLI argument that would make the binary pick up the mounted CAs, I checked the cert-manager-approver-policy binary options available here, but I don't see anything for this purpose.

Instead, the Go code has only one behaviour, which is currently to dynamically generate a self-signed CA; see the cert-manager authority pkg.

Am I missing something obvious, or is there currently no mechanism to assign a custom webhook CA? (It would have to feed the secret cert-manager-approver-policy-tls, as the ValidatingWebhookConfiguration gets its CA injected from that secret thanks to cert-manager-cainjector; see here.)

Include binary artifacts in your releases.

Please consider including the binary of the approver in your releases (similar to cert-manager). This would greatly simplify our Docker image build process: it would be great to just point to the release in the Dockerfile instead of having to build.
Happy to contribute to this. Perhaps a GitHub Action?

CertificateRequestPolicy based on which namespace the certificate request belongs to

My use case is to only allow certificate requests, for a certain cluster issuer, whose dnsNames are service names in the namespace the request comes from. For example, the dnsName my-service.my-space.svc should only be allowed when requested from the "my-space" namespace.

Preferably, I'd also like to restrict namespaces for a certain cluster issuer.

I don't seem to find how to do that in the current implementation. Am I missing something, or is this an uncommon use case?
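
For context, the check being asked for is simple to state in code. This is only an illustration of the desired rule, not an existing approver-policy feature; the helper name and semantics are hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// allowedInNamespace reports whether a dnsName of the form
// <service>.<namespace>.svc[.cluster.local] belongs to the namespace the
// CertificateRequest was created in. Hypothetical helper: this rule does
// not exist in approver-policy today.
func allowedInNamespace(dnsName, requestNamespace string) bool {
	parts := strings.Split(dnsName, ".")
	// Expect at least <service>.<namespace>.svc
	if len(parts) < 3 || parts[2] != "svc" {
		return false
	}
	return parts[1] == requestNamespace
}

func main() {
	fmt.Println(allowedInNamespace("my-service.my-space.svc", "my-space")) // allowed
	fmt.Println(allowedInNamespace("my-service.my-space.svc", "other-ns")) // denied
}
```

The harder part is that a CertificateRequestPolicy today selects on issuerRef, not on the requesting namespace, so this would need new selector or template support rather than just a helper like the above.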

Flaky Tests in pull-cert-manager-approver-policy-verify

These tests seem to fail intermittently. I haven't investigated why.

approver-policy-controllers: RBAC if a RoleBinding is created which binds the user, the request should be re-reconciled and approved (5s)

/home/prow/go/src/github.com/cert-manager/approver-policy/pkg/internal/controllers/test/rbac.go:200
Timed out after 1.001s. expected approval
Expected
    <bool>: false
to be true
/home/prow/go/src/github.com/cert-manager/approver-policy/pkg/internal/controllers/test/util.go:49

https://prow.build-infra.jetstack.net/view/gs/jetstack-logs/pr-logs/pull/cert-manager_approver-policy/57/pull-cert-manager-approver-policy-verify/1527159739677413376

[JustBeforeEach] RBAC
  /home/prow/go/src/github.com/cert-manager/approver-policy/pkg/internal/controllers/test/rbac.go:158
STEP: Created Policy Namespace: test-policy-bxcbf
STEP: Running Policy controller
STEP: Waiting for Leader Election
STEP: Waiting for Informers to Sync
[It] if a RoleBinding is created which binds the user, the request should be re-reconciled and approved
  /home/prow/go/src/github.com/cert-manager/approver-policy/pkg/internal/controllers/test/rbac.go:200
[JustAfterEach] RBAC
  /home/prow/go/src/github.com/cert-manager/approver-policy/pkg/internal/controllers/test/rbac.go:171
STEP: Deleting all CertificateRequests after test
STEP: Deleting all CertificateRequestPolicies after test
STEP: Deleting Policy Namespace: test-policy-bxcbf

blob:https://prow.build-infra.jetstack.net/9e81f29e-cc99-4f4a-93de-370b96e33236

/kind bug

group 'cert-manager.io' does not work

The chart patch-operator installs a self-signed issuer here, yet the following policy is not considered by approver-policy (Request is not applicable for any policy so ignoring when describing the CR). It works after changing group: cert-manager.io to group: '*'. It should work with group: cert-manager.io as described in the policy examples of your repository; is this because it is a self-signed issuer?

And as you can see in the patch-operator chart link above, the self-signed issuer is of kind Issuer from group cert-manager.io.

Here is the policy:

apiVersion: policy.cert-manager.io/v1alpha1
kind: CertificateRequestPolicy
metadata:
  name: patch-operator
spec:
  allowed:
    isCA: false
    dnsNames:
      required: true
      values:
        - patch-operator-webhook-service.patch-operator.svc
        - patch-operator-webhook-service.patch-operator.svc.cluster.local
        - patch-operator-controller-manager-metrics-service.patch-operator.svc
        - patch-operator-controller-manager-metrics-service.patch-operator.svc.cluster.local
  constraints:
    minDuration: 2160h
    maxDuration: 2160h
    privateKey:
      algorithm: RSA
      minSize: 2048
      maxSize: 2048
  selector:
    issuerRef:
      name: selfsigned-issuer
      kind: Issuer
      group: cert-manager.io # needs to be '*' otherwise 'cert-manager.io' does not work, bug. Maybe specific to selfsigned issuers?

Dependabot downgraded github.com/cert-manager/cert-manager v1.8.0 => v0.7.2 as a result of upgrading k8s.io/component-base

We enabled Dependabot and it is now creating PRs with package upgrades, but strangely it also seems to be downgrading cert-manager to v0.7.2. E.g.:

I get the same result when I manually update that dependency:

$ go version
go version go1.18.3 linux/amd64

$ go get -u k8s.io/component-base
go: downgraded github.com/cert-manager/cert-manager v1.8.0 => v0.7.2
go: upgraded github.com/prometheus/client_golang v1.11.0 => v1.12.1
go: upgraded github.com/prometheus/common v0.28.0 => v0.32.1
go: upgraded github.com/prometheus/procfs v0.6.0 => v0.7.3
go: upgraded golang.org/x/crypto v0.0.0-20211117183948-ae814b36b871 => v0.0.0-20220214200702-86341886e292
go: upgraded golang.org/x/mod v0.5.0 => v0.6.0-dev.0.20220106191415-9b9b3d81d5e3
go: upgraded golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e => v0.0.0-20220209214540-3681064d5158
go: upgraded golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac => v0.0.0-20220210224613-90d013bbcef8
go: upgraded golang.org/x/tools v0.1.6-0.20210820212750-d4cc65f0b2ff => v0.1.10-0.20220218145154-897bd77cd717
go: upgraded k8s.io/api v0.23.6 => v0.24.1
go: upgraded k8s.io/apimachinery v0.23.6 => v0.24.1
go: upgraded k8s.io/client-go v0.23.6 => v0.24.1
go: upgraded k8s.io/component-base v0.23.6 => v0.24.1
go: upgraded k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65 => v0.0.0-20220328201542-3ee0da9b0b42
go: upgraded sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6 => v0.0.0-20211208200746-9f7c6b3444d2

diff --git a/go.mod b/go.mod
index ce43443..b0d5a26 100644
--- a/go.mod
+++ b/go.mod
@@ -3,7 +3,7 @@ module github.com/cert-manager/approver-policy
 go 1.18
 
 require (
-       github.com/cert-manager/cert-manager v1.8.0
+       github.com/cert-manager/cert-manager v0.7.2
        github.com/go-logr/logr v1.2.3
        github.com/onsi/ginkgo v1.16.5
        github.com/onsi/gomega v1.19.0
@@ -11,13 +11,13 @@ require (
        github.com/spf13/cobra v1.4.0
        github.com/spf13/pflag v1.0.5
        github.com/stretchr/testify v1.7.1
-       k8s.io/api v0.23.6
+       k8s.io/api v0.24.1
        k8s.io/apiextensions-apiserver v0.23.6
-       k8s.io/apimachinery v0.23.6
+       k8s.io/apimachinery v0.24.1
        k8s.io/cli-runtime v0.23.6
-       k8s.io/client-go v0.23.6
+       k8s.io/client-go v0.24.1
        k8s.io/code-generator v0.23.6
-       k8s.io/component-base v0.23.6
+       k8s.io/component-base v0.24.1
        k8s.io/klog/v2 v2.60.1
        k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9
        sigs.k8s.io/controller-runtime v0.11.2

Limit number of SANs by policy

Hi all,

I've seen some discussion on the subject here. I'm wondering if this is possible yet? I want to limit the number of SANs in a certificate to ~10, but I'm not sure how to access the DNSNames list.

Thanks in advance!

Add Helm option to create RBAC allowing approval for all issuers

When a user creates a custom issuer, they'll currently need to give permission to approver-policy to approve CertificateRequests from that issuer, which will look something like the below:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-role
rules:
- apiGroups:
  - cert-manager.io
  resourceNames:
  - issuer.example.com/*
  resources:
  - signers
  verbs:
  - approve

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: example-role
subjects:
- kind: ServiceAccount
  name: cert-manager-approver-policy
  namespace: myns

It's possible to conjure situations where users might want to restrict these permissions, but for most users installing approver-policy it's reasonable for them to want it to be able to approve CRs from any issuer.

Maybe for security reasons we wouldn't want to default open (although we might yet consider defaulting open!), but we could at least add a Helm option to create an allow-all role which applies to approver-policy in this case.
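
For illustration, the allow-all variant of the ClusterRole above would just drop resourceNames, since a rule without resourceNames matches all names of the resource (the ClusterRole name here is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: approve-all-signers   # illustrative name
rules:
- apiGroups:
  - cert-manager.io
  resources:
  - signers
  verbs:
  - approve
  # no resourceNames: matches the signer name of every issuer kind and group
```

The corresponding ClusterRoleBinding would be identical to the example above.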

/kind feature

Hardcoded namespace in code

Thank you for this great project.

Adding it to cert-manager caused an issue when running the solution in a namespace other than cert-manager:
the TLS connection fails due to an incorrect SAN name in the generated certificate for the webhook.
Investigation showed a hardcoded DNS name in the Go package pkg/internal/webhook/tls/tls.go:

    template := &x509.Certificate{
            Version:            2,
            PublicKeyAlgorithm: x509.ECDSA,
            PublicKey:          pk.Public(),
            DNSNames:           []string{"cert-manager-approver-policy.cert-manager.svc"},
            KeyUsage:           x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:        []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    }

"cert-manager-approver-policy.cert-manager.svc" should be parameterised to be able to use in an alternative namespace.

Typo in error message: connection patch should say CertificateRequest.Status patch

In #297 I noticed a typo in the error message that @erikgb reported:

failed to apply connection patch: admission webhook "webhook.cert-manager.io" denied the request

Should say "failed to apply CertificateRequest.Status patch".

The word "connection" is also misused elsewhere:

Originally posted by @wallrj in #186 (comment)

Regex to disallow wildcard certificates

Is this currently possible? From what I read I thought it might be, but when I tested it, it did not work as I expected.

https://cert-manager.io/docs/projects/approver-policy/

spec:
  allowed:
    commonName:
      required: true
      value: '*'
    dnsNames:
      required: true
      values:
      - "\*.domain.com"

I would like a certificate request consisting of a dnsName with the suffix domain.com to be allowed, but not a wildcard itself *.domain.com.

  - lastTransitionTime: "2022-10-25T18:41:53Z"
    message: 'No policy approved this request: [default-policy: spec.allowed.dnsNames.values:
      Invalid value: []string{"test-cert.domain.com"}:
      *.domain.com]'
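
As far as I can tell the policy values are glob patterns, not regexes, so there is no way today to express "any subdomain but not a wildcard". As a sketch of the semantics being requested (plain Go, hypothetical helper, not an approver-policy API):

```go
package main

import (
	"fmt"
	"strings"
)

// allowedDNSName sketches the rule asked for in this issue: permit any
// name under domain.com, but reject wildcard names outright. This is an
// illustration of the desired semantics, not an approver-policy feature.
func allowedDNSName(name string) bool {
	if strings.HasPrefix(name, "*.") {
		return false // no wildcards, including *.domain.com itself
	}
	return strings.HasSuffix(name, ".domain.com")
}

func main() {
	fmt.Println(allowedDNSName("test-cert.domain.com")) // allowed
	fmt.Println(allowedDNSName("*.domain.com"))         // rejected
}
```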

v0.1

  • E2E smoke test: smoke test to spin up policy-approver and test sanity
  • Release docker image for v1.0

Error: YAML parse error on cert-manager-approver-policy/templates/deployment.yaml: error converting YAML to JSON: yaml: line 48: mapping values are not allowed in this context

I installed cert-manager-approver-policy using Helm 3, but I have some problems:

k8s environment:
$ kubectl version -o json
{
  "clientVersion": {
    "major": "1",
    "minor": "25",
    "gitVersion": "v1.25.4",
    "gitCommit": "872a965c6c6526caa949f0c6ac028ef7aff3fb78",
    "gitTreeState": "clean",
    "buildDate": "2022-11-09T13:36:36Z",
    "goVersion": "go1.19.3",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "kustomizeVersion": "v4.5.7",
  "serverVersion": {
    "major": "1",
    "minor": "25",
    "gitVersion": "v1.25.4",
    "gitCommit": "872a965c6c6526caa949f0c6ac028ef7aff3fb78",
    "gitTreeState": "clean",
    "buildDate": "2022-11-09T13:29:58Z",
    "goVersion": "go1.19.3",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}

helm version
version.BuildInfo{Version:"v3.10.2", GitCommit:"50f003e5ee8704ec937a756c646870227d7c8b58", GitTreeState:"clean", GoVersion:"go1.18.8"}


When I run the following command:
helm template -n cert-manager jetstack/cert-manager-approver-policy -f values.yaml --debug
Error: YAML parse error on cert-manager-approver-policy/templates/deployment.yaml: error converting YAML to JSON: yaml: line 48: mapping values are not allowed in this context
helm.go:84: [debug] error converting YAML to JSON: yaml: line 48: mapping values are not allowed in this context
YAML parse error on cert-manager-approver-policy/templates/deployment.yaml
helm.sh/helm/v3/pkg/releaseutil.(*manifestFile).sort
helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:146
helm.sh/helm/v3/pkg/releaseutil.SortManifests
helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:106
helm.sh/helm/v3/pkg/action.(*Configuration).renderResources
helm.sh/helm/v3/pkg/action/action.go:165
helm.sh/helm/v3/pkg/action.(*Install).RunWithContext
helm.sh/helm/v3/pkg/action/install.go:259
main.runInstall
helm.sh/helm/v3/cmd/helm/install.go:278
main.newTemplateCmd.func2
helm.sh/helm/v3/cmd/helm/template.go:82
github.com/spf13/cobra.(*Command).execute
github.com/spf13/[email protected]/command.go:872
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/[email protected]/command.go:990
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/[email protected]/command.go:918
main.main
helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
runtime/proc.go:250
runtime.goexit
runtime/asm_amd64.s:1571


Looking at the rendered deployment template:

Source: cert-manager-approver-policy/templates/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cert-manager-approver-policy
  labels:
    app.kubernetes.io/name: cert-manager-approver-policy
    helm.sh/chart: cert-manager-approver-policy-v0.5.0
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "v0.5.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cert-manager-approver-policy
  template:
    metadata:
      labels:
        app: cert-manager-approver-policy
    spec:
      serviceAccountName: cert-manager-approver-policy
      containers:
      - name: cert-manager-approver-policy
        image: "quay.io/jetstack/cert-manager-approver-policy:v0.5.0"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 10250
        - containerPort: 9402
        readinessProbe:
          httpGet:
            port: 6060
            path: "/readyz"
          initialDelaySeconds: 3
          periodSeconds: 7
        command: ["cert-manager-approver-policy"]
        args:
          - --log-level=1

          - --metrics-bind-address=:9402
          - --readiness-probe-bind-address=:6060

          - --webhook-host=0.0.0.0
          - --webhook-port=10250
          - --webhook-service-name=cert-manager-approver-policy
          - --webhook-ca-secret-namespace=cert-manager
          - --webhook-certificate-dir=/tmp

    resources:            limits:
          cpu: 200m
          memory: 256Mi
        requests:
          cpu: 100m
          memory: 128Mi

my values.yaml
cat values.yaml

# -- Number of replicas of approver-policy to run.
replicaCount: 1

image:
  # -- Target image repository.
  repository: quay.io/jetstack/cert-manager-approver-policy
  # -- Target image version tag (if empty, Chart AppVersion will be used)
  tag: ""
  # -- Kubernetes imagePullPolicy on Deployment.
  pullPolicy: IfNotPresent

# -- Optional secrets used for pulling the approver-policy container image.
imagePullSecrets: []

app:
  # -- Verbosity of approver-policy logging.
  logLevel: 1 # 1-5

  # -- Extra CLI arguments that will be passed to the approver-policy process.
  extraArgs: []

  # -- List of signer names that approver-policy will be given permission to
  # approve and deny. CertificateRequests referencing these signer names can be
  # processed by approver-policy. See:
  # https://cert-manager.io/docs/concepts/certificaterequest/#approval
  approveSignerNames:
  - "issuers.cert-manager.io/*"
  - "clusterissuers.cert-manager.io/*"

  metrics:
    # -- Port for exposing Prometheus metrics on 0.0.0.0 on path '/metrics'.
    port: 9402
    # -- Service to expose metrics endpoint.
    service:
      # -- Create a Service resource to expose metrics endpoint.
      enabled: true
      # -- Service type to expose metrics.
      type: ClusterIP
      # -- ServiceMonitor resource for this Service.
      servicemonitor:
        enabled: false
        prometheusInstance: default
        interval: 10s
        scrapeTimeout: 5s
        labels: {}

  readinessProbe:
    # -- Container port to expose approver-policy HTTP readiness probe on
    # default network interface.
    port: 6060

  webhook:
    # -- Host that the webhook listens on.
    host: 0.0.0.0
    # -- Port that the webhook listens on.
    port: 10250
    # -- Timeout of webhook HTTP request.
    timeoutSeconds: 5
    # -- Directory to read and store the webhook TLS certificate key pair.
    certificateDir: /tmp
    # -- Type of Kubernetes Service used by the Webhook
    service:
      type: ClusterIP

# -- Optional extra volume mounts. Useful for mounting custom root CAs
volumeMounts: []
#- name: my-volume-mount
#  mountPath: /etc/approver-policy/secrets

# -- Optional extra volumes.
volumes: []
#- name: my-volume
#  secret:
#    secretName: my-secret

resources:
  # -- Kubernetes pod resource limits for approver-policy.
  limits:
    cpu: 200m
    memory: 256Mi
  # -- Kubernetes pod memory resource requests for approver-policy.
  requests:
    cpu: 100m
    memory: 128Mi


my deployment.yaml
cat templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "cert-manager-approver-policy.name" . }}
  labels:
{{ include "cert-manager-approver-policy.labels" . | indent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ include "cert-manager-approver-policy.name" . }}
  template:
    metadata:
      labels:
        app: {{ include "cert-manager-approver-policy.name" . }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "cert-manager-approver-policy.name" . }}
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: devops
                operator: In
                values:
                - "true"
      tolerations:
      - effect: NoSchedule
        key: devops
        operator: Equal
        value: "true"
      containers:
      - name: {{ include "cert-manager-approver-policy.name" . }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - containerPort: {{ .Values.app.webhook.port }}
        - containerPort: {{ .Values.app.metrics.port }}
        readinessProbe:
          httpGet:
            port: {{ .Values.app.readinessProbe.port }}
            path: "/readyz"
          initialDelaySeconds: 3
          periodSeconds: 7
        command: ["cert-manager-approver-policy"]
        args:
          - --log-level={{.Values.app.logLevel}}

      {{- range .Values.app.extraArgs }}
      - {{ . }}
      {{- end  }}

      - --metrics-bind-address=:{{.Values.app.metrics.port}}
      - --readiness-probe-bind-address=:{{.Values.app.readinessProbe.port}}

      - --webhook-host={{.Values.app.webhook.host}}
      - --webhook-port={{.Values.app.webhook.port}}
      - --webhook-service-name={{ include "cert-manager-approver-policy.name" . }}
      - --webhook-ca-secret-namespace={{.Release.Namespace}}
      - --webhook-certificate-dir={{.Values.app.webhook.certificateDir}}

    {{- if .Values.volumeMounts }}
    volumeMounts:

{{ toYaml .Values.volumeMounts | indent 10 }}
{{- end }}

    resources:
      {{- toYaml .Values.resources | indent 12 }}

  {{- if .Values.volumes }}
  volumes:

{{ toYaml .Values.volumes | indent 6 }}
{{- end }}

Can anyone help me look at this problem? Thanks very much.

Infinite loop causing increase in certificaterequestpolicies version

Description: I have encountered an issue where an infinite loop is causing an increase in the certificate request policy version. I found that the root cause of the infinite loop is related to the setCertificateRequestPolicyCondition() function in the certificaterequestpolicies.go file.

The first problem is here: https://github.com/cert-manager/approver-policy/blob/main/pkg/internal/controllers/certificaterequestpolicies.go#L208. An empty list is created, which causes setCertificateRequestPolicyCondition() to always append the condition to the list and try to patch the object in Reconcile(). Furthermore, I believe that patching the object even when it is already in a ready state leads to an infinite loop, as the patch modifies the object and triggers a new event for it (shouldn't we patch the object only if it's not ready?):

// If this update doesn't contain a state transition, we don't update
// the conditions LastTransitionTime to Now()
if existingCondition.Status == condition.Status {
	condition.LastTransitionTime = existingCondition.LastTransitionTime
}

(*conditions)[idx] = condition
return

I've tested a fix locally, and something like this should be the solution:

func (c *certificaterequestpolicies) setCertificateRequestPolicyCondition(
	conditions *[]policyapi.CertificateRequestPolicyCondition,
	generation int64,
	condition policyapi.CertificateRequestPolicyCondition,
) bool {
	condition.LastTransitionTime = &metav1.Time{Time: c.clock.Now()}
	condition.ObservedGeneration = generation

	for idx, existingCondition := range *conditions {
		// Skip unrelated conditions
		if existingCondition.Type != condition.Type {
			continue
		}

		// If this update doesn't contain a state transition, we don't update object
		if existingCondition.Status == condition.Status {
			return false
		}

		(*conditions)[idx] = condition
		return true
	}

	// If we've not found an existing condition of this type, we simply insert
	// the new condition into the slice.
	*conditions = append(*conditions, condition)
	return true
}

Then Reconcile should patch the object only if it's not in the desired state:

func (c *certificaterequestpolicies) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	result, patch, policyStatus, resultErr := c.reconcileStatusPatch(ctx, req)
	if patch {
		crp, patch, err := ssa_client.GenerateCertificateRequestPolicyStatusPatch(req.Name, req.Namespace, policyStatus)
		if err != nil {
			err = fmt.Errorf("failed to generate CertificateRequestPolicy.Status patch: %w", err)
			return ctrl.Result{}, utilerrors.NewAggregate([]error{resultErr, err})
		}

		if err := c.client.Status().Patch(ctx, crp, patch, &client.SubResourcePatchOptions{
			PatchOptions: client.PatchOptions{
				FieldManager: "approver-policy",
				Force:        pointer.Bool(true),
			},
		}); err != nil {
			err = fmt.Errorf("failed to apply CertificateRequestPolicy.Status patch: %w", err)
			return ctrl.Result{}, utilerrors.NewAggregate([]error{resultErr, err})
		}
	}

	return result, resultErr
}
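
To make the intent of the fix concrete, here is a dependency-free reduction of the same merge logic (a local Condition struct stands in for policyapi.CertificateRequestPolicyCondition, and the receiver/clock are dropped). It shows that re-applying an identical status reports "no patch needed", which is what breaks the loop:

```go
package main

import (
	"fmt"
	"time"
)

// Condition is a reduced stand-in for policyapi.CertificateRequestPolicyCondition.
type Condition struct {
	Type               string
	Status             string
	ObservedGeneration int64
	LastTransitionTime time.Time
}

// setCondition mirrors the proposed fix: it returns true only when the
// conditions slice actually changed (new condition type, or a status
// transition), so Reconcile can skip the patch otherwise.
func setCondition(conditions *[]Condition, generation int64, cond Condition) bool {
	cond.LastTransitionTime = time.Now()
	cond.ObservedGeneration = generation
	for idx, existing := range *conditions {
		// Skip unrelated conditions.
		if existing.Type != cond.Type {
			continue
		}
		// No state transition: nothing to patch.
		if existing.Status == cond.Status {
			return false
		}
		(*conditions)[idx] = cond
		return true
	}
	// No existing condition of this type: insert it.
	*conditions = append(*conditions, cond)
	return true
}

func main() {
	var conds []Condition
	ready := Condition{Type: "Ready", Status: "True"}
	fmt.Println(setCondition(&conds, 1, ready)) // first insert: changed
	fmt.Println(setCondition(&conds, 1, ready)) // same status again: no-op
}
```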

How to reproduce:

  1. Start the cert-manager-approver-policy application and monitor the resource version at regular intervals (e.g., every x.y seconds).
  2. Enable higher log verbosity and observe that the object is consistently being patched.

If you confirm that this is indeed a bug, I am willing to assist in fixing it and creating a pull request (PR) if you need assistance.
/kind bug
