autocert's Introduction

Autocert architecture diagram

Autocert


Autocert is a Kubernetes add-on that automatically injects TLS/HTTPS certificates into your containers, so they can communicate with each other securely.

To get a certificate, simply annotate your pods with a name. An X.509 (TLS/HTTPS) certificate is automatically created and mounted at /var/run/autocert.step.sm/, along with a corresponding private key and root certificate (everything you need for mTLS).

The certificates are signed by an internal step-ca Certificate Authority (CA) or by Certificate Manager (see Tutorial & Demo).

By the way, we also have a cert-manager Certificate Issuer called step-issuer that works directly with either your step-ca server or our cloud CA product. While Autocert volume mounts certificates and keys directly into Pods, step-issuer makes them available via Secrets.

We ❤️ feedback, bugs, and enhancement suggestions. We also have an #autocert channel on our Discord.

Autocert demo gif

Motivation

Autocert exists to make it easy to use mTLS (mutual TLS) to improve security within a cluster and to secure communication into, out of, and between kubernetes clusters.

TLS (and HTTPS, which is HTTP over TLS) provides authenticated encryption: an identity dialtone and end-to-end encryption for your workloads. It makes workloads identity-aware, improving observability and enabling granular access control. Perhaps most compelling, mTLS lets you securely communicate with workloads running anywhere, not just inside kubernetes.

Connect with mTLS diagram

Unlike VPNs & SDNs, deploying and scaling mTLS is pretty easy. You're (hopefully) already using TLS, and your existing tools and standard libraries will provide most of what you need.

There's just one problem: you need certificates issued by your own certificate authority (CA). Building and operating a CA, issuing certificates, and making sure they're renewed before they expire is tricky. Autocert does all of this for you.

Features

First and foremost, autocert is easy. You can get started in minutes.

Autocert runs step-ca to generate keys and issue certificates internally. The process is secure and automatic: all you have to do is install autocert and annotate your pods.

Features include:

  • A fully featured private CA for workloads running on kubernetes and elsewhere
  • RFC5280 and CA/Browser Forum compliant certificates that work for TLS
  • Namespaced installation into the step namespace so it's easy to lock down your CA
  • Short-lived certificates with fully automated enrollment and renewal
  • Private keys are never transmitted across the network and aren't stored in etcd

Because autocert is built on step-ca you can easily extend access to developers, endpoints, and workloads running outside your cluster, too.

Tutorial & Demo


In this tutorial video, Smallstep Software Engineer Andrew Reed shows how to use autocert alongside the Smallstep Certificate Manager hosted CA.

Installation

Prerequisites

All you need to get started is kubectl and a cluster running kubernetes with admission webhooks enabled:

$ kubectl version
Client Version: v1.26.1
Kustomize Version: v4.5.7
Server Version: v1.25.3
$ kubectl api-versions | grep "admissionregistration.k8s.io/v1"
admissionregistration.k8s.io/v1

Install via kubectl

To install autocert run:

kubectl run autocert-init -it --rm --image cr.step.sm/smallstep/autocert-init --restart Never

💥 installation complete.

You might want to check out what this command does before running it. You can also install autocert manually if that's your style.

Install via Helm

Autocert can also be installed with the Helm package manager. To add the repository and install autocert, run:

helm repo add smallstep https://smallstep.github.io/helm-charts/
helm repo update
helm install autocert smallstep/autocert

You can see all the configuration options at https://artifacthub.io/packages/helm/smallstep/autocert.

Usage

Using autocert is also easy:

  • Enable autocert for a namespace by labelling it with autocert.step.sm=enabled, then
  • Inject certificates into containers by annotating pods with autocert.step.sm/name: <name>

Enable autocert (per namespace)

To enable autocert for a namespace it must be labelled autocert.step.sm=enabled.

To label the default namespace run:

kubectl label namespace default autocert.step.sm=enabled

To check which namespaces have autocert enabled run:

$ kubectl get namespace -L autocert.step.sm
NAME          STATUS   AGE   AUTOCERT.STEP.SM
default       Active   59m   enabled
...

Annotate pods to get certificates

To get a certificate you need to tell autocert your workload's name using the autocert.step.sm/name annotation (this name will appear as the X.509 common name and SAN).

It's also possible to set the certificate's lifetime using the autocert.step.sm/duration annotation. A duration is a sequence of decimal numbers, each with an optional fraction and a unit suffix, such as "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Note that the container will crash if the duration falls outside the limits defined by the provisioner in use (the defaults are 5m and 24h).

By default the certificate, key and root will be owned by root and world-readable (0644). Use the autocert.step.sm/owner and autocert.step.sm/mode annotations to set the owner and permissions of the files. The owner annotation requires user and group IDs rather than names because the images used by the containers that create and renew the certificates do not have the same user list as the main application containers.

Let's deploy a simple mTLS server named hello-mtls.default.svc.cluster.local:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata: {name: hello-mtls, labels: {app: hello-mtls}}
spec:
  replicas: 1
  selector: {matchLabels: {app: hello-mtls}}
  template:
    metadata:
      annotations:
        # AUTOCERT ANNOTATION HERE -v ###############################
        autocert.step.sm/name: hello-mtls.default.svc.cluster.local #
        # AUTOCERT ANNOTATION HERE -^ ###############################
      labels: {app: hello-mtls}
    spec:
      containers:
      - name: hello-mtls
        image: smallstep/hello-mtls-server-go:latest
EOF

In our new container we should find a certificate, private key, and root certificate mounted at /var/run/autocert.step.sm:

$ export HELLO_MTLS=$(kubectl get pods -l app=hello-mtls -o jsonpath='{$.items[0].metadata.name}')
$ kubectl exec -it $HELLO_MTLS -c hello-mtls -- ls /var/run/autocert.step.sm
root.crt  site.crt  site.key

We're done. Our container has a certificate, issued by our CA, which autocert will automatically renew.

Now let's deploy another server with the autocert.step.sm/duration, autocert.step.sm/owner, and autocert.step.sm/mode annotations:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata: {name: hello-mtls-1h, labels: {app: hello-mtls-1h}}
spec:
  replicas: 1
  selector: {matchLabels: {app: hello-mtls-1h}}
  template:
    metadata:
      annotations:
        autocert.step.sm/name: hello-mtls-1h.default.svc.cluster.local
        autocert.step.sm/duration: 1h
        autocert.step.sm/owner: "999:999"
        autocert.step.sm/mode: "0600"
      labels: {app: hello-mtls-1h}
    spec:
      containers:
      - name: hello-mtls
        image: smallstep/hello-mtls-server-go:latest
EOF

The certificate and key will be owned by user and group 999, with read/write permission restricted to the owner; the certificate will be valid for one hour and renewed automatically:

$ export HELLO_MTLS_1H=$(kubectl get pods -l app=hello-mtls-1h -o jsonpath='{$.items[0].metadata.name}')
$ kubectl exec -it $HELLO_MTLS_1H -c hello-mtls -- ls -ln /var/run/autocert.step.sm
-rw------- 1 999 999  623 Jun  6 21:17 root.crt
-rw------- 1 999 999 1470 Jun  6 21:37 site.crt
-rw------- 1 999 999  227 Jun  6 21:17 site.key
$ kubectl exec -it $HELLO_MTLS_1H -c hello-mtls -- cat /var/run/autocert.step.sm/site.crt | step certificate inspect --short -
X.509v3 TLS Certificate (ECDSA P-256) [Serial: 3182...1140]
  Subject:     hello-mtls-1h.default.svc.cluster.local
  Issuer:      Autocert Intermediate CA
  Provisioner: autocert [ID: A1lX...ty1Q]
  Valid from:  2020-04-30T01:58:17Z
          to:  2020-04-30T02:58:17Z

Custom durations are especially useful when the step-ca provisioner is configured with a maximum duration longer than the default. Longer-lived certificates suit services that cannot reload certificates gracefully.
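
For example, to permit longer-lived certificates you would raise the provisioner's claims in the CA's ca.json. This is a minimal sketch based on step-ca's standard claims settings; the values shown are illustrative, not defaults you must use:

"provisioners": [
  {
    "type": "JWK",
    "name": "autocert",
    "claims": {
      "minTLSCertDuration": "5m",
      "defaultTLSCertDuration": "24h",
      "maxTLSCertDuration": "2160h"
    }
  }
]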

✅ Certificates.

Hello mTLS

It's easy to deploy certificates using autocert, but it's up to you to use them correctly. To get you started, hello-mtls demonstrates the right way to use mTLS with various tools and languages (contributions welcome :). If you're a bit fuzzy on how mTLS works, the hello-mtls README is a great place to start.

To finish out this tutorial let's keep things simple and try curling the server we just deployed from inside and outside the cluster.

Connecting from inside the cluster

First, let's expose our workload to the rest of the cluster using a service:

kubectl expose deployment hello-mtls --port 443

Now let's deploy a client, with its own certificate, that curls our server in a loop:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata: {name: hello-mtls-client, labels: {app: hello-mtls-client}}
spec:
  replicas: 1
  selector: {matchLabels: {app: hello-mtls-client}}
  template:
    metadata:
      annotations:
        # AUTOCERT ANNOTATION HERE -v ######################################
        autocert.step.sm/name: hello-mtls-client.default.pod.cluster.local #
        # AUTOCERT ANNOTATION HERE -^ ######################################
      labels: {app: hello-mtls-client}
    spec:
      containers:
      - name: hello-mtls-client
        image: smallstep/hello-mtls-client-curl:latest
        env: [{name: HELLO_MTLS_URL, value: https://hello-mtls.default.svc.cluster.local}]
EOF

Note that the authority portion of the URL (the HELLO_MTLS_URL env var) matches the name of the server we're connecting to (both are hello-mtls.default.svc.cluster.local). That's required for standard HTTPS and can sometimes require some DNS trickery.

Once deployed we should start seeing the client log responses from the server saying hello:

$ export HELLO_MTLS_CLIENT=$(kubectl get pods -l app=hello-mtls-client -o jsonpath='{$.items[0].metadata.name}')
$ kubectl logs $HELLO_MTLS_CLIENT -c hello-mtls-client
Thu Feb  7 23:35:23 UTC 2019: Hello, hello-mtls-client.default.pod.cluster.local!
Thu Feb  7 23:35:28 UTC 2019: Hello, hello-mtls-client.default.pod.cluster.local!

For kicks, let's exec into this pod and try curling ourselves:

$ kubectl exec $HELLO_MTLS_CLIENT -c hello-mtls-client -- curl -sS \
       --cacert /var/run/autocert.step.sm/root.crt \
       --cert /var/run/autocert.step.sm/site.crt \
       --key /var/run/autocert.step.sm/site.key \
       https://hello-mtls.default.svc.cluster.local
Hello, hello-mtls-client.default.pod.cluster.local!

✅ mTLS inside cluster.

Connecting from outside the cluster

Connecting from outside the cluster is a bit more complicated. We need to handle DNS and obtain a certificate ourselves. These tasks were handled automatically inside the cluster by kubernetes and autocert, respectively.

That said, because our server uses mTLS, only clients that have a certificate issued by our certificate authority will be allowed to connect. That means it can be safely and easily exposed directly to the public internet using a LoadBalancer service type:

kubectl expose deployment hello-mtls --name=hello-mtls-lb --port=443 --type=LoadBalancer

To connect we need a certificate. There are a couple different ways to get one, but for simplicity we'll just forward a port.

kubectl -n step port-forward $(kubectl -n step get pods -l app=ca -o jsonpath={$.items[0].metadata.name}) 4443:4443

In another window we'll use step to grab the root certificate, generate a key pair, and get a certificate.

To follow along you'll need to install step if you haven't already. You'll also need your admin password and CA fingerprint, which were output during installation (see here and here if you already lost them :).

$ export CA_POD=$(kubectl -n step get pods -l app=ca -o jsonpath='{$.items[0].metadata.name}')
$ step ca root root.crt --ca-url https://127.0.0.1:4443 --fingerprint <fingerprint>
$ step ca certificate mike mike.crt mike.key --ca-url https://127.0.0.1:4443 --root root.crt
✔ Key ID: H4vH5VfvaMro0yrk-UIkkeCoPFqEfjF6vg0GHFdhVyM (admin)
✔ Please enter the password to decrypt the provisioner key: 0QOC9xcq56R1aEyLHPzBqN18Z3WfGZ01
✔ CA: https://127.0.0.1:4443/1.0/sign
✔ Certificate: mike.crt
✔ Private Key: mike.key

Now we can simply curl the service:

If you're using minikube or Docker for Mac, the load balancer's "IP" might be localhost, which won't work. In that case, simply export HELLO_MTLS_IP=127.0.0.1 and try again.

$ export HELLO_MTLS_IP=$(kubectl get svc hello-mtls-lb -ojsonpath={$.status.loadBalancer.ingress[0].ip})
$ curl --resolve hello-mtls.default.svc.cluster.local:443:$HELLO_MTLS_IP \
       --cacert root.crt \
       --cert mike.crt \
       --key mike.key \
       https://hello-mtls.default.svc.cluster.local
Hello, mike!

Note that we're using --resolve to tell curl to override DNS and resolve the name in our workload's certificate to its public IP address. In a real production infrastructure you could configure DNS manually, or you could propagate DNS to workloads outside kubernetes using something like ExternalDNS.

✅ mTLS outside cluster.

Cleanup & uninstall

To clean up after running through the tutorial remove the hello-mtls and hello-mtls-client deployments and services:

kubectl delete deployment hello-mtls
kubectl delete deployment hello-mtls-client
kubectl delete service hello-mtls
kubectl delete service hello-mtls-lb

See the runbook for instructions on uninstalling autocert.

How it works

Architecture

Autocert is an admission webhook that intercepts pod creation requests and patches them with some YAML to inject an init container and a sidecar, which handle obtaining and renewing certificates, respectively.

Autocert architecture diagram

Enrollment & renewal

It integrates with step certificates (step-ca) and uses the one-time token bootstrap protocol from that project to mutually authenticate a new pod with your certificate authority and obtain a certificate.

Autocert bootstrap protocol diagram

Tokens are generated by the admission webhook and transmitted to the injected init container via a kubernetes secret. The init container uses the one-time token to obtain a certificate. A sidecar is also installed to renew certificates before they expire. Renewal simply uses mTLS with the CA.
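
For illustration, the patched pod spec ends up looking roughly like the sketch below. This is an outline based on the default bootstrapper/renewer configuration (the image names, versions, and mount path come from the controller config reproduced in the issues section; they may differ in your install):

spec:
  initContainers:
  - name: autocert-bootstrapper   # exchanges the one-time token for a certificate
    image: cr.step.sm/smallstep/autocert-bootstrapper:0.15.1
    volumeMounts:
    - name: certs
      mountPath: /var/run/autocert.step.sm
  containers:
  - name: autocert-renewer        # sidecar that renews the certificate before it expires
    image: cr.step.sm/smallstep/autocert-renewer:0.15.1
    volumeMounts:
    - name: certs
      mountPath: /var/run/autocert.step.sm
  # ...plus your own containers, with the certs volume mounted...
  volumes:
  - name: certs
    emptyDir: {}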

FAQs

Wait, so any pod can get a certificate with any identity? How is that secure?

  1. Don't give people kubectl access to your production clusters
  2. Use a deploy pipeline based on git artifacts
  3. Enforce code review on those git artifacts

If that doesn't work for you, or if you have a better idea, we'd love to hear! Please open an issue!

Why do I have to tell you the name to put in a certificate? Why can't you automatically bind service names?

Mostly because monitoring the API server to figure out which services are associated with which workloads is complicated and somewhat magical. And it might not be what you want.

That said, we're not totally opposed to this idea. If anyone has strong feels and a good design please open an issue.

Doesn't Kubernetes already ship with a CA?

Kubernetes needs several certificates for different sorts of control plane communication. It ships with a very limited CA and integration points that allow you to use an alternative CA.

The built-in Kubernetes CA is limited to signing certificates for kubeconfigs and kubelets. Specifically, the controller-manager will sign CSRs in some cases.

See our blog Automating TLS in Kubernetes The Hard Way to learn a lot more.

While you could use the Kubernetes CA for service-to-service data plane and ingress certificates, we don't recommend it. Having two CAs will give you a crisp cryptographic boundary.

What permissions does autocert require in my cluster and why?

Autocert needs permission to create and delete secrets cluster-wide. You can check out our RBAC config here. These permissions are needed in order to transmit one-time tokens to workloads using secrets, and to clean up afterwards. We'd love to scope these permissions down further. If anyone has any ideas please open an issue.
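
For illustration only, the heart of such an RBAC config would be a ClusterRole along these lines (a hedged sketch, not the project's actual manifest; the name is hypothetical):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: autocert-controller
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "delete"]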

Why does autocert create secrets?

The autocert admission webhook needs to securely transmit one-time bootstrap tokens to containers. This could be accomplished without using secrets. The webhook returns a JSONPatch response that's applied to the pod spec. This response could patch the literal token value into our init container's environment.

Unfortunately, the kubernetes API server does not authenticate itself to admission webhooks by default, and configuring it to do so requires passing a custom config file at apiserver startup. This isn't an option for everyone (e.g., on GKE) so we opted not to rely on it.

Since our webhook can't authenticate callers, including bootstrap tokens in patch responses would be dangerous. By using secrets, an attacker can still trick autocert into generating superfluous bootstrap tokens, but they'd also need read access to cluster secrets to do anything with them.

Hopefully this story will improve with time.

Why not use kubernetes service accounts instead of bootstrap tokens?

Great idea! This should be pretty easy to add using the TokenRequest API.

Can I lengthen the duration of the bootstrap tokens?

If you're facing deployment times longer than five minutes, use the annotation autocert.step.sm/init-first: "true", which forces the bootstrapper to run before any other initContainer. As long as the CA is available, you will get a certificate valid for 24h, which should be enough to initialize the rest of the deployment. After the bootstrapper, the rest of the initContainers run and can wait for their dependencies to become ready. See smallstep/autocert#108 for more details.
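
For example, the annotation goes alongside the name annotation in your pod template (a minimal sketch; my-app.default.svc.cluster.local is a placeholder name):

metadata:
  annotations:
    autocert.step.sm/name: my-app.default.svc.cluster.local
    autocert.step.sm/init-first: "true"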

Too. many. containers. Why do you need to install an init container and a sidecar?

We don't. It's just easier for you. Your containers can generate key pairs, exchange them for certificates, and manage renewals themselves. This is pretty easy if you install step in your containers, or integrate with our golang SDK. To support this we'd need to add the option to inject a bootstrap token without injecting these containers.

That said, the init container and sidecar are both super lightweight.
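
As a sketch of that DIY approach with the step CLI (assuming you already have a bootstrap token in $TOKEN, the CA root in root.crt, and the default in-cluster CA URL; the subject name is a placeholder):

# Enroll: exchange the one-time token for a certificate and key.
step ca certificate my-app.default.svc.cluster.local site.crt site.key \
    --token "$TOKEN" --ca-url https://ca.step.svc.cluster.local --root root.crt

# Renew continuously; in daemon mode step renews the certificate
# periodically before it expires.
step ca renew --daemon site.crt site.key \
    --ca-url https://ca.step.svc.cluster.local --root root.crt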

Why are keys and certificates managed via volume mounts? Why not use a Secret or some custom resource?

Because, by default, kubernetes Secrets are stored in plaintext in etcd and might even be transmitted unencrypted across the network. Even if Secrets were properly encrypted, transmitting a private key across the network violates PKI best practices. Key pairs should always be generated where they're used, and private keys should never be known by anyone but their owners.

That said, there are use cases where a certificate mounted in a Secret resource is desirable (e.g., for use with a kubernetes Ingress). For that, we recommend step-issuer.

(Add a 👍 to #48 if you'd like autocert to expose Secrets in the future.)

How is this different from cert-manager?

Cert-manager is a great project, but its design is focused on managing Web PKI certificates issued by Let's Encrypt's public certificate authority. These certificates are useful for TLS ingress from web browsers. Autocert is purpose-built to manage certificates issued by your own private CA to support the use of mTLS for service-to-service communication.

What sorts of keys are issued and how often are certificates rotated?

Autocert builds on step certificates, which issues ECDSA certificates using the P-256 curve with ECDSA-SHA256 signatures by default. If this is all Greek to you, rest assured these are safe, sane, and modern defaults that are suitable for the vast majority of environments. Certificates are valid for 24 hours by default (configurable with the autocert.step.sm/duration annotation described above), and the renewer sidecar renews them automatically before they expire.

What crypto library is under the hood?

https://golang.org/pkg/crypto/

Building

This project is based on four container images:

  • autocert-controller (the admission webhook)
  • autocert-bootstrapper (the init container that generates a key pair and exchanges a bootstrap token for a certificate)
  • autocert-renewer (the sidecar that renews certificates)
  • autocert-init (the install script)

They use multi-stage builds, so all you need in order to build them is Docker.

To build all of the images, run:

docker build -t smallstep/autocert-controller:latest -f controller/Dockerfile .
docker build -t smallstep/autocert-bootstrapper:latest -f bootstrapper/Dockerfile .
docker build -t smallstep/autocert-renewer:latest -f renewer/Dockerfile .
docker build -t smallstep/autocert-init:latest -f init/Dockerfile .

If you build your own containers you'll probably need to install manually. You'll also need to adjust which images are deployed in the deployment yaml.
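
For example, after a manual install you might point the controller at your own build (a sketch assuming the default step namespace and the autocert deployment/container names shown elsewhere in this document):

kubectl -n step set image deployment/autocert \
    autocert=smallstep/autocert-controller:latest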

Contributing

If you have improvements to autocert, send us your pull requests! For those just getting started, GitHub has a howto. A team member will review your pull requests, provide feedback, and merge your changes. In order to accept contributions we do need you to sign our contributor license agreement.

If you want to contribute but you're not sure where to start, take a look at the issues with the "good first issue" label. These are issues that we believe are particularly well suited for outside contributions, often because we probably won't get to them right now. If you decide to start on an issue, leave a comment so that other people know that you're working on it. If you want to help out, but not alone, use the issue comment thread to coordinate.

If you've identified a bug or have ideas for improving autocert that you don't have time to implement, we'd love to hear about them. Please open an issue to report a bug or suggest an enhancement!

License

Copyright 2023 Smallstep Labs

Licensed under the Apache License, Version 2.0

autocert's People

Contributors

alanchrt, alevsk, areed, aviaviavi, dependabot[bot], devadvocado, dopey, fastlorenzo, github-actions[bot], hslatman, juneezee, maraino, roberttheprofessional, step-ci, tashian, testwill, tiagoposse, voxeljorge


autocert's Issues

Recommended way to handle autocert failures

Apologies if this is obvious (I'm new to k8s in general), but what do you recommend for managing cert acquisition failures? I don't know why we see this so frequently, but in a non-trivial number of pod boot-ups we see some kind of failure in sourcing certs and get into a perpetual (or at least long-running) CrashLoopBackOff on the pod; the cert container doesn't recover:


Looks like we're using a slightly older version of the helm chart, I'll update and see if that decreases the incidence rate. Meanwhile, here are some logs from the renewer:

~ % kubectl logs sidekiq-6f5f9557c-scgq4 -n sqjobs --container autocert-renewer
open /var/run/autocert.step.sm/site.crt: no such file or directory
error loading certificates
github.com/smallstep/cli/command/ca.renewCertificateAction
	/home/travis/gopath/src/github.com/smallstep/cli/command/ca/renew.go:239
github.com/smallstep/cli/command.ActionFunc.func1
	/home/travis/gopath/src/github.com/smallstep/cli/command/command.go:48
github.com/urfave/cli.HandleAction
	/home/travis/gopath/pkg/mod/github.com/urfave/[email protected]/app.go:521
github.com/urfave/cli.Command.Run
	/home/travis/gopath/pkg/mod/github.com/urfave/[email protected]/command.go:174
github.com/urfave/cli.(*App).RunAsSubcommand
	/home/travis/gopath/pkg/mod/github.com/urfave/[email protected]/app.go:404
github.com/urfave/cli.Command.startApp
	/home/travis/gopath/pkg/mod/github.com/urfave/[email protected]/command.go:373
github.com/urfave/cli.Command.Run
	/home/travis/gopath/pkg/mod/github.com/urfave/[email protected]/command.go:102
github.com/urfave/cli.(*App).Run
	/home/travis/gopath/pkg/mod/github.com/urfave/[email protected]/app.go:276
main.main
	/home/travis/gopath/src/github.com/smallstep/cli/cmd/step/main.go:93
runtime.main
	/home/travis/.gimme/versions/go1.14.2.linux.amd64/src/runtime/proc.go:203
runtime.goexit
	/home/travis/.gimme/versions/go1.14.2.linux.amd64/src/runtime/asm_amd64.s:1373

Thanks!

Allow users to define the expiration period for bootstrapper tokens

What would you like to be added

Allow users to define the expiration period for the tokens issued by the autocert pod and consumed by the autocert-bootstrapper initContainer.

Why this is needed

I'm working on a project that spawns multiple pods, all of which need to wait for another deployment to become available. This is achieved by adding an initContainer to them that polls the API for the status of said deployment.
The issue I'm facing is that this deployment takes longer to become available than the expiration period set for the tokens issued by autocert, so by the time the autocert-bootstrapper initializes, the token has expired and has been automatically removed, making my pods wait for a condition that will never be met, hanging indefinitely until I manually restart the deployments.

Certificates are not injected anymore after master node migration

Subject of the issue

I have recently migrated the master node from on-prem to EC2 using the old master's etcd snapshot, following a tutorial on RKE2.

It seems like the overall cluster information was migrated and restored properly; however, autocert certificates are no longer injected into pods :(

I deployed autocert through a Helm chart initially, and to fix this issue I tried both uninstalling autocert and reinstalling the Helm chart, but everything seems to run without errors except that certificates are still not injected...

Really appreciate your help

Many Thanks in advance!

Environment

  • Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.9+rke2r2", GitCommit:"6df4433e288edc9c40c2e344eb336f63fad45cd2", GitTreeState:"clean", BuildDate:"2022-04-28T19:11:38Z", GoVersion:"go1.16.15b7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.9+rke2r2", GitCommit:"6df4433e288edc9c40c2e344eb336f63fad45cd2", GitTreeState:"clean", BuildDate:"2022-04-28T19:11:38Z", GoVersion:"go1.16.15b7", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:
    c5a.xlarge from AWS EC2
  • OS (e.g., from /etc/os-release):
NAME="Ubuntu"
VERSION="20.04.4 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.4 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
  • Kernel (e.g., uname -a):
    Linux ip-172-32-74-108 5.13.0-1023-aws #25~20.04.1-Ubuntu SMP Mon Apr 25 19:28:27 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux


Steps to reproduce

Install autocert via helm chart & migrate K8s master node

0.18.0 has no arm64 support

Subject of the issue

As discussed for a previous version in #15, the latest docker images do not support arm architectures. @dopey created a 0.17.2-rc1 image that supported both arm and amd. Since then, the issue has gone cold.

Environment

  • Kernel (e.g., uname -a): arm64

Steps to reproduce

Try to run autocert v0.18.0 on an arm64 node

Expected behaviour

Everything works as expected

Actual behaviour

  Normal   Pulling    12s (x4 over 104s)  kubelet            Pulling image "smallstep/autocert-controller:0.18.0"
  Warning  Failed     9s (x4 over 101s)   kubelet            Failed to pull image "smallstep/autocert-controller:0.18.0": rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/smallstep/autocert-controller:0.18.0": no match for platform in manifest: not found
  Warning  Failed     9s (x4 over 101s)   kubelet            Error: ErrImagePull

How to adjust the duration for a cert?

How long can a cert last by default? Does it align with step?

  1. site.crt
  2. root.crt

How can we adjust the duration for both?
How and when will renewal of the certs be executed?
In fact, it is not that easy for an app to handle cert renewal itself.

This does not run on RPI (arm64)

What would you like to be added

Support for arm64

default pod/autocert-step-certificates-0 0/1 CrashLoopBackOff 6 9m13s
default pod/autocert-854c8956dd-mp7j5 0/1 Running 6 9m13s
default pod/autocert-lg46f 0/1 Error 0 9m13s
default pod/autocert-wgrqd 0/1 Error 0 7m40s
default pod/autocert-l6crm 0/1 Error 0 6m32s
default pod/autocert-mtz2z 0/1 Error 0 5m52s

Looks like one or two containers might be running but others are failing.

Error loading provisioner

Subject of the issue

I followed the manual install steps provided here and get an error when deploying autocert pod:
Error loading provisioner: client GET https://ca.step.svc.cluster.local/health failed: Get \"https://ca.step.svc.cluster.local/health\": dial tcp: lookup ca.step.svc.cluster.local: Try again

  {
    "config": {
      "Address": "",
      "Service": "",
      "LogFormat": "json",
      "CaURL": "https://ca.step.svc.cluster.local",
      "CertLifetime": "24h",
      "Bootstrapper": {
        "name": "autocert-bootstrapper",
        "image": "cr.step.sm/smallstep/autocert-bootstrapper:0.15.1",
        "resources": {
          "requests": {
            "cpu": "10m",
            "memory": "20Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "certs",
            "mountPath": "/var/run/autocert.step.sm"
          }
        ],
        "imagePullPolicy": "IfNotPresent"
      },
      "Renewer": {
        "name": "autocert-renewer",
        "image": "cr.step.sm/smallstep/autocert-renewer:0.15.1",
        "resources": {
          "requests": {
            "cpu": "10m",
            "memory": "20Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "certs",
            "mountPath": "/var/run/autocert.step.sm"
          }
        ],
        "imagePullPolicy": "IfNotPresent"
      },
      "CertsVolume": {
        "name": "certs",
        "emptyDir": {}
      },
      "RestrictCertificatesToNamespace": false,
      "ClusterDomain": "cluster.local",
      "RootCAPath": "",
      "ProvisionerPasswordPath": ""
    },
    "level": "info",
    "msg": "Loaded config",
    "time": "2022-04-07T17:29:14Z"
  },
  {
    "level": "info",
    "msg": "Loaded provisioner configuration",
    "provisionerKid": "",
    "provisionerName": "autocert",
    "time": "2022-04-07T17:29:14Z"
  },
  {
    "level": "error",
    "msg": "Error loading provisioner: client GET https://ca.step.svc.cluster.local/health failed: Get \"https://ca.step.svc.cluster.local/health\": dial tcp: lookup ca.step.svc.cluster.local: Try again",
    "time": "2022-04-07T17:29:19Z"
  }

I experienced this issue both on amd64 and arm64 (using the images given here by fastlorenzo)

I tried using the IP directly instead of the domain name, without success. If I try to resolve the DNS manually, it gives the correct IP, so it doesn't seem to be a DNS problem.

> dig +short ca.step.svc.cluster.local @10.96.0.10
10.108.222.249

I don't see anybody else having this issue, and I can't find any more logs about the problem. Any help is very much appreciated.

> kubectl describe pod -n step autocert-6fd76c4d97-cj9z9
Name:         autocert-6fd76c4d97-cj9z9
Namespace:    step
Priority:     0
Node:         bhasher-desktop/192.168.1.2
Start Time:   Thu, 07 Apr 2022 19:12:34 +0200
Labels:       app=autocert
              pod-template-hash=6fd76c4d97
Annotations:  <none>
Status:       Running
IP:           10.44.0.2
IPs:
  IP:           10.44.0.2
Controlled By:  ReplicaSet/autocert-6fd76c4d97
Containers:
  autocert:
    Container ID:   docker://917e0f80b09934ab3040fb9946ed5e7cfeeee3a88ef6b9ce46ad3bafeb30c222
    Image:          cr.step.sm/smallstep/autocert-controller:0.15.1
    Image ID:       docker-pullable://cr.step.sm/smallstep/autocert-controller@sha256:b085bbcfff8631d37152ab44390ce36da82f9dc21b23cf494ee7729998993d16
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 07 Apr 2022 19:39:50 +0200
      Finished:     Thu, 07 Apr 2022 19:39:55 +0200
    Ready:          False
    Restart Count:  11
    Requests:
      cpu:      100m
      memory:   20Mi
    Liveness:   http-get https://:4443/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get https://:4443/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      PROVISIONER_NAME:  autocert
      NAMESPACE:         step (v1:metadata.namespace)
    Mounts:
      /home/step/autocert from autocert-config (ro)
      /home/step/certs from certs (ro)
      /home/step/config from config (ro)
      /home/step/password from autocert-password (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hk5wm (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      config
    Optional:  false
  certs:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      certs
    Optional:  false
  autocert-password:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  autocert-password
    Optional:    false
  autocert-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      autocert-config
    Optional:  false
  kube-api-access-hk5wm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  30m                  default-scheduler  Successfully assigned step/autocert-6fd76c4d97-cj9z9 to bhasher-desktop
  Normal   Pulling    30m                  kubelet            Pulling image "cr.step.sm/smallstep/autocert-controller:0.15.1"
  Normal   Pulled     30m                  kubelet            Successfully pulled image "cr.step.sm/smallstep/autocert-controller:0.15.1" in 4.538132898s
  Normal   Created    30m (x3 over 30m)    kubelet            Created container autocert
  Normal   Started    30m (x3 over 30m)    kubelet            Started container autocert
  Warning  Unhealthy  30m (x8 over 30m)    kubelet            Readiness probe failed: Get "https://10.44.0.2:4443/healthz": dial tcp 10.44.0.2:4443: connect: connection refused
  Warning  Unhealthy  30m (x2 over 30m)    kubelet            Liveness probe failed: Get "https://10.44.0.2:4443/healthz": dial tcp 10.44.0.2:4443: connect: connection refused
  Normal   Pulled     29m (x3 over 30m)    kubelet            Container image "cr.step.sm/smallstep/autocert-controller:0.15.1" already present on machine
  Warning  BackOff    30s (x141 over 30m)  kubelet            Back-off restarting failed container

Environment

  • Kubernetes version: v1.23.5

  • Cloud provider or hardware configuration: Raspberry Pi & Linux Manjaro Desktop (amd64)

  • OS (e.g., from /etc/os-release): Debian Linux

  • Kernel (e.g., uname -a): Linux raspberrypi 5.15.30-v8+ #1535 SMP PREEMPT aarch64 GNU/Linux

  • Install tools: manual install

  • Other: Weave & MetalLB

Steps to reproduce

Follow manual install guide.

Expected behaviour

Having a running pod.

Actual behaviour

Pod crashes and reboots with the error message: Back-off restarting failed container

no alternative certificate subject name matches target host name 'hello-mtls.default.svc.cluster.local'

Subject of the issue

When following the steps to setup autocert for the hello-mtls server and hello-mtls-client pods, the following error occurs on the hello-mtls client pod:

"no alternative certificate subject name matches target host name 'hello-mtls.default.svc.cluster.local'"

The same occurs when curling manually via kubectl exec on the hello-mtls-client pod.
curl -sS \
     --cacert /var/run/autocert.step.sm/root.crt \
     --cert /var/run/autocert.step.sm/site.crt \
     --key /var/run/autocert.step.sm/site.key \
     https://hello-mtls.default.svc.cluster.local

Is this down to lack of support for AKS perhaps?

Environment

  • Kubernetes version: v1.18.14
  • Cloud provider or hardware configuration: Azure Kubernetes Service
  • OS (e.g., from /etc/os-release): Ubuntu 18.04
  • Install tools: helm

Steps to reproduce

Followed the helm install steps here
https://artifacthub.io/packages/helm/smallstep/autocert

And the annotate pods steps here
https://github.com/smallstep/autocert#annotate-pods-to-get-certificates

And the hello mtls steps here
https://github.com/smallstep/autocert#hello-mtls

Expected behaviour

The following should be logged in the hello-mtls-client pod:
Hello, hello-mtls-client.default.pod.cluster.local!

Actual behaviour

the hello-mtls-client pod logs the following error:
"no alternative certificate subject name matches target host name 'hello-mtls.default.svc.cluster.local'"


How to copy certificates to different location

Currently, certificates are stored in /var/run/autocert.step.sm on the pod.

I am trying to use the certificate in pgadmin4, but it can only see files in the /var/lib/pgadmin/storage folder.

How can I copy generated certificates to another location automatically?

Here are pod mounts:

       /var/lib/pgadmin from pgadmin-data (rw)
       /var/run/autocert.step.sm from certs (ro)

best way to create certificate for each pod in a deployment

How would I go about creating per pod certificate annotations?

Looking through the kubernetes / helm docs, I'm not seeing an easy way. From what I can see, I can only add the annotations after the pod has been created, but then the certificates won't get created/injected since the init container for the pod will have already run. I can add an annotation to the deployment, but that just creates a single certificate for the whole workload, not one for each individual pod.

Would it be possible to add an autocert.step.sm/enabled annotation so that, if autocert.step.sm/name is not set, it defaults to creating and injecting a certificate for each pod in the deployment/statefulset, etc.? Alternatively, issue a SAN cert that has the CN for each pod in the deployment?

Autocert + sidecar proxy

Hey all! I wonder if you would consider adding a sidecar proxy to the feature list of Autocert.

I was thinking that :

  • on an annotation (autocert.step.sm/inject: true), the operator could add the sidecar (e.g. Envoy) and provide certificates.
  • The sidecar mounts the certificates, takes over the network, and does TLS proxy passthrough.
  • Optionally, the proxy is able to refresh itself when certs are renewed.

This is very close to a service mesh, I know, but a LOT simpler, and it could address use cases in which the application cannot present certificates or refresh automatically when certificates are renewed.
Both ways (with sidecar/without sidecar) could still work together.

I might be interested in contributing to that if you consider it worthwhile/doable.

Regards,

Add Support for Kubernetes v1.22+

Subject of the issue

Hi!

I recently tried setting up autocert! I really like the idea and think the project itself is pretty cool! However, I encountered a problem when trying to set up autocert in my Kubernetes v1.23.3 cluster, and I'd appreciate it if you could have a look.

The general problem is that Kubernetes removed the admissionregistration.k8s.io/v1beta1 API and instead recommends using admissionregistration.k8s.io/v1 (see https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md#removal-of-several-beta-kubernetes-apis). This causes the autocert webhook to stop working in Kubernetes versions from 1.22 on (creation of the webhook fails).

I've managed to create the webhook using admissionregistration.k8s.io/v1 in the definition, but even though the webhook is working, the Kubernetes API server does not accept its patch response (I suppose because it's still using admissionregistration.k8s.io/v1beta1).

Environment

  • Kubernetes version: v1.23.3
  • Cloud provider or hardware configuration: Hosted vServer
  • OS (e.g., from /etc/os-release): Debian 5.10.92-1 (2022-01-18) x86_64
  • Kernel (e.g., uname -a): 5.10.0-11-amd64
  • Install tools: -
  • Other: -

Steps to reproduce

Setup a Kubernetes v1.22+ cluster and try to get autocert working.

Expected behaviour

The webhook should be created and responding properly so that the Pod creation requests is patched correctly.

Actual behaviour

The webhook cannot be created, and the Kubernetes API refuses the JSON response from the Autocert Controller pod.

Thanks a lot in advance!

If you have any questions, feel free to get back to me!

Best regards,
Lukas

ImagePullBackOff?

Subject of the issue

When following the helm instructions, all the pods result in ImagePullBackOff statuses. I didn't notice a step to authenticate with a private repo or anything; does this project require enterprise support to use?

Environment

GKE 1.23

Steps to reproduce

helm -n autocert install autocert smallstep/autocert

Expected behaviour

The images pull down correctly

Actual behaviour

14:08:21 ▶ kubectl get pod -n autocert
NAME                           READY   STATUS             RESTARTS   AGE
autocert-696d9c6f78-fdzs5      0/1     ImagePullBackOff   0          3m58s
autocert-7b658fb6cb-jwhl9      0/1     ImagePullBackOff   0          120m
autocert-step-certificates-0   0/1     ImagePullBackOff   0          120m

Cipher order prevents certs pod from starting

Subject of the issue

Cipher order complaints from step-certificates nixes startup

Environment

  • Kubernetes version: 1.19 Server
  • Cloud provider or hardware configuration: EKS/Fargate
  • OS (e.g., from /etc/os-release):
    NAME="Alpine Linux"
    ID=alpine
    VERSION_ID=3.11.11
    PRETTY_NAME="Alpine Linux v3.11"
  • Kernel: Linux 4.14.243-185.433.amzn2.x86_64 #1 SMP x86_64 Linux
  • Install tools: Helm via AWS CDK
  • Other:

Steps to reproduce

Deploy Autocert Helm Chart with

(in step-certificates chart)
ca.db.enabled = false
ca.db.persistent = false

Expected behaviour

The step-certificates pod should boot with the default Helm chart.

Actual behaviour

Pod dies after this error:
2021/09/13 19:24:02 unexpected error: http2: TLSConfig.CipherSuites index 1 contains an HTTP/2-approved cipher suite (0xc02b), but it comes after unapproved cipher suites. With this configuration, clients that don't support previous, approved cipher suites may be given an unapproved one and reject the connection.

Additional context

This order is the default and problematic:
"tls": { "cipherSuites": [ "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" ],
If I patch the ConfigMap (step/configmaps/autocert-step-certificates-config) and reverse the ciphers, all is well. I'm trying to do this in CDK as a permanent workaround, but haven't succeeded yet. I'm struggling to work out where ca.json comes from in the first place.

Autocert install directory references old versions

Subject of the issue

The install directory contains YAML files that reference old versions: step-ca:0.15.11 instead of 0.17.2, and autocert-renewer/bootstrapper:0.14.0 instead of 0.15.1.

Expected behaviour

Refer to the updated version as described in the latest release: 0.17.2

Actual behaviour

Refer to the old versions

Manual Installation Documentation Issue

Subject of the issue

Following the manual steps for installing autocert into a kubernetes cluster, we are asked to generate a /tmp/step.xxx directory. After generating the CA and provisioner, creating the configmaps/secrets, and deploying the ca and autocert pods, I've run into an issue where the CA pod references the /tmp/step.xxx directory instead of /home/step.

Environment

  • Kubernetes version:
    Client Version: v1.21.5, Server Version: v1.21.5-gke.1805
  • Cloud provider or hardware configuration:
    GKE
  • OS (e.g., from /etc/os-release):
    MACOS Monterey 12.1
  • Kernel (e.g., uname -a):
    Darwin Kernel Version 21.2.0: Sun Nov 28 20:28:54 PST 2021; root:xnu-8019.61.5~1/RELEASE_X86_64 x86_64 i386 MacBookPro16,1 Darwin
  • Install tools:
    Smallstep CLI/0.18.2 (darwin/amd64), Release Date: 2022-03-02 00:47 UTC

Steps to reproduce

NAME                        READY   STATUS             RESTARTS   AGE
autocert-777f499f6f-h5x6r   0/1     CrashLoopBackOff   6          9m14s
ca-664ccd54d8-8tbrt         0/1     CrashLoopBackOff   5          4m11s
  • Look at pod logs
    Error opening database of Type badgerv2 with source /tmp/step.fkH/db: error opening Badger database: Error Creating Dir: "/tmp/step.fkH/db": mkdir /tmp/step.fkH/db: no such file or directory
  • See that /tmp/step.xxx is referenced
  • Look at the config ConfigMap and see the reference to /tmp/step.xxx
kubectl -n step get configmap config -o yaml

apiVersion: v1
data:
  ca.json: "{\n\t\"root\": \"/tmp/step.fkH/certs/root_ca.crt\",\n\t\"federatedRoots\":
    null,\n\t\"crt\": \"/tmp/step.fkH/certs/intermediate_ca.crt\",\n\t\"key\": \"/tmp/step.fkH/secrets/intermediate_ca_key\",\n\t\"address\":
    \":4443\",\n\t\"insecureAddress\": \"\",\n\t\"dnsNames\": [\n\t\t\"ca.step.svc.cluster.local\",\n\t\t\"127.0.0.1\"\n\t],\n\t\"logger\":
    {\n\t\t\"format\": \"text\"\n\t},\n\t\"db\": {\n\t\t\"type\": \"badgerv2\",\n\t\t\"dataSource\":
    \"/tmp/step.fkH/db\",\n\t\t\"badgerFileLoadingMode\": \"\"\n\t},\n\t\"authority\":
    {\n\t\t\"provisioners\": [\n\t\t\t{\n\t\t\t\t\"type\": \"JWK\",\n\t\t\t\t\"name\":
    \"admin\",\n\t\t\t\t\"key\": {\n\t\t\t\t\t\"use\": \"sig\",\n\t\t\t\t\t\"kty\":
    \"EC\",\n\t\t\t\t\t\"kid\": \"WKBOaYE72IvFzh1bOtRpeM6cl9aQDx5A8Rthrov7wDI\",\n\t\t\t\t\t\"crv\":
    \"P-256\",\n\t\t\t\t\t\"alg\": \"ES256\",\n\t\t\t\t\t\"x\": \"CJTqxPoBSpkMAlA3s1CWCag4eR8ERnmplLVzH4tV2io\",\n\t\t\t\t\t\"y\":
    \"hYGU4_n3oMdjVf_Qry6-WwYAP7A-MpL9IacocaBw5zE\"\n\t\t\t\t},\n\t\t\t\t\"encryptedKey\":
    \"eyJhbGciOiJQQkVTMi1IUzI1NitBMTI4S1ciLCJjdHkiOiJqd2sranNvbiIsImVuYyI6IkEyNTZHQ00iLCJwMmMiOjEwMDAwMCwicDJzIjoiYlVaNF9pQ3BzR1pjY3g3YUZ5RVJwZyJ9.R1xJvMScLN6dpIfi9HHlDtdM3MDk-VCXT3icZa5uH51_lVt_g7OFAw.Eg4Rs2if6EEUQJmq.oOL_4q_GS1HGh3o22AVWerSGmUL8AtxN9KQFYGXkgs32d_Xmsbl_GTpJL-UFCXY4tBiieK-1Lf4QEpsRv7uxBzWVfVzgTzlwuZKrER9yN8AoJqdHJ0mPCQtRvK-ZuiW489OcAz8aGslnwk3IdE61pq3XLQMkdsMvvf__XF07YDoQL9bMhfaUR5l2gxWSqjDXeBaX0Ms58Txf47kQRNYoKotAG8i1ffmwWVgHAtmuvr2Pu5rIlYNLeUooADFUg3G5IX8AFKgxkio_BMOoQGbKDtPxI-82toRMWa5ewPvEp9eAnkryWj9Sh__69IkQ0RhNFYLbQpjwkckFNNo0inI.vPNWIJFJDUdIhWbYxoasfA\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"type\":
    \"JWK\",\n\t\t\t\t\"name\": \"autocert\",\n\t\t\t\t\"key\": {\n\t\t\t\t\t\"use\":
    \"sig\",\n\t\t\t\t\t\"kty\": \"EC\",\n\t\t\t\t\t\"kid\": \"ggp9W9moHEgPNgap2LFjQiTUbTQnwvp1IiV1RJG59ro\",\n\t\t\t\t\t\"crv\":
    \"P-256\",\n\t\t\t\t\t\"alg\": \"ES256\",\n\t\t\t\t\t\"x\": \"BRX4y2glUdC-DET-tCVrn9mJ3ZkJkPpN0sxcFddJXgE\",\n\t\t\t\t\t\"y\":
    \"_YCTwPekM8HweS3JJ9x7SNdK2VqbQRIRrr5daIYJJs0\"\n\t\t\t\t},\n\t\t\t\t\"encryptedKey\":
    \"eyJhbGciOiJQQkVTMi1IUzI1NitBMTI4S1ciLCJjdHkiOiJqd2sranNvbiIsImVuYyI6IkEyNTZHQ00iLCJwMmMiOjEwMDAwMCwicDJzIjoiNjFsNXJ3TWhobzJyT0pKbkJmM1VQUSJ9.jEa5eWzgAyfSKjAhIEMMPnlfVNQoU3hkJOd6fygv3J7sJZhNhAHwGg.R3MFjU-t-0sUnl6k.yvXl2HDfZgo9JKK6mvVMrytFc4RZmVnCzI_-1e3SKLrRgfDmMs3v4raVXYpZNfusucjmZMVUtMgGmmvWMzKeAsnhd3m_nyTju06gpOWaUmnJedwJRLxU8UhCa7iqgzDPdZUHgmk-PdypKBnglgJ7f2KrXD0sO1SPX_Vv0DIIm7I0ZpkTgPTMQxP-aKKKkp0s-wj50_JLZvOajQ-Vwa5RfFdkvqB428LpX1wT-HF0o3gCU2xq755mJeCzhkWLveDAWCp1qOnAa9p8nvBbfY-D5BlDJQpA45GfGKFKpIEazdGeWVD5omMkhac18_JAwbptb9r4HZkg-aZL5Qqbb28.vPezhzHn7UKBvDf5ycFliA\"\n\t\t\t}\n\t\t],\n\t\t\"template\":
    {},\n\t\t\"backdate\": \"1m0s\"\n\t},\n\t\"tls\": {\n\t\t\"cipherSuites\": [\n\t\t\t\"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\",\n\t\t\t\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"\n\t\t],\n\t\t\"minVersion\":
    1.2,\n\t\t\"maxVersion\": 1.3,\n\t\t\"renegotiation\": false\n\t}\n}\n"
  defaults.json: "{\n\t\"ca-url\": \"ca.step.svc.cluster.local\",\n\t\"ca-config\":
    \"/tmp/step.fkH/config/ca.json\",\n\t\"fingerprint\": \"897511bee3fe5b7419177fadfa6b791202b4c72829963aab756ca9bce660660b\",\n\t\"root\":
    \"/tmp/step.fkH/certs/root_ca.crt\"\n}"
kind: ConfigMap
metadata:
  creationTimestamp: "2022-04-06T15:17:58Z"
  name: config
  namespace: step
  resourceVersion: "52429"
  uid: feab43fe-06e0-4b48-b3c3-744acd680400

Expected behaviour

Pods should come up with references to the correct mount points

Actual behaviour

Pods come up with references to temp directory


Timeline for a new release

Hello, I wanted to ask if you have a timeline for releasing the next version of autocert, or a minor release that would include #198. I have a cluster deployed fully on Graviton ARM instances and would love to use autocert, but right now it's blocked, and the last release seems to have happened in November 2023.

Thank you in advance, and I appreciate the project and the maintainers! <3

Autocert can't be built since certificates >0.10.0 and cli>0.10.0

Subject of the issue

Trying to build autocert leads to a lot of issues:

  • We encounter a known issue between blackfriday and dep. This can be fixed by adding an override in Gopkg.toml to force blackfriday 1.5.2 and running dep ensure -update.
  • We then get an issue compiling nosql/badger. No clue what's going on here; the only way to move forward was to remove all references to badger from github.com/smallstep/nosql/nosql.go:
badger/badger.go:27:4: bo.Dir undefined (type func(string) badger.Options has no field or method Dir)
badger/badger.go:29:5: bo.ValueDir undefined (type func(string) badger.Options has no field or method ValueDir)
badger/badger.go:31:5: bo.ValueDir undefined (type func(string) badger.Options has no field or method ValueDir)
badger/badger.go:34:26: cannot use bo (type func(string) badger.Options) as type badger.Options in argument to badger.Open
badger/badger.go:122:12: assignment mismatch: 2 variables but 1 values
badger/badger.go:122:25: not enough arguments in call to item.Value
        have ()
        want (func([]byte) error)
badger/badger.go:196:11: assignment mismatch: 2 variables but 1 values
badger/badger.go:196:24: not enough arguments in call to item.Value
        have ()
        want (func([]byte) error)
badger/badger.go:230:29: too many arguments in call to badgerTxn.Commit
        have (nil)
        want ()
  • We can finally start compiling autocert, but there is a breaking change in certificates > 0.10.0 (smallstep/certificates#80), which gives the error:
./main.go:579:23: cannot use config.GetRootCAPath() (type string) as type []byte in argument to ca.NewProvisioner
./main.go:579:23: cannot use password (type []byte) as type ca.ClientOption in argument to ca.NewProvisioner
  • After forcing certificates to 0.10.0 in Gopkg.toml, another breaking change, this time in cli > 0.10.0 (smallstep/cli#131):
vendor/github.com/smallstep/certificates/authority/provisioner/jwk.go:145:16: assignment mismatch: 2 variables but 3 values

I could not get past this one, as I reach a dependency conflict if I try to force cli to 0.10.0:

Solving failure: No versions of github.com/smallstep/cli met constraints:
        v0.10.1: Could not introduce github.com/smallstep/[email protected], as it has a dependency on github.com/smallstep/certificates with constraint master, which has no overlap with existing constraint ^0.10.0 from (root)
        v0.10.0: Could not introduce github.com/smallstep/[email protected], as it has a dependency on github.com/smallstep/certificates with constraint master, which has no overlap with existing constraint ^0.10.0 from (root)

Environment

# go version
go version go1.11.6 linux/amd64

Steps to reproduce

git clone
cd autocert/controller
dep init
go build  -v  .


Istio certs?

What would you like to be added

Few options:

  • expose the Istio CA gRPC interface, using the K8S JWT with istio-ca audience.
  • add an option to change the mount path for certs to the well-known path where istio-agent is looking for certs

Also, it would be nice if the certs included the SPIFFE identity (using a trust domain configured at install time), and maybe an option to restrict the DNS names to NAME.NAMESPACE.SUFFIX, where the suffix is specified at install time, the namespace is the pod namespace, and the name may be the only thing customized by the user (it can default to the service account name, for example).

Why this is needed

  • Good to have options - Istio does have an integration with CertManager, and I know autocert has a signer for cert-manager, but a more direct integration provides more choices for users.
  • The current mechanism of arbitrary names is fine for users with OPA or strict access, but stricter naming would work for everyone else.

Autocert: Add more docs for restrictCertificatesToNamespace

What would you like to be added

Now that autocert can restrict certificate SANs to only be issued within their respective namespace, documentation is required to explain the feature and its usage. Let's add this to the docs.

Why this is needed

While this feature is beneficial for enforcing namespace-level security, it might affect the user experience, which is why we want to highlight it better in the docs.

Potentially include docs in runbooks.md and/or FAQs
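
For reference, the flag already shows up in the controller's config (see the "Error loading provisioner" issue above, where it appears as "RestrictCertificatesToNamespace": false); a minimal sketch of the relevant setting flipped on:

{
  "RestrictCertificatesToNamespace": true
}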

Allow option to expose mounted certs as secret

What would you like to be added

The ability to either have autocert create a secret in the namespace with the cert credentials (similar to what step-issuer does) or to write out to the volume mount a kubernetes secret containing cert file contents instead of the crt files themselves.

Why this is needed

In some cases it's nice to be able to have certs available as a secret in addition to or instead of three flat files made available via a volume mount. One use case I ran into is when I was bootstrapping timescaledb-kubernetes. In this helm chart, you can only specify certs as environment variables (secrets.certificate) or as an existing secret (secrets.certificateSecretName) when bootstrapping: https://github.com/timescale/timescaledb-kubernetes/blob/master/charts/timescaledb-single/admin-guide.md

There are some alternative solutions I could have explored to continue using volume-mounted certs in this case, but none of them seemed particularly great as far as I could tell. Some variations I explored to solve my problem include:

  • add some configmap with some bash script that would pull the file contents into environment variables to be used by the helm chart
  • modify the timescaledb-kubernetes helm chart to also take volume mount as input to read the cert contents
  • install some other automation to pull the info out into an existing secret

I might have gone down one of those routes if autocert wrote out the contents of the cert files as a Kubernetes secret for me. There is a well-documented path to using secrets in your app from the pod filesystem, provided they are written out in the Kubernetes secret format.

In the end, for my use case, I just manually created a secret with the certs using cert-manager, step-issuer, and step-certificates, and referenced that secret in the timescaledb-kubernetes helm chart values. It would be nice not to need that workflow, and to be able to use autocert to manage it.
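
For reference, that manual step looks something like this (a sketch, run wherever the three files are available; the secret name and key names are arbitrary):

kubectl create secret generic my-app-certs \
  --from-file=tls.crt=site.crt \
  --from-file=tls.key=site.key \
  --from-file=ca.crt=root_ca.crt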

IdentityServer4 integration (C#) mTLS

I'm using IdentityServer4 (C#), an OpenID Connect and OAuth 2.0 framework for ASP.NET Core.

I will host IdentityServer4 using Kubernetes + a SQL database for storing client information. In order to validate clients and issue JWT tokens, we need to register the client information in the database. This requires storing the ClientId and Client Secret (the thumbprint of the certificate, or the certificate name).

Why this is needed

IdentityServer4 is a centralized, generic way of securing API communication supporting multiple protocols, but it has no facility for issuing certificates, and mTLS requires one.

What is needed

  • Produce client certificates and register them in the IdentityServer database (client id + client secret and claims)
  • Deliver client certificates to clients running in Kubernetes, using either Kubernetes secrets or volumes.
  • Deliver client certificates to external clients (Windows users) on demand
  • Automate client certificate renewal (will this affect the thumbprint stored in the database, or even the certificate name?)

Is it possible to do the above using Autocert? I've been reading about Autocert, step-certificates, and cert-manager.
I think the above is achievable with step-certificates, right? But step-certificates won't auto-renew the certificates and won't deliver them to the Kubernetes containers, right? How can I achieve this? Does Autocert help here? I'm not sure I can use Autocert, since I need to register the certificate name/thumbprint in the IdentityServer4 database. Is it possible, and how?
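
One note on the thumbprint question: each renewal issues a new certificate, so a thumbprint stored in the database would change on every renewal even though step ca renew reuses the existing key. A sketch of computing the fingerprint you would need to (re-)register, assuming the step CLI and autocert's default mount path:

step certificate fingerprint /var/run/autocert.step.sm/site.crt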

Allow using own root cert

What would you like to be added

The option to use our own private root CA with autocert.

Why this is needed

We currently use step-certificates with our own root CA from our internal PKI. I'd like to be able to do the same with autocert so that all our clients trust the certs generated by autocert.

I've tried to do this by updating the following items:

  • autocert-autocert-config
  • autocert-step-certificates-certs
  • autocert-step-certificates-config
  • autocert-step-certificates-secrets
  • autocert-step-certificates-ca-password
  • autocert-step-certificates-provisioner-password

But autocert keeps restarting, as it doesn't trust our root cert. I'm probably missing a configmap or secret somewhere; I just cannot see where.
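
For reference, with the step-certificates chart the trust bundle typically lives in the -certs ConfigMap and the signing key in the -secrets Secret, so replacing them might look like this (a sketch; the resource kinds and file key names are assumed from step-ca conventions):

kubectl create configmap autocert-step-certificates-certs \
  --from-file=root_ca.crt --from-file=intermediate_ca.crt \
  --dry-run=client -o yaml | kubectl apply -f -
kubectl create secret generic autocert-step-certificates-secrets \
  --from-file=intermediate_ca_key \
  --dry-run=client -o yaml | kubectl apply -f -
kubectl rollout restart statefulset/autocert-step-certificates

The controller deployment would also need a restart so it re-reads the root at its RootCAPath.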

Autocert for updating the secret that has the certificates

Hi - The simplicity of your implementation is indeed very impressive.

I have a use case. My Istio ingress certificates are stored in a secret; the ingress pod consumes the secret and pushes the certificates into the memory of the target service. For example:

kubectl -n istio-system create secret generic httpbin-keys --from-file=key=$HOME/step/httpbin.key --from-file=cert=$HOME/step/httpbin.crt --from-file=cacert=$HOME/step/ca-chain.crt

Now, the above will be consumed by the Istio ingress gateway automatically, since I have defined a gateway that references the secret name httpbin-keys for httpbin.istio.io (which resolves to a local IP address, just for testing). For example:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
  - port:
      number: 443
      name: bookinfo
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: bookinfo-keys
    hosts:
    - bookinfo.istio.io
  - port:
      number: 443
      name: httpbin
      protocol: HTTPS
    tls:
      mode: MUTUAL
      credentialName: httpbin-keys
    hosts:
    - httpbin.istio.io

Since the secret is in the istio-system namespace, it is protected (users do not have access to this namespace), and the certificates are loaded into the memory of the pod instead of being mounted as files, for security reasons, through Istio's Secret Discovery Service.

Is there a way you could recommend to automatically update the secret when the TTL of the X.509 certificate is about to expire?

If this is easy to do, it will solve one of the problems I am facing: automatically renewing the certificates for the external names of the virtual services.

Thank you.
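
One approach, sketched with the step CLI's renew daemon (--daemon and --exec are existing step ca renew flags; the kubectl invocation mirrors the create command above):

# update-secret.sh: re-apply the secret after each successful renewal
kubectl -n istio-system create secret generic httpbin-keys \
  --from-file=key=$HOME/step/httpbin.key \
  --from-file=cert=$HOME/step/httpbin.crt \
  --from-file=cacert=$HOME/step/ca-chain.crt \
  --dry-run=client -o yaml | kubectl apply -f -

# renew in the background and run the script after every renewal
step ca renew --daemon --exec ./update-secret.sh \
  $HOME/step/httpbin.crt $HOME/step/httpbin.key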

Autocert: support multiple roots feature

What would you like to be added

We need a way to keep the latest root bundles synced from the step-ca server. A simple idea: maybe we can use step ca renew --exec to execute step ca roots, as sketched below.

Why this is needed

To support step-ca's multiple roots feature.
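
For example, something along these lines (a sketch; paths assume autocert's default mount):

step ca renew --daemon \
  --exec "step ca roots /var/run/autocert.step.sm/root.crt" \
  /var/run/autocert.step.sm/site.crt \
  /var/run/autocert.step.sm/site.key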

Init container failing on old pods

Subject of the issue

We've been using autocert for a few months now and mostly things seem to work well, with one exception. Periodically we find that a number of pods enter the PodInitializing state and never move to Running. The pods this happens to are usually many days old, and on inspecting the container logs it seems that the autocert-bootstrapper container is failing.

Environment

  • Kubernetes version: 1.24
  • Cloud provider or hardware configuration: EKS
  • OS (e.g., from /etc/os-release): EKS Managed nodes
  • Kernel (e.g., uname -a): EKS Managed nodes
  • Install tools: N/A
  • Other: N/A

Steps to reproduce

These steps are theorized, as I've not come up with a good way to reproduce this intentionally yet. See "Additional context" below for more info.

  • Set up autocert
  • Create a pod with the correct autocert annotation
  • Wait for autocert-bootstrapper credentials to expire
  • force init containers to be cleaned up (not sure how to do this in EKS yet)
  • observe the failure?

Expected behaviour

Pods with the autocert annotation should not fail to start up

Actual behaviour

Pods with the autocert annotation fail to start up

Additional context

The init container failure itself is not super surprising, given that autocert-bootstrapper is only really intended to work shortly after pod creation. I suspect that this issue is related to kubernetes/kubernetes#67261.

My theory here is that something is causing the init containers to get cleaned up, which triggers Kubernetes to re-execute them. By the time they are re-executed, the autocert-bootstrapper only has expired credentials, which causes it to exit with an error, leaving the pod stuck in PodInitializing. I'm currently looking for a way to force pods to be cleaned up in EKS to try to reproduce the failure.

A simple workaround, which would probably be a reasonable addition in any case, would be to modify the bootstrapper.sh script to check whether credentials already exist, and exit 0 if they do. This would at least allow the init container to re-run non-destructively.
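
Something like this at the top of bootstrapper.sh (a sketch, assuming autocert's default mount path):

# If a previous run already provisioned credentials, exit successfully so a
# re-executed init container doesn't fail on an expired bootstrap token.
if [ -f /var/run/autocert.step.sm/site.crt ] && \
   [ -f /var/run/autocert.step.sm/site.key ]; then
  exit 0
fi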

Bootstrapper keeps failing due to missing secret

Subject of the issue

I'm using autocert in 3 of my pods. In 2 of them everything works great and the cert is injected into the container with no problems; however, in one of them (the only one that has custom init containers) I'm constantly getting an Error: secret "podname-d8dt8" not found error.

From the kubectl describe pod I see:

STEP_TOKEN: <set to the key 'token' in secret 'podname-d8dt8'> Optional: false

for autocert-bootstrapper container.

In my main container I have:

/var/run/secrets/kubernetes.io/serviceaccount from podname-token-629hw (ro)

The only difference between the working pods and the one that keeps failing is that the failing one has a custom ServiceAccount assigned, named exactly after the pod. Here is the config for the ServiceAccount:


apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: podname
subjects:
- kind: ServiceAccount
  name: podname
  namespace: $NAMESPACE
roleRef:
  kind: Role
  name: podname
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: podname
  namespace: $NAMESPACE
  labels:
    app: podname
rules:
- apiGroups: ["*"]
  resources:
  - jobs
  - jobs/status
  - pods
  - deployments
  - deployments/status 
  verbs:
  - get
  - watch
  - list
  - status
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: podname
  namespace: $NAMESPACE
  labels:
    app: podname
---

Environment

  • Kubernetes version: 1.20.15
  • Cloud provider or hardware configuration: EKS

restrictCertificatesToNamespace behavior is counter intuitive

The current behavior of the config parameter restrictCertificatesToNamespace is to reject an annotation that matches another namespace, but to accept any annotation that doesn't match clusterDomain.
E.g.: if my cluster domain is cluster.local and the namespace is default, it will prevent me from getting a cert for other-namespace.svc.cluster.local, but will accept things like default.svc.cluster.tld or www.google.com.

What would you like to be added

I would like restrictCertificatesToNamespace to restrict all requests to the current namespace; any name that doesn't match should be rejected.

Why this is needed

restrictCertificatesToNamespace appears to be a security feature to prevent a service in one namespace from impersonating services in other namespaces. It should also prevent impersonation of external services.

Capture And Process mTLS TCPDUMP

I think it would be great to see how the HTTPS traffic can be captured via tcpdump and processed with Wireshark or a similar tool. I could not find much info about this.
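
Capturing is the easy part; a sketch:

# capture TLS traffic on the relevant port into a pcap for Wireshark
tcpdump -i any -w mtls.pcap 'tcp port 443'

Decrypting is harder: modern TLS cipher suites provide forward secrecy, so the pcap alone is not enough. Applications whose TLS stack supports the SSLKEYLOGFILE convention can log per-session secrets, which Wireshark can then use to decrypt the capture (TLS protocol preferences, "(Pre)-Master-Secret log filename"). Documenting this end-to-end for autocert-issued certs would indeed be useful.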

autocert-step-certificates-0 crashloop

Subject of the issue

Hi, we tried deploying it through helm in one of our AKS clusters. Whenever we install it, autocert-step-certificates-0 crashloops and the logs show this:

open /home/step/db/LOCK: permission denied
Cannot write pid file "/home/step/db/LOCK"
github.com/smallstep/certificates/vendor/github.com/dgraph-io/badger.acquireDirectoryLock
	/home/travis/gopath/src/github.com/smallstep/certificates/vendor/github.com/dgraph-io/badger/dir_unix.go:75
github.com/smallstep/certificates/vendor/github.com/dgraph-io/badger.Open
	/home/travis/gopath/src/github.com/smallstep/certificates/vendor/github.com/dgraph-io/badger/db.go:204
github.com/smallstep/certificates/vendor/github.com/smallstep/nosql/badger.(*DB).Open
	/home/travis/gopath/src/github.com/smallstep/certificates/vendor/github.com/smallstep/nosql/badger/badger.go:34
github.com/smallstep/certificates/vendor/github.com/smallstep/nosql.New
	/home/travis/gopath/src/github.com/smallstep/certificates/vendor/github.com/smallstep/nosql/nosql.go:72
github.com/smallstep/certificates/db.New
	/home/travis/gopath/src/github.com/smallstep/certificates/db/db.go:49
github.com/smallstep/certificates/authority.(*Authority).init
	/home/travis/gopath/src/github.com/smallstep/certificates/authority/authority.go:63
github.com/smallstep/certificates/authority.New
	/home/travis/gopath/src/github.com/smallstep/certificates/authority/authority.go:46
github.com/smallstep/certificates/ca.(*CA).Init
	/home/travis/gopath/src/github.com/smallstep/certificates/ca/ca.go:74
github.com/smallstep/certificates/ca.New
	/home/travis/gopath/src/github.com/smallstep/certificates/ca/ca.go:65
main.startAction
	/home/travis/gopath/src/github.com/smallstep/certificates/cmd/step-ca/main.go:190
github.com/smallstep/certificates/vendor/github.com/urfave/cli.HandleAction
	/home/travis/gopath/src/github.com/smallstep/certificates/vendor/github.com/urfave/cli/app.go:501
github.com/smallstep/certificates/vendor/github.com/urfave/cli.Command.Run
	/home/travis/gopath/src/github.com/smallstep/certificates/vendor/github.com/urfave/cli/command.go:165
main.main.func4
	/home/travis/gopath/src/github.com/smallstep/certificates/cmd/step-ca/main.go:152
github.com/smallstep/certificates/vendor/github.com/urfave/cli.HandleAction
	/home/travis/gopath/src/github.com/smallstep/certificates/vendor/github.com/urfave/cli/app.go:501
github.com/smallstep/certificates/vendor/github.com/urfave/cli.(*App).Run
	/home/travis/gopath/src/github.com/smallstep/certificates/vendor/github.com/urfave/cli/app.go:268
main.main
	/home/travis/gopath/src/github.com/smallstep/certificates/cmd/step-ca/main.go:155
runtime.main
	/home/travis/.gimme/versions/go1.12.4.linux.amd64/src/runtime/proc.go:200
runtime.goexit
	/home/travis/.gimme/versions/go1.12.4.linux.amd64/src/runtime/asm_amd64.s:1337
error opening Badger database
github.com/smallstep/certificates/vendor/github.com/smallstep/nosql/badger.(*DB).Open
	/home/travis/gopath/src/github.com/smallstep/certificates/vendor/github.com/smallstep/nosql/badger/badger.go:35
github.com/smallstep/certificates/vendor/github.com/smallstep/nosql.New
	/home/travis/gopath/src/github.com/smallstep/certificates/vendor/github.com/smallstep/nosql/nosql.go:72
github.com/smallstep/certificates/db.New
	/home/travis/gopath/src/github.com/smallstep/certificates/db/db.go:49
github.com/smallstep/certificates/authority.(*Authority).init
	/home/travis/gopath/src/github.com/smallstep/certificates/authority/authority.go:63
github.com/smallstep/certificates/authority.New
	/home/travis/gopath/src/github.com/smallstep/certificates/authority/authority.go:46
github.com/smallstep/certificates/ca.(*CA).Init
	/home/travis/gopath/src/github.com/smallstep/certificates/ca/ca.go:74
github.com/smallstep/certificates/ca.New
	/home/travis/gopath/src/github.com/smallstep/certificates/ca/ca.go:65
main.startAction
	/home/travis/gopath/src/github.com/smallstep/certificates/cmd/step-ca/main.go:190
github.com/smallstep/certificates/vendor/github.com/urfave/cli.HandleAction
	/home/travis/gopath/src/github.com/smallstep/certificates/vendor/github.com/urfave/cli/app.go:501
github.com/smallstep/certificates/vendor/github.com/urfave/cli.Command.Run
	/home/travis/gopath/src/github.com/smallstep/certificates/vendor/github.com/urfave/cli/command.go:165
main.main.func4
	/home/travis/gopath/src/github.com/smallstep/certificates/cmd/step-ca/main.go:152
github.com/smallstep/certificates/vendor/github.com/urfave/cli.HandleAction
	/home/travis/gopath/src/github.com/smallstep/certificates/vendor/github.com/urfave/cli/app.go:501
github.com/smallstep/certificates/vendor/github.com/urfave/cli.(*App).Run
	/home/travis/gopath/src/github.com/smallstep/certificates/vendor/github.com/urfave/cli/app.go:268
main.main
	/home/travis/gopath/src/github.com/smallstep/certificates/cmd/step-ca/main.go:155
runtime.main
	/home/travis/.gimme/versions/go1.12.4.linux.amd64/src/runtime/proc.go:200
runtime.goexit
	/home/travis/.gimme/versions/go1.12.4.linux.amd64/src/runtime/asm_amd64.s:1337
Error opening database of Type badger with source /home/step/db
github.com/smallstep/certificates/db.New
	/home/travis/gopath/src/github.com/smallstep/certificates/db/db.go:52
github.com/smallstep/certificates/authority.(*Authority).init
	/home/travis/gopath/src/github.com/smallstep/certificates/authority/authority.go:63
github.com/smallstep/certificates/authority.New
	/home/travis/gopath/src/github.com/smallstep/certificates/authority/authority.go:46
github.com/smallstep/certificates/ca.(*CA).Init
	/home/travis/gopath/src/github.com/smallstep/certificates/ca/ca.go:74
github.com/smallstep/certificates/ca.New
	/home/travis/gopath/src/github.com/smallstep/certificates/ca/ca.go:65
main.startAction
	/home/travis/gopath/src/github.com/smallstep/certificates/cmd/step-ca/main.go:190
github.com/smallstep/certificates/vendor/github.com/urfave/cli.HandleAction
	/home/travis/gopath/src/github.com/smallstep/certificates/vendor/github.com/urfave/cli/app.go:501
github.com/smallstep/certificates/vendor/github.com/urfave/cli.Command.Run
	/home/travis/gopath/src/github.com/smallstep/certificates/vendor/github.com/urfave/cli/command.go:165
main.main.func4
	/home/travis/gopath/src/github.com/smallstep/certificates/cmd/step-ca/main.go:152
github.com/smallstep/certificates/vendor/github.com/urfave/cli.HandleAction
	/home/travis/gopath/src/github.com/smallstep/certificates/vendor/github.com/urfave/cli/app.go:501
github.com/smallstep/certificates/vendor/github.com/urfave/cli.(*App).Run
	/home/travis/gopath/src/github.com/smallstep/certificates/vendor/github.com/urfave/cli/app.go:268
main.main
	/home/travis/gopath/src/github.com/smallstep/certificates/cmd/step-ca/main.go:155
runtime.main
	/home/travis/.gimme/versions/go1.12.4.linux.amd64/src/runtime/proc.go:200
runtime.goexit
	/home/travis/.gimme/versions/go1.12.4.linux.amd64/src/runtime/asm_amd64.s:1337

But if I deploy it with minikube using the same Kubernetes version, I have no issue.

Any idea?
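
Not a confirmed diagnosis, but open /home/step/db/LOCK: permission denied usually means the persistent volume is mounted with ownership that the non-root step user cannot write to, which can differ between AKS storage classes and minikube's hostPath volumes. If the chart exposes a pod security context, setting an fsGroup so the volume is group-writable may help (a sketch; the exact helm value name may differ and the gid is illustrative):

securityContext:
  fsGroup: 1000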

Environment

  • Kubernetes version: 1.13.10
  • Cloud provider or hardware configuration: Azure AKS
  • OS (e.g., from /etc/os-release):
  • Kernel (e.g., uname -a):
  • Install tools: helm
  • Other:

Steps to reproduce

Spawn an AKS cluster version 1.13.10

helm upgrade -i autocert --namespace autocert .

Expected behaviour

NAME                           READY   STATUS      RESTARTS   AGE
autocert-886f4cc7f-klb7r       1/1     Running     1          20h
autocert-skp7f                 0/1     Completed   0          20h
autocert-step-certificates-0   1/1     Running     0          20h

Actual behaviour

NAME                           READY   STATUS             RESTARTS   AGE
autocert-6847ddb7c7-hfddf      0/1     CrashLoopBackOff   5          6m13s
autocert-step-certificates-0   0/1     CrashLoopBackOff   5          6m13s
autocert-vxvm9                 0/1     Completed          0          6m13s

Ingress

What would you like to be added

Secure kubernetes ingresses with autocert.

Why this is needed

To make it easy to issue certificates to an ingress controller and terminate TLS at the ingress.

[How can I make my services to use renewed certificates automatically?]

Subject of the issue

I have installed 'autocert' through the helm chart, which works very well. (Thank you to the community)

I am then providing these certificates to my other k8s services, such as code-server or Elastic Kibana, so they can have TLS support. I know that the maximum duration of the certificate is 24h and the renewer renews them; however, how can I make those services pick up the new certificate instead of the old one?

Unless I restart the pod, they keep using the old certificate, i.e. the expired one.

As an alternative solution, I also tried to adjust the duration of the certificate to one year by following this issue, but somehow the certificates are then not injected by the admission webhook.
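
In case it's useful: most servers read the key pair only at startup, so something has to trigger a reload when the renewer writes new files. One pattern, sketched here under the assumption that you run renewal yourself in a sidecar sharing the pod's process namespace (shareProcessNamespace: true) with a server that reloads its TLS files on SIGHUP (my-server is a placeholder process name):

step ca renew --daemon \
  --exec "pkill -HUP -f my-server" \
  /var/run/autocert.step.sm/site.crt \
  /var/run/autocert.step.sm/site.key

Servers that cannot reload on a signal generally need a rolling restart instead.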

Environment

  • Kubernetes version:
    rke2 version v1.22.9+rke2r2 (d7c26a45b92cf3f76c063e93f8c6448fde7b2456) go version go1.16.14b7
  • Cloud provider or hardware configuration:
    AWS EC2
  • OS (e.g., from /etc/os-release):
NAME="Ubuntu"
VERSION="20.04.4 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.4 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
  • Kernel (e.g., uname -a):
    Linux ip-172-32-74-108 5.13.0-1023-aws #25~20.04.1-Ubuntu SMP Mon Apr 25 19:28:27 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
    lens (Kube IDE), helm chart, autocert
  • Other:

Autocert: Consider adding verification web hook for restrictCertificatesToNamespace

What would you like to be added

Consider adding a verification webhook that checks the validity of the pod identity annotation against the namespace, so errors bubble up to kubectl at manifest application time.

Why this is needed

Currently, users have to tail the autocert controller logs to find namespace/identity mismatches and other errors; kubectl fails more or less silently.

Fail to build from source: Confused paths in Dockerfile

Subject of the issue

autocert/controller fails to build from source:

Environment

  • Kubernetes version: v1.23.6+k3s1
  • Cloud provider or hardware configuration: Mythic Beasts, 2 core/4GB
  • OS (e.g., from /etc/os-release): Debian GNU/Linux 11 (bullseye)
  • Kernel (e.g., uname -a): 5.10.0-14-amd64
  • Install tools:
  • Other:

Steps to reproduce

$ git clone git@github.com:smallstep/autocert.git
remote: Enumerating objects: 403, done.
remote: Counting objects: 100% (98/98), done.
remote: Compressing objects: 100% (53/53), done.
remote: Total 403 (delta 55), reused 49 (delta 45), pack-reused 305
Receiving objects: 100% (403/403), 12.30 MiB | 10.68 MiB/s, done.
Resolving deltas: 100% (186/186), done.
$ cd autocert/controller
$ ls
client.go  Dockerfile  main.go  main_test.go
$  podman build -t registry.example.com/autocert-controller:latest .
STEP 1: FROM golang:alpine AS build-env
STEP 2: RUN apk update && apk upgrade &&     apk add --no-cache git
--> Using cache f39dccf9375503088d2edd71a3a33f4b8faee9f848d8674d5dfb6a848553f623
--> f39dccf9375
STEP 3: WORKDIR $GOPATH/src/github.com/autocert/controller
--> Using cache 098b974975a4aa137a0d25808f2a5eb11c4a87ce89910e40ab23ce1afb351c53
--> 098b974975a
STEP 4: COPY go.mod go.sum ./
STEP 5: FROM smallstep/step-cli:0.17.2
Error: error building at STEP "COPY go.mod go.sum ./": error adding sources [/tmp/autocert/controller/go.mod /tmp/autocert/controller/go.sum]: error checking on source /tmp/autocert/controller/go.mod under "/tmp/autocert/controller": copier: stat: "/go.mod": no such file or directory
$ 

Expected behaviour

autocert/controller should build in a container

Actual behaviour

The build fails because go.mod and go.sum are in the parent directory.

Additional context

Copying go.mod and go.sum to autocert/controller doesn't help, since the Dockerfile then expects to find client.go and main.go in a folder called controller (relative to the Dockerfile).

My guess is that the Dockerfile used to be in the root of the project and has since been moved.
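
If so, building with the repository root as the build context might work (a guess, untested):

cd autocert
podman build -t registry.example.com/autocert-controller:latest \
  -f controller/Dockerfile .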

Autocert: respect federation root bundles

What would you like to be added

Deploy federation bundles alongside the root CA bundle and leaf certificate into containers, both for the first cert enrollment and for successive cert renewals.

Why this is needed

step already supports federation, and autocert users should have the ability to take advantage of connecting across federated roots.

File not found when running with Java

The Java gRPC server (Spring Boot) cannot locate the certificate files at startup. If my app does not use them, it runs smoothly, and I can get into the container and list the 3 files.
Here is the deployment yml:
(Screenshots of the deployment YAML were attached to the original issue and are not reproduced here.)

autocert doesn't create certificates for kubernetes statefulset workloads

Subject of the issue

Kubernetes StatefulSet pods do not get certificates requested or deployed.

Environment

  • Kubernetes version:
    client: 1.15.4
    server: 1.12.8
  • Cloud provider or hardware configuration:
    gke

  • OS (e.g., from /etc/os-release):
    Centos 7
  • Kernel (e.g., uname -a):
    Linux elasticsearch-master-0 4.14.127+ #1 SMP Tue Jun 18 23:08:40 PDT 2019 x86_64 x86_64 x86_64 GNU/Linux

  • Install tools:
    helm

Steps to reproduce

helm repo add elastic https://helm.elastic.co
wget -4 https://raw.githubusercontent.com/elastic/helm-charts/master/elasticsearch/values.yaml
edit values.yaml and add autocert.step.sm/name: test123.default.svc.cluster.local to podAnnotations key

helm install --name elasticsearch elastic/elasticsearch

Expected behaviour

cert gets issued and deployed to the pod

Actual behaviour

cert is neither issued nor deployed to the pod

Additional context

kubectl describe pod elasticsearch-master-0
Name: elasticsearch-master-0
Namespace: kubeprod
Priority: 0
Node: gke-apollo-default-pool-3fc7c75e-hll3/10.138.15.221
Start Time: Wed, 25 Sep 2019 21:26:38 +0000
Labels: app=elasticsearch-master
chart=elasticsearch
controller-revision-hash=elasticsearch-master-78644d5858
heritage=Tiller
release=elasticsearch
statefulset.kubernetes.io/pod-name=elasticsearch-master-0
Annotations: autocert.step.sm/name: test123.kubeapps.svc.cluster.local
configchecksum: 3c3d7b85a691a47a1dc120aa0ec8135f70fe1471cd12803212bfef7e83cf931
Status: Running

kubectl exec -it elasticsearch-master-0 ls /var/run/
blkid cryptsetup lock secrets setrans user
console faillock log sepermit systemd utmp

However, neither the autocert nor the smallstep CA logs have any entries for requesting/issuing any certificates.

Private key permissions too permissive

Subject of the issue

Postgres fails to start because the permissions on the key are too loose (0644).

Environment

  • Kubernetes version: k3s version v1.23.6+k3s1
  • Cloud provider or hardware configuration: Mythic Beasts
  • OS (e.g., from /etc/os-release): Debian 11
  • Kernel (e.g., uname -a): 5.10.0-14-amd64
  • Install tools:
  • Other:

Steps to reproduce

Install autocert
Create a deployment based on docker.io/postgres:14.3-alpine
Update postgres config to point at autocert cert/key
Start postgres

Expected behaviour

Postgres should start with TLS using auto cert keys

Actual behaviour

Postgres fails to start:

2022-06-01 18:24:51.139 GMT [37] FATAL:  private key file "/var/run/autocert.step.sm/site.key" has group or world access
2022-06-01 18:24:51.139 GMT [37] DETAIL:  File must have permissions u=rw (0600) or less if owned by the database user, or permissions u=rw,g=r (0640) or less if owned by root.
2022-06-01 18:24:51.139 GMT [37] LOG:  database system is shut down

Additional context

The bootstrapper script explicitly sets the permissions on the cert and key to 644.
Given that the duration attribute is passed through to the bootstrapper, could you add a set of attributes for cert/key owner/group/mode?
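
Until then, one workaround is to copy the key to a private location with tighter permissions before starting postgres, e.g. in the container's command (a sketch; paths are illustrative, and the copy goes stale when the renewer rotates the key):

cp /var/run/autocert.step.sm/site.key /var/lib/postgresql/site.key
chown postgres:postgres /var/lib/postgresql/site.key
chmod 600 /var/lib/postgresql/site.key
exec docker-entrypoint.sh postgres \
  -c ssl=on \
  -c ssl_cert_file=/var/run/autocert.step.sm/site.crt \
  -c ssl_key_file=/var/lib/postgresql/site.key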

Init container uses out of date api version

Subject of the issue

The init container applies a MutatingWebhookConfiguration using admissionregistration.k8s.io/v1beta1, which has been removed in recent Kubernetes releases, so installation fails (see the error below).

Environment

kind k8s 1.25.3

Steps to reproduce

kubectl run autocert-init -it --rm --image cr.step.sm/smallstep/autocert-init --restart Never

Expected behaviour

Script should finish successfully

Actual behaviour

Deploying autocert...
service/autocert created
configmap/autocert-config created
deployment.apps/autocert created
clusterrole.rbac.authorization.k8s.io/autocert-controller created
clusterrolebinding.rbac.authorization.k8s.io/autocert-controller created
Waiting for deployment "autocert" rollout to finish: 0 of 1 updated replicas are available...
deployment "autocert" successfully rolled out
error: unable to recognize "STDIN": no matches for kind "MutatingWebhookConfiguration" in version "admissionregistration.k8s.io/v1beta1"
pod "autocert-init" deleted
pod default/autocert-init terminated (Error)

Additional context

The code in GitHub is correct, but the published container image seems to be out of date.

autocert health check fails because of wrong name

Subject of the issue

The Autocert container fails to start when the health check GET returns a certificate with the wrong name.

{"level":"error","msg":"Error loading provisioner: client GET https://autocert-step-certificates.default.svc.cluster.local/health failed: Get \"https://autocert-step-certificates.default.svc.cluster.local/health\": x509: certificate is valid for mypublicdomain.com, not autocert-step-certificates.default.svc.cluster.local","time":"2021-08-20T18:58:40Z"}

Environment

  • Kubernetes version: K3s 1.21
  • Cloud provider or hardware configuration: Single node on dedicated 64-bit home server
  • OS (e.g., from /etc/os-release): Debian 10 / Buster
  • Kernel (e.g., uname -a): Linux k3s 5.6.0-1-amd64 #1 SMP Debian 5.6.7-1 (2020-04-29) x86_64 GNU/Linux
  • Install tools: k3s, helm, autocert-1.15.0
  • Other: NA

Steps to reproduce

Starting with an empty K3s cluster, issue helm install autocert autocert --repo=https://smallstep.github.io/helm-charts/

Expected behaviour

3 pods should start, finishing in Completed or Ready status.

Actual behaviour

The autocert pod status is CrashLoopBackOff, and autocert-step-certificates is Pending.

Additional context

Log of steps taken:

[user@laptop]~/step-ca% helm install autocert autocert --repo=https://smallstep.github.io/helm-charts/
NAME: autocert
LAST DEPLOYED: Fri Aug 20 13:57:12 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
Thanks for installing Autocert.

1. Enable Autocert in your namespaces:
   kubectl label namespace default autocert.step.sm=enabled

2. Check the namespaces where Autocert is enabled:
   kubectl get namespace -L autocert.step.sm

3. Get the PKI and Provisioner secrets running these commands:
   kubectl get -n default -o jsonpath='{.data.password}' secret/autocert-step-certificates-ca-password | base64 --decode
   kubectl get -n default -o jsonpath='{.data.password}' secret/autocert-step-certificates-provisioner-password | base64 --decode

4. Get the CA URL and the root certificate fingerprint running this command:
   kubectl -n default logs job.batch/autocert

5. Delete the configuration job running this command:
   kubectl -n default delete job.batch/autocert

[user@laptop]~/step-ca% helm list
NAME    	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART          	APP VERSION
autocert	default  	1       	2021-08-20 13:57:12.491955374 -0500 CDT	deployed	autocert-1.15.0	0.15.0 

[user@laptop]~/step-ca% kubectl get pods
NAME                           READY   STATUS             RESTARTS   AGE
autocert-step-certificates-0   0/1     Pending            0          94s
autocert-jxn6g                 0/1     Completed          0          96s
autocert-548575c945-drf8m      0/1     CrashLoopBackOff   3          96s
[user@laptop]~/step-ca% kubectl logs autocert-548575c945-drf8m
{"config":{"Address":":4443","Service":"autocert","LogFormat":"json","CaURL":"https://autocert-step-certificates.default.svc.cluster.local","CertLifetime":"24h","Bootstrapper":{"name":"autocert-bootstrapper","image":"cr.step.sm/smallstep/autocert-bootstrapper:0.15.0","resources":{"requests":{"cpu":"10m","memory":"20Mi"}},"volumeMounts":[{"name":"certs","mountPath":"/var/run/autocert.step.sm"}],"imagePullPolicy":"IfNotPresent"},"Renewer":{"name":"autocert-renewer","image":"cr.step.sm/smallstep/autocert-renewer:0.15.0","resources":{"requests":{"cpu":"10m","memory":"20Mi"}},"volumeMounts":[{"name":"certs","mountPath":"/var/run/autocert.step.sm"}],"imagePullPolicy":"IfNotPresent"},"CertsVolume":{"name":"certs","emptyDir":{}},"RestrictCertificatesToNamespace":false,"ClusterDomain":"cluster.local","RootCAPath":"/home/step/certs/root_ca.crt","ProvisionerPasswordPath":"/home/step/password/password"},"level":"info","msg":"Loaded config","time":"2021-08-20T18:58:40Z"}
{"level":"info","msg":"Loaded provisioner configuration","provisionerKid":"","provisionerName":"admin","time":"2021-08-20T18:58:40Z"}
{"level":"error","msg":"Error loading provisioner: client GET https://autocert-step-certificates.default.svc.cluster.local/health failed: Get \"https://autocert-step-certificates.default.svc.cluster.local/health\": x509: certificate is valid for mypublicdomain.com, not autocert-step-certificates.default.svc.cluster.local","time":"2021-08-20T18:58:40Z"}

Add AUTO_START flag to init/autocert.sh

What would you like to be added

Add an environment variable AUTO_START to avoid the need for a user to press a key when running kubectl run autocert-init -it --rm --image smallstep/autocert-init --restart Never

Why this is needed

  • For automating the installation of autocert using kubectl commands in a bash script, i.e.: kubectl run autocert-init -it --rm --image smallstep/autocert-init --env="AUTO_START=true" --restart Never

PR

#9
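
A minimal sketch of what the guard in init/autocert.sh might look like (the variable handling and prompt are illustrative):

# Skip the interactive pause when AUTO_START=true is set in the environment.
if [ "${AUTO_START:-false}" != "true" ]; then
  read -rsn1 -p "Press any key to continue..."
fi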

Document use case in order to solve "x509: certificate signed by unknown authority"

I'm trying to run argo-workflows in SSO mode.
When running it in SSO mode, the argo-server enters a CrashLoopBackOff state, and its logs show: "x509: certificate signed by unknown authority"

I've found some discussion regarding the same problem here:
argoproj/argo-workflows#4447 (comment)

There's a suggestion for a workaround to fix it here:
argoproj/argo-workflows#4447 (comment)

The workaround is basically to generate CA certs and mount them at the container's root CA path.

After some research online, I'm assuming that autocert would be a good way to fix this issue.

So, as an example use case, and to solve this specific problem, can someone please document how to fix it using autocert?
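
One possible shape for such a doc, assuming argo-server only needs to trust the autocert root: autocert mounts the root certificate into annotated pods at /var/run/autocert.step.sm/root.crt, and Go binaries (argo-server is one) honor the SSL_CERT_FILE environment variable, so pointing it at the mounted root might be all that's required (a sketch, untested):

# container spec fragment for argo-server (sketch)
env:
- name: SSL_CERT_FILE
  value: /var/run/autocert.step.sm/root.crt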
