
kube-oidc-proxy's Introduction

kube-oidc-proxy

⚠️

The kube-oidc-proxy project has been archived. Check out the maintained fork by Tremolo Security.

⚠️

kube-oidc-proxy is a reverse proxy server to authenticate users using OIDC to Kubernetes API servers where OIDC authentication is not available (e.g. managed Kubernetes providers such as GKE, EKS, etc.).

This intermediary server takes kubectl requests, authenticates them using the configured OIDC Kubernetes authenticator, then attaches impersonation headers based on the OIDC response from the configured provider. The impersonated request is then sent to the API server on behalf of the user and its response is passed back. The server has flag parity with the secure serving and OIDC authentication flags available on the Kubernetes API server, as well as the client flags provided by kubectl. In-cluster client authentication is also available when running kube-oidc-proxy as a pod.

Since the proxy server utilises impersonation to forward requests to the API server once authenticated, impersonation is disabled for user requests to the API server.
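For illustration, a forwarded request might look like the following once the proxy has authenticated the user; the names are hypothetical, and Impersonate-User/Impersonate-Group are the standard Kubernetes impersonation headers:

GET /api/v1/namespaces/default/pods HTTP/1.1
Host: <kubernetes-api-server>
Authorization: Bearer <kube-oidc-proxy service account token>
Impersonate-User: jane@example.com
Impersonate-Group: developers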

The following is a diagram of the request flow for a user request. (diagram: kube-oidc-proxy request flow)

Tutorial

Directions on how to deploy OIDC authentication across multiple clusters can be found here, or there is a Helm chart.

Quickstart

Deployment YAMLs can be found in ./deploy/yaml and will require configuration for an existing OIDC issuer.

This quickstart demo assumes you have a Kubernetes cluster without OIDC authentication, as well as an OIDC client created with your chosen provider. We will be using a Service of type LoadBalancer to expose the proxy to the outside world. This can be changed depending on what is available and what suits your setup best.

First, deploy kube-oidc-proxy and its related resources into your cluster. This will create its Deployment, Service Account and required permissions in the newly created kube-oidc-proxy Namespace.

$ kubectl apply -f ./deploy/yaml/kube-oidc-proxy.yaml
$ kubectl get all --namespace kube-oidc-proxy

This deployment will fail until we create the required secrets. Notice we have also not provided any client flags, as we are using the in-cluster config with its Service Account.

We now wait until we have an external IP address provisioned.

$ kubectl get service --namespace kube-oidc-proxy

We need to generate certificates for kube-oidc-proxy to serve securely. These certificates can be generated through cert-manager; more information about this project can be found here.
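For example, a cert-manager Certificate along these lines would place a serving certificate into a Secret the proxy can mount; the issuer, DNS name and exact apiVersion are illustrative and depend on your cert-manager release:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kube-oidc-proxy
  namespace: kube-oidc-proxy
spec:
  secretName: kube-oidc-proxy-tls
  dnsNames:
  - kube-oidc-proxy.example.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer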

Next, populate the OIDC authenticator Secret in ./deploy/yaml/secrets.yaml using the secrets given to you by your OIDC provider. The OIDC provider CA will differ depending on which provider you are using. The easiest way to obtain the correct certificate bundle is often to open the provider's URL in a browser and fetch it there (typically by clicking the lock icon in your address bar). Google's OIDC provider, for example, requires CAs from both https://accounts.google.com/.well-known/openid-configuration and https://www.googleapis.com/oauth2/v3/certs.
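Alternatively, openssl can print the serving certificate chain from the command line, from which you can copy the PEM blocks into the CA bundle (the hostname shown is an example):

$ openssl s_client -showcerts -connect accounts.google.com:443 </dev/null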

Apply the secret manifests.

kubectl apply -f ./deploy/yaml/secrets.yaml

You can restart the kube-oidc-proxy pod to pick up these new secrets now that they are available.

kubectl delete pod --namespace kube-oidc-proxy kube-oidc-proxy-*
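Alternatively, a recent kubectl can restart the Deployment rather than deleting the pod by name (assuming the Deployment is called kube-oidc-proxy):

$ kubectl rollout restart deployment --namespace kube-oidc-proxy kube-oidc-proxy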

Finally, create a Kubeconfig to point to kube-oidc-proxy and set up your OIDC authenticated Kubernetes user.

apiVersion: v1
clusters:
- cluster:
    certificate-authority: *
    server: https://[url|ip:443]
  name: *
contexts:
- context:
    cluster: *
    user: *
  name: *
kind: Config
preferences: {}
users:
- name: *
  user:
    auth-provider:
      config:
        client-id: *
        client-secret: *
        id-token: *
        idp-issuer-url: *
        refresh-token: *
      name: oidc
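
The same user entry can also be created with kubectl directly; all values shown are placeholders:

$ kubectl config set-credentials my-oidc-user \
    --auth-provider=oidc \
    --auth-provider-arg=idp-issuer-url=https://issuer.example.com \
    --auth-provider-arg=client-id=my-client-id \
    --auth-provider-arg=client-secret=my-client-secret \
    --auth-provider-arg=id-token=<id-token> \
    --auth-provider-arg=refresh-token=<refresh-token>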

Configuration

Development

NOTE: building kube-oidc-proxy requires Go version 1.12 or higher.

To help with development, there is a suite of tools you can use to deploy a functioning proxy from source locally. You can read more here.

kube-oidc-proxy's People

Contributors

alejandroesc, allanhung, binboum, davidcollom, dcherman, dependabot[bot], inteon, jetstack-bot, joshvanl, justinas-b, ltagliamonte-dd, maelvls, mattbates, mattiasgees, mhrabovcin, phanama, saiteja313, simonswine, wallrj, x-way


kube-oidc-proxy's Issues

Improve Unauthed response

Currently, when we fail authentication on a request, we simply reply with a 403 and an "Unauthorized" response body. We may want to change this to instead return a k8s JSON Status object, similar to how the API server does it:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "pods is forbidden: User \"system:anonymous\" cannot list resource \"pods\" in API group \"\" in the namespace \"default\": No policy matched.",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}
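
For illustration, a minimal sketch (not the proxy's actual code) of writing such a Status object from a Go handler, here for the unauthenticated case:

import (
	"encoding/json"
	"net/http"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func writeUnauthorizedStatus(w http.ResponseWriter) {
	status := metav1.Status{
		TypeMeta: metav1.TypeMeta{Kind: "Status", APIVersion: "v1"},
		Status:   metav1.StatusFailure,
		Message:  "Unauthorized",
		Reason:   metav1.StatusReasonUnauthorized,
		Code:     http.StatusUnauthorized,
	}
	// Encode the Status object as the JSON response body.
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusUnauthorized)
	json.NewEncoder(w).Encode(&status)
}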

There is some discussion to have around this:

  • On one hand this will make sure that k8s-based apps play nicely with the proxy, as they may be expecting/requiring this kind of response
  • On the other hand, this makes the proxy even more k8s-specific, which might not be what we want for non-Kubernetes backend targets

/cc @munnerz

Support for Impersonation headers

It might be possible for the proxy to accept the impersonation headers, e.g. to support kubectl --as, with the use of a SubjectAccessReview which would authorize the user to make an impersonated call. Of course, the proxy itself uses impersonation, and in this scenario it would use the supplied user/group (after a successful review).

The key scenario would be to support kubectl --as and --as-group.

Does it seem reasonable that the proxy could do this? Does it parse the incoming request sufficiently to formulate a SubjectAccessReview (ref)?
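
For illustration, a minimal sketch of the SubjectAccessReview the proxy might submit before honouring kubectl --as; the user, group and target identity are hypothetical, and verb "impersonate" on resource "users" is the standard RBAC check for impersonation:

apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jane@example.com            # the OIDC-authenticated caller
  groups:
  - developers
  resourceAttributes:
    verb: impersonate               # RBAC verb used for impersonation
    resource: users
    name: system:serviceaccount:default:deployer   # the identity given to --as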

Demo/Walk through

Create a demo/walk-through with scripts and possibly Terraform to deploy the proxy and set it up.

Unable to connect to the server: x509: certificate signed by unknown authority

Hi there, I am attempting to use this project along with dex + dex-k8s-authenticator on EKS. Whenever I run any kubectl commands I get the following error: Unable to connect to the server: x509: certificate signed by unknown authority

Here are the values overrides for kube-oidc-proxy:

oidc:
   clientId: k8s-dex-authenticator
   issuerUrl: my-dex-url
   usernameClaim: email
ingress:
   enabled: true
   annotations:
     kubernetes.io/ingress.class: nginx
   hosts:
     - host: oidc.endpoint.url
       paths: 
       - /

The kubeconfig file looks like this:

- cluster:
    certificate-authority: certs/dev/k8s-ca.crt
    server: https://oidc.endpoint.url
  name: dev
users:
- name: admin-dev
  user:
    auth-provider:
      config:
        client-id: k8s-dex-authenticator
        client-secret: super-secret..........
        id-token: id-token-xxxx.......
        idp-issuer-url: my-dex-url
        refresh-token: refresh-token.....
      name: oidc
contexts:
- context:
    cluster: dev
    user: admin-dev
  name: admin-dev

oidc-proxy logs

I0704 22:39:40.211455       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0704 22:40:03.113429       1 probe.go:69] OIDC provider initialized, proxy ready

Note: I am terminating TLS on my load balancer

Any pointers?

helm chart install : problem with 127.0.0.1:8080

I deployed the Helm chart and started the port forwarding as suggested in the Helm output.
When navigating to http://127.0.0.1:8080/, the browser tells me:

This page isn’t working
127.0.0.1 didn’t send any data.
ERR_EMPTY_RESPONSE

And the port-forward messages give:

E0103 20:23:11.509155   11270 portforward.go:400] an error occurred forwarding 8080 -> 80: error forwarding port 80 to pod 88fd7849efde6c4bbed4ea689bcfb4b3bdf155b29259fd0f5c1c5a1d6ef570eb, uid : exit status 1: 2020/01/03 19:23:11 socat[28745] E connect(5, AF=2 127.0.0.1:80, 16): Connection refused

Any idea if I am doing something wrong?

Can we use jetstack/kube-oidc-proxy without Dex And Gangway

Can we use jetstack/kube-oidc-proxy without Dex and Gangway to authenticate directly with OIDC providers like Google, Microsoft or Okta?

The reason for my question is that there is no option to specify client_secret in the OIDC config, and moreover it's not clear what the kubeconfig should look like.

The README says we need to provide id-token and refresh-token, but it's not clear how to get these values without something like Dex doing it automatically.

Investigate why NGINX doesn't work

When deployed behind NGINX, kube-oidc-proxy becomes unreachable due to some streaming error.

This is basically a requirement for a cert-manager cert bootstrap.

Logs TBD.

partially full response buffers never flush

Expect

Non-full buffers should flush in a reasonable amount of time

Observed

  • Partial buffer fills on the response are not flushed to the oidc-proxy client.
  • From the kubectl client's perspective this will often show up as a partial JSON object which fails to decode.
  • Pod delete will hang until something terminates the streaming connection (i.e. LB timeout).
  • Setting proxyHandler.FlushInterval = -1 resolves the issue.

Symptoms

  • kubectl delete pod hangs until api server connection is terminated

Fix
I set the proxy to flush immediately and the issue resolves. I'm happy to submit a PR if you would like. We could flush immediately, pick a sane value, or make it configurable.

// proxy.go: 128
// Set up the proxy handler using the proxy's transport.
proxyHandler := httputil.NewSingleHostReverseProxy(url)
proxyHandler.Transport = p
proxyHandler.ErrorHandler = p.Error

// ----------------------------------------------------
// Pod delete (anything that calls with watch=true) can hang;
// setting FlushInterval < 0 flushes immediately
// and solves the issue.
// ----------------------------------------------------
proxyHandler.FlushInterval = -1

Kube-OIDC-Proxy 404 error on EKS cluster using Istio

I get this error when I run a curl command against the kube-oidc-proxy URL deployed on Istio.

The certs are generated by cert-manager, and the URL for kube-oidc-proxy is hosted on an Istio Ingress Gateway instead of a Load Balancer.

Pods running on kube-oidc-proxy

NAME READY STATUS RESTARTS AGE
kube-oidc-proxy-6ddf69485b-phln4 2/2 Running 0 41m

$ kubectl logs kube-oidc-proxy-6ddf69485b-phln4 -n kube-oidc-proxy -c kube-oidc-proxy

I0608 13:25:16.526357 1 secure_serving.go:178] Serving securely on [::]:443
I0608 13:25:39.737367 1 probe.go:69] OIDC provider initialized, proxy ready

$ curl https://oidc.v4.xxxxx.com -v

  • Trying 18.211.247.153...
  • TCP_NODELAY set
  • Connected to oidc.v4.xxxxxx.com (xxxxxxx) port 443 (#0)
  • ALPN, offering h2
  • ALPN, offering http/1.1
  • successfully set certificate verify locations:
  • CAfile: /etc/ssl/cert.pem
    CApath: none
  • TLSv1.2 (OUT), TLS handshake, Client hello (1):
  • TLSv1.2 (IN), TLS handshake, Server hello (2):
  • TLSv1.2 (IN), TLS handshake, Certificate (11):
  • TLSv1.2 (IN), TLS handshake, Server key exchange (12):
  • TLSv1.2 (IN), TLS handshake, Server finished (14):
  • TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
  • TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
  • TLSv1.2 (OUT), TLS handshake, Finished (20):
  • TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
  • TLSv1.2 (IN), TLS handshake, Finished (20):
  • SSL connection using TLSv1.2 / ECDHE-RSA-CHACHA20-POLY1305
  • ALPN, server accepted to use h2
  • Server certificate:
  • subject: CN=oidc.v4.xxxxxx.com
  • start date: Jun 8 06:50:47 2020 GMT
  • expire date: Sep 6 06:50:47 2020 GMT
  • subjectAltName: host "oidc.v4.xxxxxx.com" matched cert's "oidc.v4.xxxxxx.com"
  • issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
  • SSL certificate verify ok.
  • Using HTTP2, server supports multi-use
  • Connection state changed (HTTP/2 confirmed)
  • Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
  • Using Stream ID: 1 (easy handle 0x7fa04680a400)

GET / HTTP/2
Host: oidc.v4.xxxxxx.com
User-Agent: curl/7.64.1
Accept: */*

  • Connection state changed (MAX_CONCURRENT_STREAMS == 4294967295)!
    < HTTP/2 404
    < date: Mon, 08 Jun 2020 13:41:26 GMT
    < server: istio-envoy
    <
  • Connection #0 to host oidc.v4.fpcomplete.com left intact
  • Closing connection 0

Can someone suggest what I am doing wrong?

audit `get pods <pod-name>` doesn't work

Hello,

Thanks for such a useful tool. I want to raise an issue about the audit feature not working for get pods <pod-name>. I don't see anything in the proxy output console. I have configured the audit as follows:

 --secure-port=443 --tls-cert-file=./tls/crt.pem --tls-private-key-file=./tls/key.pem --oidc-client-id=dd-auth --oidc-issuer-url='https://dex.xxx.com' --oidc-username-claim=email --extra-user-header-client-ip --oidc-ca-file=./ca/oidc-ca.pem --oidc-groups-claim=groups --oidc-signing-algs=RS256 --audit-policy-file=./audit.yaml --audit-log-path="-" --kubeconfig=/Users/xxx.config

I see that a test for this use case has been written; however, it doesn't execute because ./pkg/proxy/audit/audit_test.go is absent from the code.

I debugged the issue further to find the root cause. It's happening due to a (misconfigured?) serverConfig.RequestInfoResolver:

return genericapifilters.WithRequestInfo(handler, a.serverConfig.RequestInfoResolver)

It's only resolving /apis/* as resource requests, while for get pods <pod-name> the API path is /api/v1/...
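
For reference, a minimal sketch of what the fix might look like (an assumption, not the project's actual code): configuring the RequestInfoFactory the way the API server does, so both /api/v1/... and /apis/... resolve as resource requests:

import (
	"k8s.io/apimachinery/pkg/util/sets"
	genericrequest "k8s.io/apiserver/pkg/endpoints/request"
)

// Recognise both the legacy core prefix ("api") and the grouped prefix ("apis").
requestInfoResolver := &genericrequest.RequestInfoFactory{
	APIPrefixes:          sets.NewString("api", "apis"),
	GrouplessAPIPrefixes: sets.NewString("api"),
}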

Hope I am correct.
Thanks,

Issues with stern / kubectl log {pod} -f

It seems that logging with the follow flag doesn't work. If I use stern it's not able to match any pods either.

Setup:
AKS cluster. K8s version 1.14.8.
Cert manager for TLS certificates.
NGINX for ingress.
kube-oidc-proxy deployed from the helm chart within this repo.

Expected behaviour:
(screenshot)

Actual behaviour:
(screenshot)

Please let me know if you need more information regarding this issue. :)

Forgot to add my role / rolebinding as well:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: log-reader
  namespace: oidc-proxy
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "pods/exec"]
  verbs: ["get", "list", "watch", "create"]

And the rolebinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-log-binding
  namespace: oidc-proxy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: log-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: [email protected]

Why not an Authenticating Proxy

Forgive me this ignorant question as I'm a relative Kubernetes n00b.

Why is this not implemented as an Authenticating Proxy instead of the way it is? Wouldn't that be cleaner and avoid the whole impersonation thing?

Feels cleaner to me, so I'm probably missing a crucial detail?

Redefine "readiness" from "serving" to "serving and OIDC authenticator successfully initialized"

I started the proxy while the OIDC provider was unavailable; the proxy reported itself as Ready:

$ kubectl -n authentication logs -l=app=kube-oidc-proxy
I0911 23:39:33.681175       1 proxy.go:51] waiting for oidc provider to become ready...
I0911 23:39:43.681530       1 secure_serving.go:116] Serving securely on [::]:30003
I0911 23:39:43.681628       1 proxy.go:95] proxy ready
E0911 23:39:43.695487       1 oidc.go:232] oidc authenticator: initializing plugin: 503 Service Unavailable: <html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body>
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>openresty/1.15.8.1</center>
</body>
</html>

$ kubectl -n authentication get deploy -l=app=kube-oidc-proxy
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
kube-oidc-proxy   1/1     1            1           100m

In my view, the proxy should report itself as ready only after the OIDC authenticator has initialized successfully.
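
For illustration, a readinessProbe along these lines would then only pass once the probe handler reports ready; the /ready path and port 8080 are assumptions and may differ per deployment:

readinessProbe:
  httpGet:
    path: /ready      # assumed probe endpoint
    port: 8080        # assumed probe port
  initialDelaySeconds: 5
  periodSeconds: 10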

Disabling TLS

Hey folks, is there a way to disable TLS at the pod level?
I want my AWS load balancer to be responsible for TLS, terminating it at the edge; inside the cluster I just want to use HTTP.

"x509: certificate signed by unknown authority" when using Ingress

I've set up kube-oidc-proxy and enabled ingress with cert-manager generating letsencrypt certs for the endpoint. I used the helm chart to deploy.

However, when I try to connect with my generated kubeconfig, the CA obviously doesn't match up: I'm providing the root CA from my EKS cluster (sourced from /var/run/secrets...etc) to an endpoint that has Let's Encrypt certs:

>>> kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority

What's the correct way to set this up? Is it actually possible to use ingress with ACME certs or is that my issue?

OAuth2 Issue using dex

I am having an issue when deploying the infrastructure on AWS. The issues I have are:

  1. You create a client_id and client_secret for OAuth2 in https://github.com/jetstack/kube-oidc-proxy/blob/master/demo/infrastructure/modules/oauth2-secrets/secrets.tf, but https://aws.amazon.com/blogs/opensource/consistent-oidc-authentication-across-multiple-eks-clusters-using-kube-oidc-proxy/ also says to create a GitHub OAuth App, which again creates a client_id and client_secret. Why create a client_id and client_secret again if they have already been created with Terraform in the first link?

  2. The README file in the demo directory says to first deploy the Google part and then the AWS part, but I want to deploy only the AWS part, i.e. create only the 2 AWS clusters and not the Google one. Is there any way to deploy only the AWS part?

Cert bootstrap

Figure out how we handle bootstrapping certs etc. Maybe cert-manager?

Testing

Add online and offline testing.

We want to test logic offline and test permissions online.

Fix dev cluster for OSX

OSX + Docker is not routable from the host. Instead we need to add extraPorts to kind on boot and expose the proxy on a NodePort so we can route from localhost.
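
For reference, a minimal sketch of the kind config this would need; the apiVersion and port numbers are illustrative:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30003   # the proxy's NodePort inside the cluster
    hostPort: 30003        # reachable from localhost on the host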

/assign
/cc @munnerz

Auditing - proposal

Write a proposal on how we can expose requests to enable auditing and monitoring.

No impersonation, OIDC pass-through mode - proposal

Consider the case where an org
a) already has OIDC auth enabled in their clusters.
and
b) wants to use kube-oidc-proxy solely to improve their security posture by preventing non-OIDC auth being used from outside the cluster, e.g. leaked SA tokens and X509 certs.

I wonder, for this use case, if there is some value in an option to pass through OIDC and not use impersonation? In this mode, kube-oidc-proxy's service account would no longer need to be bound to a role with the ability to impersonate users, groups, SAs and scopes.

With the Kube API server now validating the OIDC token itself again and the removal of the impersonation privileges from kube-oidc-proxy, it is harder to argue against putting kube-oidc-proxy in the path.

Any implementation would have to consider how to work coherently with #72 (which is almost the antithesis of this proposal).

dns.tf in amazon demo references google-config.json which does not exist

When running through the demo for Amazon EKS, running CLOUD=amazon_cluster_1 make terraform_apply generates the following error:

Error: failed to execute "jq": jq: error: Could not open file ../../manifests/google-config.json: No such file or directory


  on dns.tf line 1, in data "external" "cert_manager":
   1: data "external" "cert_manager" {



Error: failed to execute "jq": jq: error: Could not open file ../../manifests/google-config.json: No such file or directory


  on dns.tf line 6, in data "external" "externaldns":
   6: data "external" "externaldns" {


make: *** [terraform_apply] Error 1

The dns TF script is referencing a file google-config.json that does not exist at the path in the script:

data "external" "cert_manager" {
  program = ["jq", ".cert_manager", "../../manifests/google-config.json"]
  query   = {}
}

data "external" "externaldns" {
  program = ["jq", ".externaldns", "../../manifests/google-config.json"]
  query   = {}
}

Watching with client-go hangs

Hi,
I have a Go CLI that watches CRD resources. For unknown reasons, it's unable to watch the resources.

This line in my Go app never returns:

w, err := client.MyCustomResources("default").Watch(metav1.ListOptions{})

Should the watch be supported through the kube-oidc-proxy?

userinfo.extras add client-ip

Hi Folks,
Thanks for a very useful product. I have a small query; let me know if this isn't the right platform for it.

I am looking to append the client IP from which a request originated as part of the x-remote-extra- headers sent from oidc-proxy to the API server.
Is there a way to achieve this?

many thanks !

Support HTTPS_PROXY env variable

I would really love to see an option to tell kube-oidc-proxy to use a specific HTTP proxy for communication with the upstream Kubernetes API.

I tried the usual HTTP_PROXY and https_proxy environment variables, but without success. Is there a way I'm not aware of?

Thanks for looking into this.
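
For what it's worth, Go's standard library supports this pattern directly; a minimal sketch (not the project's actual code) of an upstream transport that honours the usual proxy environment variables:

import "net/http"

// ProxyFromEnvironment reads HTTPS_PROXY, HTTP_PROXY and NO_PROXY
// (and their lowercase variants) to choose a proxy per request.
transport := &http.Transport{
	Proxy: http.ProxyFromEnvironment,
}
client := &http.Client{Transport: transport}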

error: You must be logged in to the server

Hey guys,
We forked this project and are seeing the following error when using kubectl: error: You must be logged in to the server

curl works just fine with a bearer token.

Any suggestions as to what it could be?

It's also weird that I don't see kubectl sending the JWT token, but it works just fine directly against the API.

Any help is appreciated

Thanks

Use go modules

We should use Go modules to keep in line with Kubernetes upstream.

Error in aws provider

When running CLOUD=amazon_cluster_1 make terraform_apply, the following errors occur:

Error: Provider produced inconsistent final plan

When expanding the plan for
module.cluster.module.eks.aws_autoscaling_group.workers[1] to include new
values learned so far during apply, provider "aws" produced an invalid new
value for .initial_lifecycle_hook: planned set element
cty.ObjectVal(map[string]cty.Value{"default_result":cty.UnknownVal(cty.String),
"heartbeat_timeout":cty.UnknownVal(cty.Number),
"lifecycle_transition":cty.UnknownVal(cty.String),
"name":cty.UnknownVal(cty.String),
"notification_metadata":cty.UnknownVal(cty.String),
"notification_target_arn":cty.UnknownVal(cty.String),
"role_arn":cty.UnknownVal(cty.String)}) does not correlate with any element in
actual.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.


Error: Provider produced inconsistent final plan

When expanding the plan for
module.cluster.module.eks.aws_autoscaling_group.workers[0] to include new
values learned so far during apply, provider "aws" produced an invalid new
value for .initial_lifecycle_hook: planned set element
cty.ObjectVal(map[string]cty.Value{"default_result":cty.UnknownVal(cty.String),
"heartbeat_timeout":cty.UnknownVal(cty.Number),
"lifecycle_transition":cty.UnknownVal(cty.String),
"name":cty.UnknownVal(cty.String),
"notification_metadata":cty.UnknownVal(cty.String),
"notification_target_arn":cty.UnknownVal(cty.String),
"role_arn":cty.UnknownVal(cty.String)}) does not correlate with any element in
actual.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.

make: *** [terraform_apply] Error 1

The EKS cluster control plane is provisioned but the nodes are not.

kubectl exec through kube-oidc-proxy fails

I have a working kube-oidc-proxy instance but when I try to run kubectl exec -it I get a failure. Is kubectl exec -it supported by the proxy?

Here is what I ran and the result:

$ kubectl exec -it mypod -n mynamespace -- /bin/sh
 error: error sending request: Post https://kube-oidc-proxy.mydomain.com/api/v1/namespaces/mynamespace/pods/mypod/exec?command=%2Fbin%2Fsh&container=maincontainer&stdin=true&stdout=true&tty=true: EOF

If I make a request directly to the API server, it succeeds and I get a shell prompt inside the pod. But of course they are different users, since the direct call uses x509 authentication. However, the kube-oidc-proxy user has a role with wildcards for every rule, so I don't think permissions are the cause. There are no entries related to the request in the kube-oidc-proxy pod logs.

The image I'm using is quay.io/jetstack/kube-oidc-proxy:v0.2.0

Auditing Exec Sessions

It would be nice to be able to audit exec and other session data, such as port-forwards, when using the proxy. Ideally, this could create an exec playback to be able to run back through the commands run and their outputs.

/assign

Upgrade openssl to 1.1.1g-r0

The current build (v0.3.0) uses alpine:3.10 as its base image.
This Alpine version comes with an outdated OpenSSL preinstalled.
I'll raise a PR to do the upgrade and address the vulnerability CVE-2020-1967.

➜ trivy quay.io/jetstack/kube-oidc-proxy:v0.3.0
2020-08-06T16:26:19.354+0530    INFO    Detecting Alpine vulnerabilities...

quay.io/jetstack/kube-oidc-proxy:v0.3.0 (alpine 3.10.4)
=======================================================
Total: 1 (UNKNOWN: 0, LOW: 0, MEDIUM: 1, HIGH: 0, CRITICAL: 0)

+---------+------------------+----------+-------------------+---------------+--------------------------------+
| LIBRARY | VULNERABILITY ID | SEVERITY | INSTALLED VERSION | FIXED VERSION |             TITLE              |
+---------+------------------+----------+-------------------+---------------+--------------------------------+
| openssl | CVE-2020-1967    | MEDIUM   | 1.1.1d-r2         | 1.1.1g-r0     | openssl: Segmentation fault in |
|         |                  |          |                   |               | SSL_check_chain causes denial  |
|         |                  |          |                   |               | of service                     |
+---------+------------------+----------+-------------------+---------------+--------------------------------+

Ready Probe.

Create a ready probe for Kubernetes deployments.

Live option reload

We should be able to live-reload options for kube-oidc-proxy, e.g. reload certificates when the files change.

Client plugins not registered.

Client auth plugins get registered to the local client via an init() func in their package.

e.g.

	if err := restclient.RegisterAuthProviderPlugin("oidc", newOIDCAuthProvider); err != nil {
		klog.Fatalf("Failed to register oidc auth plugin: %v", err)
	}

Since we don't currently do this, trying to auth in this way causes, for example:
error: No Auth Provider found for name "oidc"

We just need to import the plugin directories to get them registered.
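
For example, the usual fix is a blank import of client-go's auth plugin package (or just the oidc subpackage) somewhere in the binary:

import (
	// Registers all client auth plugins (oidc, gcp, azure, ...)
	// via their package init() funcs.
	_ "k8s.io/client-go/plugin/pkg/client/auth"
)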

This needs to be backported to 0.1

/cc @ltagliamonte-dd
/assign
/kind bug

Need the ability to prefix the users and groups that we impersonate

Kubernetes API server OIDC provides the options oidc-username-prefix and oidc-groups-prefix.

At our site, for example we set

oidc-username-prefix=oidc:
oidc-groups-prefix=oidc:

This is a good practice as it makes it very obvious what sort of user and group we're dealing with, for example when looking at audit logs.

All of our role bindings for human users are based on this. Unfortunately this breaks when we use kube-oidc-proxy because, although the full suite of OIDC options is listed when you run kube-oidc-proxy -h, not all of them take effect.

It seems reasonable that kube-oidc-proxy should implement oidc-username-prefix and oidc-groups-prefix.
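
For illustration, with those prefixes in effect a token for jane@example.com would be impersonated as oidc:jane@example.com, and role bindings would reference the prefixed name (all names here are hypothetical):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: oidc-user-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: some-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: "oidc:jane@example.com"   # the prefixed OIDC username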
