
pomerium-operator's Introduction


⚠️ Deprecation Notice

We've just released a new Ingress Controller (docs here), which supersedes the operator.

Pomerium Ingress Controller addresses shortcomings in the operator and allows Pomerium to directly handle Ingress resources without the need for an external/third-party ingress controller. Additionally, the ingress controller supports Pomerium's new policy language and other features introduced in the last year or so.

As such, pomerium-operator will no longer receive updates. Most practically, the operator is not supported on Kubernetes v1.22+ due to the removal of the extensions/v1beta1 Ingress API.

While it is possible to deploy the ingress controller in an "operator compatible" manner, the new project is meant to function as a first-class ingress controller, and we strongly recommend migrating to the native functionality. This provides higher performance, stronger security guarantees, lower complexity, and fewer opportunities for error compared to using a third-party ingress integration via forward-auth.

See https://github.com/pomerium/pomerium-helm/tree/master/charts/pomerium#2500-1 for upgrade steps if you'd like to continue using forward-auth and a separate proxy.

Note: Beginning in Helm chart v25.0.0, the operator deployment has been replaced with Pomerium Ingress Controller.

About

An operator for running Pomerium on a Kubernetes cluster.

pomerium-operator intends to be the way to automatically configure pomerium based on the state of Ingress, Service, and CRD resources in the Kubernetes API server. It has aspects of both an Operator and a Controller, and in many ways functions as an add-on Ingress Controller.

Initial discussion

pomerium/pomerium#273

pomerium/pomerium#425

Installing

The pomerium operator should be installed with the pomerium helm chart at https://helm.pomerium.io.

The operator may be run from outside the cluster for development or testing. In this case, it will use the default configuration at ~/.kube/config, or you may specify a kubeconfig via the KUBECONFIG env var. Your current context from the config will be used in either case.

Using

Due to current capabilities, the pomerium-operator is most useful when utilizing forward auth. At this time, you must provide the appropriate annotations for your ingress controller to have pomerium protect your endpoint. Examples can be found in the pomerium documentation.

How it works

With the operator installed on your cluster (typically via the helm chart), it begins watching Ingress and Service resources in all namespaces, or in the namespace specified by the namespace flag. Following standard ingress controller behavior, pomerium-operator responds only to resources whose kubernetes.io/ingress.class or kubernetes.io/service.class annotation matches the configured class, or to resources carrying no class annotation at all.
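
For example, assuming the default class name pomerium (the -i/--ingress-class flag default), an Ingress fragment like the following hypothetical one would be watched:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress                    # hypothetical name
  annotations:
    kubernetes.io/ingress.class: pomerium  # matches the configured ingress class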

For a given matching resource, pomerium-operator processes all ingress.pomerium.io/* annotations and creates a policy based on the ingress host rules (the from field in pomerium policy) and backend service names (the to field in pomerium policy).

Annotations will apply to all rules defined by an ingress resource.

Services must have an ingress.pomerium.io/from annotation or they will be ignored as invalid.
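
As a minimal sketch (resource names and URLs are hypothetical, and the value encoding follows the JSON convention described under Annotations below), a Service the operator would configure might look like:

apiVersion: v1
kind: Service
metadata:
  name: internal-app                         # hypothetical name
  annotations:
    kubernetes.io/service.class: pomerium    # matches the configured service class
    ingress.pomerium.io/from: '"https://app.example.com"'  # required; Services without this are ignored
spec:
  selector:
    app: internal-app
  ports:
  - port: 80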

Annotations

pomerium-operator uses a similar syntax for proxying to endpoints based on both Ingress and Service resources.

Policy is set via annotations, following typical Ingress Controller semantics.

  • kubernetes.io/ingress.class: the standard Kubernetes ingress class.
  • kubernetes.io/service.class: the class for Service control; effectively signals pomerium-operator to watch and configure this resource.
  • pomerium.ingress.kubernetes.io/backend-protocol: sets the backend protocol to http or https, similar to nginx.
  • ingress.pomerium.io/[policy_config_key]: policy_config_key is mapped to a policy configuration option of the same name. E.g., ingress.pomerium.io/allowed_groups is mapped to allowed_groups in the policy block for all service targets in this Ingress. The value should be valid JSON.

Example

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.pomerium.io/allowed_domains: '["pomerium.io"]'
    nginx.ingress.kubernetes.io/auth-signin: https://forwardauth.pomerium.io/?uri=$scheme://$host$request_uri
    nginx.ingress.kubernetes.io/auth-url: https://forwardauth.pomerium.io/verify?uri=$scheme://$host$request_uri
  labels:
    app: grafana
    chart: grafana-4.3.2
    heritage: Tiller
    release: prometheus
  name: prometheus-grafana
spec:
  rules:
  - host: grafana.pomerium.io
    http:
      paths:
      - backend:
          serviceName: prometheus-grafana
          servicePort: 80
        path: /

This ingress:

  1. Sets up external auth for nginx-ingress via the nginx.ingress.kubernetes.io annotations
  2. Maps grafana.pomerium.io to the service at prometheus-grafana
  3. Permits all users from domain pomerium.io to access this endpoint

The appropriate policy entry will be generated and injected into the pomerium config Secret:

apiVersion: v1
kind: Secret
stringData:
  config.yaml: |
    policy:
    - from: https://grafana.pomerium.io
      to: http://grafana.default.svc.cluster.local:80
      allowed_domains:
      - pomerium.io

Development

Building

pomerium-operator utilizes go-task for development-related tasks:

task build

Roadmap

  • Basic ConfigMap update functionality. Provide enough functionality to implement the Forward Auth deployment model; essentially, this is just automated policy updates, compatible with the current helm chart.

  • Introduce a mutating webhook that speaks the 3 forward auth dialects and annotates your Ingress for you. Maybe introduce this configuration via CRD.

  • Get "table stakes" Ingress features into pomerium. The target model is Inverted Double Ingress or Simple Ingress. We need cert handling brought up to snuff, but load balancing and path-based routing can be offloaded to a next-hop ingress controller or to kube-proxy via a Service. A CRD maps which "next-hop" service to use for the IDI model from the ingress class.

  • Introduce backend load balancing via Endpoint discovery to allow skipping a second ingress in most configurations.

  • Allow non-Ingress/Service based policy via CRD. Helm chart does conversion on the backend.

  • The Pomerium deployment itself is managed by a CRD. The helm chart becomes a wrapper around this CRD. Move the templating and resource generation logic into pomerium-operator.

pomerium-operator's People

Contributors

abuzhynsky, dependabot[bot], renovate-bot, renovate[bot], rkt2spc, travisgroth


pomerium-operator's Issues

Use Kubernetes as the direct source of truth for configuration without going through Kubernetes secrets

Kubernetes secrets have a limit of 1MB.
This is a hard limit for pomerium, which makes pomerium-operator unviable for large-scale deployments and forces falling back to managing the file through the filesystem in other ways.

Ideally, pomerium should reach out directly to the Kubernetes API (through a ClusterRole that allows get access to ingresses) to read the config for the ingress in question, the same way nginx-ingress does before reconfiguring itself.
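
A minimal sketch of such a ClusterRole (the name is hypothetical) might look like:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pomerium-ingress-reader      # hypothetical name
rules:
- apiGroups: ["networking.k8s.io", "extensions"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch"]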

Operator doesn't start with Kubernetes >1.22

Hi folks,

extensions/v1beta1 was removed in Kubernetes 1.22, and the Operator still looks for Ingress in this API group.

{"level":"debug","ts":1635628643.363504,"logger":"pomerium-operator","msg":"started with config","config":{"BaseConfigFile":"/etc/pomerium/config.yaml","Debug":true,"Election":true,"ElectionConfigMap":"pomerium-operator-election","ElectionNamespace":"default","IngressClass":"pomerium","MetricsAddress":":8080","HealthAddress":":8081","Namespace":"","PomeriumSecret":"pomerium","PomeriumNamespace":"default","PomeriumDeployments":["pomerium-authenticate","pomerium-authorize","pomerium-proxy","pomerium-databroker"],"ServiceClass":"pomerium"}}
{"level":"debug","ts":1635628643.3635736,"logger":"pomerium-operator","msg":"loading kubeconfig"}
{"level":"debug","ts":1635628643.3637104,"logger":"pomerium-operator","msg":"found kubeconfig","api-server":"https://10.96.0.1:443"}
{"level":"debug","ts":1635628643.3637228,"logger":"pomerium-operator","msg":"creating manager for operator","component":"operator"}
{"level":"debug","ts":1635628644.0723968,"logger":"pomerium-operator","msg":"manager created","component":"operator"}
{"level":"debug","ts":1635628644.7771401,"logger":"pomerium-operator","msg":"calling OnSave hooks","component":"configmanager"}
{"level":"debug","ts":1635628644.7772982,"logger":"pomerium-operator","msg":"adding controller","name":"pomerium-ingress","kind":""}
{"level":"debug","ts":1635628644.7773826,"logger":"pomerium-operator","msg":"adding controller","name":"pomerium-service","kind":""}
{"level":"info","ts":1635628644.777416,"logger":"pomerium-operator","msg":"starting manager","component":"operator"}
{"level":"info","ts":1635628644.7774415,"logger":"pomerium-operator","msg":"waiting for leadership","component":"operator"}
I1030 21:17:24.777737       1 leaderelection.go:242] attempting to acquire leader lease  default/pomerium-operator-election...
I1030 21:17:41.859002       1 leaderelection.go:252] successfully acquired lease default/pomerium-operator-election
{"level":"debug","ts":1635628661.8591764,"logger":"pomerium-operator","msg":"updating config Secret","component":"configmanager"}
{"level":"debug","ts":1635628661.8657212,"logger":"pomerium-operator","msg":"update config Secret result","component":"configmanager","operation":"unchanged"}
{"level":"debug","ts":1635628662.5632472,"logger":"pomerium-operator","msg":"updating config Secret","component":"configmanager"}
{"level":"error","ts":1635628662.5631535,"logger":"pomerium-operator","msg":"could not start manager","component":"operator","error":"no matches for kind \"Ingress\" in version \"extensions/v1beta1\"","stacktrace":"github.com/pomerium/pomerium-operator/internal/operator.(*Operator).Start\n\t/build/internal/operator/operator.go:102\nmain.glob..func2\n\t/build/cmd/pomerium-operator/root.go:101\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:850\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:958\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:895\nmain.main\n\t/build/cmd/pomerium-operator/root.go:111\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:204"}
{"level":"error","ts":1635628662.563308,"logger":"pomerium-operator","msg":"operator failed to start.  exiting","error":"no matches for kind \"Ingress\" in version \"extensions/v1beta1\"","stacktrace":"main.glob..func2\n\t/build/cmd/pomerium-operator/root.go:102\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:850\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:958\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/[email protected]/command.go:895\nmain.main\n\t/build/cmd/pomerium-operator/root.go:111\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:204"}
Error: no matches for kind "Ingress" in version "extensions/v1beta1"

Everything is OK with older clusters such as 1.20.
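
For reference, on 1.22+ clusters the same resource must be declared under the networking.k8s.io/v1 API, which the operator does not understand. A minimal sketch (names are hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress              # hypothetical name
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:                   # v1 replaces serviceName/servicePort
            name: example-service
            port:
              number: 80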

cannot unmarshal !!str `email,g...` into []string

I am using the helm-chart, and when configuring:

...
      extraOpts:
        jwt_claims_headers: email,groups,user

I am getting crashloopbackoff for the pomerium-operator container (and only that).

The error it's getting is:

Error: failed to set base config from /etc/pomerium/config.yaml: failed to unmarshal configuration: yaml: unmarshal errors:
  line 14: cannot unmarshal !!str `email,g...` into []string
Usage:
  pomerium-operator [flags]
Flags:
      --base-config-file string        Path to base configuration file (default "./pomerium-base.yaml")
      --debug                          Run in debug mode
      --election                       Enable leader election (for running multiple controller replicas)
      --election-configmap string      Name of ConfigMap to use for leader election (default "operator-leader-pomerium")
      --election-namespace string      Namespace to use for leader election (default "kube-system")
      --health-address string          Address for health check endpoint.  Default disabled (default "0")
  -h, --help                           help for pomerium-operator
  -i, --ingress-class string           kubernetes.io/ingress.class to monitor (default "pomerium")
...                       
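
Given the unmarshal error, the value apparently needs to be a YAML list rather than a comma-separated string. A sketch of the presumed fix:

...
      extraOpts:
        jwt_claims_headers:          # list form, matching the []string type in the error
          - email
          - groups
          - user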

Action Required: Fix Renovate Configuration

There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.

Location: renovate.json
Error type: The renovate configuration file contains some invalid settings
Message: packageRules[1]: packageRules cannot combine both matchUpdateTypes and separateMinorPatch. Rule: {"groupName":"Kubernetes","separateMinorPatch":true,"matchPackageNames":["k8s.io/apimachinery","k8s.io/client-go","k8s.io/api","sigs.k8s.io/controller-runtime","k8s.io/client-go"],"matchUpdateTypes":["minor"],"enabled":false}, packageRules[2]: packageRules cannot combine both matchUpdateTypes and separateMinorPatch. Rule: {"groupName":"Kubernetes","separateMinorPatch":true,"matchPackageNames":["k8s.io/apimachinery","k8s.io/client-go","k8s.io/api","sigs.k8s.io/controller-runtime","k8s.io/client-go"],"matchUpdateTypes":["major"],"enabled":false}

Operator removes cookie_secure setting from secret

When using the Helm chart I added the following config:

config:
  extraOpts:
    cookie_secure: false

If the operator is enabled, this value is removed from the secret. When defining a policy manually and disabling the operator, this value still exists in the secret.

Unclear documentation on annotation

I am a bit unclear on the annotations.

The docs say:

Following standard ingress controller behavior, pomerium-operator will respond only to resources that match the configured kubernetes.io/ingress.class and kubernetes.io/service.class annotations, or resources without any annotation at all.

but the example below does not seem to follow this: it doesn't have the configured kubernetes.io/ingress.class, yet it does carry annotations, so shouldn't it fail to match?

Operator doesn't take information of prefixes from Ingress configuration

Environment:
Pomerium chart: 15.0.0
Pomerium version: v0.12.1
Operator version: v0.0.5

I don't use Pomerium for routing. I use it only for forward auth (with Traefik).

Problem:

For various reasons, I have an Ingress with the following configuration:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.pomerium.io/allowed_domains: '["example.com"]'
    ingress.pomerium.io/allowed_idp_claims: '{"groups":["group-name"]}'
  labels:
    ...
  name: ingress-rule-1
  namespace: default
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: example-service
          servicePort: 8080
        path: /path1
        pathType: ImplementationSpecific
      - backend:
          serviceName: example-service
          servicePort: 8080
        path: /path2
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - example.com
    secretName: secret-name

Operator generates the following policies:

- from: https://example.com
  to: http://example-service.default.svc.cluster.local:8080
  allowed_domains:
  - example.com
  allowed_idp_claims:
    groups:
    - group-name
- from: https://example.com
  to: http://example-service.default.svc.cluster.local:8080
  allowed_domains:
  - example.com
  allowed_idp_claims:
    groups:
    - group-name

The proxy service can't start with this configuration. It throws the following error:
ERR error applying configuration error="duplicate name policy-f7b2cb2f22d55930 found among added/updated resources" code=13 details=null

If I stop the Operator and add "pathPrefix" options manually, the Proxy starts correctly.
From my point of view, the Operator should take the pathPrefix information from the Ingress rules.

Am I right?
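
For reference, pomerium's route configuration supports a prefix field; a de-duplicated policy along these lines (a sketch of what could be generated, not what the operator currently produces) would avoid the duplicate-route error:

- from: https://example.com
  prefix: /path1                     # hypothetical: taken from the Ingress path
  to: http://example-service.default.svc.cluster.local:8080
  allowed_domains:
  - example.com
- from: https://example.com
  prefix: /path2
  to: http://example-service.default.svc.cluster.local:8080
  allowed_domains:
  - example.com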

Dependency Dashboard

This issue provides visibility into Renovate updates and their statuses.

Awaiting Schedule

These updates are awaiting their schedule.

  • fix(deps): update module github.com/pomerium/pomerium to v0.15.6

Ignored or Blocked

These are blocked by an existing closed PR and will not be recreated unless re-requested.

Multiple operators update all pomerium configs

I'm trying to use multiple pomerium instances (including the operator) to manage multiple subdomains, and I see that every instance is updating all of the pomerium configs. It does not seem to be harmful, since rules that are not managed by the relevant instance of pomerium are ignored, but I would expect that only the pomerium config in the operator's own namespace would be updated?
