
projects-operator's Introduction

Projects

About

projects-operator extends Kubernetes with a Project CRD and a corresponding controller. Projects are intended to provide isolation of Kubernetes resources on a single cluster. A Project is essentially a Kubernetes namespace along with a corresponding set of RBAC rules.

Contributing

To begin contributing, please read the contributing doc.

Installation and Usage

projects-operator is currently deployed using k14s.

You must first create a ClusterRole containing the RBAC rules you wish to apply to each created Project. For example:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-clusterrole-with-rbac-for-each-project
rules:
- apiGroups:
  - example.k8s.io
  resources:
  - mycustomresource
  verbs:
  - "*"

Install

Next, build and push the projects-operator image to a registry:

$ docker build -t <REGISTRY_HOSTNAME>/<REGISTRY_PROJECT>/projects-operator .
$ docker push <REGISTRY_HOSTNAME>/<REGISTRY_PROJECT>/projects-operator

# For example, docker build -t gcr.io/team-a/projects-operator .

Finally, run the /scripts/kapp-deploy script to deploy projects-operator:

export INSTANCE=<UNIQUE STRING TO IDENTIFY THIS DEPLOYMENT>
export REGISTRY_HOSTNAME=<REGISTRY_HOSTNAME> # e.g. "gcr.io", "my.private.harbor.com", etc.
export REGISTRY_PROJECT=<REGISTRY_PROJECT>   # e.g. "team-a", "dev", etc.
export REGISTRY_USERNAME=<REGISTRY_USERNAME>
export REGISTRY_PASSWORD=<REGISTRY_PASSWORD>
export CLUSTER_ROLE_REF=my-clusterrole-with-rbac-for-each-project

$ ./scripts/kapp-deploy

Creating a Project

Apply a Project yaml with a project name and a list of users, groups, and service accounts who should have access, for example:

apiVersion: projects.vmware.com/v1alpha1
kind: Project
metadata:
  name: project-sample
spec:
  access:
  - kind: User
    name: alice
  - kind: ServiceAccount
    name: some-robot
    namespace: some-namespace
  - kind: Group
    name: ldap-experts

Uninstall

kapp -n <NAMESPACE> delete -a projects-operator

Webhooks

projects-operator makes use of three webhooks to provide further functionality, as follows:

  1. A ValidatingWebhook (invoked on Project CREATE) - ensures that Projects cannot be created if they have the same name as an existing namespace.
  2. A MutatingWebhook (invoked on ProjectAccess CREATE, UPDATE) - returns a modified ProjectAccess containing the list of Projects the user has access to.
  3. A MutatingWebhook (invoked on Project CREATE) - adds the user from the request as a member of the project if a project is created with no entries in access.
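As a rough illustration of the first webhook only, a ValidatingWebhookConfiguration for it could look along these lines. Every name, namespace, and path below is a hypothetical placeholder, not taken from the repo:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: projects-operator-validating-webhook   # hypothetical name
webhooks:
- name: project-validation.projects.vmware.com # hypothetical name
  rules:
  - apiGroups: ["projects.vmware.com"]
    apiVersions: ["v1alpha1"]
    operations: ["CREATE"]
    resources: ["projects"]
  clientConfig:
    service:
      name: projects-operator-webhook          # hypothetical service
      namespace: projects-operator             # hypothetical namespace
      path: /validate-project                  # hypothetical path
  admissionReviewVersions: ["v1"]
  sideEffects: None
```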

projects-operator's People

Contributors

chenbh, crswty, dependabot-preview[bot], dependabot[bot], djoyahoy, edwardecook, gabrielecipriano, gmrodgers, joaopapereira, mgibson1121, mirahimage, samze, sukhil-suresh, teddyking, tomkennedy513


projects-operator's Issues

Use kind and GitHub Actions for CI

The CI pipeline for projects-operator is currently a Concourse pipeline hosted on an internal Pivotal Concourse deployment. As we move towards open-sourcing projects-operator, it may make sense to move to a more OSS-friendly CI workflow.

In an ideal world the solution we come up with would be able to successfully run the acceptance test suite without requiring any external infrastructure (as this would require someone to gatekeep the passwords for said infrastructure).

One way I think we could achieve this is by updating the acceptance tests to run against kind, and using the kind GitHub Action for CI workflows.

This would require us to configure the kind cluster for OIDC support, pointing to an openldap server that was itself deployed inside the cluster. I'm not totally sure if this is possible, but I don't see an obvious reason why it wouldn't be.
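For what it's worth, wiring OIDC flags into a kind cluster might look roughly like the following. The issuer URL, client ID, and claim names are purely hypothetical placeholders for whatever the in-cluster identity provider would actually expose:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
kubeadmConfigPatches:
- |
  kind: ClusterConfiguration
  apiServer:
    extraArgs:
      oidc-issuer-url: "https://dex.projects-acceptance.svc.cluster.local:5556"  # hypothetical
      oidc-client-id: "projects-acceptance"                                      # hypothetical
      oidc-username-claim: "email"
      oidc-groups-claim: "groups"
```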

I wanted to open this issue to start a discussion before investing too much more time in it. What do y'all think?

Make use of features in controller-runtime v0.5.1

There is a new release of controller-runtime, v0.5.1, that contains a couple of features which look like they are potentially of interest to us:

  • Add controllerutil.SetOwnerReference
  • Webhook support in envtest

controllerutil.SetOwnerReference is of particular interest, as we are currently using controllerutil.SetControllerReference to set non-controller OwnerRefs, which is a slight misuse of the function. The webhook support in envtest should help us make our tests fully local.

Note: there is a dependabot PR that contains the bump to v0.5.1, #24.

Update CRD apiVersion to `apiextensions.k8s.io/v1`

We are currently using apiextensions.k8s.io/v1beta1 as the apiVersion for our CRDs. As of k8s 1.16.0 this is deprecated, and the new apiVersion is apiextensions.k8s.io/v1. See https://v1-16.docs.kubernetes.io/docs/setup/release/notes/#api-changes for a (possibly non-exhaustive) list of changes. With kubebuilder this can be accomplished by passing crd:crdVersions=v1 to controller-gen; see https://book.kubebuilder.io/reference/controller-gen.html#generators.
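For illustration, in a kubebuilder-scaffolded Makefile the switch could look something like this. Treat it as a sketch: exact options vary by controller-gen version, and the variable names follow kubebuilder's scaffolding rather than this repo's actual Makefile:

```make
# Sketch: ask controller-gen to emit apiextensions.k8s.io/v1 CRDs
# instead of the deprecated v1beta1.
CRD_OPTIONS ?= "crd:crdVersions=v1"

manifests: controller-gen
	$(CONTROLLER_GEN) $(CRD_OPTIONS) rbac:roleName=manager-role paths="./..." output:crd:artifacts:config=config/crd/bases
```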

Note that this would render subsequent releases of projects-operator incompatible with versions of Kubernetes before 1.16.0, so we may want to hold off on this for a little while, although notionally only 1.16+ is currently maintained: https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions.

Is there a desire for a simple projects operator CLI?

I was curious if you all had considered creating a CLI that can do basic CRUD operations against Project and ProjectAccess resources. On the one hand, you can do everything you'd want with kubectl, but I'm wondering if creating a CLI could provide some sensible guardrails.

Should ProjectAccess be a cluster resource?

We (Tanzu Build Service) just integrated the most recent changes from version 0.7.0, and we noticed that the ProjectAccess resource is namespace-scoped. Given that it operates on cluster-scoped resources, was there a reason it is not a cluster-scoped resource as well?

@ashwin-venkatesh

Make `kubectl explain` a useful resource for our CRDs

We don't currently support kubectl explain for any of the CRDs we have in projects-operator. The way to do so is to specify a structural schema; see https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#specifying-a-structural-schema.

The initial switch to a structural schema would come with #62. However, in order to get useful output from kubectl explain, we would also need to include useful descriptions (via comments above our types and fields) that describe the purpose of the CRDs as well as their spec fields.

Improve helm support

The projects-operator is currently built using kubebuilder and deployed via Helm, with the corresponding Helm chart living at /helm/projects-operator.

One problem with this approach is that kubebuilder does not currently have native support for templating out Helm config; it only knows how to generate kustomize config.

This is a problem because it is then a manual (and error-prone) task to copy-and-edit the kustomize-generated config into helm-formatted config whenever we make changes to the projects-operator.

We have made a few attempts in the past to make this better, for example using the /scripts/helmify-yaml script and custom make tasks; however, it's still not intuitive, and it requires us to keep things up to date with newer versions of kubebuilder. It is also only a partial solution, as it only deals with RBAC.

Perhaps we could improve this?

One idea @gmrodgers and I have been talking about is to make use of kubebuilder plugins (NB: currently experimental, design doc here).

It's not clear to us exactly if/how this would work, but based on comments from the operator-sdk team it seems like this should be possible.

Perhaps we should spend some time investigating if there is a helm plugin for kubebuilder in the works, and if not, should contribute one and then use it here?

@teddyking and @gmrodgers

Use the controller-manager metrics functionality

controller-manager supposedly has some support for metrics, which is why we include kube-rbac-proxy as a dependency, though this functionality is currently unused. We should either use this functionality or remove the kube-rbac-proxy dependency.

Use klog for logging

It seems like klog is the standard logger in the Kubernetes ecosystem. It supports the logr interface via the klogr package, so we should be able to swap out the current zap initialisations fairly directly.

make install does not work on linux

The make install target uses the helmify-yaml script, which uses BSD sed syntax rather than GNU sed syntax, so it works on macOS but not Linux. The issue is that GNU sed does not tolerate a space between -i and the suffix, while BSD sed requires the space.

Dependabot can't parse your go.mod

Dependabot couldn't parse the go.mod found at /go.mod.

The error Dependabot encountered was:

go: k8s.io/[email protected] requires
	go.etcd.io/[email protected] requires
	github.com/grpc-ecosystem/[email protected] requires
	gopkg.in/[email protected]: invalid version: git fetch --unshallow -f origin in /opt/go/gopath/pkg/mod/cache/vcs/748bced43cf7672b862fbc52430e98581510f4f2c34fb30c0064b7102a68ae2c: exit status 128:
	fatal: The remote end hung up unexpectedly

View the update logs.

No CONTRIBUTING.md

Instructions for development and contributing are currently in the README.md, rather than in their own separate CONTRIBUTING.md.

Additionally, the instructions for running tests are split across two separate sections, Deployment and testing workflow and Tests.

Switch to projects.vmware.com for API Group

As part of moving this project to open source we should probably update the API Group to reflect the acquisition of Pivotal by VMware. I would suggest updating the API Group from projects.pivotal.io to projects.vmware.com, in line with the general guidance that an API Group should be a subdomain owned by the originating company.

Improve how RBAC roles are configured in the Projects helm chart

Context:
As of now, whenever we need to grant a new RBAC permission to the operator, the process is as follows:

  • Add an annotation in projects_controller.go (e.g. // +kubebuilder:rbac:groups=rbac.authorization.k8s.io,resources=rolebindings,verbs=watch;list;create;get;update;patch)
  • Run make manifests to generate the config/rbac/[roles-files]
  • Copy and paste the content of the generated file into helm/projects-operator/templates/rbac.yaml

This is not a great development experience; it would be nice if it were simplified. Ideally, the files consumed by helm should be autogenerated.

Failure to delete a namespace causes the controller to hang

https://github.com/pivotal/projects-operator/blob/dbe9fdf6f83fa9ce7b192fa8787b046d15ddc4b0/controllers/project_controller.go#L123

Hi all! We encountered this bug today when trying to delete projects. Namespaces in our cluster have a default "kubernetes" finalizer that will not allow them to be deleted (for various reasons that are not important). Thus, the controller will hang when attempting to delete those namespaces and the failure is opaque.

The error in the provided link is ignored and the code enters an infinite loop waiting for the namespace to disappear.

Reimplement webhook logic on top of kubebuilder-generated webhook

The projects-operator has a custom-built webhook that serves Project CREATE and ProjectAccess CREATE/UPDATE requests. While doing this work we created a standalone binary and manually wired it all up in our helm templating.

However it seems like kubebuilder has some pretty sweet support for webhooks, so it might be nice for us to buy into this.

Acceptance

  • project-operator acceptance tests all pass
  • we are using a kubebuilder webhook, not a custom manual one

The project finalizer `wait-for-namespace-to-be-deleted` causes a deadlock when running a kapp delete on the cluster

We use kapp delete to uninstall our deployment in CI. This removes everything we've installed on the cluster, including all Projects, the project controller and webhook, and the Project CRDs. Unfortunately, this creates a deadlock in the following way:

  1. The project controller and webhooks are deleted first
  2. The deletion then attempts to delete the Projects, but it cannot, because their finalizers can no longer be processed by the (now deleted) controller
  3. The deletion attempts to delete the Project CRD, but it cannot, because Projects still exist

One solution might be to add the project controller as an owner reference to all created projects.

Another solution may be to drop the finalizer from projects as they already have an owner reference for their corresponding namespaces.

@ashwin-venkatesh @matthewmcnew
