
kyverno's Introduction

Kyverno Tweet

Cloud Native Policy Management πŸŽ‰

Go Report Card License: Apache-2.0 GitHub Repo stars CII Best Practices OpenSSF Scorecard SLSA 3 Artifact HUB codecov FOSSA Status

logo

Kyverno is a policy engine designed for Kubernetes platform engineering teams. It enables security, automation, compliance, and governance using policy-as-code. Kyverno can validate, mutate, generate, and clean up configurations using Kubernetes admission controls, background scans, and source code repository scans. Kyverno policies can be managed as Kubernetes resources and do not require learning a new language. Kyverno is designed to work nicely with tools you already use, like kubectl, kustomize, and Git.

Open Source Security Index - Fastest Growing Open Source Security Projects

πŸ“™ Documentation

Kyverno installation and reference documents are available at [kyverno.io](https://kyverno.io).

πŸ‘‰ Quick Start

πŸ‘‰ Installation

πŸ‘‰ Sample Policies

πŸ™‹β€β™‚οΈ Getting Help

We are here to help!

πŸ‘‰ For feature requests and bugs, file an issue.

πŸ‘‰ For discussions or questions, join the Kyverno Slack channel.

πŸ‘‰ For community meeting access, join the mailing list.

πŸ‘‰ To get updates ⭐️ star this repository.

βž• Contributing

Thanks for your interest in contributing to Kyverno! Here are some steps to help get you started:

βœ” Read and agree to the Contribution Guidelines.

βœ” Browse through the GitHub discussions.

βœ” Read Kyverno design and development details on the GitHub Wiki.

βœ” Check out the good first issues list. Add a comment with /assign to request assignment of the issue.

βœ” Check out the Kyverno Community page for other ways to get involved.

Software Bill of Materials

All Kyverno images include a Software Bill of Materials (SBOM) in CycloneDX JSON format. SBOMs for Kyverno images are stored in a separate repository at ghcr.io/kyverno/sbom. More information on this is available at Fetching the SBOM for Kyverno.

Contributors

Kyverno is built and maintained by our growing community of contributors!

Made with contributors-img.

License

Copyright 2024, the Kyverno project. All rights reserved. Kyverno is licensed under the Apache License 2.0.

Kyverno is a Cloud Native Computing Foundation (CNCF) Incubating project and was contributed by Nirmata.


kyverno's Issues

Support 'copyFrom' in mutation rules

Currently a mutation rule can have "patches" or an "overlay". An additional directive to support would be "copyFrom" which behaves like the "copyFrom" during a generate.

Minor Improvements

  • use a single logger mechanism across packages (in the future, introduce klog as the default, to support verbosity control)
  • skip policy validation checks on dynamic resources such as Events
  • format code to comply with Go Report Card code quality checks

Fix deletion of MutatingWebhookConfiguration

Currently the controller creates the MutatingWebhookConfiguration on startup and tries to delete it at the end of main, but such deletion is not possible when the controller is terminated.
Although the MutatingWebhookConfiguration is registered from the controller code, it is not necessary to delete it at the end of the controller's main: the configuration will be useful the next time the controller is launched. The ideal option is to remove the MutatingWebhookConfiguration when the entire product is removed from the cluster:
kubectl delete -f definitions/install.yaml
To do this, we need to connect the MutatingWebhookConfiguration to the controller's Deployment using an ownerReference:
https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents
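A minimal sketch of what such a configuration could look like (the names are illustrative, and the uid must be filled in with the live Deployment's UID). Note that Kubernetes garbage collection does not allow a namespaced owner for a cluster-scoped dependent like a MutatingWebhookConfiguration, so a cluster-scoped owner may be needed instead:

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: kube-policy-mutating-webhook-cfg   # illustrative name
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: kube-policy                      # illustrative name
    uid: <uid-of-the-kube-policy-deployment>
```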

Add Wildcards Support for Resources

Wildcards can be used to select the resources to which a policy applies. Wildcard support should be added not only for the target resource name, but also for its selector.
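As a sketch (not the actual implementation), resource-name wildcards of this kind can be matched with Go's standard path.Match, which supports * and ?:

```go
package main

import (
	"fmt"
	"path"
)

// matchesWildcard reports whether name matches pattern, where the
// pattern may contain '*' (any sequence of characters) and
// '?' (any single character).
func matchesWildcard(pattern, name string) bool {
	ok, err := path.Match(pattern, name)
	return err == nil && ok
}

func main() {
	fmt.Println(matchesWildcard("nginx-*", "nginx-ingress")) // true
	fmt.Println(matchesWildcard("app-??", "app-01"))         // true
	fmt.Println(matchesWildcard("nginx-*", "redis"))         // false
}
```

The same matcher could be applied to selector values as well as resource names.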

Add dependencies / ordering for policies

Currently the order in which policies are applied is not clearly defined. There may be cases where a user wants a "base" policy to be applied first, followed by additional policies.

This proposal is to add a "dependsOn" (or similar) field to policies to allow the user to manage ordering.
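A sketch of the hypothetical syntax ("dependsOn" and the policy names are illustrative; this is not an existing field):

```yaml
apiVersion: policy.nirmata.io/v1alpha1
kind: Policy
metadata:
  name: team-overrides
spec:
  # hypothetical field: apply "base-policy" before this policy
  dependsOn:
  - base-policy
  rules:
  - resource:
      kind: Deployment
```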

Add a Resource Monitor to enforce policies post admission

A Resource Monitor entity could be implemented to validate resources in the cluster, checking them not only during creation but also throughout their lifetime.
The Resource Monitor will run alongside the Policy Controller and Webhook Server threads from the start of kube-policy.

The main purpose of this entity is to monitor all the resources in the cluster, list them, and, during each resync period, check whether a policy has generators (or other mutators) and whether the policy is applicable to the resources.

Initially this component should be used for creating config-maps and secrets according to policies for Namespaces. In the future, we can create policies that explicitly state whether to monitor resources that are suitable for this policy, or to control only their creation through a web hook.

Logging Framework

Introduce Logging Framework

  • klog framework
  • control the verbosity
  • ability to log to other io.writers

Introduce PolicyStore

Currently, policy objects are referenced directly, but we need to store policies and policy violations efficiently for faster retrieval and modification.
Defining an interface and an implementation would decouple storage from use, as we could then change the storage implementation in the future without affecting other components.

Use Dynamic client

Currently, we use the typed client to access Kubernetes objects, with a kubeClient wrapper that exposes the getters and listers.
The suggestion is to use the dynamic client to provide an API for accessing objects of any kind.

Move packages to pkg

Initially, all packages were created at the root of the project, and the pkg folder appeared due to code generation. Since the pkg directory already exists, we should move all our packages there.

Validation does not occur in case of patch changing resource selection fields

Environment:
Cluster: ubuntu 18.04
kubernetes 1.14.2

Preconditions:
The Kubernetes cluster is configured
The kube-policy controller is deployed and working

Steps to reproduce:
Create a policy with a matchLabels selector in the resource block, a patch that changes the selected label, and a validation block (see attached files).
Result: the policy is created.
Create a resource that does not comply with the validation rules and carries the label used for selection.

Actual result: validation is skipped, the validation rule is not applied to the resource, and the resource is created.
Expected result: validation fails and the resource is not created.

resource+policy.zip

Failed to install controller in debug mode

I'm using a v1.13.5 cluster with Docker 18.03.1-ce on Ubuntu 18.04, and I'm following this step to install the controller in debug mode.
When I tried to run the script scripts/deploy-controller.sh, some of the files were missing, e.g. crd/MutatingWebhookConfiguration_local.yaml and crd/crd.yaml.

Also, when I tried to apply definitions/install.yaml, it gave me the following error:

The CustomResourceDefinition "policies.policy.nirmata.io" is invalid:
* spec.version: Required value
* spec.validation.openAPIV3Schema.properties[spec].properties[rules].items.properties[configMapGenerator].properties[data].additionalProperties: Forbidden: additionalProperties cannot be set to false
* spec.validation.openAPIV3Schema.properties[spec].properties[rules].items.properties[resource].properties[selector].properties[matchLabels].additionalProperties: Forbidden: additionalProperties cannot be set to false
* spec.validation.openAPIV3Schema.properties[spec].properties[rules].items.properties[secretGenerator].properties[data].additionalProperties: Forbidden: additionalProperties cannot be set to false

I verified the --feature-gates=CustomResourceValidation flag for kube-apiserver; it is set to true by default for 1.13 clusters, refer to this.

Support multiple versions of resources for dynamic client

The DefaultRESTMapper provides a structure to resources for multiple versions per group. It is wrapped by the MultiRESTMapper.
It is used in the discovery pkg, where we get information from client.ServerGroupsAndResources() and build the above mapper. This allows us to maintain resource information per group and also multiple versions.

Currently, we directly call the client.ServerGroupsAndResources() and extract the group and resources, we do not maintain multiple versions.

Generate any resource

We currently support two types of generators:

  1. Config Map
  2. Secret

Is it possible to make this more flexible, and allow any kind of resource to be included in a generator?

There are use cases to allow generation of other namespace level resources like:

  • Network Policy (a default for the namespace)
  • Roles
  • Role bindings
  • Service Accounts

Correct Unit Tests

admission_test.go

  • The test TestAdmissionIsRequired can be removed after the dynamic client merge, as the supported kinds are based on registered types.
  • The policy type PolicyResource has changed.
  • The IsRuleApplicableToRequest function was removed.

controller_test.go

  • The policy types PolicyCopyFrom, PolicyConfigGenerator, PolicyPatch, PolicyResource & PolicyRule have changed.

Create the Policy Engine

The idea is to design a policy-engine package that includes all the logic for processing policies on a resource, thereby decoupling the policy processing logic from Kubernetes components like the policy-controller & admission-controller. The policy-engine can then be used individually to process policies and get the results. With the current design, the violation and event handling is difficult to decouple from the policy management.

This would require the policy-engine to be decoupled from the Kubernetes and policy clients; it would expect a runtime.Object (resource) and a policy object as input, and provide the result as a list of events, violations, and JSON patches.

We have the following entry-points to trigger applying of policies:

  • Admission Controller
    The mutating webhook configuration defines the '/mutate' endpoint. The HTTP server processes the incoming mutate URL and forwards it to the handler. The handler gets all the policies and processes them against the incoming request.
  • Policy Controller
    The controller handler watches the policy resource and calls the corresponding create/update/remove handler. It provides the getPolicies API used by the mutating webhook handler.

In the future, we also want to apply policies via a command line test tool.

As the policy application is possible from multiple points, it would be ideal to have a package with all the policy processing logic and decouple the other components.

The proposal is to design a policy-engine package which abstracts all the processing of mutation, validation & generators. This package will provide an interface to process existing and new resources.

This package can also be used for testing policies without going through the policy-controller and admission-controller, for example kubectl extension to verify if the policy is valid or a dry-run.

With this requirement, we'll need to re-design and move the code into the policy-engine package, with the admission-controller only calling an API like ProcessMutation() and generating the admission response afterwards. No processing of the policy would be done in the webhook. Similarly, on the policy-controller side, each call to the create/update handler will call ProcessExisting(). The policy-engine will not perform any modifications on the resource or policy, but will just return the JSON patches that need to be applied, plus the events and policy violations to be generated. The policy engine will not have access to any clients (kubeClient or policyClient), but will operate on objects that are passed as arguments.

Interface Proposal
For the policy engine interface, we expose three methods:

type PolicyEngine interface {
    // ProcessMutation should be called from the admission controller
    // when there is a creation / update of a resource.
    ProcessMutation(policy types.Policy, rawResource []byte) (patchBytes []byte, events []Events, err error)

    // ProcessValidation should be called from the admission controller
    // when there is a creation / update of a resource.
    ProcessValidation(policy types.Policy, rawResource []byte)

    // ProcessExisting should be called from the policy controller
    // when there is a creation / update of a policy;
    // the policy should be processed on matched resources, generating violations accordingly.
    ProcessExisting(policy types.Policy, rawResource []byte) (violations []Violations, events []Events, err error)
}

For the ProcessMutation method, we process the policy based on the new resource event; it returns a JSON patch and a slice of events. The JSON patch is added to the admission response and returned to the API server, which then creates the actual object. Each of the events is handled in the admission controller based on the creation of the resource. Currently this function is extracted from mutationWebhook.

For the ProcessExisting method, we process the policy on existing resources based on a change to the policy, which can be either a creation or an update. If the policy contains a mutate rule and a non-nil JSON patch is returned by Mutate(), a violation is added to the returned slice and a corresponding event is added to the events slice. The returned violations and events are handled in the policy controller.

Once we agree on the package interface, we will need to design the implementation of the above three methods.

Validation: check that at least one element in array satisfies pattern

Example:

Policy:

validate:
  spec:
    array:
    - (name): nirmata-*
      |value|: 1 | >10

Resource that does not violate policy validation pattern

  spec:
    array:
    - name: nirmata-1
      value: 1
    - name: nirmata-2
      value: 5
    - name: sample-name
      value: 10

Here |key| pattern is a proposal syntax that means: at least one value in array satisfies this pattern
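A sketch of the proposed semantics in Go terms: the rule passes if at least one element in the array satisfies the pattern (here a plain predicate stands in for the pattern matcher):

```go
package main

import (
	"fmt"
	"strings"
)

type entry struct {
	Name  string
	Value int
}

// anyMatches implements the proposed |key| semantics: the check passes
// if at least one element in the array satisfies the predicate.
func anyMatches(entries []entry, pred func(entry) bool) bool {
	for _, e := range entries {
		if pred(e) {
			return true
		}
	}
	return false
}

func main() {
	arr := []entry{
		{"nirmata-1", 1},
		{"nirmata-2", 5},
		{"sample-name", 10},
	}
	// "(name): nirmata-*" with "|value|: 1 | >10"
	ok := anyMatches(arr, func(e entry) bool {
		return strings.HasPrefix(e.Name, "nirmata-") && (e.Value == 1 || e.Value > 10)
	})
	fmt.Println(ok) // true: nirmata-1 has value 1
}
```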

Allow comparison across 2 values in the YAML

A policy conditional should allow a comparison with another value in the YAML. For example, in a PodTemplateSpec, we may want to check that a resource request is less than the resource limit:

  - name: check-memory-in-range
    message: "Memory request cannot be greater than 10Gi"
    resource:
      kind: Deployment
    overlay:
      spec:
        containers:
        - name: "*"
          resources:
            requests:
              memory: "< ../../resources/limits/memory"

Allow a resource list in "kind" field

Some policies need to match multiple resources kinds. We need to allow something like:

apiVersion: policy.nirmata.io/v1alpha1
kind: Policy
metadata:
  name: whitelist-registries
spec:
  rules:
  - resource:
    kind: Deployment, StatefulSet, CronJob, DaemonSet
    overlay:
      template:
        spec:
          containers:
            image: https://private.registry.io* | https://hub.docker.io/nirmata/*
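Alternatively, a YAML list would avoid parsing a comma-separated string; a sketch of one possible shape (the plural "kinds" field is hypothetical here):

```yaml
- resource:
    kinds:
    - Deployment
    - StatefulSet
    - CronJob
    - DaemonSet
```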

Events support (....was PolicyViolation CRD)

Create a custom resource for PolicyViolation objects. This object is used to report a violation on admission, or by the resource monitor.

--

Instead of a separate custom resource, the plan is to use events.

Validation of new CRD's and new resource versions

With PR #75 we use the memcached version of the discovery client to access the registered group/version/resource information.
We would need to refresh the client cache for registered resources in the following scenarios:

  • CRD create/update/remove
  • k8s resource create/update/remove
  • k8s version upgrade

For CRDs, a simple solution would be to have a watcher on the CRD resource and invalidate the cache on every change. But this approach would require a watch on every resource, which could get expensive.

We need to research the approaches in more detail.

Policies aren't applied to DaemonSet resources

Environment:
Local Cluster: ubuntu 18.04 kubernetes 1.14.2
Azure cluster: kubernetes 1.13.5

Preconditions:
The Kubernetes cluster is configured
The kube-policy controller is deployed and working

Steps to reproduce:
1. Create a patch policy for a DaemonSet resource
2. Create a DaemonSet resource with a name matching the policy
3. Check the created DaemonSet resource

Actual result: the policy isn't applied to the DaemonSet resource, and there are no messages in the CRD controller logs.
Expected result: the policy is applied to the DaemonSet resource.

The resource YAML files are attached: ds.zip

Validation patterns do not support the operator features for strings

Preconditions:
The Kubernetes cluster is configured
The kube-policy controller is deployed and working

Steps to reproduce:
Create a policy with a validation pattern that contains strings with an operator (e.g. "192.168.10.171|192.168.10.172", "<10Gi|<10240Mi").
Result: the policy is created
Create a resource that complies with the validation rules.

Actual result: an error occurs: "validation has failed".

Expected result: Validation is successful, the resource is created.

Stateless policy-engine

As we abstract all the policy related logic inside the policy engine package, the policy-engine would provide interfaces to access the functionality.

Currently, the mutation part requires kubeClient for the generation of resources (configMaps & secrets). But per the proposed design, we would not apply changes inside the policy-engine; instead we would return the delta/patches of the changes along with the events & violations. The caller of the interface methods would be responsible for applying these changes to the resource.

So the policy-engine can expose standalone functions to process policies, and we would not need to store an instance of the policy-engine in the controller and webhook. We would pass in the resources to be operated on, along with a logger instance.

Allow using cluster data and possibly external JSON data sets in policy rules

Validation logic may require a lookup of, or comparison to, existing resources.

For example, a rule may want to enforce that there is a single service with 'type: LoadBalancer' per namespace. This requires a check on existing resources.

A solution may be to support a JMESPath query expression, evaluated over configured resources, that returns a boolean value.
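For illustration, and independent of the query language chosen, the underlying check amounts to counting matching resources in the cluster. A minimal Go sketch of that lookup (the types here are simplified stand-ins, not the actual client objects):

```go
package main

import "fmt"

type service struct {
	Namespace string
	Type      string
}

// loadBalancerCount returns how many Services of type LoadBalancer
// exist in the given namespace -- the kind of lookup over existing
// cluster resources that such a rule would need.
func loadBalancerCount(services []service, namespace string) int {
	n := 0
	for _, s := range services {
		if s.Namespace == namespace && s.Type == "LoadBalancer" {
			n++
		}
	}
	return n
}

func main() {
	svcs := []service{
		{"default", "ClusterIP"},
		{"default", "LoadBalancer"},
		{"prod", "LoadBalancer"},
	}
	// the rule would deny admission when the count is already >= 1
	fmt.Println(loadBalancerCount(svcs, "default")) // 1
}
```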

Decode resource.yaml into kubernetes struct for CLI

Currently, the resource.yaml loaded from the CLI is decoded into a map[interface{}]interface{}. We need to decode the resource YAML into a Kubernetes struct so that the default fields are added. This is helpful for the validation case.

e.g. if a user wants to validate ImagePullPolicy but this field is not specified in resource.yaml, decoding the file into a Kubernetes struct will add the default value for this field.

Create a CLI tool for Policy testing

Create a simple CLI tool that allows testing a policy's output before applying it to a cluster.
Checking a local policy and resource:
apply-policy --policy mypolicy.yaml --resource myres.yaml

The next step would be to add a kubectl plugin:
kubectl policy apply --policy <path/to/policy> --resource <path/to/resource>

Support for Validation Policies

Validation Policies Specification

About

A validation policy is designed for the validation of new (and possibly existing) resources in a Kubernetes cluster. The controller for this policy uses a validation webhook to allow or deny resource creation (and possibly resource changes).

Definition

apiVersion: policy.nirmata.io/v1alpha1
kind: ValidationPolicy
metadata:
  name: example-policy
spec:
  rules:
    # General description
    - resource:
        kind: <supported_resource_type>
        # Name is optional. By default validation policy is applicable to any resource of supported kind.
        name: <resource_name>
        # Selector is optional. By default validation policy is applicable to any resource of supported kind.
        selector: <resources_selector>
      # Message is optional. The controller should show well-readable message by default.
      message: "Message why the current resource can't pass the validation"
      # The simplest condition which checks label "app"
      validate:
        labels:
          # validate supports wildcard characters ? and * in string values.
          # ? - any single character
          # * - any characters (with length from 0 to infinity)
          # The next expression means that the "app" label should be explicitly set to the resource before creation
          app: "?"
           
    # Check some Deployment properties + check whether the containers with "latest" version have imagePullPolicy: "Always".
    - resource:
        # Matched for all deployments
        kind: Deployment
      # The main fields of objects inside lists, which will be checked for neighboring and child fields.
      # If these fields are not set, every object field is considered main.
      validate:
        spec:
          # The example of logical expression for integers.
          # Supported operators:
          #  < (less)
          #  > (greater)
          #  ! (not)
          #  | (or)
          # integer value without sign (equals).
          replicas: !0
          revisionHistoryLimit: >5
          template:
            spec:
              containers:
              # Any field value in parentheses is an anchor and is used as the "if" condition for all expressions in the child tree
              # The next expression means, that the policy expects images with "latest" version in the list of containers
              - (image): "*:latest"
                # This expression means that the image with "latest" version should have "Always" value in imagePullPolicy field
                imagePullPolicy: "Always"
                ports:
                # This expression means that the container from image with "latest" version should have ports 443 or 6443
                - containerPort: 443|6443
status:
  log:
    # Error logs
    - timestamp: "2006 Jan 02 15:04:05.999"
      resource: "default/deployments/test1"
      message: 'spec/template/spec/containers/image must have ""*:latest"" format. Actual: "image:v1"'
    - timestamp: "2006 Jan 02 15:04:05.999"
      resource:  "default/deployments/test2"
      message:   'spec/revisionHistoryLimit must be >5. Actual: 3'
 
### Specific validations ###
     
    # 1. Allow container from a list of registries
spec:
  rules:
    - resource:
        kind: Deployment
      validate:
        template:
          spec:
            containers:
            # Checks if the image path of "privateApp" container starts with "https://private.registry.io" OR "https://another-private.registry.io"
            # If some property contains operator | as a normal part of its value, it should be escaped by backslash: "\|".
            - name: privateApp
              path: "https://private.registry.io*|https://another-private.registry.io*"
               
    # 2. Check whether probe intervals are greater than 10s
    - resource:
        kind: Pod
      validate:
        containers:
        # In this case every object in containers list will be checked for pattern
        - (name): "*"
          livenessProbe:
            periodSeconds: >10
     
    # 3. Disallow hostPath in volumes
    - resource:
        kind: Pod
      validate:
        volumes:
        - (name): "*"
          hostPath: null
           
    # 4. Disallow nodePort in services
    - resource:
        kind: Service
      validate:
        spec:
          ports:
          - port: "*"
            nodePort: null
             
    # 5. Disallow memory resource requests > 8Gi
    - resource:
        kind: Pod
      validate:
        spec:
          containers:
          - (name): "*"
            resources:
              requests:
                # If the value contains a logical operator, the integer after it is checked; non-numeric characters are part of the pattern.
                # The OR operator can combine logical expressions and text patterns.
                memory: "<9Gi|<8193Mi"

The main idea is the use of patterns for the requested resource. If the resource being created (or, possibly, changed) in the cluster has a supported Kind and matches the pattern, its creation (or change) is allowed; otherwise it is denied.

Principles

  1. A validation policy checks only fields that are explicitly defined; any undefined field is treated as the wildcard *. This is the common rule for every part of the policy except for Kind in the policy description: the list of supported resources controlled by policies is declared in the kube-policy specification.
  2. What is defined in "validate" should exist in the requested resource. If some field is described in "validate" with any value (except *, which is useless) and is not contained in the requested resource, validation will fail.
  3. The validation of siblings is performed only when one of the fields matches its value in "validate". Use the parentheses property to explicitly define the main field for which siblings and children will be checked.
  4. The validation of child fields is performed only if the parent object matches the condition.
