
policy-collection's Introduction

Policy Collection

A collection of policy examples for Open Cluster Management.

Repository structure

This repository hosts policies for Open Cluster Management. You can deploy these policies using Open Cluster Management, which includes a policy framework available as an addon. Policies are organized in two ways:

  1. By support expectations, which are detailed below.
  2. By NIST Special Publication 800-53.

The following folders are used to separate policies by the support expectations:

  • stable -- Policies in the stable folder are contributions supported by the Open Cluster Management Policy SIG.
  • 3rd-party -- Policies in the 3rd-party folder are contributions that are supported, but not by the Open Cluster Management Policy SIG. See the details of each policy to understand the support provided.
  • community -- Policies in the community folder are contributed by the open source community. Contributions should start in the community folder.

In addition to individual policy contributions, there is the option to contribute groups of policies as a set. These are known as PolicySets, and these contributions are made in directories organized as PolicyGenerator projects. The folder containing these contributions is the PolicySet projects folder.

Using GitOps to deploy policies to a cluster

Fork this repository and use the forked version as the target to run the sync against. This avoids unintended changes being applied to your cluster automatically. To get the latest policies from the policy-collection repository, pull the latest changes from policy-collection into your own repository through a pull request. Any further changes to your repository are then applied to your cluster automatically.

Make sure you have kubectl installed and that you are logged into your hub cluster in a terminal.

Run kubectl create ns policies to create a "policies" namespace on the hub. If you prefer to call the namespace something else, run kubectl create ns <custom ns> instead.

From within this directory in a terminal, run cd deploy to enter the deployment directory, then run bash ./deploy.sh -u <url> -p <path> -n <namespace>. (Details on all of the parameters for this command are in its README.) This script assumes you have enabled Application lifecycle management as an addon in your Open Cluster Management installation; see Application lifecycle management for details on installing the Application addon. Note: if you are using ArgoCD for GitOps, a similar script, argoDeploy.sh, is provided that does not require the Application Lifecycle addon.
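For example, a typical invocation against a fork might look like the following. The repository URL is a placeholder for your fork, and stable/policies are example values for the path and namespace:

kubectl create ns policies
cd deploy
bash ./deploy.sh -u https://github.com/<your-org>/policy-collection.git -p stable -n policies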

The policies are applied to all managed clusters that are available and have environment set to dev. If the policies need to be applied to a different set of clusters, update the PlacementRule.spec.clusterSelector.matchExpressions section in the policies.
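For reference, a minimal sketch of what that selector looks like in a PlacementRule (the metadata name and namespace here are placeholders):

apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-example
  namespace: policies
spec:
  clusterSelector:
    matchExpressions:
      - key: environment
        operator: In
        values:
          - dev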

Note: As new clusters are added that fit the criteria previously mentioned, the policies are applied automatically.

Subscription Administrator

In newer versions of Open Cluster Management you must be a subscription administrator in order to deploy policies using a subscription. Without this role, the subscription is still created successfully, but policy resources are not distributed as expected. You can view the status of the subscription to see the errors. If the subscription administrator role is required, a message similar to the following appears for any resource that is not created:

        demo-stable-policies-chan-Policy-policy-cert-ocp4:
          lastUpdateTime: "2021-10-15T20:37:59Z"
          phase: Failed
          reason: 'not deployed by a subscription admin. the resource apiVersion: policy.open-cluster-management.io/v1 kind: Policy is not deployed'

To become a subscription administrator, you must add an entry for your user to the ClusterRoleBinding named open-cluster-management:subscription-admin. A new entry may look like the following:

subjects:
  - kind: User
    apiGroup: rbac.authorization.k8s.io
    name: my-username

After updating the ClusterRoleBinding, you need to delete the subscription and deploy the subscription again.
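As a sketch of that sequence, assuming the subscription was created by deploy.sh in the policies namespace (the subscription name below is a placeholder):

# add your user under "subjects" in the binding
kubectl edit clusterrolebinding open-cluster-management:subscription-admin

# then delete and re-create the subscription
kubectl -n policies delete subscription.apps.open-cluster-management.io <subscription-name>
bash ./deploy.sh -u <url> -p <path> -n policies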

Policy Generator

GitOps through Open Cluster Management can handle Kustomize files, so you can also use the Policy Generator Kustomize plugin to generate policies from Kubernetes manifests in your repository. The Policy Generator handles plain Kubernetes manifests as well as manifests from policy engines like Gatekeeper and Kyverno.
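As a rough sketch, a PolicyGenerator configuration file looks something like the following; the names and the input path are placeholders, and options are kept to a minimum:

apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  name: policy-generator-example
policyDefaults:
  namespace: policies
policies:
  - name: policy-example
    manifests:
      - path: input/  # directory of plain Kubernetes manifests to wrap into a Policy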

For additional information, see the Policy Generator documentation.

Community, discussion, contribution, and support

Check the Contributing policies document for guidelines on how to contribute to the repository.

Blogs: read the blog posts in the blogs folder.

Resources: view the following resources for more information on how the components and mechanisms of the product governance framework are implemented.

policy-collection's People

Contributors

airadier, berenss, birsanv, brian-jarvis, ch-stark, chuckersjp, ckandag, cooktheryan, dhaiducek, dockerymick, fperearodriguez, gparvin, hirokuni-kitahara, jaormx, jforce, justinkuli, leo8a, mahesh-zetta, michaelkotelnikov, mprahl, rrbanda, rurikudo, sabre1041, sachin-trilio, sahare, serngawy, tesshuflower, tphee, willkutler, yiraechristinekim


policy-collection's Issues

Add an example of InstallPlanApprover

When manually approving Operator updates, there is a known aspect of the current implementation that installations and updates are treated differently in the internal logic.

The following workaround can be used (in GitOps scenarios):

https://github.com/redhat-cop/gitops-catalog/tree/main/installplan-approver

This example can be further enhanced or reimplemented, e.g. with Kyverno, to check only one specific operator even if several operators share the same namespace.

Some ideas: if you can write a match rule to identify the approval resource, you could then mutate it to approve it.

https://kyverno.io/docs/writing-policies/mutate/

Some work done:
https://github.com/mvazquezc/telco-operations/blob/kyverno/rhacm/kyverno/assets/autoapprove-installplans-in-namespace.yaml
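Building on that, here is a rough, untested sketch of the idea as a Kyverno mutate policy; the namespace is a placeholder, and this does not yet limit approval to a single operator:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: autoapprove-installplans
spec:
  rules:
    - name: approve-installplan
      match:
        any:
          - resources:
              kinds:
                - InstallPlan
              namespaces:
                - openshift-operators  # placeholder: limit the rule to one namespace
      mutate:
        patchStrategicMerge:
          spec:
            approved: true  # flips the InstallPlan to approved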

Add 'enforcementAction: dryrun' to Gatekeeper policies

Most policies in the collection use remediationAction: inform, so policies are not 'enforced' out of the box.

Regarding Gatekeeper, it makes sense to enforce the creation of the ConstraintTemplate and Constraint objects when deploying a policy that relies on Gatekeeper. But it would be safer to add enforcementAction: dryrun to the Gatekeeper Constraint object definition, so the constraint is deployed but does not deny the creation of objects that violate the OPA policy.

For example, the next policy denies the creation of pods with <image-name>:latest image references as soon as it's deployed, while this policy (from a 3rd-party repository) does the same thing without enforcing the admission controller, since it has enforcementAction: dryrun defined in the Constraint object. This way, the policy monitors the violations that Gatekeeper reports, but does not disallow the creation of objects.

Policy users can choose to remove enforcementAction: dryrun from the policy definition if they'd like the policy to be enforced in their environment.
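For illustration, a sketch of what dryrun looks like on a Constraint; the constraint kind and name are hypothetical (the kind comes from whatever ConstraintTemplate is in use), and only spec.enforcementAction matters here:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDenyLatestTag  # hypothetical kind defined by a matching ConstraintTemplate
metadata:
  name: deny-latest-tag
spec:
  enforcementAction: dryrun  # record violations without denying admission requests
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]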

Kyverno installation policy not working

I am getting the following error across multiple clusters, which indicates Kyverno isn't actually getting installed from the policy-install-kyverno-prod-ns template:

violation - customresourcedefinitions not found: [policyreports.wgpolicyk8s.io] missing; violation - couldn't find mapping resource with kind Application, please check if you have CRD deployed; notification - subscriptions [kyverno-subscription-1] in namespace kyverno found as specified, therefore this Object template is compliant; violation - couldn't find mapping resource with kind PlacementRule, please check if you have CRD deployed; violation - couldn't find mapping resource with kind Channel, please check if you have CRD deployed

I can confirm there is nothing created in the kyverno or kyverno-channel namespaces.

document requirement for Compliance Operator

A number of the policies in the "stable" collection depend on CRDs created by the Compliance Operator:

  • ScanSettingBinding
  • ComplianceSuite
  • ComplianceCheckResult

A number of policies also depend on a hardcoded namespace named openshift-compliance.

The CRDs above are created by the Compliance Operator, yet there is no mention in the documentation that the operator needs to be installed. Please update the documentation to include the requirement.

Create a Policy to check that search is enabled

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
  generation: 2
  labels:
    app: search-prod
    app.kubernetes.io/managed-by: Helm
    chart: search-prod-2.5.0
    component: search-operator
    heritage: Helm
    installer.name: multiclusterhub
    installer.namespace: open-cluster-management
    release: search-prod-12d02
    manager: multicluster-operators-subscription
  name: search-operator
  namespace: open-cluster-management
  ownerReferences:
  - apiVersion: apps.open-cluster-management.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: HelmRelease
    name: search-prod-12d02
    uid: 86d3d8b7-ddab-4bf7-98a2-2909bb396053
  resourceVersion: "123881"
  uid: 8dcb24b9-9d43-44f4-b693-f8aca481d65c
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      name: search-operator
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: search-prod
        chart: search-prod-2.5.0
        heritage: Helm
        name: search-operator
        ocm-antiaffinity-selector: searchoperator
        release: search-prod-12d02
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: ocm-antiaffinity-selector
                  operator: In
                  values:
                  - searchoperator
              topologyKey: topology.kubernetes.io/zone
            weight: 70
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: ocm-antiaffinity-selector
                  operator: In
                  values:
                  - searchoperator
              topologyKey: kubernetes.io/hostname
            weight: 35
      containers:
      - args:
        - --enable-leader-election
        command:
        - /manager
        env:
        - name: RELEASE_NAME
          value: search-prod-12d02
        - name: WATCH_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: OPERATOR_NAME
          value: search-operator
        - name: SEARCH_COLLECTOR_IMAGE_NAME
          value: quay.io/stolostron/search-collector@sha256:63c5dde252a968917381ba00436459d4ccf991937ac2d3f756de61f8245a497d
        - name: DEPLOY_REDISGRAPH
          value: "true"
        image: quay.io/stolostron/search-operator@sha256:7c07841572b19eec94f0339166f23a74da664dfa559c4cb005e8c713c6eaf73c
        imagePullPolicy: IfNotPresent
        name: search-operator
        resources:
          limits:
            memory: 256Mi
          requests:
            cpu: 1m
            memory: 32Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: multiclusterhub-operator-pull-secret
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        runAsNonRoot: true
      serviceAccount: search-operator
      serviceAccountName: search-operator
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        operator: Exists

oc set env deploy search-operator DEPLOY_REDISGRAPH="true" -n open-cluster-management
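As a starting point, here is an untested sketch of a ConfigurationPolicy that checks the DEPLOY_REDISGRAPH setting shown in the Deployment above; ACM treats the objectDefinition as a subset that must match, the object names come from the pasted Deployment, and the policy name is a placeholder:

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: check-search-enabled
spec:
  remediationAction: inform
  severity: low
  object-templates:
    - complianceType: musthave
      objectDefinition:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: search-operator
          namespace: open-cluster-management
        spec:
          template:
            spec:
              containers:
                - name: search-operator
                  env:
                    - name: DEPLOY_REDISGRAPH
                      value: "true"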

Deploy policies using GitOps in an air-gap environment

Hi guys,
I forked this repo to a disconnected environment and followed the procedure step by step, but could not deploy the policies using GitOps.
The only information I have is this log line from the klusterlet-addon-appmgr pod:
Exit reconciling subscription.
The policies are in a private GitLab instance.

RBAC policy to restrict Rights at Namespace Levels for Managed Cluster

We have the following requirement we wanted to check:

Let's say a user has three namespaces A, B, and C in a managed cluster, and he wants USERA to access and view only project A from the ACM console or the Grafana console.

Similarly, he wants another user, USERB, to access and view only project B from the ACM console or the Grafana console.

He should see the resources of project A with USERA in the ACM console as well as in the Grafana console.

As of now we can restrict a user at the cluster level by using the commands below:

oc create clusterrolebinding userA --clusterrole=open-cluster-management:view:devcluster --user=userA
oc create rolebinding userA -n devcluster --clusterrole=devclusterview --user=userA

With these commands, userA can see all namespaces in devcluster, but we want to restrict userA to see only project A.

Is that possible? Can someone please guide us?

OPA Policy to Protect Namespaces from Accidental Deletion

Hi,

I was trying to use the native syntax of OPA/Gatekeeper, but it seems the Gatekeeper from the Red Hat Marketplace is not the same as the community one:

Anyway, this is the policy, which unfortunately does not take effect:

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: preventdeletingallnamespaces
spec:
  crd:
    spec:
      names:
        kind: PreventDeletingAllNamespaces
      validation:
        # Schema for the `PreventDeletingAllNamespaces` CR
        # This CR will be used to enforce the constraint
        openAPIV3Schema:
          properties:
            apiVersion:
              type: string
            kind:
              type: string
            metadata:
              type: object
            spec:
              type: object
  # Parameters for the OPA policy
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package preventdeletingallnamespaces

        # import data.kubernetes.namespaces
        # import data.lib.openshift.namespaces
        # Deny deletion if the namespace doesn't have the required annotation
        violation[msg] {
            input.request.operation == "DELETE"
            input.request.kind.kind == "Namespace"
            # not namespace_has_allow_delete_annotation(input.request.namespace)
            msg := sprintf("Deletion of namespace '%v' is not allowed as it is not annotated with allow-delete=true", [input.request.namespace])
        }

        # Check if the namespace has the required annotation
        namespace_has_allow_delete_annotation(namespace) {
            namespaces[namespace].metadata.annotations["allow-delete"] == "true"
        }

---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: PreventDeletingAllNamespaces
metadata:
  name: preventdeletingallnamespaces
spec: {}

Testing it:

oc create ns test-hello
oc delete ns test-hello
# It was not denied !?
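(A possible explanation, offered as an assumption rather than a verified answer: Gatekeeper's validating webhook is by default only registered for CREATE and UPDATE operations, so a DELETE of a Namespace may never reach the constraint unless the webhook configuration is extended to include the DELETE operation.)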

Could you please provide a working policy with the same purpose?

Updated policy-integrity-shield

We updated the ACM policy policy-integrity-shield to include the changes to the IShield CRDs as well as references to the updated IShield images.
@ycao56
Could you review and merge this PR?
#71
Thank you!

CM network policy

Can you please add a network policy like the following?

    - complianceType: "musthave"
      objectDefinition:
        kind: NetworkPolicy
        apiVersion: networking.k8s.io/v1
        metadata:
          namespace: default
          name: deny-from-other-namespaces
        spec:
          podSelector: {} # select all pods in this namespace
          ingress:
          - from:
            - podSelector: {} # accept ingress from all pods within this namespace only

Create a PolicySet for Kubernetes-Hardening

The purpose of this PolicySet should be that it works generically for all Kubernetes clusters.

Policies to be updated:

  • check for namespaces stuck in Terminating
  • missing labels
  • pods in Pending state

ACS policies SecuredCluster

Hi,
it may be good to change the collection method to eBPF, since KernelModule is deprecated:

https://docs.openshift.com/acs/3.74/release_notes/374-release-notes.html#kernel-collection-module

Added 20 March 2023

Currently, secured clusters can specify three options of collection methods for runtime events: eBPF (selected by default), kernel module, or no collection. Kernel module as a collection method is deprecated in the RHACS version 3.74 release and is planned for removal in the RHACS version 4.1 release.

Provide a Policy for DRPlacementControl

During a manual failover when working with ODF, the following object was created.
This issue is to investigate whether a Policy should be created for it.

apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPlacementControl
metadata:
  name: pacman-placement-1-drpc
  namespace: pacman
  finalizers:
    - drpc.ramendr.openshift.io/finalizer
  labels:
    app: pacman
    cluster.open-cluster-management.io/backup: resource
spec:
  action: Failover
  drPolicyRef:
    name: ocp4perf1-ocp4perf2-2m
  failoverCluster: ocp4perf2
  placementRef:
    kind: PlacementRule
    name: pacman-placement-1
    namespace: pacman
  preferredCluster: ocp4perf1
  pvcSelector: {}

add a script to test policy-generator locally

E.g., something like the script below helps to use the PolicyGenerator more easily:

mkdir -p ${HOME}/.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator

curl -L \
  -o ${HOME}/.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator \
  https://github.com/stolostron/policy-generator-plugin/releases/download/v1.8.0/linux-amd64-PolicyGenerator
chmod +x ${HOME}/.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator

curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh"  | bash


chmod a+x kustomize
sudo cp kustomize /usr/local/sbin/
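Once both are installed, a local test run might look like this, assuming the current directory contains a kustomization.yaml whose generators entry points at a PolicyGenerator manifest:

kustomize build --enable-alpha-plugins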
