
charts's Introduction


Welcome to the CloudNativePG project!

CloudNativePG is a comprehensive open source platform designed to seamlessly manage PostgreSQL databases within Kubernetes environments, covering the entire operational lifecycle from initial deployment to ongoing maintenance. The main component is the CloudNativePG operator.

CloudNativePG was originally built and sponsored by EDB.

Table of contents

Getting Started

The best way to get started is with the "Quickstart" section in the documentation.

Scope

The goal of CloudNativePG is to increase the adoption of PostgreSQL, one of the most loved database management systems in traditional VM and bare metal environments, inside Kubernetes, thus making the database an integral part of the development process and of GitOps CI/CD automated pipelines.

In scope

CloudNativePG has been designed by Postgres experts with Kubernetes administrators in mind. Put simply, it leverages Kubernetes by extending its controller and by defining, in a programmatic way, all the actions that a good DBA would normally do when managing a highly available PostgreSQL database cluster.

Since its inception, our philosophy has been to adopt a Kubernetes-native approach to PostgreSQL cluster management, making incremental decisions that answer the fundamental question: "What would a Kubernetes user expect from a Postgres operator?".

The most important decision we made is to have the status of a PostgreSQL cluster directly available in the Cluster resource, so that it can be inspected through the Kubernetes API. We've fully embraced the operator pattern and eventual consistency, two of the core principles upon which Kubernetes is built for managing complex applications.

As a result, the operator is responsible for managing the status of the Cluster resource, keeping it up to date with the information that each PostgreSQL instance manager regularly reports back through the API server. Changes to the cluster status might trigger, for example, actions like:

  • a PostgreSQL failover where, after an unexpected failure of a cluster's primary instance, the operator itself elects the new primary, updates the status, and directly coordinates the operation through the reconciliation loop, by relying on the instance managers

  • scaling up or down the number of read-only replicas, based on a positive or negative variation in the number of desired instances in the cluster, so that the operator creates or removes the required resources to run PostgreSQL, such as persistent volumes, persistent volume claims, pods, secrets, config maps, and then coordinates cloning and streaming replication tasks

  • updates of the endpoints of the PostgreSQL services that applications rely on to interact with the database, as Kubernetes represents the single source of truth and authority

  • updates of container images in a rolling fashion, following a change in the image name, by first updating the pods where replicas are running, and then the primary, issuing a switchover first

The latter example is based on another pillar of CloudNativePG: immutable application containers - as explained in the blog article "Why EDB Chose Immutable Application Containers".

The above list can be extended. However, the gist is that CloudNativePG relies exclusively on the Kubernetes API server and the instance manager to coordinate the complex operations required by a business continuity PostgreSQL cluster, without requiring any assistance from an intermediate management tool responsible for high availability and failover, as similar open source operators do.

CloudNativePG also manages additional resources to help the Cluster resource manage PostgreSQL - currently Backup, ClusterImageCatalog, ImageCatalog, Pooler, and ScheduledBackup.

Fully embracing Kubernetes means adopting a hands-off approach during temporary failures of the Kubernetes API server. In such instances, the operator refrains from taking action, deferring decisions until the API server is operational again. Meanwhile, Postgres instances persist, maintaining operations based on the latest known state of the cluster.
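To make the point about status concrete, here is a minimal sketch of a Cluster manifest together with the kind of status the operator writes back through the API server. The field names follow the CloudNativePG Cluster API, while the values are purely illustrative:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example          # illustrative name
spec:
  instances: 3
  storage:
    size: 1Gi
status:                          # maintained by the operator, not by the user
  phase: Cluster in healthy state
  instances: 3
  readyInstances: 3
  currentPrimary: cluster-example-1
  targetPrimary: cluster-example-1

This is the same information surfaced when inspecting the resource through the Kubernetes API, for example with kubectl describe cluster.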

Out of scope

CloudNativePG is exclusively focused on the PostgreSQL database management system maintained by the PostgreSQL Global Development Group (PGDG). We are not currently considering adding to CloudNativePG extensions or capabilities that are included in forks of the PostgreSQL database management system, unless in the form of extensible or pluggable frameworks. The operator itself can be extended via a plugin interface called CNPG-I.

CloudNativePG doesn't intend to pursue database independence (e.g. control a MariaDB cluster).

Communications

Resources

Adopters

A list of publicly known users of the CloudNativePG operator is in ADOPTERS.md. Help us grow our community and CloudNativePG by adding yourself and your organization to this list!

CloudNativePG at KubeCon

Useful links


Trademarks

Postgres, PostgreSQL and the Slonik Logo are trademarks or registered trademarks of the PostgreSQL Community Association of Canada, and used with their permission.


charts's Issues

Consider moving the cloudnative-pg helm chart to cloudnative-pg repo

Hi 👋🏼

It would be nice if the helm chart for cloudnative-pg could be moved to the https://github.com/cloudnative-pg/cloudnative-pg repo. One major issue with having this separate from the main repo is that the versions are not the same and that makes automation a bit tricky when managing CRDs outside the helm release.


Kyverno is a pretty good example to showcase what I mean. In the following kustomization I can apply the CRDs and the Flux HelmRelease (with .Values.installCRDs=false) under the same Flux Kustomization and use renovate to manage the dependencies since the application and helm chart are in the same repo.

https://github.com/onedr0p/home-ops/blob/20f8533085c9a5ac8f07b46ba4a419caa2b60dc7/cluster/apps/kube-tools/kyverno/kustomization.yaml

When renovate discovers a new version of Kyverno you can see that a pull request is opened that can apply the CRDs and the HelmRelease in one shot.

https://github.com/onedr0p/home-ops/pull/4028/files

Now you might ask: why do you need to separate the CRDs from the HelmRelease at all, why not just use .Values.crds.create=true?

This is because kustomize will not be able to apply the CRDs generated from the Flux HelmRelease before the cnpg Cluster CR, meaning the CRDs are not yet created when the cnpg Cluster CR tries to get created.

Given the helm chart was moved into the https://github.com/cloudnative-pg/cloudnative-pg repo I could make this all work very similar to how I have it working with Kyverno or kube-prometheus-stack since the raw CRDs would be under the same tag as the helm chart and the application.
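For reference, a minimal sketch of what that layout could look like if the raw CRDs were published under the same tag as the chart; the file names and the CRD manifest path are hypothetical:

# kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - crds.yaml        # raw CRDs pinned to the same tag as the chart (hypothetical path)
  - helmrelease.yaml # Flux HelmRelease rendered with crds.create=false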

I know this is probably a long shot of a request, but this is pretty much the only project I've come across where the developers control both the application and the chart yet use different repos and versions for each. It's a bit out of the norm compared to other popular projects like Kyverno, cert-manager, Longhorn, rook-ceph, trivy-operator and others I've come across where the developers are also in control of both the Helm chart and the application.

Thanks

Addition of `hostNetwork` value in Chart

When using EKS with a Calico CNI, there are some well-known issues with webhooks. Calico recommends setting hostNetwork: true:

"Calico networking cannot currently be installed on the EKS control plane nodes. As a result the control plane nodes will not be able to initiate network connections to Calico pods. (This is a general limitation of EKS's custom networking support, not specific to Calico.) As a workaround, trusted pods that require control plane nodes to connect to them, such as those implementing admission controller webhooks, can include hostNetwork:true in their pod spec. See the Kubernetes API pod spec definition for more information on this setting."

https://docs.tigera.io/calico/latest/getting-started/kubernetes/managed-public-cloud/eks#install-eks-with-calico-networking

Could we allow this as a value to be set via the Chart with the default set to false?
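A minimal sketch of what that could look like, assuming a new top-level hostNetwork value (the key name is a suggestion, not an existing chart value):

# values.yaml (proposed)
hostNetwork: false

# templates/deployment.yaml (excerpt)
spec:
  template:
    spec:
      hostNetwork: {{ .Values.hostNetwork }}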

cnpg-controller-manager-config breaking issue in 0.19.0

Previously working config in 0.18.0.

Originally using config.secret: true.
The latest version still looks for a ConfigMap.

Tested clean install using default values. Still getting error about configMap:

"logger":"setup","msg":"unable to read ConfigMap","namespace":"cnpg-system","name":"cnpg-controller-manager-config","error":"configmaps \"cnpg-controller-manager-config\" is forbidden: User \"system:serviceaccount:cnpg-system:cloudnative-pg\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"cnpg-system\""

The issue shouldn't exist when using a secret instead, and it definitely shouldn't exist when using the default values file.

Add support for existing backup credentials

Currently, the cluster chart creates a secret containing backup credentials such as S3 keys.
There is no way to provide an existingSecret to avoid putting these credentials in the Helm values.
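A sketch of how such an option might look in the chart values; the existingSecret key shown here is a proposal, not something the chart currently exposes:

backups:
  # Proposed (hypothetical) key: reference a pre-created secret instead of
  # having the chart render one with inline credentials.
  existingSecret: my-backup-credentials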

Operator PodMonitor

Can a PodMonitor be added to the chart based on the example in the documentation?

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: cnpg-controller-manager
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: cloudnative-pg
  podMetricsEndpoints:
    - port: metrics
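A sketch of how this could be toggled through chart values, in line with the monitoring.podMonitorEnabled key that appears in a later issue in this list; treat the key as illustrative:

monitoring:
  podMonitorEnabled: true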

Add support for pgpooler to use private image registry

There is a requirement to point the Pooler at a private registry for its images. Currently this has to be stood up outside of the Helm chart, so it would be handy to include it.

To support this, the Helm values would need to map to:

spec:
  template:
    spec:
      containers:
        - name: pgbouncer
          image: private-image-path

Backups incompatibility with unencrypted minio

When I enable S3 backups with a MinIO configuration, the barmanObjectStore configuration looks like:

barmanObjectStore:
  wal:
    compression: gzip
    encryption: AES256
  data:
    compression: gzip
    encryption: AES256
  jobs: 2

This configuration doesn't work if you don't enable KMS/encryption on your MinIO. It would be nice to be able to configure these settings:
for example, we need to change AES256 to "" for backups and WAL archiving to work.
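A sketch of the kind of override being asked for, assuming the chart exposed the encryption setting per section (the key names are illustrative):

backups:
  wal:
    compression: gzip
    encryption: ""   # empty string: no server-side encryption requested
  data:
    compression: gzip
    encryption: ""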

helm-schema-gen repository has been archived

To generate the JSON schema from the values.yaml file we were using a Helm plugin, helm-schema-gen, which has since been archived and does not support arm64-based MacBooks.

I forked the original project, updated the goreleaser config and cut a new release, so the plugin can now be installed by running helm plugin install https://github.com/phisco/helm-schema-gen.git.
So, either we move to my fork and I take over support of the plugin, or we start writing the schema by hand; I would lean toward the former. What do you think?

version well-know label is hardcoded to appVersion in Chart.yaml

The intention seems to be that it should match what is set by .Values.image.tag and otherwise default to .Chart.appVersion.

This issue causes changes to the label even when .Values.image.tag is set to an older value (as of 17th Jan 2024):

-    app.kubernetes.io/version: 1.20.2
-    helm.sh/chart: cloudnative-pg-0.18.2
+    app.kubernetes.io/version: 1.22.0
+    helm.sh/chart: cloudnative-pg-0.20.0
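A minimal sketch of the intended behaviour, assuming the label is rendered directly in the chart's labels helper:

app.kubernetes.io/version: {{ .Values.image.tag | default .Chart.appVersion | quote }}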

CRDs

I noticed that the CRDs of this Helm chart are placed in the templates folder instead of the Helm 3 special crds folder. I wonder if this is intentional, and whether there will be any gotchas when upgrading the Helm release?

This is more of a question than a change request - I'm not familiar with the technical details of Helm. But I have used several Helm charts where, whenever a new version changes the CRDs, I am required to update the CRDs manually, e.g. https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#from-35x-to-36x

Default values set runAsNonRoot under the wrong context

I ran an installation of the operator, and noticed that while most of the security settings looked good, the setting for runAsNonRoot is currently under the Pod SecurityContext, and not the Container SecurityContext. I was able to override with my values.yaml, but thought I'd let you know so that you could adjust the defaults.
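To illustrate the distinction, a sketch of where the setting sits today versus where it is expected; the value keys follow the common podSecurityContext/containerSecurityContext naming convention and may differ from the chart's actual keys:

# Pod-level security context (where runAsNonRoot currently sits)
podSecurityContext:
  runAsNonRoot: true

# Container-level security context (where it is expected)
containerSecurityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false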

Chart fails readiness probe when being used as a sub-chart with helm/chart-testing

Hey all,

I'm having an issue when installing this chart via the helm/chart-testing utility as a sub-chart. The reason I'm doing this is that I've written a Helm chart to deploy the Cluster resources made available by the CRDs in this chart, and I need the CRDs to exist before I can run tests against minikube on pull requests.

The issue is that the readiness probe fails for the deployment pods. If there was a configurable initialDelaySeconds on the probes this would likely work and give the pods time to start before failing.

I know this isn't a bug or an issue with your product per se, but I figured it was a harmless addition, and more configurable bits are always nice to have. I have created a PR that I'll link to this issue.
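A sketch of the kind of knob being proposed, with a hypothetical values key; the rest of the existing probe definition is omitted:

# values.yaml (hypothetical key)
probes:
  readiness:
    initialDelaySeconds: 30

# templates/deployment.yaml (excerpt)
readinessProbe:
  initialDelaySeconds: {{ .Values.probes.readiness.initialDelaySeconds }}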

`PodDisruptionBudget.policy` version error on K8s v1.25

When deploying v0.15.1 of the cloudnative-pg chart with vanilla config on a Kubernetes cluster with nodes running v1.25.2, the following error is logged repeatedly on the CloudNativePG pod:

{
   "level":"error",
   "ts":1667361753.8337529,
   "logger":"controller-runtime.source",
   "msg":"if kind is a CRD, it should be installed before calling Start",
   "kind":"PodDisruptionBudget.policy",
   "error":"no matches for kind \"PodDisruptionBudget\" in version \"policy/v1beta1\"",
   "stacktrace":"sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1.1\n\tpkg/mod/sigs.k8s.io/[email protected]/pkg/source/source.go:139\nk8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext\n\tpkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:233\nk8s.io/apimachinery/pkg/util/wait.WaitForWithContext\n\tpkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:660\nk8s.io/apimachinery/pkg/util/wait.poll\n\tpkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:594\nk8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext\n\tpkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:545\nsigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1\n\tpkg/mod/sigs.k8s.io/[email protected]/pkg/source/source.go:132"
}

This is due to the policy/v1beta1 PodDisruptionBudget API being removed in 1.25 in favor of the stable policy/v1 PodDisruptionBudget.

Similar:

Provide a Cluster chart

We have an operator chart, but we lack a chart that initializes a database cluster.

The objective of such a chart should not be to provide a straight-through interface to a Cluster.postgresql.cnpg.io object; advanced use should be done by interacting with CNPG's CRDs directly. Instead, the chart should provide a reliable, easy-to-use method for setting up a CNPG cluster and related resources with safe defaults.

A good cluster chart should cover multiple aspects, such as (see the sketch after this list):

  • Provision a cluster
  • Provision a Pooler
  • Backup configuration
  • Scheduled Backups
  • Disaster recovery
  • Alerting and Monitoring
  • Safe and reliable defaults and warnings whenever user settings result in unsafe configuration
  • Tests

Related to: #88
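A minimal sketch of the kind of values interface such a chart could offer; the keys are illustrative rather than a fixed design:

mode: standalone            # or: recovery, replica
cluster:
  instances: 3
  storage:
    size: 10Gi
backups:
  enabled: true
  scheduledBackups:
    - name: daily
      schedule: "0 0 0 * * *"   # six-field cron, as used by ScheduledBackup
pooler:
  enabled: false
monitoring:
  enabled: true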

Installing via terraform

Hi,

Perhaps I am doing something wrong, but this isn't working right via terraform

resource "helm_release" "cloudnative_pg" {
  name             = "cnpg"
  namespace        = local.namespace
  create_namespace = true

  repository = "https://cloudnative-pg.github.io/charts"
  chart      = "cnpg/cloudnative-pg"
  version    = "0.18.2"  
}

The error is
Error: could not download chart: chart "cnpg/cloudnative-pg" version "0.18.2" not found in https://cloudnative-pg.github.io/charts repository
I know this isn't Terraform support, but I've not had this issue with other Helm repos. Guessing it didn't like the / in the chart name, I also tried moving that part into the repository URL, similar to this:

  repository = "https://cloudnative-pg.github.io/charts/cnpg/"
  chart      = "cloudnative-pg"
  version    = "0.18.2"

which gives a different error: Error: could not download chart: Chart.yaml file is missing

Has anybody installed this via terraform?

Add support for additional labels in PodMonitor

Currently, there is no way to add extra labels to the PodMonitor. This is useful when PodMonitors are discovered using specific labels, e.g. kube-prometheus-stack expects a release: kube-prometheus-stack label on the PodMonitor.

Suggestions

  1. New key called podMonitorAdditionalLabels

monitoring:
  podMonitorEnabled: true
  podMonitorAdditionalLabels:
    release: kube-prometheus-stack

  2. Deprecate podMonitorEnabled instead and create a new key called podMonitor

monitoring:
  podMonitor:
    enabled: true
    additionalLabels:
      release: kube-prometheus-stack

This needs to be done for both cloudnative-pg and cluster charts

Can't override imageName in cluster to pull image from local registry instead of ghcr

I've set the imageName key in the spec section to pull from my local registry, as my Kubernetes cluster doesn't have access to the internet. However, when I spin up the cloudnative-pg cluster, the initdb pod tries to pull the image from ghcr.io/cloudnative-pg/postgresql instead of the value I've specified in imageName.

Is there a way to override this value so that the cluster can pull an image from a different source?
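Assuming the cluster chart simply passes a value through to spec.imageName, the override would look roughly like this (the key name and registry path are illustrative):

cluster:
  imageName: registry.internal.example.com/cloudnative-pg/postgresql:16.2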

Feature: Cluster Chart PrometheusRule alerts

Automated alerts for cluster issues would be extremely useful to improve the safety of a CNPG cluster.

Ideally, alerts should come in both warning and critical severity levels, so that users have time to react to an alert before a problem escalates.

Proposed alerts should include:

  • Instance failure based on the cluster spec
  • Low Disk space
  • High Replication Lag
  • High connection count on any instance

Some alerts, such as high CPU and/or memory usage, are already covered by kube-prometheus-stack and should not be implemented here.
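As a starting point for the alerts listed above, a sketch of what one such rule could look like; the alert name and threshold are placeholders, and the cnpg_pg_replication_lag metric is assumed to come from the instances' default metrics:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: cnpg-cluster-alerts
spec:
  groups:
    - name: cloudnative-pg
      rules:
        - alert: CNPGReplicationLagHigh
          expr: cnpg_pg_replication_lag > 300
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: Replication lag above 5 minutes on {{ $labels.pod }}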

Cannot disable weak TLS ciphers

It is becoming increasingly common security policy to disable weak ciphers in TLS (ref: https://support.securityscorecard.com/hc/en-us/articles/115003260246-TLS-Service-Supports-Weak-Cipher-Suite#:~:text=Ultimately%2C%20it%20is%20recommended%20to,discovered%20that%20render%20it%20insecure.)

The CNPG operator does not allow you to do this.

We want to restrict TLS to only ECDHE-RSA-AES256-GCM-SHA384 and minimum TLS version to 1.2. Therefore, we added the following to our installation to set the ciphers for postgresql.conf

postgresql:
  parameters:
    ssl_ciphers: ECDHE-RSA-AES256-GCM-SHA384
    ssl_min_protocol_version: TLSv1.2

Unfortunately, when you try and set these, the helm install fails with a message

[spec.postgresql.parameters.ssl_ciphers: Invalid value: "ECDHE-RSA-AES256-GCM-SHA384": Can't set fixed configuration parameter, spec.postgresql.parameters.ssl_min_protocol_version: Invalid value: "TLSv1.2": Can't set fixed configuration parameter]

Despite the operator letting you set spec.postgresql.parameters with key/value configuration options for postgresql.conf, there are some which the operator is intentionally locking down and this includes ones which would be needed for restricting the cipher suites.

https://github.com/cloudnative-pg/cloudnative-pg/blob/main/pkg/postgres/configuration.go#L284

Helm charts/operators for MySQL, MongoDB and Neo4j all allow the cipher lists to be restricted. The PostgreSQL implementation should seriously consider allowing the same.

podSecurityContext expose supplementalGroups

As part of the security requirements of the cluster I'm deploying to, the pod security context must include supplementalGroups: [1]:

  podSecurityContext:
    fsGroup: 1001
    runAsGroup: 1001
    runAsNonRoot: true
    runAsUser: 1001
    supplementalGroups:
      - 1
    seccompProfile:
      type: RuntimeDefault

I'm hoping that we can plumb that through to the values.yaml. I'm pretty sure that would need to be included in the CRDs especially for the Cluster resource.

Crashloop on 0.14.1

Hello
Helm chart version 0.14.1, using the image ghcr.io/cloudnative-pg/cloudnative-pg:1.16.1, crashes with the following error:

Error: unknown flag: --enable-leader-election

[Security]: Provide Helm Provenance to mitigate supply chain attacks

It would be nice if automated Helm provenance were provided with the chart.
This would let users verify the origin of the Helm charts and would prevent some supply chain attacks.
More info: https://helm.sh/docs/topics/provenance/

Even more so when bundled with signing the images as well.

--

As a downstream consumer, the current insecure distribution prevents us from directly referencing these Helm charts, as it doesn't satisfy our security policy, which requires correct signing of images and provenance for Helm charts.

Feature Request: Add Healthchecks to the Grafana Dashboard

It would be quite useful if the Grafana dashboard showed the health status of the different components of a database cluster and of the operator.

At a minimum, the following should be covered:

  • CPU
  • Memory
  • Connections
  • Storage
  • Number of Ready replicas
  • Backups
  • Operator status

Ability to specify extra labels

We are using labels to reuse network policies. This saves us from re-specifying the same network policy 10 times with a slightly different pod label. The Kubernetes API policy is especially important, as the IP could change when you switch Kubernetes clusters or reinstall one.
Therefore the request is as follows: could we please add the ability to specify extra labels, just like we do for annotations?
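A sketch of the requested interface, mirroring the existing annotations support; the additionalLabels key and the label names are a proposal:

additionalLabels:
  netpol/allow-kube-apiserver: "true"
  netpol/allow-dns: "true"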

Prometheus metrics relabeling

Hi,
Often there are use cases for Prometheus metrics relabeling rules to be set in a target configuration. For example, we need to rewrite the cluster label in CNPG metrics because it collides with external labels. In prometheus-operator this is done by adding a RelabelConfig list to many custom resources.
I'd like to submit a PR implementing this in the PodMonitor.
Cheers

Edit: my initial statement was wrong about needing a ServiceMonitor... working on it :)
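For reference, prometheus-operator supports relabeling directly on PodMonitor endpoints, so the chart change could boil down to rendering something like this (the label names are illustrative):

  podMetricsEndpoints:
    - port: metrics
      metricRelabelings:
        - sourceLabels: [cluster]
          targetLabel: cnpg_cluster
          action: replace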

Deprecate cnpg-sandbox

With CloudNativePG 1.18 we have basically moved the most important aspects of cnpg-sandbox inside the main project. In particular:

  • installing Prometheus and Grafana for evaluation
  • deploying default metrics (now included in CloudNativePG itself)
  • installing a base Grafana dashboard
  • deploying some basic rules for alerting
  • running pgbench

For details, see:

After all, cnpg-sandbox was born as an experimentation project to prototype the above resources. The only missing part is to run fio, which will be included in #952 but it is not a critical component.

Please remove cnpg-sandbox from the code and document the above.

helm chart does not support recently-added region to s3credentials

ref: cloudnative-pg/cloudnative-pg#207

cluster spec (truncated):

  backup:
    barmanObjectStore:
      destinationPath: "s3://BUCKET"
      endpointURL: "S3_ENDPOINT"
      s3Credentials:
        accessKeyId:
          name: s3-secret
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: s3-secret
          key: ACCESS_SECRET_KEY
        # region: us-east

Error: [...] .spec.backup.barmanObjectStore.s3Credentials.region: field not declared in schema

I have deployed the operator via helm chart, version 0.13.1

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

This repository currently has no open or pending branches.

Detected dependencies

github-actions
.github/actions/setup-kind/action.yml
  • azure/setup-helm v3.5@5119fcb9089d432beecbf79bb2c7915207344b78
  • azure/setup-kubectl v3.2@901a10e89ea615cf61f57ac05cecdf23e7de06d8
  • helm/kind-action v1.10.0@0025e74a8c7512023d06dc019c617aa3cf561fde
.github/workflows/lint.yml
  • actions/checkout v4.1.6@a5ac7e51b41094c92402da3b24376905380afc29
  • azure/setup-helm v3.5@5119fcb9089d432beecbf79bb2c7915207344b78
  • actions/setup-python v5.1.0@82c7e631bb3cdc910f68e0081d67478d79c6982d
  • helm/chart-testing-action v2.6.1@e6669bcd63d7cb57cb4380c33043eebe5d111992
  • ubuntu 22.04
.github/workflows/release-pr.yml
  • actions/checkout v4.1.6@a5ac7e51b41094c92402da3b24376905380afc29
  • ubuntu 22.04
.github/workflows/release-publish.yml
  • actions/checkout v4.1.6@a5ac7e51b41094c92402da3b24376905380afc29
  • azure/setup-helm v4.2.0@fe7b79cd5ee1e45176fcad797de68ecaf3ca4814
  • helm/chart-releaser-action v1.6.0@a917fd15b20e8b64b94d9158ad54cd6345335584
  • docker/login-action v3.2.0@0d4c9c5ea7693da7b068278f7b52bda2a190a446
  • sigstore/cosign-installer v3.5.0@59acb6260d9c0ba8f4a2f9d9b48431a222b68e20
  • ubuntu 22.04
.github/workflows/tests-cluster-standalone.yml
  • actions/checkout v4.1.6@a5ac7e51b41094c92402da3b24376905380afc29
  • actions/checkout v4.1.6@a5ac7e51b41094c92402da3b24376905380afc29
  • ubuntu 22.04
  • ubuntu 22.04
.github/workflows/tests-operator.yml
  • actions/checkout v4.1.6@a5ac7e51b41094c92402da3b24376905380afc29
  • ubuntu 22.04
helm-values
charts/cloudnative-pg/values.yaml
helmv3
charts/cloudnative-pg/Chart.yaml
  • cluster 0.0
regex
charts/cloudnative-pg/Chart.yaml
  • ghcr.io/cloudnative-pg/cloudnative-pg 1.23.1

  • Check this box to trigger a request for Renovate to run again on this repository

Add nodeSelector and tolerations to cluster chart for storage nodes

When deploying to an automated k8s cluster with separate storage nodes, the pg cluster needs to utilize either nodeAffinity or nodeSelector to ensure placement on storage nodes. NodeSelector is preferred.

Additionally, for the taints on the storage nodes the cluster needs tolerations with key, operator, value & effect fields.

Example for values.yaml file:

# -- Nodeselector for the cluster to be installed.
nodeSelector:
  node-role.kubernetes.io/storage: storage

# -- Tolerations for the cluster to be installed.
tolerations:
  - key: "storage"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"

This could be added to the cluster chart values file before the affinity rules to keep uniformity with the operator chart values file.

It would also need to be added to the cluster.yaml file before affinity

Example for templates cluster.yaml file:

  {{- with .Values.cluster.nodeSelector }}
  nodeSelector:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- with .Values.cluster.tolerations }}
  tolerations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- with .Values.cluster.affinity }} # add the two blocks above before this existing affinity block
  affinity:
    {{- toYaml . | nindent 4 }}
  {{- end }}

[Feature] optional network policy for the operator

The prometheus node-exporter includes an optional default network policy in their helm chart.

It would be nice if a policy that permits only the required access to the operator could be optionally enabled. https://cloudnative-pg.io/documentation/1.19/security/#exposed-ports

This request specifically ignores any Clusters created by the operator.

For egress perhaps something like:

  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
    - port: 443
      protocol: TCP
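And for ingress, limited to the ports the operator exposes according to the linked security documentation (9443 for the webhook server, 8080 for metrics); the selectors are illustrative:

  ingress:
    - from:
        - namespaceSelector: {}          # e.g. admission webhook calls
      ports:
        - protocol: TCP
          port: 9443                     # webhook server
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring           # e.g. Prometheus scraping metrics
      ports:
        - protocol: TCP
          port: 8080                     # metrics endpoint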

Descriptions in CRDs are wrongly formatted

kubectl explain clusters.postgresql.cnpg.io.spec.certificates
GROUP:      postgresql.cnpg.io
KIND:       Cluster
VERSION:    v1

FIELD: certificates <Object>

DESCRIPTION:
    The configuration for the CA and related certificates
    
FIELDS:
  clientCASecret	<string>
    The secret containing the Client CA certificate. If not defined, a new
    secret will be created with a self-signed CA and will be used to generate
    all the client certificates.
    
     Contains:<br /> <br /> - `ca.crt`: CA that should be used to validate the
    client certificates, used as `ssl_ca_file` of all the instances.<br /> -
    `ca.key`: key used to generate client certificates, if ReplicationTLSSecret
    is provided, this can be omitted.<br />

  replicationTLSSecret	<string>
    The secret of type kubernetes.io/tls containing the client certificate to
    authenticate as the `streaming_replica` user. If not defined, ClientCASecret
    must provide also `ca.key`, and a new secret will be created using the
    provided CA.

  serverAltDNSNames	<[]string>
    The list of the server alternative DNS names to be added to the generated
    server TLS certificates, when required.

  serverCASecret	<string>
    The secret containing the Server CA certificate. If not defined, a new
    secret will be created with a self-signed CA and will be used to generate
    the TLS certificate ServerTLSSecret.<br /> <br /> Contains:<br /> <br /> -
    `ca.crt`: CA that should be used to validate the server certificate, used as
    `sslrootcert` in client connection strings.<br /> - `ca.key`: key used to
    generate Server SSL certs, if ServerTLSSecret is provided, this can be
    omitted.<br />

  serverTLSSecret	<string>
    The secret of type kubernetes.io/tls containing the server TLS certificate
    and key that will be set as `ssl_cert_file` and `ssl_key_file` so that
    clients can connect to postgres securely. If not defined, ServerCASecret
    must provide also `ca.key` and a new secret will be created using the
    provided CA.

There are HTML elements that are not rendered correctly.
