percona-server-mongodb-operator's Introduction

Percona Operator for MongoDB

Percona Operator for MongoDB automates the creation, modification, or deletion of items in your Percona Server for MongoDB environment. The Operator contains the necessary Kubernetes settings to maintain a consistent Percona Server for MongoDB instance, be it a replica set or a sharded cluster.

Based on our best practices for deployment and configuration, Percona Operator for MongoDB contains everything you need to quickly and consistently deploy and scale Percona Server for MongoDB instances into a Kubernetes cluster on-premises or in the cloud. It provides the following features to keep your Percona Server for MongoDB deployment healthy:

  • Easy deployment with no single point of failure
  • Sharding support
  • Scheduled and manual backups
  • Integrated monitoring with Percona Monitoring and Management
  • Smart update to keep your database software up to date automatically
  • Automated password rotation – use the standard Kubernetes API to enforce password rotation policies for system users (see the sketch after this list)
  • Private container image registries
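
As an illustration of the password rotation item above, a system user's password can be changed through the standard Kubernetes API by updating the users Secret. This is only a sketch: the Secret name and key below follow the defaults from deploy/secrets.yaml and may differ in your cluster.

kubectl patch secret my-cluster-name-secrets \
  -p '{"stringData":{"MONGODB_DATABASE_ADMIN_PASSWORD":"new-password"}}'

The Operator notices the change and propagates the new password to the database.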

You interact with the Percona Operator mostly via the command-line tool. If you prefer to manage the Operator and database clusters through a web interface, Percona Everest is an open-source web-based database provisioning tool available for you. It automates day-to-day database management operations, reducing the overall administrative overhead. Get started with Percona Everest.

Architecture

Percona Operators are based on the Operator SDK and leverage Kubernetes primitives to follow best CNCF practices.

Learn more about architecture and design decisions.

Documentation

To learn more about the Operator, check the Percona Operator for MongoDB documentation.

Quickstart installation

Ready to try out the Operator? Check the Quickstart tutorial for easy-to-follow steps.

Below is one of the ways to deploy the Operator using kubectl.

kubectl

  1. Deploy the operator from deploy/bundle.yaml:
kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/main/deploy/bundle.yaml
  2. Deploy the database cluster itself from deploy/cr-minimal.yaml:
kubectl apply -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/main/deploy/cr-minimal.yaml
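
After applying both manifests, you can verify that the cluster comes up. This is a minimal check, assuming the default namespace and the cluster name from cr-minimal.yaml:

kubectl get psmdb
kubectl get pods

The cluster is ready when the psmdb resource reports the ready state.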

Contributing

Percona welcomes and encourages community contributions to help improve Percona Kubernetes Operator for Percona Server for MongoDB.

See the Contribution Guide and Building and Testing Guide for more information on how you can contribute.

Communication

We would love to hear from you! Reach out to us on the Forum with your questions, feedback, and ideas.

Join Percona Kubernetes Squad!

                    %                        _____                
                   %%%                      |  __ \                                          
                 ###%%%%%%%%%%%%*           | |__) |__ _ __ ___ ___  _ __   __ _             
                ###  ##%%      %%%%         |  ___/ _ \ '__/ __/ _ \| '_ \ / _` |            
              ####     ##%       %%%%       | |  |  __/ | | (_| (_) | | | | (_| |            
             ###        ####      %%%       |_|   \___|_|  \___\___/|_| |_|\__,_|           
           ,((###         ###     %%%        _      _          _____                       _
          (((( (###        ####  %%%%       | |   / _ \       / ____|                     | | 
         (((     ((#         ######         | | _| (_) |___  | (___   __ _ _   _  __ _  __| | 
       ((((       (((#        ####          | |/ /> _ </ __|  \___ \ / _` | | | |/ _` |/ _` |
      /((          ,(((        *###         |   <| (_) \__ \  ____) | (_| | |_| | (_| | (_| |
    ////             (((         ####       |_|\_\\___/|___/ |_____/ \__, |\__,_|\__,_|\__,_|
   ///                ((((        ####                                  | |                  
 /////////////(((((((((((((((((########                                 |_|   Join @ percona.com/k8s   

You can get early access to new product features, invite-only "ask me anything" sessions with Percona Kubernetes experts, and monthly swag raffles. Interested? Fill in the form at percona.com/k8s.

Roadmap

We have an experimental public roadmap which can be found here. Please feel free to contribute and propose new features by following the roadmap guidelines.

Submitting Bug Reports

If you find a bug in Percona Docker Images or in one of the related projects, please submit a report to that project's JIRA issue tracker or create a GitHub issue in this repository.

Learn more about submitting bugs, new features ideas and improvements in the Contribution Guide.

percona-server-mongodb-operator's Issues

[FEAT]: Arm64 support

This is written in Go, so it should take an hour at most to add arm64 support to the container image. Please do this.

secrets are deleted when delete-psmdb-pvc has been set

Secrets are deleted when the delete-psmdb-pvc finalizer has been set.
The problem is that sometimes the Percona secrets (cr.Spec.Secrets.Users,
"internal-" + cr.Name + "-users") are recreated right after the deletion by the reconcileUsersSecret and reconcileUsers func calls, and sometimes they are not.

This logic of deleting secrets causes issues/side effects in the case where the psmdb CR is re-created later, for example to recover a deleted cluster.

To work around this logic we try to overwrite/create the user secret with the previous data (leaving it to the operator to sync the secrets, etc.), but recreating the psmdb later sometimes ends up creating the mongodb pods successfully with some auth errors on the mongo pods, and sometimes not; the psmdb does not even go into the initializing status. (There are logs on the operator: the internal users secret was not found.)

Backups failing on one cluster without error message

Report

We have two clusters managed by the same operator running on Kubernetes. A daily backup is set up for both of them; it works for one cluster but fails for the other. All the backup settings are the same, and both use the same S3 bucket.

More about the problem

Error message on the CRD:
some of pbm-agents were lost during the backup

State of the backup is error

Checking the logs of the backup-agent container in one of the pods, I see that it writes the collections and then stops with the following error message:

2024-04-10T11:47:19.097+0000    Mux close namespace XXXXX                                                                                       
2024-04-10T11:47:19.097+0000    done dumping XXXX (0 documents)                                                                                
2024-04-10T11:47:19.098+0000    writing XXXXX to archive on stdout
2024/04/10 11:47:21 [entrypoint] `pbm-agent` exited with code -1
2024/04/10 11:47:21 [entrypoint] restart in 5 sec
2024/04/10 11:47:26 [entrypoint] starting `pbm-agent`

We did make a change on this cluster around the time it stopped working, but it was only to increase the node size from c5a.large to c5a.4xlarge. At first I thought the backup agent might be getting OOMKilled, since it now sees that plenty more resources are available, so I decreased the node size to c5a.xlarge (we no longer need the larger size), but the issue is still the same.

I was not able to enable debug logging on the backup-agent; maybe it's not even possible. How could I get more details on the error?
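
A possible way to get more detail out of PBM itself is to query its log at debug severity from the backup-agent container; this is only a sketch, and the pod name below is an assumption:

kubectl exec <cluster-name>-rs0-0 -c backup-agent -- pbm logs --tail 100 --severity D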

Steps to reproduce

  1. Install cluster via mongodb-operator
  2. Enable backups
  3. Increase cluster resources (also requests/limits)
  4. Backups will fail (?)

Versions

  1. Kubernetes: 1.26.13-eks-508b6b3
  2. Operator: percona/percona-server-mongodb-operator:1.15.0
  3. Backup agent version: percona/percona-backup-mongodb:2.0.4
  4. Mongo version: percona/percona-server-mongodb:5.0.15-13

Anything else?

I also tried to restart the whole cluster, but still the same.

We haven't changed the resources of the other cluster and the backups are working fine there.

Possibility to choose nodeport used port

Hi,
I want to deploy a database instance and expose it. My idea was to use a NodePort, but I also wanted to choose the port, and from what I saw there is no way to do that.
I am using the Helm chart to deploy my instance, but from what I saw this seems to be a limitation coming from the operator.
I want to use a NodePort to make things easier, and I want to choose the port to keep it consistent between my clusters.

Allow creation of backups without delete-backup finalizers

Proposal

When enabling backups in PerconaServerMongoDB, we should be able to choose the finalizers, or at least to remove the delete-backup finalizer:

backup:
  enabled: true
  finalizers: false

or

backup:
  enabled: true
  finalizers: []

Use-Case

The credentials used to push a new backup (or even to retrieve one) should not be able to delete a backup, as this would create a security risk.
Most likely no one is even able to delete a backup, since enabling object lock should be standard practice.
In this case, the finalizer prevents the backup objects from being deleted without manually removing the now-useless finalizer first.

Is this a feature you are interested in implementing yourself?

No

Anything else?

No response

requireTLS is ignored in "unsafe" mode

Report

setting

spec:
  allowUnsafeConfigurations: true
  replsets:
    configuration: |
      net:
        tls:
          mode: requireTLS

means that requireTLS is (silently) ignored. From the code this appears to be because "unsafe" means both "fewer than 3 replicas" (I would like to use a PSA config) and also "don't use TLS certificates for mongo replica authentication".

I would suggest either splitting this flag into two to allow for a PSA config that requires TLS, or leaving it as one flag and considering a PSA configuration safe.

More about the problem

See repro steps

Steps to reproduce

  1. apply a config as above
  2. check mongo parameters in the container, observe that requireTLS is not set
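
For step 2, one hedged way to see the options mongod was actually started with (assuming mongod runs as PID 1 in the mongod container and the cluster is called my-cluster-name) is:

kubectl exec my-cluster-name-rs0-0 -c mongod -- sh -c 'tr "\0" " " < /proc/1/cmdline; echo'

With the configuration above applied, you would expect to see a TLS mode of requireTLS, but it is not there.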

Versions

  1. 1.28.3
  2. 1.15
  3. mongo 6.0.9-7

Anything else?

No response

Backup config not updated

Report

  • When the Secret for a backup storage is updated in the PerconaServerMongoDB resource, the update is not applied to the PBM backup configuration
  • When the endpointUrl of an s3 backup storage is updated, the update is not applied to the PBM backup configuration

More about the problem

Backup configuration cannot be updated via YAML

Steps to reproduce

  1. Create a cluster using PerconaServerMongoDB with an s3 backup, SecretV1 and endpointUrlV1
  2. Check the configuration in the backup-agent using pbm config
  3. Update PerconaServerMongoDB to use SecretV2 and endpointUrlV2 for the s3 backup
  4. Check the configuration in the backup-agent using pbm config
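
For the configuration checks, a hedged example command (the pod name is an assumption for a cluster called my-cluster-name) is:

kubectl exec my-cluster-name-rs0-0 -c backup-agent -- pbm config

After step 3, the output still shows the old credentials and endpointUrlV1 instead of the updated values.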

Versions

  1. Kubernetes - v1.27.7
  2. Operator - percona/percona-server-mongodb-operator:1.15.0
  3. Database - percona/percona-backup-mongodb:2.3.0

Anything else?

No response

Unable to change the replica size, Operator seems to always pick the size as 3 for the StatefulSet

Report

I have been trying to get MongoDB deployed in a Kubernetes cluster, but the value set for size under replsets doesn't seem to have any effect. The operator always creates the StatefulSet with 3 pods.

More about the problem

The operator should respect the passed size and create pods based on it.

Steps to reproduce

  1. Deploy the psmdb chart using the following values (the contents of pmdb.yml):
replsets:
  - name: rs0
    size: 2
    podSecurityContext:
      fsGroup: 1001
      runAsGroup: 1001
      runAsUser: 1001
    resources:
      limits:
        cpu: "3900m"
        memory: 14Gi
      requests:
        cpu: 2
        memory: 10Gi

Deploy psmdb:

helm install my-db percona/psmdb-db --namespace=percona -f ./deploy/mongodb/pmdb.yml
  2. Verify that the expected values are set in the psmdb resource.
  3. The operator creates the StatefulSet with 3 pods regardless of whatever is passed in size.
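
A quick way to confirm what the operator actually rendered (a sketch; the StatefulSet name is an assumption based on the my-db release name) is:

kubectl -n percona get statefulset my-db-psmdb-db-rs0 -o jsonpath='{.spec.replicas}'

This prints 3 even though size: 2 was requested. Note that with allowUnsafeConfigurations left at its default of false, the operator enforces a minimum of 3 replica set members, which may be what overrides the requested size.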

Versions

  1. Kubernetes: 1.28
  2. Operator: 1.15.0
  3. Database: MongoDB 6

Anything else?

No response

Support for accessing syslog and mounting additional volumes for audit collection

Proposal

Add the ability to configure syslog and an option to mount additional volumes, so I can use a sidecar like fluentbit to collect audit logs.

Use-Case

Currently I cannot access audit logs in syslog or file format, because I cannot reach these directories from a sidecar. Having the option to modify rsyslog.conf or to mount additional volumes on the mongod pod would allow me to collect the logs using fluentbit.

In the existing operator, we can add volumes and mounts to sidecars, but there is no extraVolumes or additionalVolumes property available for the mongod replica set itself, so I cannot expose directories for fluentbit to collect from.

For our SIEM monitoring, we have a requirement to collect audit events from MongoDB.

Is this a feature you are interested in implementing yourself?

Maybe

Anything else?

No response

Operator not using IRSA Service Account IAM Role for restore and backup reconcile

Report

Hi,
We use the operator to do automatic backups to S3 buckets on EKS, which works fine, as the backups are done by the agent container running in the mongodb pods.

We use IRSA, so the EKS pods are using the AWS IAM roles to access resources like S3 buckets, which is working fine on the agent container, but it seems that it's not working on the operator.

More about the problem

When the operator tries to list the backups it fails (for restore and reconcile):

2024-02-08T09:59:46.621Z    ERROR    failed to run finalizer    {"controller": "perconaservermongodbbackup-controller", "object": {"name":"BACKUP-S3-BUCKET",
"error": "delete files from storage: get file list: \"2024-01-09T00:47:41Z/\": get backup list: AccessDenied: Access Denied\n\tstatus code: 403

Currently the operator uses the EKS node's role: if I grant access to the node role and allow that role in the S3 bucket policy, everything works fine.

Steps to reproduce

  1. Create EKS cluster
  2. Create role/profile for the nodes running mongodb
  3. Install the Percona operator via Helm
  4. Add the IRSA annotation to the mongodb serviceaccount (or to default)
  5. Add the IRSA annotation to the operator's serviceaccount
  6. Create a backup -> works fine
  7. Restore backup -> fails
  8. Reconcile backup -> fails
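
For steps 4 and 5, a hedged example of the IRSA annotations (the service account names and the role ARN are assumptions for your environment):

kubectl annotate serviceaccount default eks.amazonaws.com/role-arn=arn:aws:iam::<account-id>:role/<backup-role>
kubectl annotate serviceaccount percona-server-mongodb-operator eks.amazonaws.com/role-arn=arn:aws:iam::<account-id>:role/<backup-role>

Even with both annotations in place, the operator's own S3 calls still fall back to the node role.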

Versions

  1. Kubernetes: 1.26
  2. Operator: 1.14.3, 1.15.0
  3. Database: 5.0.15-13

Anything else?

No response

Cannot use custom issuer to generate tls certificate

Proposal

As far as I know, the CR definition of a Percona MongoDB cluster does not provide any configuration attribute to use an existing custom Issuer to generate the TLS certificates with cert-manager. A <cluster-name>-psmdb-issuer Issuer resource is automatically generated by the operator, and the <cluster-name>-ssl and <cluster-name>-ssl-internal Certificate resources are then issued from it.

The Percona XtraDB Cluster operator provides the tls.issuerConf attribute to specify a custom Issuer for these certificates.

It would be a great feature to implement this as well in future releases of the Percona MongoDB operator, if it is not planned yet!
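
For illustration, the PXC operator's format looks roughly like the following (a sketch only; the attribute names come from the PXC CR and the issuer name is hypothetical):

tls:
  issuerConf:
    name: my-custom-issuer
    kind: Issuer
    group: cert-manager.io

An equivalent attribute in the PSMDB CR would cover this use case.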

Use-Case

No response

Is this a feature you are interested in implementing yourself?

No

Anything else?

No response

storageClassName not ending up in PVC

Hi! I want to make sure my PVCs use a StorageClass with allowVolumeExpansion, so I've added these Helm values to make sure the storage class is set:

 replsets:
- name: rs0
  volumeSpec:
    pvc:
      storageClassName: custom
sharding:
  configrs:
    volumeSpec:
      pvc:
        storageClassName: custom

And I'm seeing the pvcTemplate ending up in the corresponding statefulsets:

  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      creationTimestamp: null
      name: mongod-data
      namespace: percona
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 3Gi
      storageClassName: custom
      volumeMode: Filesystem
    status:
      phase: Pending

But unfortunately, the cfg PVCs seem to be created with the EKS default storage class:

% k get pvc
NAME                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mongod-data-mongodb-cfg-0   Bound    pvc-5c40c40a-d1ce-4636-b3e2-1e9c36755047   3Gi        RWO            gp2            64m
mongod-data-mongodb-cfg-1   Bound    pvc-d3b39739-780a-4358-8487-1db81dd9eb5e   3Gi        RWO            gp2            63m
mongod-data-mongodb-cfg-2   Bound    pvc-115a1fd6-bd83-4235-beea-f366aa18bb3d   3Gi        RWO            gp2            63m
mongod-data-mongodb-rs0-0   Bound    pvc-e966d021-f864-4122-9c3a-deb85118a252   3Gi        RWO            custom         64m
mongod-data-mongodb-rs0-1   Bound    pvc-dc2846c4-4bf1-437d-b061-2e2feb372820   3Gi        RWO            custom         63m
mongod-data-mongodb-rs0-2   Bound    pvc-6b3868e3-d880-47b6-923f-b635278f0cea   3Gi        RWO            custom         63m

Any idea why that would be? I've already deleted and redeployed a few times.

Backups/Restores are in Waiting Status after Kubernetes scheduler restarted the backup-agent container

Report

A MongoDB backup is stuck in Status: Waiting and the backup-agent container is not doing anything after the Kubernetes scheduler restarted the backup-agent container during the execution of a restore:

(Screenshot: 2024-03-06 15-57-14)

More about the problem

I expect to see an ongoing backup after asking for a backup through the PerconaServerMongoDBBackup yml definition, when other actions (backups / restores) are not in progress.

Steps to reproduce

Start a MongoDB cluster in unsafe mode with only 1 replica (this is useful for development environments) and fill it with some data (say about 600MB of gzipped data);

Do a MongoDB backup and wait for the completion (Status = Ready) with the following yml (this will upload the backup to our AWS S3 bucket):

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBBackup
metadata:
  finalizers:
    - delete-backup
  name: backup1
spec:
  clusterName: mongodb-percona-cluster
  storageName: eu-central-1
  type: logical

Drop the collections on the MongoDB replica set (just to avoid _id clashes in the next step);

Now ask for a restore of the above backup with the following yml (this works as intended since I saw the logs and the data inside MongoDB ReplicaSet):

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
  name: restore1
spec:
  clusterName: mongodb-percona-cluster
  backupName: backup1

Ask for another backup with the following yml (keep in mind that at this point the previous restore process is still in progress)

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBBackup
metadata:
  finalizers:
    - delete-backup
  name: backup2
spec:
  clusterName: mongodb-percona-cluster
  storageName: eu-central-1
  type: logical

The backup2 will be put on Status=Waiting;

At this point Kubernetes scheduler should kill the backup-agent container from the MongoDB replica pod because of memory issues and restart it;

Now if you do a kubectl get psmdb-backup, you'll see that backup2 is in Error status and if you do a kubectl get psmdb-restore, you'll see that restore1 is also in Error status (OK, I can take that);

From this point onwards, no backup/restore will be possible through any yml, because they'll be appended as Status=Waiting.

The new backup-agent container logs state that it is waiting for incoming requests:

2024/03/05 16:36:01 [entrypoint] starting `pbm-agent`
2024-03-05T16:36:05.000+0000 I pbm-agent:
Version:   2.3.0
Platform:  linux/amd64
GitCommit: 3b1c2e263901cf041c6b83547f6f28ac2879911f
GitBranch: release-2.3.0
BuildTime: 2023-09-20_14:42_UTC
GoVersion: go1.19
2024-03-05T16:36:05.000+0000 I starting PITR routine
2024-03-05T16:36:05.000+0000 I node: rs0/mongodb-percona-cluster-rs0-0.mongodb-percona-cluster-rs0.default.svc.cluster.local:27017
2024-03-05T16:36:05.000+0000 I listening for the commands
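
A hedged way to inspect what PBM itself believes is happening at this point (the pod name is an assumption based on the cluster name above) is:

kubectl exec mongodb-percona-cluster-rs0-0 -c backup-agent -- pbm status

which shows whether PBM still considers a backup or restore to be in progress.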

Versions

  1. Kubernetes version v1.27.9 in a 8 nodes cluster with 4GB of RAM each, in Azure Cloud
  2. Operator image percona/percona-server-mongodb-operator:1.15.0
  3. Database image percona/percona-server-mongodb:5.0.20-17

Anything else?

The same bug also applies to cron jobs (so it's not an issue triggered only by on-demand backup/restore requests): they are kept in Waiting status.
The bug does NOT happen when using a replica set with at least 3 members (the default topology).

Constant host unreachable errors within pod

Report

Hi all,
I've been trying to debug this for a good while now. I had a pretty large MongoDB deployment and thought I had a lot of logs simply due to its size, but after some digging it turned out that they were mostly errors. The strange thing is that the cluster still appears to work. I've removed most of the components, including sharding, backups, and PMM, but I'm still seeing the errors even with a single replica set. I've also disabled Istio and effectively turned off the firewall. I'm deploying this cluster using the Ansible helm module; below I'll paste the full config and a large sample of the logs.

More about the problem

Here is a sample of the logs:

{"t":{"$date":"2024-01-16T22:05:09.468+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn601","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":299}}
{"t":{"$date":"2024-01-16T22:05:09.468+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn601","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":444}}
{"t":{"$date":"2024-01-16T22:05:12.483+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn602","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":299}}
{"t":{"$date":"2024-01-16T22:05:12.484+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn602","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":444}}
{"t":{"$date":"2024-01-16T22:05:12.531+00:00"},"s":"D1", "c":"REPL",     "id":6208204, "ctx":"conn469","msg":"Error while waiting for hello response","attr":{"status":{"code":262,"codeName":"ExceededTimeLimit","errmsg":"operation exceeded time limit"}}}
{"t":{"$date":"2024-01-16T22:05:13.179+00:00"},"s":"D1", "c":"REPL",     "id":6208204, "ctx":"conn444","msg":"Error while waiting for hello response","attr":{"status":{"code":262,"codeName":"ExceededTimeLimit","errmsg":"operation exceeded time limit"}}}
{"t":{"$date":"2024-01-16T22:05:13.906+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn607","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":299}}
{"t":{"$date":"2024-01-16T22:05:13.906+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn607","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":444}}
{"t":{"$date":"2024-01-16T22:05:13.907+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn608","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":299}}
{"t":{"$date":"2024-01-16T22:05:13.907+00:00"},"s":"D1", "c":"REPL",     "id":6208204, "ctx":"conn606","msg":"Error while waiting for hello response","attr":{"status":{"code":279,"codeName":"ClientDisconnect","errmsg":"operation was interrupted"}}}
{"t":{"$date":"2024-01-16T22:05:13.907+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn606","msg":"User assertion","attr":{"error":"ClientDisconnect: operation was interrupted","file":"src/mongo/db/repl/replication_coordinator_impl.cpp","line":2453}}
{"t":{"$date":"2024-01-16T22:05:13.907+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn608","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":444}}
{"t":{"$date":"2024-01-16T22:05:13.908+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn606","msg":"User assertion","attr":{"error":"ClientDisconnect: operation was interrupted","file":"src/mongo/util/future_impl.h","line":1104}}
{"t":{"$date":"2024-01-16T22:05:13.908+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn606","msg":"User assertion","attr":{"error":"ClientDisconnect: operation was interrupted","file":"src/mongo/util/future_impl.h","line":1087}}
{"t":{"$date":"2024-01-16T22:05:13.908+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn606","msg":"User assertion","attr":{"error":"ClientDisconnect: operation was interrupted","file":"src/mongo/util/future_impl.h","line":1104}}
{"t":{"$date":"2024-01-16T22:05:13.908+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn606","msg":"User assertion","attr":{"error":"ClientDisconnect: operation was interrupted","file":"src/mongo/util/future_impl.h","line":1087}}
{"t":{"$date":"2024-01-16T22:05:13.908+00:00"},"s":"D1", "c":"COMMAND",  "id":21962,   "ctx":"conn606","msg":"Assertion while executing command","attr":{"command":"hello","db":"admin","commandArgs":{"hello":1,"helloOk":true,"topologyVersion":{"processId":{"$oid":"65a6fbe68c4bda018b2c0028"},"counter":13},"maxAwaitTimeMS":10000,"$db":"admin","$readPreference":{"mode":"primaryPreferred"}},"error":"ClientDisconnect: operation was interrupted"}}
{"t":{"$date":"2024-01-16T22:05:13.908+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn606","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":299}}
{"t":{"$date":"2024-01-16T22:05:13.908+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn606","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":444}}
{"t":{"$date":"2024-01-16T22:05:13.913+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn605","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":299}}
{"t":{"$date":"2024-01-16T22:05:13.913+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn604","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":299}}
{"t":{"$date":"2024-01-16T22:05:13.913+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn605","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":444}}
{"t":{"$date":"2024-01-16T22:05:13.913+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn604","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":444}}
{"t":{"$date":"2024-01-16T22:05:13.913+00:00"},"s":"D1", "c":"REPL",     "id":6208204, "ctx":"conn603","msg":"Error while waiting for hello response","attr":{"status":{"code":279,"codeName":"ClientDisconnect","errmsg":"operation was interrupted"}}}
{"t":{"$date":"2024-01-16T22:05:13.913+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn603","msg":"User assertion","attr":{"error":"ClientDisconnect: operation was interrupted","file":"src/mongo/db/repl/replication_coordinator_impl.cpp","line":2453}}
{"t":{"$date":"2024-01-16T22:05:13.913+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn603","msg":"User assertion","attr":{"error":"ClientDisconnect: operation was interrupted","file":"src/mongo/util/future_impl.h","line":1104}}
{"t":{"$date":"2024-01-16T22:05:13.913+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn603","msg":"User assertion","attr":{"error":"ClientDisconnect: operation was interrupted","file":"src/mongo/util/future_impl.h","line":1087}}
{"t":{"$date":"2024-01-16T22:05:13.913+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn603","msg":"User assertion","attr":{"error":"ClientDisconnect: operation was interrupted","file":"src/mongo/util/future_impl.h","line":1104}}
{"t":{"$date":"2024-01-16T22:05:13.913+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn603","msg":"User assertion","attr":{"error":"ClientDisconnect: operation was interrupted","file":"src/mongo/util/future_impl.h","line":1087}}
{"t":{"$date":"2024-01-16T22:05:13.913+00:00"},"s":"D1", "c":"COMMAND",  "id":21962,   "ctx":"conn603","msg":"Assertion while executing command","attr":{"command":"hello","db":"admin","commandArgs":{"hello":1,"helloOk":true,"topologyVersion":{"processId":{"$oid":"65a6fbe68c4bda018b2c0028"},"counter":13},"maxAwaitTimeMS":10000,"$db":"admin","$readPreference":{"mode":"primaryPreferred"}},"error":"ClientDisconnect: operation was interrupted"}}
{"t":{"$date":"2024-01-16T22:05:13.913+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn603","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":299}}
{"t":{"$date":"2024-01-16T22:05:13.913+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn603","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":444}}
{"t":{"$date":"2024-01-16T22:05:14.137+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn609","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":299}}
{"t":{"$date":"2024-01-16T22:05:14.137+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn609","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":444}}
{"t":{"$date":"2024-01-16T22:05:14.137+00:00"},"s":"D1", "c":"REPL",     "id":6208204, "ctx":"conn610","msg":"Error while waiting for hello response","attr":{"status":{"code":279,"codeName":"ClientDisconnect","errmsg":"operation was interrupted"}}}
{"t":{"$date":"2024-01-16T22:05:14.137+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn610","msg":"User assertion","attr":{"error":"ClientDisconnect: operation was interrupted","file":"src/mongo/db/repl/replication_coordinator_impl.cpp","line":2453}}
{"t":{"$date":"2024-01-16T22:05:14.137+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn610","msg":"User assertion","attr":{"error":"ClientDisconnect: operation was interrupted","file":"src/mongo/util/future_impl.h","line":1104}}
{"t":{"$date":"2024-01-16T22:05:14.138+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn610","msg":"User assertion","attr":{"error":"ClientDisconnect: operation was interrupted","file":"src/mongo/util/future_impl.h","line":1087}}
{"t":{"$date":"2024-01-16T22:05:14.138+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn610","msg":"User assertion","attr":{"error":"ClientDisconnect: operation was interrupted","file":"src/mongo/util/future_impl.h","line":1104}}
{"t":{"$date":"2024-01-16T22:05:14.138+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn610","msg":"User assertion","attr":{"error":"ClientDisconnect: operation was interrupted","file":"src/mongo/util/future_impl.h","line":1087}}
{"t":{"$date":"2024-01-16T22:05:14.138+00:00"},"s":"D1", "c":"COMMAND",  "id":21962,   "ctx":"conn610","msg":"Assertion while executing command","attr":{"command":"hello","db":"admin","commandArgs":{"hello":1,"helloOk":true,"topologyVersion":{"processId":{"$oid":"65a6fbe68c4bda018b2c0028"},"counter":13},"maxAwaitTimeMS":10000,"$db":"admin","$readPreference":{"mode":"primaryPreferred"}},"error":"ClientDisconnect: operation was interrupted"}}
{"t":{"$date":"2024-01-16T22:05:14.138+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn611","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":299}}
{"t":{"$date":"2024-01-16T22:05:14.138+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn611","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":444}}
{"t":{"$date":"2024-01-16T22:05:14.139+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn610","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":299}}
{"t":{"$date":"2024-01-16T22:05:14.139+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn610","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":444}}
{"t":{"$date":"2024-01-16T22:05:15.464+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn612","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":299}}
{"t":{"$date":"2024-01-16T22:05:15.464+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn612","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":444}}
"t":{"$date":"2024-01-16T22:10:25.733+00:00"},"s":"D1", "c":"ASSERT",   "id":23074,   "ctx":"conn1060","msg":"User assertion","attr":{"error":"HostUnreachable: Connection closed by peer","file":"src/mongo/transport/service_state_machine.cpp","line":444}}

The custom resource is reporting a ready status; here are the operator's logs:

2024-01-22T17:42:38.722Z    INFO    Created a new mongo key    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "26ab8173-6077-437d-9d50-6123d1c182a2", "KeyName": "kev-test-psmdb-db-mongodb-keyfile"}
2024-01-22T17:42:38.731Z    INFO    Created a new mongo key    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "26ab8173-6077-437d-9d50-6123d1c182a2", "KeyName": "kev-test-psmdb-db-mongodb-encryption-key"}
2024-01-22T17:42:38.779Z    INFO    Waiting for the pods    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "26ab8173-6077-437d-9d50-6123d1c182a2", "replset": "shard", "size": 3, "pods": 0}
2024-01-22T17:42:38.799Z    INFO    Cluster state changed    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "26ab8173-6077-437d-9d50-6123d1c182a2", "previous": "", "current": "initializing"}
2024-01-22T17:42:38.850Z    INFO    Waiting for the pods    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "af333758-e95d-4100-9f52-cfe3cbf57cdb", "replset": "shard", "size": 3, "pods": 0}
2024-01-22T17:42:43.867Z    INFO    Waiting for the pods    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "f074b1e2-166a-456d-b8c4-c27b61b0d77a", "replset": "shard", "size": 3, "pods": 1}
2024-01-22T17:42:43.947Z    INFO    Waiting for the pods    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "009f498e-7342-427e-8e6e-17b82542081d", "replset": "shard", "size": 3, "pods": 1}
2024-01-22T17:42:48.943Z    INFO    Waiting for the pods    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "987a4804-a466-4fe3-8ab1-3ae5140fd989", "replset": "shard", "size": 3, "pods": 1}
2024-01-22T17:42:53.989Z    INFO    Waiting for the pods    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "cfab78a1-aa52-4adb-a08c-a25b6b5bf0a5", "replset": "shard", "size": 3, "pods": 1}
2024-01-22T17:42:59.029Z    INFO    Waiting for the pods    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "e81e933d-7d9e-4ef5-b173-cf0bd43303ac", "replset": "shard", "size": 3, "pods": 1}
2024-01-22T17:43:04.078Z    INFO    Waiting for the pods    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "50774dcf-0a7b-4a39-b645-b9c79e266449", "replset": "shard", "size": 3, "pods": 1}
2024-01-22T17:43:09.162Z    INFO    Waiting for the pods    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "3fa6b94b-3b6b-4235-ad28-c66e97550762", "replset": "shard", "size": 3, "pods": 2}
2024-01-22T17:43:09.260Z    INFO    Waiting for the pods    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "49550521-00e6-4d8c-be77-756ca1e99cd0", "replset": "shard", "size": 3, "pods": 2}
2024-01-22T17:43:14.248Z    INFO    Waiting for the pods    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "df050785-9e5c-4a0e-b4ed-89eb420d1696", "replset": "shard", "size": 3, "pods": 2}
2024-01-22T17:43:19.312Z    INFO    Waiting for the pods    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "fe180278-1aaf-480a-a804-398c9df72e9a", "replset": "shard", "size": 3, "pods": 2}
2024-01-22T17:43:24.358Z    INFO    Waiting for the pods    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "c48ac446-c9f1-409d-91b5-f6047cc05b3c", "replset": "shard", "size": 3, "pods": 2}
2024-01-22T17:43:29.398Z    INFO    Waiting for the pods    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "ad5fc822-752e-4b4e-8f4a-183bff39c200", "replset": "shard", "size": 3, "pods": 2}
2024-01-22T17:43:34.453Z    INFO    Waiting for the pods    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "3ebe6019-550b-4ac0-b2cc-9c5e55b7a3f2", "replset": "shard", "size": 3, "pods": 2}
2024-01-22T17:43:49.532Z    INFO    initiating replset    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "0d2ff7eb-2cdc-4d08-93e1-a4dc02dc10a4", "replset": "shard", "pod": "kev-test-psmdb-db-shard-0"}
2024-01-22T17:43:58.669Z    INFO    replset initialized    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "0d2ff7eb-2cdc-4d08-93e1-a4dc02dc10a4", "replset": "shard", "pod": "kev-test-psmdb-db-shard-0"}
2024-01-22T17:43:59.205Z    INFO    Fixing member tags    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "459a4f0b-f928-4fc3-ad20-e22c5d92c4c3", "replset": "shard"}
2024-01-22T17:43:59.205Z    DEBUG    Running replSetReconfig config    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "459a4f0b-f928-4fc3-ad20-e22c5d92c4c3", "cfg": {"_id":"shard","version":2,"members":[{"_id":0,"host":"kev-test-psmdb-db-shard-0.kev-test-psmdb-db-shard.app.svc.dev.ahq:27017","arbiterOnly":false,"buildIndexes":true,"hidden":false,"priority":1,"tags":{"podName":"kev-test-psmdb-db-shard-0","serviceName":"kev-test-psmdb-db"},"secondaryDelaySecs":0,"votes":1}],"protocolVersion":1,"settings":{"chainingAllowed":true,"heartbeatIntervalMillis":2000,"heartbeatTimeoutSecs":10,"electionTimeoutMillis":10000,"catchUpTimeoutMillis":-1,"getLastErrorDefaults":{"w":1,"wtimeout":0},"replicaSetId":"65aea9578f40f2bb98bd0171"},"writeConcernMajorityJournalDefault":true}}
2024-01-22T17:43:59.209Z    INFO    Adding new nodes    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "459a4f0b-f928-4fc3-ad20-e22c5d92c4c3", "replset": "shard"}
2024-01-22T17:43:59.209Z    DEBUG    Running replSetReconfig config    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "459a4f0b-f928-4fc3-ad20-e22c5d92c4c3", "cfg": {"_id":"shard","version":3,"members":[{"_id":0,"host":"kev-test-psmdb-db-shard-0.kev-test-psmdb-db-shard.app.svc.dev.ahq:27017","arbiterOnly":false,"buildIndexes":true,"hidden":false,"priority":1,"tags":{"podName":"kev-test-psmdb-db-shard-0","serviceName":"kev-test-psmdb-db"},"secondaryDelaySecs":0,"votes":1},{"_id":1,"host":"kev-test-psmdb-db-shard-1.kev-test-psmdb-db-shard.app.svc.dev.ahq:27017","arbiterOnly":false,"buildIndexes":true,"hidden":false,"priority":2,"tags":{"podName":"kev-test-psmdb-db-shard-1","serviceName":"kev-test-psmdb-db"},"votes":1}],"protocolVersion":1,"settings":{"chainingAllowed":true,"heartbeatIntervalMillis":2000,"heartbeatTimeoutSecs":10,"electionTimeoutMillis":10000,"catchUpTimeoutMillis":-1,"getLastErrorDefaults":{"w":1,"wtimeout":0},"replicaSetId":"65aea9578f40f2bb98bd0171"},"writeConcernMajorityJournalDefault":true}}
2024-01-22T17:43:59.231Z    INFO    Configuring member votes and priorities    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "459a4f0b-f928-4fc3-ad20-e22c5d92c4c3", "replset": "shard"}
2024-01-22T17:43:59.231Z    DEBUG    Running replSetReconfig config    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "459a4f0b-f928-4fc3-ad20-e22c5d92c4c3", "cfg": {"_id":"shard","version":4,"members":[{"_id":0,"host":"kev-test-psmdb-db-shard-0.kev-test-psmdb-db-shard.app.svc.dev.ahq:27017","arbiterOnly":false,"buildIndexes":true,"hidden":false,"priority":2,"tags":{"podName":"kev-test-psmdb-db-shard-0","serviceName":"kev-test-psmdb-db"},"secondaryDelaySecs":0,"votes":1},{"_id":1,"host":"kev-test-psmdb-db-shard-1.kev-test-psmdb-db-shard.app.svc.dev.ahq:27017","arbiterOnly":false,"buildIndexes":true,"hidden":false,"priority":0,"tags":{"podName":"kev-test-psmdb-db-shard-1","serviceName":"kev-test-psmdb-db"},"votes":0}],"protocolVersion":1,"settings":{"chainingAllowed":true,"heartbeatIntervalMillis":2000,"heartbeatTimeoutSecs":10,"electionTimeoutMillis":10000,"catchUpTimeoutMillis":-1,"getLastErrorDefaults":{"w":1,"wtimeout":0},"replicaSetId":"65aea9578f40f2bb98bd0171"},"writeConcernMajorityJournalDefault":true}}
2024-01-22T17:44:04.188Z    INFO    Adding new nodes    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "ca1d90c5-512e-4d47-b455-14193e6634af", "replset": "shard"}
2024-01-22T17:44:04.188Z    DEBUG    Running replSetReconfig config    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "ca1d90c5-512e-4d47-b455-14193e6634af", "cfg": {"_id":"shard","version":6,"members":[{"_id":0,"host":"kev-test-psmdb-db-shard-0.kev-test-psmdb-db-shard.app.svc.dev.ahq:27017","arbiterOnly":false,"buildIndexes":true,"hidden":false,"priority":2,"tags":{"podName":"kev-test-psmdb-db-shard-0","serviceName":"kev-test-psmdb-db"},"secondaryDelaySecs":0,"votes":1},{"_id":1,"host":"kev-test-psmdb-db-shard-1.kev-test-psmdb-db-shard.app.svc.dev.ahq:27017","arbiterOnly":false,"buildIndexes":true,"hidden":false,"priority":0,"tags":{"podName":"kev-test-psmdb-db-shard-1","serviceName":"kev-test-psmdb-db"},"secondaryDelaySecs":0,"votes":0},{"_id":2,"host":"kev-test-psmdb-db-shard-2.kev-test-psmdb-db-shard.app.svc.dev.ahq:27017","arbiterOnly":false,"buildIndexes":true,"hidden":false,"priority":2,"tags":{"podName":"kev-test-psmdb-db-shard-2","serviceName":"kev-test-psmdb-db"},"votes":1}],"protocolVersion":1,"settings":{"chainingAllowed":true,"heartbeatIntervalMillis":2000,"heartbeatTimeoutSecs":10,"electionTimeoutMillis":10000,"catchUpTimeoutMillis":-1,"getLastErrorDefaults":{"w":1,"wtimeout":0},"replicaSetId":"65aea9578f40f2bb98bd0171"},"writeConcernMajorityJournalDefault":true}}
2024-01-22T17:44:04.216Z    INFO    Configuring member votes and priorities    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "ca1d90c5-512e-4d47-b455-14193e6634af", "replset": "shard"}
2024-01-22T17:44:04.216Z    DEBUG    Running replSetReconfig config    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "ca1d90c5-512e-4d47-b455-14193e6634af", "cfg": {"_id":"shard","version":7,"members":[{"_id":0,"host":"kev-test-psmdb-db-shard-0.kev-test-psmdb-db-shard.app.svc.dev.ahq:27017","arbiterOnly":false,"buildIndexes":true,"hidden":false,"priority":2,"tags":{"podName":"kev-test-psmdb-db-shard-0","serviceName":"kev-test-psmdb-db"},"secondaryDelaySecs":0,"votes":1},{"_id":1,"host":"kev-test-psmdb-db-shard-1.kev-test-psmdb-db-shard.app.svc.dev.ahq:27017","arbiterOnly":false,"buildIndexes":true,"hidden":false,"priority":2,"tags":{"podName":"kev-test-psmdb-db-shard-1","serviceName":"kev-test-psmdb-db"},"secondaryDelaySecs":0,"votes":1},{"_id":2,"host":"kev-test-psmdb-db-shard-2.kev-test-psmdb-db-shard.app.svc.dev.ahq:27017","arbiterOnly":false,"buildIndexes":true,"hidden":false,"priority":2,"tags":{"podName":"kev-test-psmdb-db-shard-2","serviceName":"kev-test-psmdb-db"},"votes":1}],"protocolVersion":1,"settings":{"chainingAllowed":true,"heartbeatIntervalMillis":2000,"heartbeatTimeoutSecs":10,"electionTimeoutMillis":10000,"catchUpTimeoutMillis":-1,"getLastErrorDefaults":{"w":1,"wtimeout":0},"replicaSetId":"65aea9578f40f2bb98bd0171"},"writeConcernMajorityJournalDefault":true}}
2024-01-22T17:44:16.958Z    INFO    Cluster state changed    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "de7ea753-5e72-4be0-988a-08850b8ffd9d", "previous": "initializing", "current": "ready"}
2024-01-22T17:44:17.309Z    INFO    update Mongo version to 6.0.9-7 (fetched from db)    {"controller": "psmdb-controller", "object": {"name":"kev-test-psmdb-db","namespace":"app"}, "namespace": "app", "name": "kev-test-psmdb-db", "reconcileID": "d5aee049-1b1a-4b61-a34b-d46db7cb5736"}

Steps to reproduce

- name: Deploy Percona Server for MongoDB
  kubernetes.core.helm:
    name: kev-test-mongodb
    chart_ref: percona/psmdb-db
    chart_version: "1.15.1"
    release_namespace: app
    wait: true
    wait_timeout: "10m"
    values:
      clusterServiceDNSSuffix: 'svc.{{ cluster_domain }}'
      finalizers:
        - delete-psmdb-pvc
      nameOverride: ""
      fullnameOverride: ""
      crVersion: 1.15.0
      pause: false
      unmanaged: false
      allowUnsafeConfigurations: false
      multiCluster:
        enabled: false
      updateStrategy: SmartUpdate
      # updateStrategy: RollingUpdate
      upgradeOptions:
        versionServiceEndpoint: https://check.percona.com
        apply: disabled
        schedule: "0 2 * * *"
        setFCV: false
      image:
        repository: percona/percona-server-mongodb
        tag: 6.0.9-7
      imagePullPolicy: Always
      secrets: {}
      pmm:
        enabled: false
      replsets:
        - name: shard
          size: 3
          annotations:
            sidecar.istio.io/inject: "false"
          configuration: |
            security:
              enableEncryption: false
            systemLog:
              verbosity: 1
          serviceAccountName: app
          storage:
            engine: inMemory
            inMemory:
              engineConfig:
                inMemorySizeRatio: 0.9
          podDisruptionBudget:
            maxUnavailable: 1
          expose:
            enabled: true
            exposeType: ClusterIP
          nonvoting:
            enabled: false
            size: 1
          arbiter:
            enabled: false
            size: 1
          resources:
            limits:
              cpu: "2048m"
              memory: "5.0G"
            requests:
              cpu: "300m"
              memory: "0.5G"
          volumeSpec:
            pvc:
              storageClassName: "ceph-block"
              accessModes: [ "ReadWriteOnce" ]
              resources:
                requests:
                  storage: 1Gi
      sharding:
        enabled: false
      backup:
        enabled: false

Versions

  1. Kubernetes RKE2 1.28.3
  2. Operator percona/percona-server-mongodb-operator:1.15.0
  3. Database percona/percona-server-mongodb:6.0.9-7

Anything else?

No response

configurable externalTrafficPolicy for LoadBalancer

Proposal

I would like to be able to change externalTrafficPolicy to Local for the LoadBalancer service in order to preserve the real client IP address.
It would be nice if this were configurable through the Helm chart values.
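
For illustration, externalTrafficPolicy: Local is a standard Kubernetes Service field; the manifest below is a hand-written sketch (names and labels are hypothetical), not something the chart currently renders:

apiVersion: v1
kind: Service
metadata:
  name: my-cluster-rs0-0
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # route only to local endpoints, preserving the client source IP
  selector:
    statefulset.kubernetes.io/pod-name: my-cluster-rs0-0   # hypothetical selector
  ports:
    - port: 27017
      targetPort: 27017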

Use-Case

I would like to see the real client IP address in the logs when using a LoadBalancer service.

Is this a feature you are interested in implementing yourself?

No

Anything else?

No response

Cannot scale up mongodb instances storage from the CR definition of the psmdb cluster

Proposal

If we try to increase the storage size of the replset MongoDB instances defined in the spec.replsets[*].volumeSpec.persistentVolumeClaim.resources.requests.storage or spec.sharding.configsvrReplSet.volumeSpec.persistentVolumeClaim.resources.requests.storage CR attributes, the operator is not able to update the related replset StatefulSets and PVCs, as we can see from its logs:

"Forbidden: updates to statefulset spec for fields other than …"

As far as I know, the only way to scale up the storage size of these instances is to update the related PVC definitions directly, assuming of course that these PVCs are provided by storage classes that allow volume expansion. But this is not very convenient, as the CR definition and the related StatefulSets are not updated accordingly.
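
As a sketch of that manual workaround (the PVC name follows the usual mongod-data-<cluster>-<replset>-<ordinal> pattern, but check yours, and the target size is just an example):

kubectl patch pvc mongod-data-my-cluster-name-rs0-0 -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'

This has to be repeated for every replica set member, and the CR and StatefulSet still keep the old size.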

It would be a great improvement to allow volume expansion from the CR definition, for example by creating external PVCs attached to the pods using the Claims As Volumes method (as implemented in the PostgreSQL operator) instead of creating the PVCs dynamically from the StatefulSet definition with the Volume Claim Templates method.

Use-Case

Steps to reproduce this issue :
1 - Create a psmdb cluster with one replset configured with its storage size
2 - Wait for the cluster to be ready and see the newly created PVC with the desired storage size
3 - Try to scale up the instances by increasing the spec.replsets[0].volumeSpec.persistentVolumeClaim.resources.requests.storage attribute in the CR definition
4 - See that the PVC storage size is not updated, because the operator is unable to update the related StatefulSet definition, as mentioned in its logs

Is this a feature you are interested in implementing yourself?

No

Anything else?

No response
