
mongo-k8s-sidecar's Introduction

Mongo Kubernetes Replica Set Sidecar

This project is a PoC to set up a MongoDB replica set using Kubernetes. It should handle resizing of any type and be resilient to the various conditions both MongoDB and Kubernetes can find themselves in.

How to use it

The Docker image is hosted on Docker Hub and can be found here: https://hub.docker.com/r/cvallance/mongo-k8s-sidecar/

An example Kubernetes replication controller can be found in the examples directory on GitHub: https://github.com/cvallance/mongo-k8s-sidecar

There you will also find some helper scripts to test out creating the replica set and resizing it.

Settings

The sidecar is configured through the following environment variables:

  • KUBE_NAMESPACE (optional). The namespace to look up pods in. If it is not set, pods are searched for in all namespaces.
  • MONGO_SIDECAR_POD_LABELS (required). A comma-separated list of key-value pairs matching the pod template labels. See the sidecar examples below.
  • MONGO_SIDECAR_SLEEP_SECONDS (optional, default 5). How long to sleep between work cycles, in seconds.
  • MONGO_SIDECAR_UNHEALTHY_SECONDS (optional, default 15). How many seconds a replica set member has to become healthy before it is automatically removed from the replica set.
  • MONGO_PORT (optional, default 27017). Configures the mongo port, allowing the use of non-standard ports.
  • CONFIG_SVR (optional, default false). Configures the configsvr variable when initializing the replica set.
  • KUBERNETES_MONGO_SERVICE_NAME (optional). The MongoDB Kubernetes (headless) service that identifies all the pods. It is used to set up DNS-based names for the mongo pods instead of the default pod IPs. Works only with the StatefulSets' stable network ID.
  • KUBERNETES_CLUSTER_DOMAIN (optional, default cluster.local). Allows specifying a custom cluster domain name, used when building the stable network IDs of the Kubernetes mongo pods. An example could be "kube.local".
  • MONGODB_USERNAME (optional). The mongo username the sidecar uses for authentication.
  • MONGODB_PASSWORD (optional). The mongo password the sidecar uses for authentication.
  • MONGODB_DATABASE (optional, default local). The mongo authentication database.
  • MONGO_SSL_ENABLED (optional, default false). Enables SSL for MongoDB.
  • MONGO_SSL_ALLOW_INVALID_CERTIFICATES (optional, default true). Set this to true if you want to use self-signed certificates.
  • MONGO_SSL_ALLOW_INVALID_HOSTNAMES (optional, default true). Set this to true if your certificates' FQDNs do not match the host names set in your replica set.
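
As a quick reference, a sidecar container using several of these variables might look like the following minimal sketch (the label values, service name and timeout are illustrative; fuller examples appear in the MongoDB Command section below):

        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment=prod"
            - name: KUBERNETES_MONGO_SERVICE_NAME
              value: "mongodb"
            - name: MONGO_SIDECAR_UNHEALTHY_SECONDS
              value: "30"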

In its default configuration the sidecar uses the pods' IPs for the MongoDB replica names. Here is a trimmed example:

[ { _id: 1,
   name: '10.48.0.70:27017',
   stateStr: 'PRIMARY',
   ...},
 { _id: 2,
   name: '10.48.0.72:27017',
   stateStr: 'SECONDARY',
   ...},
 { _id: 3,
   name: '10.48.0.73:27017',
   stateStr: 'SECONDARY',
   ...} ]

If you want to use the StatefulSets' stable network ID, you have to make sure that you have the KUBERNETES_MONGO_SERVICE_NAME environment variable set. Then the MongoDB replica set node names could look like this:

[ { _id: 1,
   name: 'mongo-prod-0.mongodb.db-namespace.svc.cluster.local:27017',
   stateStr: 'PRIMARY',
   ...},
 { _id: 2,
   name: 'mongo-prod-1.mongodb.db-namespace.svc.cluster.local:27017',
   stateStr: 'SECONDARY',
   ...},
 { _id: 3,
   name: 'mongo-prod-2.mongodb.db-namespace.svc.cluster.local:27017',
   stateStr: 'SECONDARY',
   ...} ]

StatefulSet name: mongo-prod. Headless service name: mongodb. Namespace: db-namespace.

Read more about stable network IDs in the Kubernetes StatefulSet documentation.

An example of a stable network pod ID looks like this: $(statefulset name)-$(ordinal).$(service name).$(namespace).svc.cluster.local. The StatefulSet name plus the ordinal form the pod name, the service name is passed via KUBERNETES_MONGO_SERVICE_NAME, the namespace is extracted from the pod metadata, and the rest is static.
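
To verify that such a name resolves inside the cluster, a quick check could look like this (a sketch using the illustrative names from the example below; assumes nslookup is available in the busybox image):

kubectl run --rm -it dns-check --image=busybox --restart=Never -- nslookup mongo-prod-0.mongodb.db-namespace.svc.cluster.local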

A thing to consider when running a cluster with the mongo-k8s-sidecar is that it prefers the StatefulSet stable network ID to the pod IP. It is, however, compatible with replica sets configured with pod IPs as identifiers: the sidecar should not add an additional entry for such a member, nor alter the existing entries. The mongo-k8s-sidecar should only use the stable network ID for new entries in the cluster.

Finally, if you have a preconfigured replica set, you have to make sure that either:

  • the names of the mongo nodes are their IPs, or
  • the names of the mongo nodes are their stable network IDs (for more info see above)

Example of compatible mongo replica names:

10.48.0.72:27017 # Uses the default pod IP name
mongo-prod-0.mongodb.db-namespace.svc.cluster.local:27017 # Uses the stable network ID

Example of not compatible mongo replica names:

mongodb-service-0 # Uses some custom k8s service name. Risks being a duplicate entry for the same mongo.

If you run the sidecar alongside such a cluster, it may lead to a broken replica set, so make sure to test it well before going to production with it (which applies to all software).
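
To check which member names an existing replica set is currently using before adding the sidecar, something like the following can help (a sketch; the pod name mongo-0 and container name mongo are illustrative):

kubectl exec mongo-0 -c mongo -- mongo --quiet --eval 'rs.conf().members.forEach(function (m) { print(m.host) })'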

MongoDB Command

The following is an example of how you would update the mongo command to enable SSL, using a certificate obtained from a secret and mounted at /data/ssl/mongodb.pem.

Command

        - name: my-mongo
          image: mongo
          command:
            - mongod
            - "--replSet"
            - heroku
            - "--bind_ip"
            - 0.0.0.0
            - "--smallfiles"
            - "--noprealloc"
            - "--sslMode"
            - "requireSSL"
            - "--sslPEMKeyFile"
            - "/data/ssl/mongodb.pem"
            - "--sslAllowConnectionsWithoutCertificates"
            - "--sslAllowInvalidCertificates"
            - "--sslAllowInvalidHostnames"

Volume & Volume Mount

          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
            - name: mongo-ssl
              mountPath: /data/ssl
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar:latest
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment=prod"
            - name: MONGO_SSL_ENABLED
              value: 'true'
      volumes:
        - name: mongo-ssl
          secret:
            secretName: mongo

Creating Secret for SSL

Use the Makefile:

  • MONGO_SECRET_NAME (optional, default mongo-ssl). The name the secret containing the SSL certificates will be created with.
  • KUBECTL_NAMESPACE (optional, default default). The namespace in which the secret containing the SSL certificates will be created.

export MONGO_SECRET_NAME=mongo-ssl
export KUBECTL_NAMESPACE=default
cd examples && make generate-certificate

or

Generate them on your own and push the secret yourself: kubectl create secret generic mongo --from-file=./keys, where keys is a directory containing your SSL PEM file named mongodb.pem.
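
For example, one way to generate a self-signed PEM with OpenSSL and push it as a secret (a sketch; the subject and validity period are illustrative):

mkdir -p keys
openssl req -x509 -newkey rsa:4096 -nodes -days 365 -subj "/CN=mongodb" -keyout keys/mongodb.key -out keys/mongodb.crt
cat keys/mongodb.key keys/mongodb.crt > keys/mongodb.pem
rm keys/mongodb.key keys/mongodb.crt
kubectl create secret generic mongo --from-file=./keys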

Debugging

TODO: Instructions for cloning, mounting and watching

Still to do

  • Add tests!
  • Add to CircleCI
  • Alter k8s call so that we don't have to filter in memory

mongo-k8s-sidecar's People

Contributors

aliartiza75, bekriebel, cvallance, diegows, donbeave, harai, ienliven, jgrenon, john-lin, jwaldrip, kikov79, phutchins, thesandlord, v-antech, yourit


mongo-k8s-sidecar's Issues

How to access mongodb server with statefulset from outside k8S cluster?

Any idea how an external user can access the mongodb service?
I tried to expose mongodb as a normal service, for example of type NodePort. I could access mongodb via the CLI or a programming interface, but I had to specify rs.slaveOk(). However, according to http://dbversity.com/mongodb-why-we-need-to-set-rs-slaveok-while-querying-on-secondaries, rs.slaveOk() may not be recommended for all use cases.
Does the user have to fall back to the old solution of one ReplicaSet/service per mongodb instance?
BTW, the service has to be created as headless first, otherwise the sidecar fails to initialize the mongodb instances. The user can update the service to NodePort or another type later to access mongodb externally.
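
For reference, one way to expose a single replica externally on newer Kubernetes versions is an additional NodePort service per pod that selects on the statefulset.kubernetes.io/pod-name label added by the StatefulSet controller; a sketch with illustrative names:

apiVersion: v1
kind: Service
metadata:
  name: mongo-0-external
spec:
  type: NodePort
  selector:
    statefulset.kubernetes.io/pod-name: mongo-0   # label set by the StatefulSet controller
  ports:
    - port: 27017
      targetPort: 27017
      nodePort: 30017   # illustrative; must fall within the cluster's NodePort range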

Split brain

The current use of the sidecar leaves the cluster vulnerable to split-brain due to network partitions.

Would it be possible to introduce another environment variable to the sidecar for EXPECTED_CLUSTER_SIZE? This would keep the sidecar from removing replicas during a network partition and still allow it to resize dynamically when new replicas are added or removed.

If you are not aware, this blog post discusses the side-car and split-brain issue:
http://pauldone.blogspot.co.uk/2017/06/deploying-mongodb-on-kubernetes-gke25.html

Unauthorized

Hello,

I am trying out this sidecar, which sounds like a dream :-) I am having issues, however. I am doing it with StatefulSet.

I am trying to run the sidecar with docker image mongo:2. My current environment is based on coreos-vagrant multi and some glusterFS for persistent storage.

Starting the sidecar container, I get this:

npm info it worked if it ends with ok
npm info using [email protected]
npm info using [email protected]
npm info lifecycle [email protected]~prestart: [email protected]
npm info lifecycle [email protected]~start: [email protected]

> [email protected] start /opt/cvallance/mongo-k8s-sidecar
> forever src/index.js

warn:    --minUptime not set. Defaulting to: 1000ms
warn:    --spinSleepTime not set. Your script will exit if it does not stay up for at least 1000ms
Starting up mongo-k8s-sidecar
Error in workloop { [Error: Unauthorized
] message: 'Unauthorized\n', statusCode: 401 }
Error in workloop { MongoError: can't get local.system.replset config from self or any seed (EMPTYCONFIG)
    at Function.MongoError.create (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/error.js:31:11)
    at /opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:483:72
    at authenticateStragglers (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:429:16)
    at Connection.messageHandler (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:463:5)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:319:22)
    at emitOne (events.js:96:13)
    at Socket.emit (events.js:191:7)
    at readableAddChunk (_stream_readable.js:176:18)
    at Socket.Readable.push (_stream_readable.js:134:10)
    at TCP.onread (net.js:554:20)
  name: 'MongoError',
  message: 'can\'t get local.system.replset config from self or any seed (EMPTYCONFIG)',
  startupStatus: 3,
  info: 'run rs.initiate(...) if not yet done for the set',
  ok: 0,
  errmsg: 'can\'t get local.system.replset config from self or any seed (EMPTYCONFIG)' }

If I kubectl exec into one of the mongo containers and initiate the replica set myself:

> rs.initiate()
{
        "info2" : "no configuration explicitly specified -- making one",
        "me" : "mongo-0:27017",
        "info" : "Config now saved locally.  Should come online in about a minute.",
        "ok" : 1
}

which then triggers the following log in the sidecar:

Addresses to add:    [ '10.2.19.8:27017', '10.2.93.8:27017', '10.2.63.9:27017' ]
Addresses to remove: []
Error in workloop { [Error: Unauthorized
] message: 'Unauthorized\n', statusCode: 401 }
Error in workloop { MongoError: no such cmd: replSetGetConfig
    at Function.MongoError.create (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/error.js:31:11)
    at /opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:483:72
    at authenticateStragglers (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:429:16)
    at Connection.messageHandler (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:463:5)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:319:22)
    at emitOne (events.js:96:13)
    at Socket.emit (events.js:191:7)
    at readableAddChunk (_stream_readable.js:176:18)
    at Socket.Readable.push (_stream_readable.js:134:10)
    at TCP.onread (net.js:554:20)
  name: 'MongoError',
  message: 'no such cmd: replSetGetConfig',
  ok: 0,
  errmsg: 'no such cmd: replSetGetConfig',
  code: 59,
  'bad cmd': { replSetGetConfig: 1 } }

I am currently investigating, yet I very much welcome pointers if you have any.

I will post here any findings (and possible mistakes) I find.

Thank you!
fred

How to use different storage classes for StatefulSet example

I followed the blog http://blog.kubernetes.io/2017/01/running-mongodb-on-kubernetes-with-statefulsets.html on Azure and it is all working well, but how can I connect the different replicas of the mongo StatefulSet to different storage classes? Right now every replica connects to the same storage class / storage account in Azure, so if there are any issues with that account it can affect the whole replica set.

Not sure if here is the right place to create the issue - maybe that is something for the Kubernetes repo itself. Just let me know.

External access

Is there a way to access the single mongo instances externally somehow?
Maybe using the kube-proxy or a service?

Move to StatefulSets

I'm working on moving the example deployments to Stateful Sets instead of Replication Controllers. This should simplify the setup and provide more stability. From my initial testing, it seems like we still need a sidecar to do the configuration though.

Right now, the Makefile creates the individual volumes, a RC, and a Service per MongoDB Replica, along with a "master" service for all the MongoDB pods.

Under the new system, we can use a Volume Provisioner to dynamically create disks for a TON of platforms out of the box. Then we use a single Stateful Set and a single headless service. Each MongoDB replica will have its own DNS address and volume created automatically.

As an added bonus, using a Volume Provisioner means there is no longer a need for different steps for different cloud providers, etc.

Fortunately, I think this can be done with no code changes to the actual sidecar. The current method the sidecar uses to find the other members of the Replica Set should still work.

I'm working on a PR, but wanted to run it by the community first for comments.

Istio issues with TLS

I've been deploying Mongo with the sidecar using StatefulSets and Istio.io. It works well, but there is an issue with istio-auth, which forces TLS between pods; I reported this in the Istio Users group [1].

TLS should be handled by Istio Envoy, but somehow the sidecar complains about SSL issues connecting to the other pods. I'll take a close look at this, but let me know if somebody already noticed.

[1] https://groups.google.com/forum/#!topic/istio-users/5fO83wx2H1M

Logs
Error in workloop { Error: socket hang up
at TLSSocket.onHangUp (_tls_wrap.js:1120:19)
at Object.onceWrapper (events.js:293:19)
at emitNone (events.js:91:20)
at TLSSocket.emit (events.js:188:7)
at endReadableNT (_stream_readable.js:975:12)
at _combinedTickCallback (internal/process/next_tick.js:80:11)
at process._tickCallback (internal/process/next_tick.js:104:9) code: 'ECONNRESET' }

Memory leak on fail state [Not critical]

Steps to reproduce

  1. Run the sidecar with MONGO_SIDECAR_POD_LABELS set to a value which does not select any pods. In my case I used ; instead of , to delimit the labels ;-)
  2. Wait for the memory consumption of the sidecar to pile up. Roughly 500kb per 15 minutes

This is not critical but forced me to restart my box because there was no memory left to ssh.

Connection issue due to host names

Hi,

I successfully deployed a cluster with your example, and it works almost fine. I tested the connection with the C# and Java drivers and both seem to have the same problem:

This is a quote from the C# documentation:

Even when providing a seed list with resolvable host names, if the replica set configuration uses unresolvable host names, the driver will fail to connect. 

http://mongodb.github.io/mongo-csharp-driver/2.2/reference/driver/error_handling/

Both drivers query the replica set config by themselves. Would it be possible to configure the replica set with host names instead of IPs?
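
For reference, the Settings section above describes KUBERNETES_MONGO_SERVICE_NAME for exactly this case: with a StatefulSet and a headless service, the sidecar registers stable DNS names instead of pod IPs. A minimal sketch of the sidecar environment (the service name mongo is illustrative):

        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo"
            - name: KUBERNETES_MONGO_SERVICE_NAME
              value: "mongo"   # name of the headless service governing the StatefulSet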

sidecar process error


2017-06-06T08:00:49.961491000Z npm info it worked if it ends with ok
2017-06-06T08:00:49.961724000Z npm info using [email protected]
2017-06-06T08:00:49.961901000Z npm info using [email protected]
2017-06-06T08:00:50.190941000Z npm info lifecycle [email protected]~prestart: [email protected]
2017-06-06T08:00:50.199392000Z npm info lifecycle [email protected]~start: [email protected]
2017-06-06T08:00:50.204128000Z > [email protected] start /opt/cvallance/mongo-k8s-sidecar
2017-06-06T08:00:50.204323000Z > forever src/index.js
2017-06-06T08:00:50.455072000Z warn:    --minUptime not set. Defaulting to: 1000ms
2017-06-06T08:00:50.456001000Z warn:    --spinSleepTime not set. Your script will exit if it does not stay up for at least 1000ms
2017-06-06T08:00:50.672112000Z Using mongo port: 27017
2017-06-06T08:00:50.756149000Z fs.js:584
2017-06-06T08:00:50.756399000Z   return binding.open(pathModule._makeLong(path), stringToFlags(flags), mode);
2017-06-06T08:00:50.756579000Z                  ^
2017-06-06T08:00:50.756915000Z Error: ENOENT: no such file or directory, open '/var/run/secrets/kubernetes.io/serviceaccount/token'
2017-06-06T08:00:50.757077000Z     at Object.fs.openSync (fs.js:584:18)
2017-06-06T08:00:50.757240000Z     at Object.fs.readFileSync (fs.js:491:33)
2017-06-06T08:00:50.757415000Z     at Object.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/src/lib/k8s.js:7:20)
2017-06-06T08:00:50.757577000Z     at Module._compile (module.js:571:32)
2017-06-06T08:00:50.757743000Z     at Object.Module._extensions..js (module.js:580:10)
2017-06-06T08:00:50.757902000Z     at Module.load (module.js:488:32)
2017-06-06T08:00:50.758070000Z     at tryModuleLoad (module.js:447:12)
2017-06-06T08:00:50.758227000Z     at Function.Module._load (module.js:439:3)
2017-06-06T08:00:50.758397000Z     at Module.require (module.js:498:17)
2017-06-06T08:00:50.758553000Z     at require (internal/module.js:20:19)
2017-06-06T08:00:50.761551000Z error: Forever detected script exited with code: 1

Is there an environment variable to support the k8s config?
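
For reference, the sidecar reads the in-cluster service account token from that path (see src/lib/k8s.js in the stack trace), so the pod needs a service account token mounted. A minimal pod-spec sketch (assuming the default service account exists and token automounting is available on your cluster):

    spec:
      serviceAccountName: default
      automountServiceAccountToken: true   # ensures the token is mounted under /var/run/secrets/kubernetes.io/serviceaccount/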

Great project! Any way to use it with persistentVolumes

For example NFS.

The problem is that all mongo containers in each pod share the same directory.
I have been desperate to parameterize the volumePath in the replication controller and haven't found any solution yet!

Can you be of any help ?

Many thanks!

Eric

Master without slaves

Hi:

Deploying a mongo replica set with the sidecar raised the following problem for me:

The master is configured properly, but the sidecar can't add the slave nodes because of a NewReplicaSetConfigurationIncompatible error. The master works fine, I can insert and read data, but of course I can't read this data from the slave nodes because they don't appear in the replica set configuration as slaves.

The logs say that "The hosts mongo-0:27017 and 10.36.0.4:27017 all map to this node in new configuration version 2 for replica set rs0", but the IP and the hostname mongo-0 are the same node.

Below are the logs from the sidecar container and the mongo instances, plus the IPs of the mongo instances (they don't run on the same node):

root@s-smartc2-zprei:/opt/kubernetes/mongodb# kubectl get po -o wide
NAME                   READY     STATUS    RESTARTS   AGE       IP          NODE
mongo-0                2/2       Running   0          47m       10.36.0.4   s-smartc3-zprei
mongo-1                2/2       Running   0          47m       10.44.0.2   s-smartc4-zprei
mongo-2                2/2       Running   0          46m       10.32.0.5   s-smartc2-zprei
root@s-smartc2-zprei:/opt/kubernetes/mongodb#

sidecar logs:

Error in workloop { MongoError: The hosts mongo-0:27017 and 10.36.0.4:27017 all map to this node in new configuration version 2 for replica set rs0
    at Function.MongoError.create (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/error.js:31:11)
    at /opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:489:72
    at authenticateStragglers (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:435:16)
    at Connection.messageHandler (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:469:5)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:321:22)
    at emitOne (events.js:96:13)
    at Socket.emit (events.js:191:7)
    at readableAddChunk (_stream_readable.js:178:18)
    at Socket.Readable.push (_stream_readable.js:136:10)
    at TCP.onread (net.js:561:20)
  name: 'MongoError',
  message: 'The hosts mongo-0:27017 and 10.36.0.4:27017 all map to this node in new configuration version 2 for replica set rs0',
  ok: 0,
  errmsg: 'The hosts mongo-0:27017 and 10.36.0.4:27017 all map to this node in new configuration version 2 for replica set rs0',
  code: 103,
  codeName: 'NewReplicaSetConfigurationIncompatible' }

Status of 3 mongo instance:

root@s-smartc2-zprei:/opt/kubernetes/mongodb# for i in `seq 0 2`; do kubectl exec mongo-$i -- sh -c '/usr/bin/mongo --eval="printjson(rs.isMaster())"'; done
Defaulting container name to mongo.
Use 'kubectl describe pod/mongo-0' to see all of the containers in this pod.
MongoDB shell version v3.4.8
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.8
{
        "hosts" : [
                "mongo-0:27017"
        ],
        "setName" : "rs0",
        "setVersion" : 1,
        "ismaster" : true,
        "secondary" : false,
        "primary" : "mongo-0:27017",
        "me" : "mongo-0:27017",
        "electionId" : ObjectId("59b283bd98a268a30af4eeb3"),
        "lastWrite" : {
                "opTime" : {
                        "ts" : Timestamp(1504872388, 1),
                        "t" : NumberLong(-1)
                },
                "lastWriteDate" : ISODate("2017-09-08T12:06:28Z")
        },
        "maxBsonObjectSize" : 16777216,
        "maxMessageSizeBytes" : 48000000,
        "maxWriteBatchSize" : 1000,
        "localTime" : ISODate("2017-09-08T12:06:36.437Z"),
        "maxWireVersion" : 5,
        "minWireVersion" : 0,
        "readOnly" : false,
        "ok" : 1
}
Defaulting container name to mongo.
Use 'kubectl describe pod/mongo-1' to see all of the containers in this pod.
MongoDB shell version v3.4.8
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.8
{
        "ismaster" : **false**,
        "secondary" : **false**,
        "info" : "**Does not have a valid replica set config**",
        "isreplicaset" : true,
        "maxBsonObjectSize" : 16777216,
        "maxMessageSizeBytes" : 48000000,
        "maxWriteBatchSize" : 1000,
        "localTime" : ISODate("2017-09-08T12:06:33.620Z"),
        "maxWireVersion" : 5,
        "minWireVersion" : 0,
        "readOnly" : false,
        "ok" : 1
}
Defaulting container name to mongo.
Use 'kubectl describe pod/mongo-2' to see all of the containers in this pod.
MongoDB shell version v3.4.8
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.8
{
        "ismaster" : **false**,
        "secondary" : **false**,
        "info" : "**Does not have a valid replica set config**",
        "isreplicaset" : true,
        "maxBsonObjectSize" : 16777216,
        "maxMessageSizeBytes" : 48000000,
        "maxWriteBatchSize" : 1000,
        "localTime" : ISODate("2017-09-08T12:06:37.636Z"),
        "maxWireVersion" : 5,
        "minWireVersion" : 0,
        "readOnly" : false,
        "ok" : 1
}
root@s-smartc2-zprei:/opt/kubernetes/mongodb#

Output for members that don't have a master or slave state:

> rs.status()
{
        "info" : "run rs.initiate(...) if not yet done for the set",
        "ok" : 0,
        "errmsg" : "**no replset config has been received**",
        "code" : 94,
        "codeName" : "**NotYetInitialized**"
}
>

Error during podElection with undefined podIP

We have been trying to identify some intermittent issues with our MongoDB cluster running on stateful sets using this sidecar. In our logs leading up to some of our issues, we are seeing the following logs.

I am not even sure where to begin to identify what could be causing this, and I haven't seen any issues related to this.

TypeError: Cannot read property 'split' of undefined at Object.toInt [as toLong] 
(/opt/cvallance/mongo-k8s-sidecar/node_modules/ip/lib/ip.js:364:5) at /opt/cvallance/mongo-k8s-
sidecar/src/lib/worker.js:215:21 at Array.sort (native) at podElection (/opt/cvallance/mongo-k8s-
sidecar/src/lib/worker.js:213:8) at inReplicaSet (/opt/cvallance/mongo-k8s-
sidecar/src/lib/worker.js:124:25) at /opt/cvallance/mongo-k8s-sidecar/src/lib/worker.js:85:7 at 
/opt/cvallance/mongo-k8s-sidecar/src/lib/mongo.js:68:12 at /opt/cvallance/mongo-k8s-
sidecar/node_modules/mongodb/lib/admin.js:77:31 at handleCallback (/opt/cvallance/mongo-k8s-
sidecar/node_modules/mongodb/lib/utils.js:120:56) at /opt/cvallance/mongo-k8s-
sidecar/node_modules/mongodb/lib/db.js:995:7

This points at the podElection(...) function defined at https://github.com/cvallance/mongo-k8s-sidecar/blob/master/src/lib/worker.js#L214.

Does not work well with stateful sets.

I have pods that are in a removed state from the replica set. This leaves the cluster in an unusable state. Why doesn't the sidecar automatically remove these nodes?

Error in workloop Quorum check failed HostUnreachable

I'm having an issue getting the sidecar to see my two MongoDB replicas.

Error in workloop { MongoError: Quorum check failed because not enough voting nodes responded; required 3 but only the following 1 voting nodes responded: mongodb-0:27017; the following nodes did not respond affirmatively: mongodb-0.mongodb.stage-sprint.svc.cluster.local:27017 failed with HostUnreachable, mongodb-1.mongodb.stage-sprint.svc.cluster.local:27017 failed with HostUnreachable, mongodb-2.mongodb.stage-sprint.svc.cluster.local:27017 failed with HostUnreachable
at Function.MongoError.create (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/error.js:31:11)
at /opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:497:72
at authenticateStragglers (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:443:16)
at Connection.messageHandler (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:477:5)
at Socket. (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:331:22)
at Socket.emit (events.js:159:13)
at addChunk (_stream_readable.js:265:12)
at readableAddChunk (_stream_readable.js:252:11)
at Socket.Readable.push (_stream_readable.js:209:10)
at TCP.onread (net.js:598:20)
name: 'MongoError',
message: 'Quorum check failed because not enough voting nodes responded; required 3 but only the following 1 voting nodes responded: mongodb-0:27017; the following nodes did not respond affirmatively: mongodb-0.mongodb.stage-sprint.svc.cluster.local:27017 failed with HostUnreachable, mongodb-1.mongodb.stage-sprint.svc.cluster.local:27017 failed with HostUnreachable, mongodb-2.mongodb.stage-sprint.svc.cluster.local:27017 failed with HostUnreachable',
ok: 0,
errmsg: 'Quorum check failed because not enough voting nodes responded; required 3 but only the following 1 voting nodes responded: mongodb-0:27017; the following nodes did not respond affirmatively: mongodb-0.mongodb.stage-sprint.svc.cluster.local:27017 failed with HostUnreachable, mongodb-1.mongodb.stage-sprint.svc.cluster.local:27017 failed with HostUnreachable, mongodb-2.mongodb.stage-sprint.svc.cluster.local:27017 failed with HostUnreachable',
code: 74,
codeName: 'NodeNotFound',

Here's my Kubernetes yaml configuration that defines my resources.

# Based off of http://blog.kubernetes.io/2017/01/running-mongodb-on-kubernetes-with-statefulsets.html 
# and https://github.com/cvallance/mongo-k8s-sidecar/tree/master/example/StatefulSet
#
# Learn more about StatefulSets, PersistentVolumes and PersistentVolumeClaims:
# https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
#

# Based off of https://github.com/cvallance/mongo-k8s-sidecar/blob/master/example/StatefulSet/azure_ssd.yaml

# Define the Azure volume storage class for use by our customer MongoDB nodes
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: mongodb-storage
  namespace: stage-sprint
  labels:
    app: mongodb
    tier: storage
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Premium_LRS
  location: centralus
  storageAccount: <storage_acct>
---

# Create a headless service so each MongoDB pod will get a stable hostname (note: this does NOT load balance)
apiVersion: v1
kind: Service
metadata:
  name: mongodb
  namespace: stage-sprint
  labels:
    app: mongodb
    tier: svc
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    app: mongodb
    tier: db
---

# Create the stateful set for the MongoDB nodes
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongodb
  namespace: stage-sprint
spec:
  serviceName: "mongodb"
  replicas: 3
  template:
    metadata:
      labels:
        app: mongodb
        tier: db
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo:latest
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--smallfiles"
            - "--noprealloc"
            - "--bind_ip_all"
          ports:
            - containerPort: 27017
          #imagePullPolicy: Always
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "app=mongodb,tier=db"
            - name: KUBE_NAMESPACE
              value: "stage-sprint"
            - name: KUBERNETES_MONGO_SERVICE_NAME
              value: "mongodb"
      imagePullSecrets:
        - name: regsecret
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "mongodb-storage"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi

Did anybody ever manage to set this up?

I have been trying for 2 weeks now, but

Unhandled rejection MongoError: failed to connect to server [mongo-0:27017] on first connect [MongoError: getaddrinfo ENOTFOUND mongo-0 mongo-0:27017]

is the only outcome I get. I tried

mongo-0
mongo-0.default.svc.cluster.local
mongo-0.mongo.default.svc.cluster.local

I have set

    - name: KUBERNETES_MONGO_SERVICE_NAME
      value: mongo

GCE - K8s 1.8 - pods is forbidden - Cannot list pods - Unknown user "system:serviceaccount:default:default"

I used the mongo-k8s-sidecar for some while and everything worked fine.

Today I created a new cluster with 1.8.7-gke.1 and deployed MongoDB with the sidecar.
As a result the pods don't get connected (see below).
When I recreate the cluster with k8s 1.7.12-gke.1 everything works fine.

From the sidecar log when failing:
message: 'pods is forbidden: User "system:serviceaccount:default:default" cannot list pods at the cluster scope: Unknown user "system:serviceaccount:default:default"',

mongo-sidecar | Feb 28, 2018, 11:04:19 AM | status: 'Failure',
mongo-sidecar | Feb 28, 2018, 11:04:19 AM | metadata: {},
mongo-sidecar | Feb 28, 2018, 11:04:19 AM | apiVersion: 'v1',
mongo-sidecar | Feb 28, 2018, 11:04:19 AM | { kind: 'Status',
mongo-sidecar | Feb 28, 2018, 11:04:19 AM | message:
mongo-sidecar | Feb 28, 2018, 11:04:19 AM | Error in workloop { [Error: [object Object]]
mongo-sidecar | Feb 28, 2018, 11:04:14 AM | statusCode: 403 }
mongo-sidecar | Feb 28, 2018, 11:04:14 AM | code: 403 },
mongo-sidecar | Feb 28, 2018, 11:04:14 AM | details: { kind: 'pods' },
mongo-sidecar | Feb 28, 2018, 11:04:14 AM | reason: 'Forbidden',

Any hints on what has changed in the GCE permission system? Is there something I need to configure?

configuration

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-lp
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo-lp
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo-lp-statefulset
spec:
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo-lp
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo:3.4.9
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage-lp
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo-lp,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage-lp
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 32Gi
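
For reference, GKE clusters on Kubernetes 1.8 typically have RBAC enabled by default, so the service account the sidecar runs under needs explicit permission to list pods. A minimal sketch (the role and binding names are illustrative; the subject matches the default service account from the error message above):

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mongo-pod-read
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mongo-pod-read
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: mongo-pod-read
  apiGroup: rbac.authorization.k8s.io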

StatefulSets example: Four MongoDB Warnings

Hi,

The StatefulSets example, when run exactly as described in the readme,
shows the following warnings (when connecting to mongo shell).

Please, which of these should be taken seriously and how can they be avoided?

** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
**          See http://dochub.mongodb.org/core/prodnotes-filesystem

** WARNING: Access control is not enabled for the database.
**          Read and write access to data and configuration is unrestricted.
** WARNING: You are running this process as the root user, which is not recommended.


** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
**        We suggest setting it to 'never'

Synergies

I think there might be some interesting synergies with the helm mongodb-replicaset incubator project.

Any thoughts about this?

Edit: Link updated

Restart of Statefulset deletes data

Hello Team,

This is great work. I am using the MongoDB sidecar with StatefulSets, but when I delete the StatefulSet and create it again, it deletes the databases which we created. I am using local hostPath storage. It should continue using the existing replica config and databases instead of starting a fresh cluster. Am I missing any config to handle this issue?

Thanks
Asif J.

Setting up credentials

Hi,

I'm trying to create a username and a password by passing them into the yaml. The values are stored in a secret. Mongo doesn't seem to set them up; the database is set up without credentials. What am I doing wrong?

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: production
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment=production"
            - name: MONGODB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongosecret
                  key: username
            - name: MONGODB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongosecret
                  key: password
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "mongodb-fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi

MongoDB replica set with authentication fails to start

How do we setup authentication or define a root user?

So far I have tried:

  1. Setting MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD environment variables on mongo container. Mongo container fails to start with "Error: couldn't add user: not master :" which is fair since we pass --replSet flag and it tries adding a user before rs.initiate();
  2. Passing --auth flag to mongod and running post start script on container to add a root user:
sleep 5;
mongo --eval 'rs.initiate()';
mongo --eval 'db=db.getSiblingDB("admin");db.createUser({ user: "root", pwd: "root", roles: [{ role: "root", db: "admin" }]})';

The user is added successfully but sidecar fails with:
The hosts mongo-0:27017 and 10.0.12.57:27017 all map to this node in new configuration version 2 for replica set rs0
as it tries to add the same replica again as mongo-0 is 10.0.12.57.

So what's the proper way of enabling authentication?
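
For reference, and independent of this sidecar, the usual way to run a MongoDB replica set with authentication is internal keyFile authentication: generate a shared key, store it in a secret, mount it into the mongo container, and start mongod with --auth --keyFile pointing at the mounted file, while giving the sidecar MONGODB_USERNAME/MONGODB_PASSWORD (see the Settings section above) so it can still connect. A sketch of the key generation (names are illustrative; key file ownership/permissions may need an fsGroup or init container depending on the image):

openssl rand -base64 741 > mongodb-keyfile
kubectl create secret generic mongo-key --from-file=mongodb-keyfile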

Replica member coup when Set ID is lost

I am not sure how to reproduce this, but this was a result of scaling mongo up and down multiple times via the kubectl scale --replicas=X statefulset mongo command.

I was running tests and noticed performance went out the window until I checked the pod logs to find the logs being absolutely flooded with these messages (hundreds, potentially more per second, per pod).

2017-03-28T12:09:44.891097041Z 2017-03-28T12:09:44.874+0000 I REPL [ReplicationExecutor] Error in heartbeat request to 172.17.0.18:27017; InvalidReplicaSetConfig: replica set IDs do not match, ours: 58d8c51ad5bb19757ac841d2; remote node's: 58da2c6a046a5427009f1336

This results in the pods staging repeated coups and electing themselves new masters constantly. I can see from my connections the masters cycle through all of the mongo pods running (I am running 4 currently) and switches shortly after.

If it somehow helps, I attach the sidecar output from mongo-0 below. I am honestly not sure if this is an issue with the sidecar, or even preventable. I just was mainly posting here to see if anyone has noticed this before as a result of continuous scaling up and down during testing phases.

2017-03-28T12:18:07.541385644Z tags: {},
2017-03-28T12:18:07.541388008Z slaveDelay: 0,
2017-03-28T12:18:07.541390261Z votes: 1 },
2017-03-28T12:18:07.541392746Z { _id: 19,
2017-03-28T12:18:07.541395025Z host: '172.17.0.18:27017',
2017-03-28T12:18:07.541397295Z arbiterOnly: false,
2017-03-28T12:18:07.541399560Z buildIndexes: true,
2017-03-28T12:18:07.541401820Z hidden: false,
2017-03-28T12:18:07.541404048Z priority: 1,
2017-03-28T12:18:07.541406223Z tags: {},
2017-03-28T12:18:07.541408426Z slaveDelay: 0,
2017-03-28T12:18:07.541410585Z votes: 1 },
2017-03-28T12:18:07.541412839Z { _id: 20, host: '172.17.0.6:27017' },
2017-03-28T12:18:07.541415202Z { _id: 21, host: '172.17.0.7:27017' } ],
2017-03-28T12:18:07.541417480Z settings:
2017-03-28T12:18:07.541419689Z { chainingAllowed: true,
2017-03-28T12:18:07.541421902Z heartbeatIntervalMillis: 2000,
2017-03-28T12:18:07.541424141Z heartbeatTimeoutSecs: 10,
2017-03-28T12:18:07.541426397Z electionTimeoutMillis: 10000,
2017-03-28T12:18:07.541428688Z catchUpTimeoutMillis: 2000,
2017-03-28T12:18:07.541430923Z getLastErrorModes: {},
2017-03-28T12:18:07.541433482Z getLastErrorDefaults: { w: 1, wtimeout: 0 },
2017-03-28T12:18:07.541435856Z replicaSetId: 58da2c6a046a5427009f1336 } }
2017-03-28T12:18:07.556109862Z Error in workloop { MongoError: Our replica set ID of 58da2c6a046a5427009f1336 did not match that of 172.17.0.6:27017, which is 58d8c51ad5bb19757ac841d2
2017-03-28T12:18:07.556134171Z at Function.MongoError.create (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/error.js:31:11)
2017-03-28T12:18:07.556140132Z at /opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:483:72
2017-03-28T12:18:07.556144641Z at authenticateStragglers (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:429:16)
2017-03-28T12:18:07.556149183Z at Connection.messageHandler (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:463:5)
2017-03-28T12:18:07.556153798Z at Socket. (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:319:22)
2017-03-28T12:18:07.556159526Z at emitOne (events.js:96:13)
2017-03-28T12:18:07.556163791Z at Socket.emit (events.js:191:7)
2017-03-28T12:18:07.556179163Z at readableAddChunk (_stream_readable.js:176:18)
2017-03-28T12:18:07.556183624Z at Socket.Readable.push (_stream_readable.js:134:10)
2017-03-28T12:18:07.556187840Z at TCP.onread (net.js:554:20)
2017-03-28T12:18:07.556192136Z name: 'MongoError',
2017-03-28T12:18:07.556196928Z message: 'Our replica set ID of 58da2c6a046a5427009f1336 did not match that of 172.17.0.6:27017, which is 58d8c51ad5bb19757ac841d2',
2017-03-28T12:18:07.556201988Z ok: 0,
2017-03-28T12:18:07.556206224Z errmsg: 'Our replica set ID of 58da2c6a046a5427009f1336 did not match that of 172.17.0.6:27017, which is 58d8c51ad5bb19757ac841d2',
2017-03-28T12:18:07.556210736Z code: 103,
2017-03-28T12:18:07.556214806Z codeName: 'NewReplicaSetConfigurationIncompatible' }

Authenticated cluster

Hi,
I have used the sidecar application to see the automatic configuration of a replica set.
The mongod instances are started in non-auth mode, and I wonder whether you have looked at this.
I tried to create an admin account, then I changed the template to include --auth and restarted the containers. It does not seem to work.
Thanks for your answers.

Duplicate member entry results in error

I tried to create a three-member replica set on GKE (Kubernetes 1.1.7), but the last two MongoDB instances couldn't join the replica set.

After struggling, I found the following error:

Error in workloop { [MongoError: The hosts mongo-1-ag3nf:27017 and 10.96.2.3:27017 all map to this node in new configuration version 2 for replica set rs0]
  name: 'MongoError',
  message: 'The hosts mongo-1-ag3nf:27017 and 10.96.2.3:27017 all map to this node in new configuration version 2 for replica set rs0',
  ok: 0,
  errmsg: 'The hosts mongo-1-ag3nf:27017 and 10.96.2.3:27017 all map to this node in new configuration version 2 for replica set rs0',
  code: 103 }

It seems the IP address 10.96.2.3 was compared against its host name mongo-1-ag3nf, which resulted in a duplicate entry.

Is it possible to avoid this problem?

MongoError: Found two member configurations with same host field

Hi, guys!

We just deployed some mongo replicas on our cluster, but some mongo-sidecar instances are returning this error below:

Error in workloop { MongoError: Found two member configurations with same host field, members.0.host == members.2.host == 10.32.2.59:27017
    at Function.MongoError.create (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/error.js:31:11)
    at /opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:489:72
    at authenticateStragglers (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:435:16)
    at Connection.messageHandler (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:469:5)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:321:22)
    at emitOne (events.js:96:13)
    at Socket.emit (events.js:191:7)
    at readableAddChunk (_stream_readable.js:178:18)
    at Socket.Readable.push (_stream_readable.js:136:10)
    at TCP.onread (net.js:561:20)

We have pairs of 2 Mongo instances on each service; one of them shows up in the Mongo console with the label SECONDARY and the other with the label OTHER (status REMOVED from rs.conf()). Our Mongo servers are v3.4.6.

Our Kubernetes is on GKE:

Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.1-gke.0", GitCommit:"841411e511bfb1ccd78e76bf8633d4515061e18b", GitTreeState:"clean", BuildDate:"2017-07-14T17:52:01Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Anyone knows what could be occasioning these errors? Thanks!

Edit: I think there is some conflict happening in the election for the PRIMARY. As the documentation says, we must have at least 3 replicas for voting.

Mongo-SideCar unable to connect to Mongodb

$ kubectl logs mongo-0 -c mongo-sidecar

> [email protected] start /opt/cvallance/mongo-k8s-sidecar
> forever src/index.js

warn:    --minUptime not set. Defaulting to: 1000ms
warn:    --spinSleepTime not set. Your script will exit if it does not stay up for at least 1000ms
Using mongo port: 27017
Starting up mongo-k8s-sidecar
The cluster domain 'cluster.local' was successfully verified.
Error in workloop { MongoError: failed to connect to server [10.244.1.8:27017] on first connect [MongoError: connect ECONNREFUSED 10.244.1.8:27017]
    at Pool.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/topologies/server.js:336:35)
    at emitOne (events.js:125:13)
    at Pool.emit (events.js:221:7)
    at Connection.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:280:12)
    at Object.onceWrapper (events.js:326:30)
    at emitTwo (events.js:135:13)
    at Connection.emit (events.js:224:7)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:187:49)
    at Object.onceWrapper (events.js:324:30)
    at emitOne (events.js:125:13)
  name: 'MongoError',
  message: 'failed to connect to server [10.244.1.8:27017] on first connect [MongoError: connect ECONNREFUSED 10.244.1.8:27017]' }

Hi, deploying the example StatefulSet to Azure seems to give me the error above. Any ideas? Is there any specific configuration required for Azure?

I am using the azure storage config as mentioned:

azure_ssd.yaml

I also attempted to add the following environment variables:

- name: MONGO_SIDECAR_POD_LABELS
   value: "role=mongo,environment=test" 
- name: KUBE_NAMESPACE
   value: "dev"
- name: KUBERNETES_MONGO_SERVICE_NAME
   value: "mongo"

Kubernetes 1.8.2 Error: context deadline exceeded

Hello,
I'm trying to deploy mongodb as a statefulset on a Kubernetes 1.8.2 cluster and have the following situation:

  1. Storage class is created successfully.
  2. Persistent volume is provisioned successfully.
  3. mongo-k8s-sidecar contaner is created successfully
  4. At the moment the mongo container is created, Docker hangs and I get the following messages from kubectl:

FirstSeen LastSeen Count From SubObjectPath Type Reason Message


22m 22m 2 default-scheduler Warning FailedScheduling PersistentVolumeClaim is not bound: "mongo-persistent-storage-mongo-0" (repeated 4 times)
22m 22m 1 default-scheduler Normal Scheduled Successfully assigned mongo-0 to testkubeworker2
22m 22m 1 kubelet, testkubeworker2 Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "pvc-863c2edf-cfa2-11e7-8374-005056ba2036"
22m 22m 1 kubelet, testkubeworker2 Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-zz2rr"
22m 22m 1 kubelet, testkubeworker2 spec.containers{mongo} Normal Pulling pulling image "mongo"
21m 21m 1 kubelet, testkubeworker2 spec.containers{mongo} Normal Pulled Successfully pulled image "mongo"
21m 21m 1 kubelet, testkubeworker2 spec.containers{mongo} Normal Created Created container
19m 19m 1 kubelet, testkubeworker2 spec.containers{mongo} Warning Failed Error: context deadline exceeded
19m 19m 1 kubelet, testkubeworker2 spec.containers{mongo-sidecar} Normal Pulling pulling image "cvallance/mongo-k8s-sidecar"
19m 19m 1 kubelet, testkubeworker2 spec.containers{mongo-sidecar} Normal Pulled Successfully pulled image "cvallance/mongo-k8s-sidecar"
19m 19m 1 kubelet, testkubeworker2 spec.containers{mongo-sidecar} Normal Created Created container
19m 19m 1 kubelet, testkubeworker2 spec.containers{mongo-sidecar} Normal Started Started container
19m 19m 1 kubelet, testkubeworker2 Warning FailedSync Error syncing pod

And the following message from the kubelet: testkubeworker2 kubelet[1239]: I1122 17:53:44.127015 1239 kubelet.go:1779] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 18m0.662513672s ago; threshold is 3

Docker service is alive but I can't launch any command ("sudo docker ps" hangs).

Does anyone have something similar?

mongo sidecar container does not attempt SSL connection with SSL enabled

I am having an issue with this which may be obvious to the designer but is not to me. I stepped through all the documentation and I am unable to get the sidecar to establish an SSL connection to my container.
It appears to be trying to connect to localhost, but I think that is because of an initial SSL failure. I have the correct pod name specified in the db string and it even shows so. Any insight would be appreciated.


Initializing node with hostname podname-mongo-0
Options are {"mongoOptions":{"ssl":true,"sslAllowInvalidCertificates":true,"sslAllowInvalidHostnames":true}}
Error in workloop { MongoError: failed to connect to server [127.0.0.1:27017] on first connect
    at Pool.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/topologies/server.js:326:35)
    at emitOne (events.js:96:13)
    at Pool.emit (events.js:189:7)
    at Connection.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:270:12)
    at Object.onceWrapper (events.js:291:19)
    at emitTwo (events.js:106:13)
    at Connection.emit (events.js:192:7)
    at TLSSocket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:175:49)
    at Object.onceWrapper (events.js:291:19)
    at emitOne (events.js:96:13)
    at TLSSocket.emit (events.js:189:7)
  name: 'MongoError',
  message: 'failed to connect to server [127.0.0.1:27017] on first connect' }

Did not find local voted for document at startup

Hello,

I have deployed the mongo statefulset and pods successfully, but when I connect to mongo and do rc.initiate() I get the following error:

E QUERY [thread1] ReferenceError: rc is not defined :

There are the following strings in the mongo container logs:

{"log":"2017-11-23T17:21:09.572+0000 I REPL [initandlisten] Did not find local voted for document at startup.\n","stream":"stdout","time":"2017-11-23T17:21:09.572843716Z"}
{"log":"2017-11-23T17:21:09.572+0000 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset\n","stream":"stdout","time":"2017-11-23T17:21:09.57286932Z"}

Does anyone have anything similar?

MongoDB Stateful Set, Each Read/Write into random Instance, then does not find entries

The worst-case scenario described in the subject is happening. I have a StatefulSet running, and on database writes and reads it uses random instances (mongo-0, mongo-1) and does not find the values it needs.

My deployment of the app has this:

  spec:
      hostAliases:
        - ip: "10.2.6.112"
          hostnames:
            - "mongo-0"
            - "mongo-0.mongo"
            - "mongo-0.mongo.yourTLD.net"
     ...

Then I do:

If I run rs.initiate() on all mongo set members, one is designated as "secondary" and two as "other". All three tell me:

> rs.initiate()
{
	"info2" : "no configuration specified. Using a default configuration for the set",
	"me" : "mongo-2:27017",
	"ok" : 1
}

Then I do as in the readme of this project:

and/or according to the sidecar GitHub project documentation:

        - name: KUBERNETES_MONGO_SERVICE_NAME
          value: mongo # your statefulSet name 
in MONGODB_URI:

         value: "mongodb://mongo-0.mongo:27017,mongo-1.mongo:27017,mongo-2.mongo:27017/YOUR_DB?replicaSet=mongo&auth=admin"
bash-3.2$ for i in `seq 0 2`; do kubectl exec mongo-$i -- sh -c '/usr/bin/mongo --eval="printjson(rs.isMaster())"'; done
Defaulting container name to mongo.
Use 'kubectl describe pod/mongo-0' to see all of the containers in this pod.
MongoDB shell version v3.4.9
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.9
{
	"hosts" : [
		"mongo-0:27017"
	],
	"setName" : "mongo",
	"setVersion" : 1,
	"ismaster" : true,
	"secondary" : false,
	"primary" : "mongo-0:27017",
	"me" : "mongo-0:27017",
	"electionId" : ObjectId("7fffffff0000000000000001"),
	"lastWrite" : {
		"opTime" : {
			"ts" : Timestamp(1506910366, 1),
			"t" : NumberLong(1)
		},
		"lastWriteDate" : ISODate("2017-10-02T02:12:46Z")
	},
	"maxBsonObjectSize" : 16777216,
	"maxMessageSizeBytes" : 48000000,
	"maxWriteBatchSize" : 1000,
	"localTime" : ISODate("2017-10-02T02:12:49.688Z"),
	"maxWireVersion" : 5,
	"minWireVersion" : 0,
	"readOnly" : false,
	"ok" : 1
}
Defaulting container name to mongo.
Use 'kubectl describe pod/mongo-1' to see all of the containers in this pod.
MongoDB shell version v3.4.9
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.9
{
	"hosts" : [
		"mongo-1:27017"
	],
	"setName" : "mongo",
	"setVersion" : 1,
	"ismaster" : true,
	"secondary" : false,
	"primary" : "mongo-1:27017",
	"me" : "mongo-1:27017",
	"electionId" : ObjectId("7fffffff0000000000000001"),
	"lastWrite" : {
		"opTime" : {
			"ts" : Timestamp(1506910363, 1),
			"t" : NumberLong(1)
		},
		"lastWriteDate" : ISODate("2017-10-02T02:12:43Z")
	},
	"maxBsonObjectSize" : 16777216,
	"maxMessageSizeBytes" : 48000000,
	"maxWriteBatchSize" : 1000,
	"localTime" : ISODate("2017-10-02T02:12:51.165Z"),
	"maxWireVersion" : 5,
	"minWireVersion" : 0,
	"readOnly" : false,
	"ok" : 1
}
Defaulting container name to mongo.
Use 'kubectl describe pod/mongo-2' to see all of the containers in this pod.
MongoDB shell version v3.4.9
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.9
{
	"hosts" : [
		"mongo-2:27017"
	],
	"setName" : "mongo",
	"setVersion" : 1,
	"ismaster" : true,
	"secondary" : false,
	"primary" : "mongo-2:27017",
	"me" : "mongo-2:27017",
	"electionId" : ObjectId("7fffffff0000000000000001"),
	"lastWrite" : {
		"opTime" : {
			"ts" : Timestamp(1506910363, 1),
			"t" : NumberLong(1)
		},
		"lastWriteDate" : ISODate("2017-10-02T02:12:43Z")
	},
	"maxBsonObjectSize" : 16777216,
	"maxMessageSizeBytes" : 48000000,
	"maxWriteBatchSize" : 1000,
	"localTime" : ISODate("2017-10-02T02:12:51.959Z"),
	"maxWireVersion" : 5,
	"minWireVersion" : 0,
	"readOnly" : false,
	"ok" : 1
}

How can I ensure only one DB is used as the primary and the others are replicas? At the moment the replication obviously does not work.

Username and Password variables

A question please:

Do the environment variables MONGODB_USERNAME & MONGODB_PASSWORD have any effect after the first deployment? Is changing them a good way of changing the admin password, or should that be done by interacting with mongo directly?

(if this is not the appropriate place for questions, I do apologize)

possible simpler design?

Hello @donbeave @kikov79 @harai @cvallance @ericln ,

Awesome project...
I've been looking at setting up MongoDB with replica sets on Kubernetes, and I found this project, so I'm pretty stoked.

Yet, I'm a little wary of deploying (yet another) NodeJS application in my cluster... It seems a little overkill to run this as a sidecar permanently, so I wanted to pick your brain(s) about another approach:

you've obviously spent a bunch of time on this, so you might just be able to point the flaws in the concept right away before I spend too much time working on it.

Here are the assumptions:

  • A replica set needs to be initialized on the master node, and the master node only.
  • MongoDB pods are deployed with a ReplicationController, so if a pod dies it gets replaced; a script run on postStart can therefore check for pods that are gone and pods that came in, and reconfigure the master accordingly.
  • A headless service provides a list of IPs for the pods referenced by the mongoDB label.

I'm thinking all this can easily be done with a shell script:

  • Looking up pods with the apiserver is just a curl command. I use this all the time and use jq to parse the JSON output.
  • Finding the master (or lack thereof on bootstrap) can be done with the mongo shell from the command line.

So the idea is the following:
on container lifecycle postStart hook, run a shell script in the pod that will:

  • lookup IPs of all mongoDB pods.
  • run mongo shell rs.status() command on each mongo pod to find if there is a master, or an initialized replicaSet already.
    -> if there is a master, list all pods in the replica set from the mongo shell rs.conf() command on the master, and add/remove pods according to what the Kubernetes apiserver pod list provides.
    -> if there is no master, rs.initiate() the current pod as master, and add the other pods as replicas.

Obviously, this could cause problems if a ReplicationController starts many pods at once on bootstrap (race condition to become the master and create the replicaSet)
One could acquire a lock by setting a key in etcd, but that makes things more complicated.
If the process is assumed to bootstrap one mongo pod first, then scale, that seems very reasonable to me.

When a pod dies, the RC will restart it, and the reconfiguration will happen on postStart, so there doesn't seem to be a need for a worker running at all times checking the state of the cluster: if a node is gone, it will reconnect, rejoin, and clean up its old IP.
Leftover removed pods may only be a problem on scaling down. A similar reconfig script could be run on the container lifecycle preStop hook.
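
A minimal sketch of what such a postStart script could look like, assuming the pod runs under a service account allowed to list pods and that the mongo pods carry a role=mongo label (both assumptions):

#!/bin/sh
# Sketch of a postStart reconfig script (assumptions: in-cluster service account
# token is mounted, pods are labelled role=mongo, the mongo shell is on the PATH).
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
API=https://kubernetes.default.svc

# 1. Look up the IPs of all mongo pods via the apiserver (curl + jq).
IPS=$(curl -s --cacert "$CACERT" -H "Authorization: Bearer $TOKEN" \
  "$API/api/v1/namespaces/$NAMESPACE/pods?labelSelector=role%3Dmongo" \
  | jq -r '.items[].status.podIP')

# 2. Ask each member whether it is currently the master.
PRIMARY=""
for ip in $IPS; do
  if [ "$(mongo --quiet --host "$ip" --eval 'rs.isMaster().ismaster')" = "true" ]; then
    PRIMARY=$ip
    break
  fi
done

# 3. No master found: initiate the replica set on this pod. Otherwise the master
#    is where rs.add()/rs.remove() calls would be issued, based on the diff
#    between rs.conf().members and the apiserver pod list.
if [ -z "$PRIMARY" ]; then
  mongo --quiet --eval 'rs.initiate()'
fi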

What are your thoughts?

Cheers

Simple Authentication always fails

I am running kops on AWS. I can get the mongo cluster to work without authentication, but as soon as I try to secure it with simple user/password the nodes fail to connect.

I am creating the user like
db.createUser( { user: "cesco", pwd: "mypassword", roles:[{role:"root", db:"admin"}] })
then deleting the StatefulSet and creating it again, uncommenting the commented lines below

---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongod
spec:
  serviceName: mongodb-service
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: development
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongod-container
          image: mongo:3.6
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--bind_ip"
            - 0.0.0.0
            - "--smallfiles"
            - "--noprealloc"
            # - "--auth"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo"
            - name: KUBERNETES_MONGO_SERVICE_NAME
              value: "mongodb-service"
            # - name: MONGODB_USERNAME
            #   value: "cesco"
            # - name: MONGODB_PASSWORD
            #   value: "mypassword"
            # - name: MONGODB_DATABASE
            #   value: "admin"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 4Gi

On the sidecar-log i get

Db.prototype.authenticate method will no longer be available in the next major release 3.x as MongoDB 3.6 will only allow auth against users in the admin db and will no longer allow multiple credentials on a socket. Please authenticate using MongoClient.connect with auth credentials.
Error in workloop { MongoError: Authentication failed.
    at Function.MongoError.create (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/error.js:31:11)
    at /opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:497:72
    at authenticateStragglers (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:443:16)
    at Connection.messageHandler (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:477:5)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:333:22)
    at Socket.emit (events.js:180:13)
    at addChunk (_stream_readable.js:269:12)
    at readableAddChunk (_stream_readable.js:256:11)
    at Socket.Readable.push (_stream_readable.js:213:10)
    at TCP.onread (net.js:578:20)
  name: 'MongoError',
  message: 'Authentication failed.',
  ok: 0,
  errmsg: 'Authentication failed.',
  code: 18,
  codeName: 'AuthenticationFailed',
  operationTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1525288116 },
  '$clusterTime': 
   { clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1525288116 },
     signature: { hash: [Binary], keyId: [Long] } } }

I can exec into the mongod-0 pod, run mongo -u cesco -p mypassword and log in just fine. Likewise, running use admin followed by db.auth("cesco","mypassword") returns 1.

Logs show

2018-05-02T19:13:22.851+0000 I ACCESS   [conn12] Unauthorized: not authorized on admin to execute command { replSetHeartbeat: "rs0", configVersion: 5, hbv: 1, from: "mongod-2.mongodb-service.default.svc.cluster.local:27017", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1525288116, 1), signature: { hash: BinData(0, 259E9B3067A93809C76C1DC5B31CA15C0B2814DF), keyId: 6551044020938735618 } }, $db: "admin" }
2018-05-02T19:13:23.039+0000 I REPL_HB  [replexec-0] Error in heartbeat (requestId: 817) to mongod-2.mongodb-service.default.svc.cluster.local:27017, response status: Unauthorized: not authorized on admin to execute command { replSetHeartbeat: "rs0", configVersion: 5, hbv: 1, from: "mongod-0.mongodb-service.default.svc.cluster.local:27017", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1525288116, 1), signature: { hash: BinData(0, 259E9B3067A93809C76C1DC5B31CA15C0B2814DF), keyId: 6551044020938735618 } }, $db: "admin" }
2018-05-02T19:13:23.063+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:54200 #13 (11 connections now open)
2018-05-02T19:13:23.063+0000 I NETWORK  [conn13] received client metadata from 127.0.0.1:54200 conn13: { driver: { name: "nodejs", version: "2.2.35" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.4.115-k8s" }, platform: "Node.js v9.8.0, LE, mongodb-core: 2.1.19" }
2018-05-02T19:13:23.064+0000 I ACCESS   [conn13] SCRAM-SHA-1 authentication failed for cesco on admin from client 127.0.0.1:54200 ; UserNotFound: Could not find user cesco@admin
2018-05-02T19:13:23.122+0000 I REPL_HB  [replexec-2] Error in heartbeat (requestId: 819) to mongod-1.mongodb-service.default.svc.cluster.local:27017, response status: Unauthorized: not authorized on admin to execute command { replSetHeartbeat: "rs0", configVersion: 5, hbv: 1, from: "mongod-0.mongodb-service.default.svc.cluster.local:27017", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1525288116, 1), signature: { hash: BinData(0, 259E9B3067A93809C76C1DC5B31CA15C0B2814DF), keyId: 6551044020938735618 } }, $db: "admin" }
2018-05-02T19:13:23.191+0000 I ACCESS   [conn6] Unauthorized: not authorized on admin to execute command { replSetHeartbeat: "rs0", configVersion: 5, hbv: 1, from: "mongod-1.mongodb-service.default.svc.cluster.local:27017", fromId: 1, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1525288116, 1), signature: { hash: BinData(0, 259E9B3067A93809C76C1DC5B31CA15C0B2814DF), keyId: 6551044020938735618 } }, $db: "admin" }
2018-05-02T19:13:23.352+0000 I ACCESS   [conn12] Unauthorized: not authorized on admin to execute command { replSetHeartbeat: "rs0", configVersion: 5, hbv: 1, from: "mongod-2.mongodb-service.default.svc.cluster.local:27017", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1525288116, 1), signature: { hash: BinData(0, 259E9B3067A93809C76C1DC5B31CA15C0B2814DF), keyId: 6551044020938735618 } }, $db: "admin" }
2018-05-02T19:13:23.539+0000 I REPL_HB  [replexec-1] Error in heartbeat (requestId: 821) to mongod-2.mongodb-service.default.svc.cluster.local:27017, response status: Unauthorized: not authorized on admin to execute command { replSetHeartbeat: "rs0", configVersion: 5, hbv: 1, from: "mongod-0.mongodb-service.default.svc.cluster.local:27017", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1525288116, 1), signature: { hash: BinData(0, 259E9B3067A93809C76C1DC5B31CA15C0B2814DF), keyId: 6551044020938735618 } }, $db: "admin" }
2018-05-02T19:13:23.623+0000 I REPL_HB  [replexec-0] Error in heartbeat (requestId: 823) to mongod-1.mongodb-service.default.svc.cluster.local:27017, response status: Unauthorized: not authorized on admin to execute command { replSetHeartbeat: "rs0", configVersion: 5, hbv: 1, from: "mongod-0.mongodb-service.default.svc.cluster.local:27017", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1525288116, 1), signature: { hash: BinData(0, 259E9B3067A93809C76C1DC5B31CA15C0B2814DF), keyId: 6551044020938735618 } }, $db: "admin" }
2018-05-02T19:13:23.692+0000 I ACCESS   [conn6] Unauthorized: not authorized on admin to execute command { replSetHeartbeat: "rs0", configVersion: 5, hbv: 1, from: "mongod-1.mongodb-service.default.svc.cluster.local:27017", fromId: 1, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1525288116, 1), signature: { hash: BinData(0, 259E9B3067A93809C76C1DC5B31CA15C0B2814DF), keyId: 6551044020938735618 } }, $db: "admin" }
2018-05-02T19:13:23.852+0000 I ACCESS   [conn12] Unauthorized: not authorized on admin to execute command { replSetHeartbeat: "rs0", configVersion: 5, hbv: 1, from: "mongod-2.mongodb-service.default.svc.cluster.local:27017", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1525288116, 1), signature: { hash: BinData(0, 259E9B3067A93809C76C1DC5B31CA15C0B2814DF), keyId: 6551044020938735618 } }, $db: "admin" }
2018-05-02T19:13:24.040+0000 I REPL_HB  [replexec-2] Error in heartbeat (requestId: 825) to mongod-2.mongodb-service.default.svc.cluster.local:27017, response status: Unauthorized: not authorized on admin to execute command { replSetHeartbeat: "rs0", configVersion: 5, hbv: 1, from: "mongod-0.mongodb-service.default.svc.cluster.local:27017", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1525288116, 1), signature: { hash: BinData(0, 259E9B3067A93809C76C1DC5B31CA15C0B2814DF), keyId: 6551044020938735618 } }, $db: "admin" }
2018-05-02T19:13:24.124+0000 I REPL_HB  [replexec-1] Error in heartbeat (requestId: 827) to mongod-1.mongodb-service.default.svc.cluster.local:27017, response status: Unauthorized: not authorized on admin to execute command { replSetHeartbeat: "rs0", configVersion: 5, hbv: 1, from: "mongod-0.mongodb-service.default.svc.cluster.local:27017", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1525288116, 1), signature: { hash: BinData(0, 259E9B3067A93809C76C1DC5B31CA15C0B2814DF), keyId: 6551044020938735618 } }, $db: "admin" }
2018-05-02T19:13:24.193+0000 I ACCESS   [conn6] Unauthorized: not authorized on admin to execute command { replSetHeartbeat: "rs0", configVersion: 5, hbv: 1, from: "mongod-1.mongodb-service.default.svc.cluster.local:27017", fromId: 1, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1525288116, 1), signature: { hash: BinData(0, 259E9B3067A93809C76C1DC5B31CA15C0B2814DF), keyId: 6551044020938735618 } }, $db: "admin" }
2018-05-02T19:13:24.353+0000 I ACCESS   [conn12] Unauthorized: not authorized on admin to execute command { replSetHeartbeat: "rs0", configVersion: 5, hbv: 1, from: "mongod-2.mongodb-service.default.svc.cluster.local:27017", fromId: 2, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1525288116, 1), signature: { hash: BinData(0, 259E9B3067A93809C76C1DC5B31CA15C0B2814DF), keyId: 6551044020938735618 } }, $db: "admin" }
2018-05-02T19:13:24.541+0000 I REPL_HB  [replexec-0] Error in heartbeat (requestId: 829) to mongod-2.mongodb-service.default.svc.cluster.local:27017, response status: Unauthorized: not authorized on admin to execute command { replSetHeartbeat: "rs0", configVersion: 5, hbv: 1, from: "mongod-0.mongodb-service.default.svc.cluster.local:27017", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1525288116, 1), signature: { hash: BinData(0, 259E9B3067A93809C76C1DC5B31CA15C0B2814DF), keyId: 6551044020938735618 } }, $db: "admin" }
2018-05-02T19:13:24.625+0000 I REPL_HB  [replexec-2] Error in heartbeat (requestId: 831) to mongod-1.mongodb-service.default.svc.cluster.local:27017, response status: Unauthorized: not authorized on admin to execute command { replSetHeartbeat: "rs0", configVersion: 5, hbv: 1, from: "mongod-0.mongodb-service.default.svc.cluster.local:27017", fromId: 0, term: 1, $replData: 1, $clusterTime: { clusterTime: Timestamp(1525288116, 1), signature: { hash: BinData(0, 259E9B3067A93809C76C1DC5B31CA15C0B2814DF), keyId: 6551044020938735618 } }, $db: "admin" }

What am I doing wrong? I need to grant external access to this database, otherwise I would not care about authentication. Help would be very much appreciated.
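
For context, when --auth is enabled on a replica set, MongoDB also requires internal authentication between the members (a shared keyFile or x.509 certificates); without it, the replSetHeartbeat calls are rejected exactly as in the logs above. A sketch of the extra mongod arguments and mounts (to go together with re-enabling --auth and the MONGODB_* variables on the sidecar); the Secret name mongo-keyfile and the mount path are assumptions, not part of the original manifest:

          command:
            - mongod
            - "--replSet"
            - rs0
            - "--bind_ip"
            - 0.0.0.0
            - "--auth"
            - "--keyFile"
            - /etc/mongo-keyfile/key
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
            - name: keyfile                 # assumed volume name
              mountPath: /etc/mongo-keyfile
              readOnly: true
      # ...and at the pod spec level, alongside terminationGracePeriodSeconds:
      volumes:
        - name: keyfile
          secret:
            secretName: mongo-keyfile       # assumed Secret holding the shared key
            defaultMode: 0400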

Name conflict when I use `mongo` as StatefulSet's name

Kubernetes version:

Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:28:34Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

The following StatefulSet gave me a very quirky bug.

apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  ports:
  - port: 27017
  selector:
    app: mongo
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--bind_ip"
            - 0.0.0.0
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-pv
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar:latest
          volumeMounts:
            - name: mongo-pv
              mountPath: /data/db
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "app=mongo"
            - name: KUBERNETES_MONGO_SERVICE_NAME 
              value: "mongo" 
  volumeClaimTemplates:
  - metadata:
      name: mongo-pv
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: mongo
      resources:
        requests:
          storage: 10Gi

Here is the error log from mongo-sidecar:

Error in workloop RangeError [ERR_SOCKET_BAD_PORT]: Port should be > 0 and < 65536. Received tcp://172.20.177.53:27017.
    at lookupAndConnect (net.js:1065:13)
    at Socket.connect (net.js:1034:5)
    at Object.connect (net.js:107:35)
    at Connection.connect (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:412:25)
    at Pool.connect (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:666:16)
    at Server.connect (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/topologies/server.js:399:17)
    at Server.connect (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb/lib/server.js:368:17)
    at open (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb/lib/db.js:229:19)
    at Db.open (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb/lib/db.js:252:44)
    at getDb (/opt/cvallance/mongo-k8s-sidecar/src/lib/mongo.js:32:11)

I dug into the mongo-sidecar image and inspected its environment variables.
It turns out that at https://github.com/cvallance/mongo-k8s-sidecar/blob/master/src/lib/config.js#L79
The process.env.MONGO_PORT has the value of tcp://172.20.177.53:27017.
The environment variable name conflicts with one of the service environment variables Kubernetes injects by default.

I would suggest giving each config variable a prefix so that users won't accidentally run into name conflicts, or putting a note in the docs to remind everyone to avoid using mongo as the name of the StatefulSet.
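
Until something like that lands, one workaround (a sketch, not an official fix) is to set MONGO_PORT explicitly on the sidecar, since env entries declared in the pod spec take precedence over the service-link variables Kubernetes injects:

        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar:latest
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "app=mongo"
            - name: KUBERNETES_MONGO_SERVICE_NAME
              value: "mongo"
            # Pin the port explicitly so the injected MONGO_PORT=tcp://<ip>:27017
            # from the "mongo" Service does not reach the sidecar's config.
            - name: MONGO_PORT
              value: "27017"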

Getting error: message: 'no such cmd: replSetGetConfig' in mongo-sidecar

Hi all,

I have been trying to set up a MongoDB replica set on a k8s cluster. I am able to run all the pods, but they are not connected to each other. When I checked the logs of the sidecar, I got the following error:

Addresses to add:     [ '10.1.220.17:27017', '10.1.23.157:27017', '10.1.220.18:27017' ]
Addresses to remove:  []
Error in workloop { MongoError: no such cmd: replSetGetConfig
    at Function.MongoError.create (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/error.js:31:11)
    at /opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:497:72
    at authenticateStragglers (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:443:16)
    at Connection.messageHandler (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:477:5)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:321:22)
    at emitOne (events.js:96:13)
    at Socket.emit (events.js:191:7)
    at readableAddChunk (_stream_readable.js:178:18)
    at Socket.Readable.push (_stream_readable.js:136:10)
    at TCP.onread (net.js:561:20)
  name: 'MongoError',
  message: 'no such cmd: replSetGetConfig',
  ok: 0,
  errmsg: 'no such cmd: replSetGetConfig',
  code: 59,
  'bad cmd': { replSetGetConfig: 1 } }

Any idea what could cause this error, or how to resolve it? TIA.

Broken with kubernetes 1.6 with RBAC

The sidecar fails on 1.6 with RBAC with:

Error in workloop { [Error: User "system:serviceaccount:wekan-test:default" cannot list pods at the cluster scope.]
  message: 'User "system:serviceaccount:wekan-test:default" cannot list pods at the cluster scope.',
  statusCode: 403 }
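
For reference, this kind of 403 is usually resolved by granting the pod's service account permission to list pods. A minimal sketch, assuming the default service account in the wekan-test namespace from the error message is the one the sidecar runs under:

# Sketch: minimal RBAC so the sidecar can list pods (cluster scope).
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: mongo-pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: mongo-pod-reader
subjects:
  - kind: ServiceAccount
    name: default          # assumed: the account from the error message
    namespace: wekan-test
roleRef:
  kind: ClusterRole
  name: mongo-pod-reader
  apiGroup: rbac.authorization.k8s.io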

Resources Management Guidelines

Just discovered this PoC, which seems awesome. There's still some work to go, I guess, but it's heading in a good direction!

One thing that bothers me is that if you deploy this as per the examples folder, you might hit a memory issue with WiredTiger.

Let's say you have a cluster with one machine that has 10 GB of RAM. If you deploy it as-is with 3 nodes in your replica set and your indices grow to 4 GB, your pods will get killed by the OOM killer every time WiredTiger tries to fill its cache, which is a real issue that will lead to plenty of useless crashes.

I believe a note about resource management would be a useful addition. Don't hesitate to share any input on this.
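
As a starting point, capping the WiredTiger cache and declaring matching container resources keeps both the scheduler and the OOM killer informed. This is a sketch only; the sizes below are assumptions to adapt to the node:

        - name: mongod-container
          image: mongo:3.6
          command:
            - mongod
            - "--replSet"
            - rs0
            # Keep the WiredTiger cache well under the container limit (assumed sizes).
            - "--wiredTigerCacheSizeGB"
            - "2"
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
            limits:
              memory: "4Gi"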

Unable to read from Slave Pod

I managed to deploy a StatefulSet with the following container command:

  containers:
  - image: mongo
    name: mongo
    command:
      - mongod
      - "--replSet"
      - rs0
      - "--smallfiles"
      - "--noprealloc"

From the logs, a master was assigned successfully and all Pods are in sync.
However, when trying to connect (through a Node.js app), I am only able to interact with the master; the slaves return the following error:
MongoError: not master and slaveOk=false:null

In order to allow reads from a slave, I have defined the following options:

var c_opt = {server:{auto_reconnect:true,poolSize: settings.mongoConnectionPoolSize}, slave_ok: true };
mongodb.connect(mongourl, c_opt, function(err, conn){

Still, I am getting the same error. How do I properly enable reads from the slaves?
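
For comparison, with the Node.js driver secondary reads are normally enabled through a read preference in the connection string rather than the legacy slave_ok option. A minimal sketch in driver 2.x style, where the host names, database and collection are assumptions:

// Sketch: allow reads from secondaries via the readPreference URI option.
// Host names, database name and collection are placeholders.
var MongoClient = require('mongodb').MongoClient;

var url = 'mongodb://mongo-0.mongo:27017,mongo-1.mongo:27017,mongo-2.mongo:27017/mydb' +
          '?replicaSet=rs0&readPreference=secondaryPreferred';

MongoClient.connect(url, function (err, db) {
  if (err) throw err;
  db.collection('items').find({}).toArray(function (err, docs) {
    if (err) throw err;
    console.log(docs);
    db.close();
  });
});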

StatefulSets' stable network ID error

logs from sidecar container:

Error in workloop { MongoError: failed to connect to server [192.168.199.136:27017] on first connect [MongoError: connect ECONNREFUSED 192.168.199.136:27017]
    at Pool.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/topologies/server.js:336:35)
    at emitOne (events.js:115:13)
    at Pool.emit (events.js:210:7)
    at Connection.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:280:12)
    at Object.onceWrapper (events.js:318:30)
    at emitTwo (events.js:125:13)
    at Connection.emit (events.js:213:7)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:187:49)
    at Object.onceWrapper (events.js:316:30)
    at emitOne (events.js:115:13)
  name: 'MongoError',
  message: 'failed to connect to server [192.168.199.136:27017] on first connect [MongoError: connect ECONNREFUSED 192.168.199.136:27017]' }

my yaml:

apiVersion: v1
kind: Service
metadata:
    name: mongodb-shard1
    labels:
        name: mongo
spec:
    ports:
      - port: 27017
        targetPort: 27017
    clusterIP: None
    selector:
      role: mongo-shard1
---

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
    name: mongod
spec:
    serviceName: mongodb-shard1
    replicas: 3
    template:
        metadata:
            labels:
              role: mongo-shard1
              environment: test
        spec:
            terminationGracePeriodSeconds: 10
            volumes:
              - name: mongodb-volume
                emptyDir: {}
            containers:
              - name: mongod-container
                image: mongo:latest
                command:
                  - "mongod"
                  - "--replSet"
                  - "shard1set"
                  - "--shardsvr"
                  - "--port"
                  - "27017"
                  - "--smallfiles"
                  - "--noprealloc"
                ports:
                  - containerPort: 27017
                volumeMounts:
                  - name: mongodb-volume
                    mountPath: /data/db
              - name: mongo-sidecar
                image: cvallance/mongo-k8s-sidecar:latest
                env:
                  - name: MONGO_SIDECAR_POD_LABELS
                    value: "role=mongo-shard1,environment=test"
                env:
                  - name: KUBERNETES_MONGO_SERVICE_NAME
                    value: mongodb-shard1
                env:
                  - name: KUBERNETES_CLUSTER_DOMAIN
                    value: cluster.local


It seems that rs.initiate() did not work!
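
One thing to note in the manifest above, independent of the connection error itself: the sidecar container declares the env: key three times, and most YAML parsers keep only the last occurrence, so MONGO_SIDECAR_POD_LABELS and KUBERNETES_MONGO_SERVICE_NAME are silently dropped. A single merged env list, sketched below, keeps all three variables:

              - name: mongo-sidecar
                image: cvallance/mongo-k8s-sidecar:latest
                env:
                  - name: MONGO_SIDECAR_POD_LABELS
                    value: "role=mongo-shard1,environment=test"
                  - name: KUBERNETES_MONGO_SERVICE_NAME
                    value: mongodb-shard1
                  - name: KUBERNETES_CLUSTER_DOMAIN
                    value: cluster.local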

MongoError

I deployed using the StatefulSet method and got the error below; I don't know the reason.

 message: 'failed to connect to server [127.0.0.1:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]' }
Error in workloop { MongoError: failed to connect to server [127.0.0.1:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]
    at Pool.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/topologies/server.js:328:35)
    at emitOne (events.js:96:13)
    at Pool.emit (events.js:191:7)
    at Connection.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:274:12)
    at Object.onceWrapper (events.js:293:19)
    at emitTwo (events.js:106:13)
    at Connection.emit (events.js:194:7)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:177:49)
    at Object.onceWrapper (events.js:293:19)
    at emitOne (events.js:96:13)
  name: 'MongoError',

Memory leak on node process

I was testing this sidecar container on a Kubernetes cluster made up of 3 small nodes, and noticed one of my nodes became unresponsive.
After examining the processes, I found a node.js process that belongs to mongo-k8s-sidecar taking 1 GB of memory, which caused other critical processes to crash.

Here's the output from top:

  PID	USER	PR	NI	VIRT	RES	SHR	S	%CPU	%MEM	TIME+		COMMAND                                                                                                                                         
10498	1001	20	0	1565688	0.999g	0	D	0.2	51.1	1:36.21		node                                                                                                                                            
 3673	root	20	0	731068	95792	0	S	0.8	4.7	12:48.00	mongod                                                                                                                                          
 2754	root	20	0	675800	74988	0	S	0.4	3.7	10:58.94	mongod                                                                                                                                          
26246	root	20	0	666336	74184	0	S	0.4	3.6	10:22.46	mongod       

Output from ps shows this:

3372 ? Sl 0:00 node /opt/cvallance/mongo-k8s-sidecar/node_modules/.bin/forever src/index.js

This happened after about 1 full day of running the Mongo pods that contain this sidecar inside.
It happens consistently, and after a third occurrence I came here to report the issue.
I didn't set any resource limits on the sidecar container, as I had no idea the memory use could spike this high for such a small helper program.
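
Until the leak itself is tracked down, putting a limit on the sidecar container at least keeps it from starving the node. A sketch; the values are assumptions:

        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              # Assumed ceiling; the container is OOM-killed and restarted if it grows past this.
              memory: "256Mi"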

ReplicaSet _id error when pods are frequently replaced

If pods are replaced frequently (such as when using preemptible machines), the replicaset member _id value continues to increment. The _id value has a hard limit of 255. After this is reached, the following error is produced:

Error in workloop { MongoError: _id field value of 256 is out of range.
    at Function.MongoError.create (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/error.js:31:11)
    at /opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:497:72
    at authenticateStragglers (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:443:16)
    at Connection.messageHandler (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:477:5)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:331:22)
    at Socket.emit (events.js:159:13)
    at addChunk (_stream_readable.js:265:12)
    at readableAddChunk (_stream_readable.js:252:11)
    at Socket.Readable.push (_stream_readable.js:209:10)
    at TCP.onread (net.js:598:20)
  name: 'MongoError',
  message: '_id field value of 256 is out of range.',
  ok: 0,
  errmsg: '_id field value of 256 is out of range.',
  code: 103,
  codeName: 'NewReplicaSetConfigurationIncompatible' }

Instead of just incrementing the _id value, the lowest available unused value should be reused.
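
A small illustration of the suggested allocation strategy (illustrative JavaScript, not the sidecar's actual code): pick the lowest _id not already present in the replica set config instead of max + 1.

// Sketch: choose the lowest unused replica set member _id (valid range 0-255),
// given the members array from rs.conf(). Illustrative only.
function nextMemberId(members) {
  var used = {};
  members.forEach(function (m) { used[m._id] = true; });
  for (var id = 0; id <= 255; id++) {
    if (!used[id]) return id;
  }
  throw new Error('No free replica set member _id available');
}

// Example: existing members 0, 1 and 3 -> the next member gets _id 2.
console.log(nextMemberId([{ _id: 0 }, { _id: 1 }, { _id: 3 }]));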

Use a single replication controller and service

Currently it looks like you create a service for every replica?

What would we need to do to get to a model like this (or something similar):

svc/mongo -> rc/mongo -> lots of mongo pods

It has been mentioned before, but PetSets could be a good fit for mongo clusters.
