Error when rerunning the image
Hi,
I have a problem when I try to rerun the image. Any ideas?
When I rerun the image, rs.status() inside the mongo shell returns this error:
{
"state" : 10,
"stateStr" : "REMOVED",
"uptime" : 713,
"optime" : Timestamp(1483201761, 1),
"optimeDate" : ISODate("2016-12-31T16:29:21Z"),
"ok" : 0,
"errmsg" : "Our replica set config is invalid or we are not a member of it",
"code" : 93
}
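For context on what this state means: "REMOVED" (state 10) indicates this mongod does not match any member of the replica set config stored in its data files. Further down in the log the cause is visible: the stored config references the previous container's hostname (2efcd498d07d), while the rerun container received a new hostname (380b49bee914). One possible workaround is to force-reconfigure the set with the current hostname. This is an untested sketch, assuming you can authenticate with the dev/dev credentials shown in the log:

```javascript
// Run inside the mongo shell on the first member, e.g.:
//   mongo --port 27001 -u dev -p dev admin
// Sketch only: rewrites each member's host to the current container
// hostname and force-applies the config (no primary exists while REMOVED).
var cfg = rs.conf();                        // config stored on disk (old hostname)
var host = getHostName();                   // this container's current hostname
cfg.members.forEach(function (m) {
    m.host = host + ":" + m.host.split(":")[1];  // keep each member's port
});
rs.reconfig(cfg, { force: true });          // force is required while the node is REMOVED
```

Whether rs.conf() is readable in the REMOVED state on 3.0 may vary; if it fails, the same document can be read from the local.system.replset collection.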
This is my docker-compose.yml:
version: '2'
services:
  mongo:
    image: boucher/mongo-local-replicaset
    ports:
      - "27017:27017"
    volumes:
      - dbdata:/data
volumes:
  dbdata:
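A likely cause: each `docker run` assigns the container a new random hostname, while the replica set config persisted in the `dbdata` volume still points at the old one. A possible mitigation (an untested sketch; the hostname value `mongo-rs` is my own choice, not anything from the image) is to pin the container hostname in the compose file so it stays stable across reruns:

```yaml
version: '2'
services:
  mongo:
    image: boucher/mongo-local-replicaset
    hostname: mongo-rs   # fixed hostname so the stored replSet config stays valid
    ports:
      - "27017:27017"
    volumes:
      - dbdata:/data
volumes:
  dbdata:
```

Note this only helps for replica sets initiated after the hostname is pinned; an already-persisted config would still need the reconfig above or a wipe of the volume.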
This is the log from the image:
WRITING KEYFILE
STARTING CLUSTER
trying: 27001 dev dev
2016-12-31T16:32:58.957+0000 I CONTROL [initandlisten] MongoDB starting : pid=15 port=27001 dbpath=/data/db1 64-bit host=380b49bee914
2016-12-31T16:32:58.958+0000 I CONTROL [initandlisten] db version v3.0.14
2016-12-31T16:32:58.958+0000 I CONTROL [initandlisten] git version: 08352afcca24bfc145240a0fac9d28b978ab77f3
2016-12-31T16:32:58.958+0000 I CONTROL [initandlisten] build info: Linux ip-10-30-223-232 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 BOOST_LIB_VERSION=1_49
2016-12-31T16:32:58.958+0000 I CONTROL [initandlisten] allocator: tcmalloc
2016-12-31T16:32:58.958+0000 I CONTROL [initandlisten] options: { net: { port: 27001 }, replication: { replSet: "rs0" }, security: { authorization: "enabled", keyFile: "/var/mongo_keyfile" }, storage: { dbPath: "/data/db1", mmapv1: { smallFiles: true } } }
2016-12-31T16:32:58.961+0000 I CONTROL [initandlisten] MongoDB starting : pid=14 port=27002 dbpath=/data/db2 64-bit host=380b49bee914
2016-12-31T16:32:58.962+0000 I CONTROL [initandlisten] db version v3.0.14
2016-12-31T16:32:58.962+0000 I CONTROL [initandlisten] git version: 08352afcca24bfc145240a0fac9d28b978ab77f3
2016-12-31T16:32:58.962+0000 I CONTROL [initandlisten] build info: Linux ip-10-30-223-232 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 BOOST_LIB_VERSION=1_49
2016-12-31T16:32:58.962+0000 I CONTROL [initandlisten] allocator: tcmalloc
2016-12-31T16:32:58.962+0000 I CONTROL [initandlisten] options: { net: { port: 27002 }, replication: { replSet: "rs0" }, security: { authorization: "enabled", keyFile: "/var/mongo_keyfile" }, storage: { dbPath: "/data/db2", mmapv1: { smallFiles: true } } }
2016-12-31T16:32:58.962+0000 I CONTROL [initandlisten] MongoDB starting : pid=13 port=27003 dbpath=/data/db3 64-bit host=380b49bee914
2016-12-31T16:32:58.964+0000 I CONTROL [initandlisten] db version v3.0.14
2016-12-31T16:32:58.964+0000 I CONTROL [initandlisten] git version: 08352afcca24bfc145240a0fac9d28b978ab77f3
2016-12-31T16:32:58.964+0000 I CONTROL [initandlisten] build info: Linux ip-10-30-223-232 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 BOOST_LIB_VERSION=1_49
2016-12-31T16:32:58.964+0000 I CONTROL [initandlisten] allocator: tcmalloc
2016-12-31T16:32:58.964+0000 I CONTROL [initandlisten] options: { net: { port: 27003 }, replication: { replSet: "rs0" }, security: { authorization: "enabled", keyFile: "/var/mongo_keyfile" }, storage: { dbPath: "/data/db3", mmapv1: { smallFiles: true } } }
2016-12-31T16:32:58.989+0000 W NETWORK Failed to connect to 127.0.0.1:27001, reason: errno:111 Connection refused
2016-12-31T16:32:58.991+0000 E QUERY Error: couldn't connect to server 127.0.0.1:27001 (127.0.0.1), connection attempt failed
at connect (src/mongo/shell/mongo.js:181:14)
at (connect):1:21 at src/mongo/shell/mongo.js:181
exception: connect failed
2016-12-31T16:32:59.003+0000 W - [initandlisten] Detected unclean shutdown - /data/db2/mongod.lock is not empty.
2016-12-31T16:32:59.025+0000 I JOURNAL [initandlisten] journal dir=/data/db2/journal
2016-12-31T16:32:59.025+0000 I JOURNAL [initandlisten] recover begin
2016-12-31T16:32:59.025+0000 I JOURNAL [initandlisten] recover lsn: 173538
2016-12-31T16:32:59.025+0000 I JOURNAL [initandlisten] recover /data/db2/journal/j._0
2016-12-31T16:32:59.025+0000 I JOURNAL [initandlisten] recover skipping application of section seq:1 < lsn:173538
2016-12-31T16:32:59.025+0000 I JOURNAL [initandlisten] recover skipping application of section seq:2 < lsn:173538
2016-12-31T16:32:59.025+0000 I JOURNAL [initandlisten] recover skipping application of section seq:90 < lsn:173538
2016-12-31T16:32:59.025+0000 I JOURNAL [initandlisten] recover skipping application of section seq:591 < lsn:173538
2016-12-31T16:32:59.025+0000 I JOURNAL [initandlisten] recover skipping application of section seq:5492 < lsn:173538
2016-12-31T16:32:59.025+0000 I JOURNAL [initandlisten] recover skipping application of section seq:5682 < lsn:173538
2016-12-31T16:32:59.025+0000 I JOURNAL [initandlisten] recover skipping application of section seq:5692 < lsn:173538
2016-12-31T16:32:59.025+0000 I JOURNAL [initandlisten] recover skipping application of section seq:5792 < lsn:173538
2016-12-31T16:32:59.025+0000 I JOURNAL [initandlisten] recover applying initial journal section with sequence number 173398
2016-12-31T16:32:59.029+0000 I STORAGE [initandlisten] recover create file /data/db2/app.ns 16MB
2016-12-31T16:32:59.050+0000 W - [initandlisten] Detected unclean shutdown - /data/db1/mongod.lock is not empty.
2016-12-31T16:32:59.059+0000 W - [initandlisten] Detected unclean shutdown - /data/db3/mongod.lock is not empty.
2016-12-31T16:32:59.074+0000 I JOURNAL [initandlisten] journal dir=/data/db3/journal
2016-12-31T16:32:59.074+0000 I JOURNAL [initandlisten] recover begin
2016-12-31T16:32:59.074+0000 I JOURNAL [initandlisten] recover lsn: 173682
2016-12-31T16:32:59.074+0000 I JOURNAL [initandlisten] recover /data/db3/journal/j._0
2016-12-31T16:32:59.084+0000 I JOURNAL [initandlisten] recover skipping application of section seq:1 < lsn:173682
2016-12-31T16:32:59.084+0000 I JOURNAL [initandlisten] recover skipping application of section seq:2 < lsn:173682
2016-12-31T16:32:59.084+0000 I JOURNAL [initandlisten] recover skipping application of section seq:70 < lsn:173682
2016-12-31T16:32:59.084+0000 I JOURNAL [initandlisten] recover skipping application of section seq:870 < lsn:173682
2016-12-31T16:32:59.084+0000 I JOURNAL [initandlisten] recover skipping application of section seq:1190 < lsn:173682
2016-12-31T16:32:59.084+0000 I JOURNAL [initandlisten] recover skipping application of section seq:2080 < lsn:173682
2016-12-31T16:32:59.084+0000 I JOURNAL [initandlisten] recover skipping application of section seq:3100 < lsn:173682
2016-12-31T16:32:59.084+0000 I JOURNAL [initandlisten] recover skipping application of section seq:3110 < lsn:173682
2016-12-31T16:32:59.084+0000 I JOURNAL [initandlisten] recover skipping application of section seq:3210 < lsn:173682
2016-12-31T16:32:59.084+0000 I JOURNAL [initandlisten] recover applying initial journal section with sequence number 173562
2016-12-31T16:32:59.084+0000 I JOURNAL [initandlisten] journal dir=/data/db1/journal
2016-12-31T16:32:59.084+0000 I JOURNAL [initandlisten] recover begin
2016-12-31T16:32:59.084+0000 I JOURNAL [initandlisten] recover lsn: 180429
2016-12-31T16:32:59.084+0000 I JOURNAL [initandlisten] recover /data/db1/journal/j._0
2016-12-31T16:32:59.092+0000 I JOURNAL [initandlisten] recover skipping application of section seq:10 < lsn:180429
2016-12-31T16:32:59.092+0000 I JOURNAL [initandlisten] recover skipping application of section seq:8033 < lsn:180429
2016-12-31T16:32:59.092+0000 I JOURNAL [initandlisten] recover skipping application of section seq:8383 < lsn:180429
2016-12-31T16:32:59.092+0000 I JOURNAL [initandlisten] recover applying initial journal section with sequence number 178589
2016-12-31T16:32:59.102+0000 I STORAGE [initandlisten] recover create file /data/db3/app.ns 16MB
2016-12-31T16:32:59.106+0000 I STORAGE [initandlisten] recover create file /data/db1/app.ns 16MB
2016-12-31T16:32:59.114+0000 I STORAGE [initandlisten] recover create file /data/db2/app.0 16MB
2016-12-31T16:32:59.299+0000 I STORAGE [initandlisten] recover create file /data/db1/app.0 16MB
2016-12-31T16:32:59.301+0000 I STORAGE [initandlisten] recover create file /data/db3/app.0 16MB
2016-12-31T16:32:59.324+0000 I JOURNAL [initandlisten] recover cleaning up
2016-12-31T16:32:59.324+0000 I JOURNAL [initandlisten] removeJournalFiles
2016-12-31T16:32:59.414+0000 I JOURNAL [initandlisten] recover done
2016-12-31T16:32:59.414+0000 I JOURNAL [initandlisten] preallocating a journal file /data/db2/journal/prealloc.0
2016-12-31T16:32:59.483+0000 I JOURNAL [initandlisten] recover cleaning up
2016-12-31T16:32:59.483+0000 I JOURNAL [initandlisten] removeJournalFiles
2016-12-31T16:32:59.491+0000 I JOURNAL [initandlisten] recover cleaning up
2016-12-31T16:32:59.491+0000 I JOURNAL [initandlisten] removeJournalFiles
2016-12-31T16:32:59.502+0000 I JOURNAL [initandlisten] recover done
2016-12-31T16:32:59.502+0000 I JOURNAL [initandlisten] preallocating a journal file /data/db1/journal/prealloc.0
2016-12-31T16:32:59.511+0000 I JOURNAL [initandlisten] recover done
2016-12-31T16:32:59.511+0000 I JOURNAL [initandlisten] preallocating a journal file /data/db3/journal/prealloc.0
2016-12-31T16:33:00.805+0000 I JOURNAL [durability] Durability thread started
2016-12-31T16:33:00.808+0000 I JOURNAL [journal writer] Journal writer thread started
2016-12-31T16:33:00.811+0000 I JOURNAL [durability] Durability thread started
2016-12-31T16:33:00.811+0000 I JOURNAL [journal writer] Journal writer thread started
2016-12-31T16:33:00.831+0000 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2016-12-31T16:33:00.831+0000 I CONTROL [initandlisten]
2016-12-31T16:33:00.831+0000 I CONTROL [initandlisten]
2016-12-31T16:33:00.831+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-12-31T16:33:00.831+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-12-31T16:33:00.831+0000 I CONTROL [initandlisten]
2016-12-31T16:33:00.831+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-12-31T16:33:00.831+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-12-31T16:33:00.831+0000 I CONTROL [initandlisten]
2016-12-31T16:33:00.841+0000 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2016-12-31T16:33:00.841+0000 I CONTROL [initandlisten]
2016-12-31T16:33:00.842+0000 I CONTROL [initandlisten]
2016-12-31T16:33:00.842+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-12-31T16:33:00.842+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-12-31T16:33:00.842+0000 I CONTROL [initandlisten]
2016-12-31T16:33:00.842+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-12-31T16:33:00.842+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-12-31T16:33:00.842+0000 I CONTROL [initandlisten]
2016-12-31T16:33:00.851+0000 I JOURNAL [durability] Durability thread started
2016-12-31T16:33:00.851+0000 I JOURNAL [journal writer] Journal writer thread started
2016-12-31T16:33:00.875+0000 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2016-12-31T16:33:00.875+0000 I CONTROL [initandlisten]
2016-12-31T16:33:00.875+0000 I CONTROL [initandlisten]
2016-12-31T16:33:00.875+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-12-31T16:33:00.875+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-12-31T16:33:00.875+0000 I CONTROL [initandlisten]
2016-12-31T16:33:00.875+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-12-31T16:33:00.875+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-12-31T16:33:00.875+0000 I CONTROL [initandlisten]
2016-12-31T16:33:00.910+0000 I NETWORK [initandlisten] waiting for connections on port 27003
2016-12-31T16:33:00.913+0000 I NETWORK [initandlisten] waiting for connections on port 27001
trying: 27001 dev dev
2016-12-31T16:33:01.022+0000 I NETWORK [initandlisten] waiting for connections on port 27002
2016-12-31T16:33:01.143+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:38754 #1 (1 connection now open)
2016-12-31T16:33:01.455+0000 I NETWORK [ReplicationExecutor] getaddrinfo("2efcd498d07d") failed: Name or service not known
2016-12-31T16:33:01.455+0000 W NETWORK [ReplicationExecutor] getaddrinfo("2efcd498d07d") failed: Name or service not known
2016-12-31T16:33:01.457+0000 I NETWORK [ReplicationExecutor] getaddrinfo("2efcd498d07d") failed: Name or service not known
2016-12-31T16:33:01.823+0000 I NETWORK [ReplicationExecutor] getaddrinfo("2efcd498d07d") failed: Name or service not known
2016-12-31T16:33:01.892+0000 I NETWORK [ReplicationExecutor] getaddrinfo("2efcd498d07d") failed: Name or service not known
2016-12-31T16:33:01.904+0000 W NETWORK [ReplicationExecutor] getaddrinfo("2efcd498d07d") failed: Name or service not known
2016-12-31T16:33:02.291+0000 I NETWORK [ReplicationExecutor] getaddrinfo("2efcd498d07d") failed: Name or service not known
2016-12-31T16:33:02.291+0000 W NETWORK [ReplicationExecutor] getaddrinfo("2efcd498d07d") failed: Name or service not known
2016-12-31T16:33:02.351+0000 I NETWORK [ReplicationExecutor] getaddrinfo("2efcd498d07d") failed: Name or service not known
2016-12-31T16:33:02.645+0000 I NETWORK [ReplicationExecutor] getaddrinfo("2efcd498d07d") failed: Name or service not known
2016-12-31T16:33:02.645+0000 W REPL [ReplicationExecutor] Locally stored replica set configuration does not have a valid entry for the current node; waiting for reconfig or remote heartbeat; Got "NodeNotFound No host described in new configuration 1 for replica set rs0 maps to this node" while validating { _id: "rs0", version: 1, members: [ { _id: 0, host: "2efcd498d07d:27001", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 2.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "2efcd498d07d:27002", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "2efcd498d07d:27003", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
2016-12-31T16:33:02.645+0000 I REPL [ReplicationExecutor] New replica set config in use: { _id: "rs0", version: 1, members: [ { _id: 0, host: "2efcd498d07d:27001", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 2.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "2efcd498d07d:27002", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "2efcd498d07d:27003", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
2016-12-31T16:33:02.645+0000 I REPL [ReplicationExecutor] This node is not a member of the config
2016-12-31T16:33:02.645+0000 I REPL [ReplicationExecutor] transition to REMOVED
2016-12-31T16:33:02.645+0000 I NETWORK DBClientCursor::init call() failed
2016-12-31T16:33:02.646+0000 I REPL [ReplicationExecutor] Starting replication applier threads
2016-12-31T16:33:02.646+0000 I COMMAND [conn1] command admin.$cmd command: isMaster { isMaster: 1.0 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:256 locks:{} 1501ms
2016-12-31T16:33:02.648+0000 I NETWORK [conn1] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [127.0.0.1:38754]
2016-12-31T16:33:02.651+0000 E QUERY Error: error doing query: failed
at DBQuery._exec (src/mongo/shell/query.js:83:36)
at DBQuery.hasNext (src/mongo/shell/query.js:240:10)
at DBCollection.findOne (src/mongo/shell/collection.js:187:19)
at DB.runCommand (src/mongo/shell/db.js:58:41)
at DB.isMaster (src/mongo/shell/db.js:680:51)
at DB._getDefaultAuthenticationMechanism (src/mongo/shell/db.js:1232:27)
at DB._authOrThrow (src/mongo/shell/db.js:1257:33)
at (auth):6:8
at (auth):7:2 at src/mongo/shell/query.js:83
2016-12-31T16:33:02.655+0000 I NETWORK [ReplicationExecutor] getaddrinfo("2efcd498d07d") failed: Name or service not known
2016-12-31T16:33:02.655+0000 W REPL [ReplicationExecutor] Locally stored replica set configuration does not have a valid entry for the current node; waiting for reconfig or remote heartbeat; Got "NodeNotFound No host described in new configuration 1 for replica set rs0 maps to this node" while validating { _id: "rs0", version: 1, members: [ { _id: 0, host: "2efcd498d07d:27001", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 2.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "2efcd498d07d:27002", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "2efcd498d07d:27003", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
2016-12-31T16:33:02.655+0000 I REPL [ReplicationExecutor] New replica set config in use: { _id: "rs0", version: 1, members: [ { _id: 0, host: "2efcd498d07d:27001", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 2.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "2efcd498d07d:27002", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "2efcd498d07d:27003", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
2016-12-31T16:33:02.655+0000 I REPL [ReplicationExecutor] This node is not a member of the config
2016-12-31T16:33:02.655+0000 I REPL [ReplicationExecutor] transition to REMOVED
2016-12-31T16:33:02.655+0000 I REPL [ReplicationExecutor] Starting replication applier threads
exception: login failed
2016-12-31T16:33:02.743+0000 I NETWORK [ReplicationExecutor] getaddrinfo("2efcd498d07d") failed: Name or service not known
2016-12-31T16:33:02.743+0000 W REPL [ReplicationExecutor] Locally stored replica set configuration does not have a valid entry for the current node; waiting for reconfig or remote heartbeat; Got "NodeNotFound No host described in new configuration 1 for replica set rs0 maps to this node" while validating { _id: "rs0", version: 1, members: [ { _id: 0, host: "2efcd498d07d:27001", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 2.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "2efcd498d07d:27002", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "2efcd498d07d:27003", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
2016-12-31T16:33:02.743+0000 I REPL [ReplicationExecutor] New replica set config in use: { _id: "rs0", version: 1, members: [ { _id: 0, host: "2efcd498d07d:27001", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 2.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "2efcd498d07d:27002", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "2efcd498d07d:27003", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
2016-12-31T16:33:02.743+0000 I REPL [ReplicationExecutor] This node is not a member of the config
2016-12-31T16:33:02.743+0000 I REPL [ReplicationExecutor] transition to REMOVED
2016-12-31T16:33:02.743+0000 I REPL [ReplicationExecutor] Starting replication applier threads
trying: 27001 dev dev
2016-12-31T16:33:04.724+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:38794 #2 (1 connection now open)
2016-12-31T16:33:04.756+0000 I ACCESS [conn2] Successfully authenticated as principal dev on admin
admin
2016-12-31T16:33:04.763+0000 I NETWORK [conn2] end connection 127.0.0.1:38794 (0 connections now open)
2016-12-31T16:33:04.957+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:43842 #1 (1 connection now open)
admin
2016-12-31T16:33:04.960+0000 I NETWORK [conn1] end connection 127.0.0.1:43842 (0 connections now open)
2016-12-31T16:33:05.022+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:44756 #1 (1 connection now open)
admin
2016-12-31T16:33:05.025+0000 I NETWORK [conn1] end connection 127.0.0.1:44756 (0 connections now open)
CONFIGURING REPLICA SET: 380b49bee914
MongoDB shell version: 3.0.14
connecting to: 127.0.0.1:27001/admin
2016-12-31T16:33:05.107+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:38802 #3 (1 connection now open)
2016-12-31T16:33:05.142+0000 I ACCESS [conn3] Successfully authenticated as principal dev on admin
2016-12-31T16:33:05.143+0000 I REPL [conn3] replSetInitiate admin command received from client
[object Object]
2016-12-31T16:33:05.147+0000 I NETWORK [conn3] end connection 127.0.0.1:38802 (0 connections now open)
trying: 27002 dev dev
2016-12-31T16:33:05.270+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:43852 #2 (1 connection now open)
2016-12-31T16:33:05.311+0000 I ACCESS [conn2] Successfully authenticated as principal dev on admin
admin
2016-12-31T16:33:05.312+0000 I NETWORK [conn2] end connection 127.0.0.1:43852 (0 connections now open)
trying: 27003 dev dev
2016-12-31T16:33:05.409+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:44766 #2 (1 connection now open)
2016-12-31T16:33:05.450+0000 I ACCESS [conn2] Successfully authenticated as principal dev on admin
admin
2016-12-31T16:33:05.451+0000 I NETWORK [conn2] end connection 127.0.0.1:44766 (0 connections now open)
MongoDB shell version: 3.0.14
connecting to: 127.0.0.1:27001/admin
2016-12-31T16:33:05.542+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:38812 #4 (1 connection now open)
2016-12-31T16:33:05.606+0000 I ACCESS [conn4] Successfully authenticated as principal dev on admin
[object Object]
MongoDB shell version: 3.0.14
connecting to: 127.0.0.1:27002/admin
2016-12-31T16:33:05.722+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:43860 #3 (1 connection now open)
2016-12-31T16:33:05.767+0000 I ACCESS [conn3] Successfully authenticated as principal dev on admin
[object Object]
MongoDB shell version: 3.0.14
connecting to: 127.0.0.1:27003/admin
2016-12-31T16:33:05.896+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:44776 #3 (1 connection now open)
2016-12-31T16:33:05.929+0000 I ACCESS [conn3] Successfully authenticated as principal dev on admin
[object Object]
REPLICA SET ONLINE