Please give more detailed information.
from mongoshake.
@vinllen
[root@bogon mongo-shake]# uname -a
Linux bogon 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@bogon mongo-shake]#
Source as follows:
risk_sfc:PRIMARY> db.version()
2.4.10
risk_sfc:PRIMARY>
Destination as follows:
TokuMX mongo shell v1.5.0-mongodb-2.4.10
connecting to: 172.23.240.29:27019/test
db.version()
2.4.10
mongo-shake is built from the newest code on git.
Config file as follows:
[root@bogon mongo-shake]# cat conf/collector.conf
mongo_urls = mongodb://172.23.240.29:27018
collector.id = mongoshake
checkpoint.interval = 5000
http_profile = 9100
system_profile = 9200
log_level = info
log_file = collector.log
log_buffer = true
filter.namespace.black =
filter.namespace.white =
oplog.gids =
shard_key = auto
worker = 8
worker.batch_queue_size = 64
worker.oplog_compressor = none
tunnel = direct
tunnel.address = mongodb://172.23.240.29:27019
context.storage = database
context.address = ckpt_default
context.start_position = 2000-01-01T00:00:01Z
master_quorum = false
replayer.executor = 1
replayer.executor.upsert = true
replayer.conflict_write_to = none
replayer.durable = true
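One detail worth checking in this config: the collector log below reports `ContextStartPosition` as `946656001`, which is 2000-01-01T00:00:01 in UTC+8 (CST), not in UTC, suggesting the `Z`-suffixed `context.start_position` may be parsed in the host's local timezone. A quick sketch to confirm the arithmetic:

```python
from datetime import datetime, timezone, timedelta

# The collector log reports ContextStartPosition 946656001 for
# context.start_position = 2000-01-01T00:00:01Z.
utc_epoch = int(datetime(2000, 1, 1, 0, 0, 1, tzinfo=timezone.utc).timestamp())

cst = timezone(timedelta(hours=8))  # the host clock is CST (UTC+8)
cst_epoch = int(datetime(2000, 1, 1, 0, 0, 1, tzinfo=cst).timestamp())

print(utc_epoch)  # 946684801
print(cst_epoch)  # 946656001 -- matches the logged value
```

The two differ by exactly 28800 seconds (8 hours), which is consistent with the timestamp being interpreted in CST despite the `Z` suffix.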
Log as follows:
[root@bogon mongo-shake]# vi logs/collector.log
[2018/08/08 10:36:44 CST] [WARN] [common.Welcome:172]
\ \ _ ______ |
\ \ / _-=O'/|O'/|
\ Here we go !!! _\ / | / )
/ / '/-== _/|/=-| -GM
/ / * \ | |
/ / (o)
[2018/08/08 10:36:44 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27018 successfully
[2018/08/08 10:36:44 CST] [INFO] [dbpool.(*MongoConn).HasUniqueIndex:110] Found unique index id on mongoshake.ckpt_default in auto shard mode
[2018/08/08 10:36:44 CST] [INFO] [dbpool.(*MongoConn).Close:46] Close session with mongodb://172.23.240.29:27018
[2018/08/08 10:36:44 CST] [INFO] [collector.(*ReplicationCoordinator).Run:42] Collector startup. shard_by[collection] gids[]
[2018/08/08 10:36:44 CST] [INFO] [collector.(*ReplicationCoordinator).Run:46] Collector configuration {"MongoUrls":["mongodb://172.23.240.29:27018"],"CollectorId":"mongoshake","CheckpointInterval":5000,"HTTPListenPort":9100,"SystemProfile":9200,"LogLevel":"info","LogFileName":"collector.log","LogBuffer":true,"OplogGIDS":"","ShardKey":"collection","WorkerNum":8,"WorkerOplogCompressor":"none","WorkerBatchQueueSize":64,"Tunnel":"direct","TunnelAddress":["mongodb://172.23.240.29:27019"],"MasterQuorum":false,"ContextStorage":"database","ContextStorageUrl":"mongodb://172.23.240.29:27018","ContextAddress":"ckpt_default","ContextStartPosition":946656001,"FilterNamespaceBlack":[],"FilterNamespaceWhite":[],"ReplayerDMLOnly":true,"ReplayerExecutor":1,"ReplayerExecutorUpsert":true,"ReplayerExecutorInsertOnDupUpdate":false,"ReplayerCollisionEnable":false,"ReplayerConflictWriteTo":"none","ReplayerDurable":true}
[2018/08/08 10:36:44 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27019 successfully
[2018/08/08 10:36:44 CST] [INFO] [collector.(*Worker).startWorker:110] Collector-worker-0 start working with jobs batch queue. buffer capacity 64
[2018/08/08 10:36:44 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27019 successfully
[2018/08/08 10:36:44 CST] [INFO] [collector.(*Worker).startWorker:110] Collector-worker-1 start working with jobs batch queue. buffer capacity 64
[2018/08/08 10:36:44 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27019 successfully
[2018/08/08 10:36:44 CST] [INFO] [collector.(*Worker).startWorker:110] Collector-worker-2 start working with jobs batch queue. buffer capacity 64
[2018/08/08 10:36:44 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27019 successfully
[2018/08/08 10:36:44 CST] [INFO] [collector.(*Worker).startWorker:110] Collector-worker-3 start working with jobs batch queue. buffer capacity 64
[2018/08/08 10:36:44 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27019 successfully
[2018/08/08 10:36:44 CST] [INFO] [collector.(*Worker).startWorker:110] Collector-worker-4 start working with jobs batch queue. buffer capacity 64
[2018/08/08 10:36:44 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27019 successfully
[2018/08/08 10:36:44 CST] [INFO] [collector.(*Worker).startWorker:110] Collector-worker-5 start working with jobs batch queue. buffer capacity 64
[2018/08/08 10:36:44 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27019 successfully
[2018/08/08 10:36:44 CST] [INFO] [collector.(*Worker).startWorker:110] Collector-worker-6 start working with jobs batch queue. buffer capacity 64
[2018/08/08 10:36:44 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27019 successfully
[2018/08/08 10:36:44 CST] [INFO] [collector.(*Worker).startWorker:110] Collector-worker-7 start working with jobs batch queue. buffer capacity 64
[2018/08/08 10:36:44 CST] [INFO] [collector.(*OplogSyncer).start:129] Poll oplog syncer start. ckpt_interval[5000ms], gid[], shard_key[collection]
[2018/08/08 10:36:44 CST] [INFO] [collector.(*OplogSyncer).newCheckpointManager:19] Oplog sync create checkpoint manager with [database] [ckpt_default]
[2018/08/08 10:36:44 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27018 successfully
[2018/08/08 10:36:44 CST] [INFO] [ckpt.(*MongoCheckpoint).Get:144] Load exist checkpoint. content &{risk_sfc 4065856564857143296}
[2018/08/08 10:36:44 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27018 successfully
[2018/08/08 10:36:49 CST] [INFO] [common.(*ReplicationMetric).startup.func1:137] [name=risk_sfc, filter=0, get=0, consume=0, apply=0, failed_times=0, success=0, tps=0, ckpt_times=0, retransimit_times=0, tunnel_traffic=0B, lsn_ckpt={0,1970-01-01 08:00:00}, lsn_ack={0,1970-01-01 08:00:00}]
[2018/08/08 10:36:54 CST] [INFO] [common.(*ReplicationMetric).startup.func1:137] [name=risk_sfc, filter=0, get=0, consume=0, apply=0, failed_times=0, success=0, tps=0, ckpt_times=0, retransimit_times=0, tunnel_traffic=0B, lsn_ckpt={0,1970-01-01 08:00:00}, lsn_ack={0,1970-01-01 08:00:00}]
[... the same metric line (all counters 0) repeats every 5 seconds from 10:36:59 through 10:38:04 ...]
[2018/08/08 10:38:09 CST] [INFO] [common.(*ReplicationMetric).startup.func1:137] [name=risk_sfc, filter=0, get=0, consume=0, apply=0, failed_times=0, success=0, tps=0, ckpt_times=0, retransimit_times=0, tunnel_traffic=0B, lsn_ckpt={0,1970-01-01 08:00:00}, lsn_ack={0,1970-01-01 08:00:00}]
[2018/08/08 10:36:44 CST] [INFO] [ckpt.(*MongoCheckpoint).Get:144] Load exist checkpoint. content &{risk_sfc 4065856564857143296}
It looks like a checkpoint already exists, so all oplogs before that timestamp are filtered.
You can try removing the checkpoint collection, which by default is located in the mongoshake
database, and then try again.
Please let me know whether this solves your problem.
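The checkpoint value in the "Load exist checkpoint" log line is a packed BSON timestamp: the high 32 bits are seconds since the epoch and the low 32 bits are the increment. A quick sketch to decode the logged value:

```python
from datetime import datetime, timezone

# Value from the "Load exist checkpoint. content &{risk_sfc ...}" log line.
ckpt = 4065856564857143296

# BSON Timestamp packing: high 32 bits = seconds, low 32 bits = increment.
seconds = ckpt >> 32
increment = ckpt & 0xFFFFFFFF

print(seconds, increment)  # 946656001 0
print(datetime.fromtimestamp(seconds, tz=timezone.utc).isoformat())
# 1999-12-31T16:00:01+00:00, i.e. 2000-01-01 00:00:01 CST
```

Decoded, the stored checkpoint is Timestamp(946656001, 0), which is exactly the configured start position rather than a later resume point.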
@vinllen
I did as you said, but it still does not sync data.
Log as follows:
[root@bogon mongo-shake]# cat logs/collector.log
[2018/08/10 12:24:05 CST] [WARN] [common.Welcome:172]
\ \ _ ______ |
\ \ / _-=O'/|O'/|
\ Here we go !!! _\ / | / )
/ / '/-== _/|/=-| -GM
/ / * \ | |
/ / (o)
[2018/08/10 12:24:05 CST] [INFO] [main.main:66] yang test .........1.... xxxxxxxxxxxxxxxx
[2018/08/10 12:24:05 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27018 successfully
[2018/08/10 12:24:05 CST] [INFO] [dbpool.(*MongoConn).HasUniqueIndex:110] Found unique index id on yyz2.test_collection in auto shard mode
[2018/08/10 12:24:05 CST] [INFO] [dbpool.(*MongoConn).Close:46] Close session with mongodb://172.23.240.29:27018
[2018/08/10 12:24:05 CST] [INFO] [collector.(*ReplicationCoordinator).Run:42] Collector startup. shard_by[collection] gids[]
[2018/08/10 12:24:05 CST] [INFO] [collector.(*ReplicationCoordinator).Run:46] Collector configuration {"MongoUrls":["mongodb://172.23.240.29:27018"],"CollectorId":"mongoshake","CheckpointInterval":5000,"HTTPListenPort":9100,"SystemProfile":9200,"LogLevel":"info","LogFileName":"collector.log","LogBuffer":true,"OplogGIDS":"","ShardKey":"collection","WorkerNum":8,"WorkerOplogCompressor":"none","WorkerBatchQueueSize":64,"Tunnel":"direct","TunnelAddress":["mongodb://172.23.240.29:27019"],"MasterQuorum":false,"ContextStorage":"database","ContextStorageUrl":"mongodb://172.23.240.29:27018","ContextAddress":"ckpt_default","ContextStartPosition":946656001,"FilterNamespaceBlack":[],"FilterNamespaceWhite":[],"ReplayerDMLOnly":true,"ReplayerExecutor":1,"ReplayerExecutorUpsert":true,"ReplayerExecutorInsertOnDupUpdate":false,"ReplayerCollisionEnable":false,"ReplayerConflictWriteTo":"none","ReplayerDurable":true}
[2018/08/10 12:24:05 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27019 successfully
[2018/08/10 12:24:05 CST] [INFO] [collector.(*Worker).startWorker:110] Collector-worker-0 start working with jobs batch queue. buffer capacity 64
[2018/08/10 12:24:05 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27019 successfully
[2018/08/10 12:24:05 CST] [INFO] [collector.(*Worker).startWorker:110] Collector-worker-1 start working with jobs batch queue. buffer capacity 64
[2018/08/10 12:24:05 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27019 successfully
[2018/08/10 12:24:05 CST] [INFO] [collector.(*Worker).startWorker:110] Collector-worker-2 start working with jobs batch queue. buffer capacity 64
[2018/08/10 12:24:05 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27019 successfully
[2018/08/10 12:24:05 CST] [INFO] [collector.(*Worker).startWorker:110] Collector-worker-3 start working with jobs batch queue. buffer capacity 64
[2018/08/10 12:24:05 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27019 successfully
[2018/08/10 12:24:05 CST] [INFO] [collector.(*Worker).startWorker:110] Collector-worker-4 start working with jobs batch queue. buffer capacity 64
[2018/08/10 12:24:05 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27019 successfully
[2018/08/10 12:24:05 CST] [INFO] [collector.(*Worker).startWorker:110] Collector-worker-5 start working with jobs batch queue. buffer capacity 64
[2018/08/10 12:24:05 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27019 successfully
[2018/08/10 12:24:05 CST] [INFO] [collector.(*Worker).startWorker:110] Collector-worker-6 start working with jobs batch queue. buffer capacity 64
[2018/08/10 12:24:05 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27019 successfully
[2018/08/10 12:24:05 CST] [INFO] [collector.(*Worker).startWorker:110] Collector-worker-7 start working with jobs batch queue. buffer capacity 64
[2018/08/10 12:24:05 CST] [INFO] [collector.(*OplogSyncer).start:129] Poll oplog syncer start. ckpt_interval[5000ms], gid[], shard_key[collection]
[2018/08/10 12:24:05 CST] [INFO] [collector.(*OplogSyncer).newCheckpointManager:19] Oplog sync create checkpoint manager with [database] [ckpt_default]
[2018/08/10 12:24:05 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27018 successfully
[2018/08/10 12:24:05 CST] [INFO] [ckpt.(*MongoCheckpoint).Get:152] Regenerate checkpoint. content &{risk_sfc 4065856564857143296}
[2018/08/10 12:24:05 CST] [INFO] [dbpool.NewMongoConn:41] New session to mongodb://172.23.240.29:27018 successfully
[2018/08/10 12:24:10 CST] [INFO] [common.(*ReplicationMetric).startup.func1:137] [name=risk_sfc, filter=0, get=0, consume=0, apply=0, failed_times=0, success=0, tps=0, ckpt_times=0, retransimit_times=0, tunnel_traffic=0B, lsn_ckpt={0,1970-01-01 08:00:00}, lsn_ack={0,1970-01-01 08:00:00}]
[... the same metric line (all counters 0) repeats every 5 seconds from 12:24:15 through 12:24:35 ...]
[2018/08/10 12:24:40 CST] [INFO] [common.(*ReplicationMetric).startup.func1:137] [name=risk_sfc, filter=0, get=0, consume=0, apply=0, failed_times=0, success=0, tps=0, ckpt_times=0, retransimit_times=0, tunnel_traffic=0B, lsn_ckpt={0,1970-01-01 08:00:00}, lsn_ack={0,1970-01-01 08:00:00}]
Is the oplog in the local.oplog.rs collection? Could you paste some oplog entries?
@vinllen
[root@bogon mongodb-test]# ./bin/mongo 172.23.240.29:27018
MongoDB shell version v3.6.6
connecting to: mongodb://172.23.240.29:27018/test
MongoDB server version: 2.4.10
WARNING: shell and server versions do not match
risk_sfc:PRIMARY> use local
switched to db local
risk_sfc:PRIMARY> show collections
oplog.refs
oplog.rs
replInfo
startup_log
system.indexes
system.replset
system.version
risk_sfc:PRIMARY> db.oplog.rs.find()
{ "_id" : BinData(0,"AAAAAAAAAAEAAAAAAAAAAA=="), "ts" : ISODate("2018-08-07T13:14:37.519Z"), "h" : NumberLong(0), "a" : true, "ops" : [ { "op" : "n", "o" : { "msg" : "initiating set" } } ] }
{ "_id" : BinData(0,"AAAAAAAAAAIAAAAAAAAAAA=="), "ts" : ISODate("2018-08-07T13:18:21.847Z"), "h" : NumberLong("26072014331399"), "a" : true, "ops" : [ { "op" : "c", "ns" : "mongoshake.$cmd", "o" : { "create" : "ckpt_default" } }, { "op" : "i", "ns" : "mongoshake.ckpt_default", "o" : { "_id" : ObjectId("5b699c1df4aeb042398b46b1"), "name" : "risk_sfc", "ckpt" : Timestamp(946656001, 0) } } ] }
{ "_id" : BinData(0,"AAAAAAAAAAIAAAAAAAAAAQ=="), "ts" : ISODate("2018-08-07T13:18:44.464Z"), "h" : NumberLong("58088447930741461"), "a" : true, "ops" : [ { "op" : "c", "ns" : "yyz2.$cmd", "o" : { "create" : "test_collection" } }, { "op" : "i", "ns" : "yyz2.test_collection", "o" : { "_id" : ObjectId("5b699c344b6bb10da985a323"), "x" : 1111111111111111 } } ] }
{ "_id" : BinData(0,"AAAAAAAAAAIAAAAAAAAAAg=="), "ts" : ISODate("2018-08-07T13:34:46.468Z"), "h" : NumberLong("235791097825442291"), "a" : true, "ops" : [ { "op" : "n", "o" : { "comment" : "keepOplogAlive" } } ] }
{ "_id" : BinData(0,"AAAAAAAAAAIAAAAAAAAAAw=="), "ts" : ISODate("2018-08-07T13:44:46.468Z"), "h" : NumberLong("8597966865433806765"), "a" : true, "ops" : [ { "op" : "n", "o" : { "comment" : "keepOplogAlive" } } ] }
{ "_id" : BinData(0,"AAAAAAAAAAIAAAAAAAAABA=="), "ts" : ISODate("2018-08-07T13:54:46.468Z"), "h" : NumberLong("-48113117375441797"), "a" : true, "ops" : [ { "op" : "n", "o" : { "comment" : "keepOplogAlive" } } ] }
{ "_id" : BinData(0,"AAAAAAAAAAIAAAAAAAAABQ=="), "ts" : ISODate("2018-08-07T14:04:46.468Z"), "h" : NumberLong("3532578119210097733"), "a" : true, "ops" : [ { "op" : "n", "o" : { "comment" : "keepOplogAlive" } } ] }
{ "_id" : BinData(0,"AAAAAAAAAAIAAAAAAAAABg=="), "ts" : ISODate("2018-08-07T14:14:46.469Z"), "h" : NumberLong("8738522152690532948"), "a" : true, "ops" : [ { "op" : "n", "o" : { "comment" : "keepOplogAlive" } } ] }
{ "_id" : BinData(0,"AAAAAAAAAAIAAAAAAAAABw=="), "ts" : ISODate("2018-08-07T14:24:46.469Z"), "h" : NumberLong("-626137649678009711"), "a" : true, "ops" : [ { "op" : "n", "o" : { "comment" : "keepOplogAlive" } } ] }
{ "_id" : BinData(0,"AAAAAAAAAAIAAAAAAAAACA=="), "ts" : ISODate("2018-08-07T14:34:46.469Z"), "h" : NumberLong("7544029841090566392"), "a" : true, "ops" : [ { "op" : "n", "o" : { "comment" : "keepOplogAlive" } } ] }
{ "_id" : BinData(0,"AAAAAAAAAAIAAAAAAAAACQ=="), "ts" : ISODate("2018-08-07T14:44:46.469Z"), "h" : NumberLong("-4429368968607697219"), "a" : true, "ops" : [ { "op" : "n", "o" : { "comment" : "keepOplogAlive" } } ] }
{ "_id" : BinData(0,"AAAAAAAAAAIAAAAAAAAACg=="), "ts" : ISODate("2018-08-07T14:54:46.469Z"), "h" : NumberLong("4803412417381077820"), "a" : true, "ops" : [ { "op" : "n", "o" : { "comment" : "keepOplogAlive" } } ] }
{ "_id" : BinData(0,"AAAAAAAAAAIAAAAAAAAACw=="), "ts" : ISODate("2018-08-07T15:04:46.469Z"), "h" : NumberLong("-1912083171756762167"), "a" : true, "ops" : [ { "op" : "n", "o" : { "comment" : "keepOplogAlive" } } ] }
{ "_id" : BinData(0,"AAAAAAAAAAIAAAAAAAAADA=="), "ts" : ISODate("2018-08-07T15:14:46.469Z"), "h" : NumberLong("2988683596730147360"), "a" : true, "ops" : [ { "op" : "n", "o" : { "comment" : "keepOplogAlive" } } ] }
{ "_id" : BinData(0,"AAAAAAAAAAIAAAAAAAAADQ=="), "ts" : ISODate("2018-08-07T15:24:46.470Z"), "h" : NumberLong("-3476214618966692666"), "a" : true, "ops" : [ { "op" : "n", "o" : { "comment" : "keepOplogAlive" } } ] }
{ "_id" : BinData(0,"AAAAAAAAAAIAAAAAAAAADg=="), "ts" : ISODate("2018-08-07T15:34:46.470Z"), "h" : NumberLong("6102580591340581528"), "a" : true, "ops" : [ { "op" : "n", "o" : { "comment" : "keepOplogAlive" } } ] }
{ "_id" : BinData(0,"AAAAAAAAAAIAAAAAAAAADw=="), "ts" : ISODate("2018-08-07T15:44:46.470Z"), "h" : NumberLong("-4803379336300808146"), "a" : true, "ops" : [ { "op" : "n", "o" : { "comment" : "keepOplogAlive" } } ] }
{ "_id" : BinData(0,"AAAAAAAAAAIAAAAAAAAAEA=="), "ts" : ISODate("2018-08-07T15:54:46.470Z"), "h" : NumberLong("1985806881814066128"), "a" : true, "ops" : [ { "op" : "n", "o" : { "comment" : "keepOplogAlive" } } ] }
{ "_id" : BinData(0,"AAAAAAAAAAIAAAAAAAAAEQ=="), "ts" : ISODate("2018-08-07T16:04:46.470Z"), "h" : NumberLong("-4826625818183050794"), "a" : true, "ops" : [ { "op" : "n", "o" : { "comment" : "keepOplogAlive" } } ] }
{ "_id" : BinData(0,"AAAAAAAAAAIAAAAAAAAAEg=="), "ts" : ISODate("2018-08-07T16:14:46.470Z"), "h" : NumberLong("5556123951208743880"), "a" : true, "ops" : [ { "op" : "n", "o" : { "comment" : "keepOplogAlive" } } ] }
Type "it" for more
risk_sfc:PRIMARY>
The oplog format between 2.x and 3.0+ is quite different, so we won't support this version.
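The pasted shell output shows the difference: each TokuMX local.oplog.rs document batches multiple operations in an `ops` array under an ISODate `ts` and a BinData `_id`, while a 3.0+ style oplog has one flat document per operation with `op`/`ns`/`o` at the top level and a BSON Timestamp `ts`. Purely as an illustration (this is not MongoShake code), an adapter would have to unpack each batch roughly like this:

```python
# Illustrative only: unpack a TokuMX-style oplog batch (an "ops" array per
# document, as in the shell output above) into flat per-operation entries
# shaped like a 3.0+ oplog. Field names follow the pasted documents.

def flatten_tokumx_entry(entry):
    """Yield one flat oplog-like dict per operation in the batch."""
    for op in entry.get("ops", []):
        flat = {"ts": entry["ts"], "h": entry.get("h")}
        flat.update(op)  # copies op / ns / o (and o2, if present)
        yield flat

batch = {
    "ts": "2018-08-07T13:18:44.464Z",
    "h": 58088447930741461,
    "a": True,
    "ops": [
        {"op": "c", "ns": "yyz2.$cmd", "o": {"create": "test_collection"}},
        {"op": "i", "ns": "yyz2.test_collection", "o": {"x": 1111111111111111}},
    ],
}

flat_ops = list(flatten_tokumx_entry(batch))
print(len(flat_ops))                          # 2
print(flat_ops[1]["op"], flat_ops[1]["ns"])   # i yyz2.test_collection
```

Even with such unpacking, the `ts` type mismatch (ISODate vs. BSON Timestamp) and the BinData `_id` cursoring would still differ from what a 3.0+ oplog reader expects, which is why 2.x/TokuMX sources are out of scope.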