
databus's People

Contributors

andyqzb, buptyzc, chavdar, gitter-badger, groelofs, jordanorelli, peoplemerge, phanindraganti, programmax, psrinivasulu, srinipunuru, xiangyuf, yaojingguo


databus's Issues

Issue with Cluster Client when a new client comes up

Hi,

I am using Cluster Clients with 6 partitions. When the first client is started, it receives events for all partitions. When I start a second client, client one gets shutdown requests for 3 partitions, and the relay pullers for those partitions are shut down. When I check in ZooKeeper, it still shows 3 partitions online for the first client.

However, when events are produced for the partitions assigned to the first client, it does not receive them. The second client is not receiving these events either.

This works sometimes and fails at other times. Sometimes when the second client comes up, the partitions are divided cleanly and both clients receive events for their partitions. Many times, however, I see the erroneous behavior described above.

Below are the logs from the first client that appear after the second client is started.

What I have observed is that the auto-rebalance log lines are absent in the runs where it does not work; only the "shutdown requested" log lines appear.

01:09:56.910 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.helix.manager.zk.CallbackHandler.invoke 143 o.a.helix.manager.zk.CallbackHandler - 33 START:INVOKE /pz_mysql_cluster/CONFIGS/PARTICIPANT listener:org.apache.helix.controller.GenericHelixController
01:09:56.910 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.helix.manager.zk.CallbackHandler.subscribeChildChange 236 o.a.helix.manager.zk.CallbackHandler - controller_330929239 subscribes child-change. path: /pz_mysql_cluster/CONFIGS/PARTICIPANT, listener: org.apache.helix.controller.GenericHelixController@1c39bf12
01:09:56.921 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.GenericHelixController.onConfigChange 414 o.a.h.c.GenericHelixController - START: GenericClusterController.onConfigChange()
01:09:56.925 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.stages.ReadClusterDataStage.process 41 o.a.h.c.stages.ReadClusterDataStage - START ReadClusterDataStage.process()
01:09:56.936 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.stages.ReadClusterDataStage.process 70 o.a.h.c.stages.ReadClusterDataStage - END ReadClusterDataStage.process(). took: 11 ms
01:09:56.937 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.s.BestPossibleStateCalcStage.process 48 o.a.h.c.s.BestPossibleStateCalcStage - START BestPossibleStateCalcStage.process()
01:09:56.937 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.rebalancer.AutoRebalancer.computeNewIdealState 131 o.a.h.c.rebalancer.AutoRebalancer - currentMapping: {default-resource_5={pg-stage1-11125-416586186=ONLINE}, default-resource_0={pg-stage1-11125-416586186=ONLINE}, default-resource_4={pg-stage1-11125-416586186=ONLINE}, default-resource_3={pg-stage1-11125-416586186=ONLINE}, default-resource_2={pg-stage1-11125-416586186=ONLINE}, default-resource_1={pg-stage1-11125-416586186=ONLINE}}
01:09:56.937 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.rebalancer.AutoRebalancer.computeNewIdealState 132 o.a.h.c.rebalancer.AutoRebalancer - stateCountMap: {ONLINE=1}
01:09:56.938 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.rebalancer.AutoRebalancer.computeNewIdealState 133 o.a.h.c.rebalancer.AutoRebalancer - liveNodes: [pg-stage1-11125-416586186]
01:09:56.938 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.rebalancer.AutoRebalancer.computeNewIdealState 134 o.a.h.c.rebalancer.AutoRebalancer - allNodes: [pg-stage1-11125-416586186, pg-stage2-11125-207100785]
01:09:56.938 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.rebalancer.AutoRebalancer.computeNewIdealState 135 o.a.h.c.rebalancer.AutoRebalancer - maxPartition: 2147483647
01:09:56.938 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.s.AutoRebalanceStrategy.computePartitionAssignment 127 o.a.h.c.s.AutoRebalanceStrategy - orphan = []
01:09:56.939 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.rebalancer.AutoRebalancer.computeNewIdealState 146 o.a.h.c.rebalancer.AutoRebalancer - newMapping: default-resource, {}{default-resource_0={pg-stage1-11125-416586186=ONLINE}, default-resource_1={pg-stage1-11125-416586186=ONLINE}, default-resource_2={pg-stage1-11125-416586186=ONLINE}, default-resource_3={pg-stage1-11125-416586186=ONLINE}, default-resource_4={pg-stage1-11125-416586186=ONLINE}, default-resource_5={pg-stage1-11125-416586186=ONLINE}}{default-resource_0=[pg-stage1-11125-416586186], default-resource_1=[pg-stage1-11125-416586186], default-resource_2=[pg-stage1-11125-416586186], default-resource_3=[pg-stage1-11125-416586186], default-resource_4=[pg-stage1-11125-416586186], default-resource_5=[pg-stage1-11125-416586186]}
01:09:56.939 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.s.BestPossibleStateCalcStage.process 65 o.a.h.c.s.BestPossibleStateCalcStage - END BestPossibleStateCalcStage.process(). took: 2 ms
01:09:56.940 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.stages.TaskAssignmentStage.process 47 o.a.h.c.stages.TaskAssignmentStage - START TaskAssignmentStage.process()
01:09:56.940 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.stages.TaskAssignmentStage.process 78 o.a.h.c.stages.TaskAssignmentStage - END TaskAssignmentStage.process(). took: 0 ms
01:09:56.940 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.GenericHelixController.onConfigChange 420 o.a.h.c.GenericHelixController - END: GenericClusterController.onConfigChange()
01:09:56.940 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.helix.manager.zk.CallbackHandler.invoke 227 o.a.helix.manager.zk.CallbackHandler - 33 END:INVOKE /pz_mysql_cluster/CONFIGS/PARTICIPANT listener:org.apache.helix.controller.GenericHelixController Took: 30ms
01:09:57.229 [ZkClient-EventThread-24-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.helix.manager.zk.CallbackHandler.invoke 143 o.a.helix.manager.zk.CallbackHandler - 24 START:INVOKE /pz_mysql_cluster/LIVEINSTANCES listener:com.linkedin.databus.cluster.DatabusCluster.DatabusHelixWatcher
01:09:57.229 [ZkClient-EventThread-24-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.helix.manager.zk.CallbackHandler.subscribeChildChange 236 o.a.helix.manager.zk.CallbackHandler - watcher_pz_mysql_cluster_330929239 subscribes child-change. path: /pz_mysql_cluster/LIVEINSTANCES, listener: com.linkedin.databus.cluster.DatabusCluster$DatabusHelixWatcher@590eb535
01:09:57.229 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.helix.manager.zk.CallbackHandler.invoke 143 o.a.helix.manager.zk.CallbackHandler - 33 START:INVOKE /pz_mysql_cluster/LIVEINSTANCES listener:org.apache.helix.controller.GenericHelixController
01:09:57.230 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.helix.manager.zk.CallbackHandler.subscribeChildChange 236 o.a.helix.manager.zk.CallbackHandler - controller_330929239 subscribes child-change. path: /pz_mysql_cluster/LIVEINSTANCES, listener: org.apache.helix.controller.GenericHelixController@1c39bf12
01:09:57.235 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.GenericHelixController.onLiveInstanceChange 355 o.a.h.c.GenericHelixController - START: Generic GenericClusterController.onLiveInstanceChange()
01:09:57.235 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.helix.manager.zk.CallbackHandler.invoke 143 o.a.helix.manager.zk.CallbackHandler - 33 START:INVOKE /pz_mysql_cluster/INSTANCES/pg-stage2-11125-207100785/CURRENTSTATES/346cfe8bf7b0024 listener:org.apache.helix.controller.GenericHelixController
01:09:57.235 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.helix.manager.zk.CallbackHandler.subscribeChildChange 236 o.a.helix.manager.zk.CallbackHandler - controller_330929239 subscribes child-change. path: /pz_mysql_cluster/INSTANCES/pg-stage2-11125-207100785/CURRENTSTATES/346cfe8bf7b0024, listener: org.apache.helix.controller.GenericHelixController@1c39bf12
01:09:57.235 [ZkClient-EventThread-24-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.helix.manager.zk.CallbackHandler.invoke 227 o.a.helix.manager.zk.CallbackHandler - 24 END:INVOKE /pz_mysql_cluster/LIVEINSTANCES listener:com.linkedin.databus.cluster.DatabusCluster.DatabusHelixWatcher Took: 6ms
01:09:57.237 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.GenericHelixController.onStateChange 313 o.a.h.c.GenericHelixController - START: GenericClusterController.onStateChange()
01:09:57.238 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.stages.ReadClusterDataStage.process 41 o.a.h.c.stages.ReadClusterDataStage - START ReadClusterDataStage.process()
01:09:57.248 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.stages.ReadClusterDataStage.process 70 o.a.h.c.stages.ReadClusterDataStage - END ReadClusterDataStage.process(). took: 10 ms
01:09:57.248 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.s.BestPossibleStateCalcStage.process 48 o.a.h.c.s.BestPossibleStateCalcStage - START BestPossibleStateCalcStage.process()
01:09:57.249 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.rebalancer.AutoRebalancer.computeNewIdealState 131 o.a.h.c.rebalancer.AutoRebalancer - currentMapping: {default-resource_5={pg-stage1-11125-416586186=ONLINE}, default-resource_0={pg-stage1-11125-416586186=ONLINE}, default-resource_4={pg-stage1-11125-416586186=ONLINE}, default-resource_3={pg-stage1-11125-416586186=ONLINE}, default-resource_2={pg-stage1-11125-416586186=ONLINE}, default-resource_1={pg-stage1-11125-416586186=ONLINE}}
01:09:57.249 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.rebalancer.AutoRebalancer.computeNewIdealState 132 o.a.h.c.rebalancer.AutoRebalancer - stateCountMap: {ONLINE=1}
01:09:57.249 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.rebalancer.AutoRebalancer.computeNewIdealState 133 o.a.h.c.rebalancer.AutoRebalancer - liveNodes: [pg-stage1-11125-416586186, pg-stage2-11125-207100785]
01:09:57.249 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.rebalancer.AutoRebalancer.computeNewIdealState 134 o.a.h.c.rebalancer.AutoRebalancer - allNodes: [pg-stage1-11125-416586186, pg-stage2-11125-207100785]
01:09:57.249 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.rebalancer.AutoRebalancer.computeNewIdealState 135 o.a.h.c.rebalancer.AutoRebalancer - maxPartition: 2147483647
01:09:57.250 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.s.AutoRebalanceStrategy.computePartitionAssignment 127 o.a.h.c.s.AutoRebalanceStrategy - orphan = []
01:09:57.250 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.rebalancer.AutoRebalancer.computeNewIdealState 146 o.a.h.c.rebalancer.AutoRebalancer - newMapping: default-resource, {}{default-resource_0={pg-stage1-11125-416586186=ONLINE}, default-resource_1={pg-stage2-11125-207100785=ONLINE}, default-resource_2={pg-stage1-11125-416586186=ONLINE}, default-resource_3={pg-stage2-11125-207100785=ONLINE}, default-resource_4={pg-stage1-11125-416586186=ONLINE}, default-resource_5={pg-stage2-11125-207100785=ONLINE}}{default-resource_0=[pg-stage1-11125-416586186], default-resource_1=[pg-stage2-11125-207100785], default-resource_2=[pg-stage1-11125-416586186], default-resource_3=[pg-stage2-11125-207100785], default-resource_4=[pg-stage1-11125-416586186], default-resource_5=[pg-stage2-11125-207100785]}
01:09:57.251 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.s.BestPossibleStateCalcStage.process 65 o.a.h.c.s.BestPossibleStateCalcStage - END BestPossibleStateCalcStage.process(). took: 3 ms
01:09:57.252 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.stages.TaskAssignmentStage.process 47 o.a.h.c.stages.TaskAssignmentStage - START TaskAssignmentStage.process()
01:09:57.252 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.stages.TaskAssignmentStage.sendMessages 134 o.a.h.c.stages.TaskAssignmentStage - Sending Message 6c1c138c-b333-4f81-921f-96321026748d to pg-stage1-11125-416586186 transit default-resource_3|[] from:ONLINE to:OFFLINE
01:09:57.252 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.stages.TaskAssignmentStage.sendMessages 134 o.a.h.c.stages.TaskAssignmentStage - Sending Message dad1b33a-0f61-48fe-b770-72639e9439e5 to pg-stage1-11125-416586186 transit default-resource_4|[] from:ONLINE to:OFFLINE
01:09:57.252 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.stages.TaskAssignmentStage.sendMessages 134 o.a.h.c.stages.TaskAssignmentStage - Sending Message f1878c09-bcc2-4d3a-a515-54b03f81094c to pg-stage1-11125-416586186 transit default-resource_5|[] from:ONLINE to:OFFLINE
01:09:57.268 [ZkClient-EventThread-28-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.helix.manager.zk.CallbackHandler.invoke 143 o.a.helix.manager.zk.CallbackHandler - 28 START:INVOKE /pz_mysql_cluster/INSTANCES/pg-stage1-11125-416586186/MESSAGES listener:org.apache.helix.messaging.handling.HelixTaskExecutor
01:09:57.268 [ZkClient-EventThread-28-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.helix.manager.zk.CallbackHandler.subscribeChildChange 236 o.a.helix.manager.zk.CallbackHandler - pg-stage1-11125-416586186 subscribes child-change. path: /pz_mysql_cluster/INSTANCES/pg-stage1-11125-416586186/MESSAGES, listener: org.apache.helix.messaging.handling.HelixTaskExecutor@487bd46a
01:09:57.281 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.stages.TaskAssignmentStage.process 78 o.a.h.c.stages.TaskAssignmentStage - END TaskAssignmentStage.process(). took: 29 ms
01:09:57.281 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.s.ExternalViewComputeStage.process 57 o.a.h.c.s.ExternalViewComputeStage - START ExternalViewComputeStage.process()
01:09:57.283 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.s.ExternalViewComputeStage.process 153 o.a.h.c.s.ExternalViewComputeStage - END ExternalViewComputeStage.process(). took: 2 ms
01:09:57.283 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.GenericHelixController.onStateChange 320 o.a.h.c.GenericHelixController - END: GenericClusterController.onStateChange()
01:09:57.284 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.helix.manager.zk.CallbackHandler.invoke 227 o.a.helix.manager.zk.CallbackHandler - 33 END:INVOKE /pz_mysql_cluster/INSTANCES/pg-stage2-11125-207100785/CURRENTSTATES/346cfe8bf7b0024 listener:org.apache.helix.controller.GenericHelixController Took: 49ms
01:09:57.284 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.helix.manager.zk.ZKHelixManager.addListener 325 o.a.helix.manager.zk.ZKHelixManager - Added listener: org.apache.helix.controller.GenericHelixController@1c39bf12 for type: CURRENTSTATES to path: /pz_mysql_cluster/INSTANCES/pg-stage2-11125-207100785/CURRENTSTATES/346cfe8bf7b0024
01:09:57.284 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.GenericHelixController.checkLiveInstancesObservation 516 o.a.h.c.GenericHelixController - controller_330929239 added current-state listener for instance: pg-stage2-11125-207100785, session: 346cfe8bf7b0024, listener: org.apache.helix.controller.GenericHelixController@1c39bf12
01:09:57.284 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.helix.manager.zk.CallbackHandler.invoke 143 o.a.helix.manager.zk.CallbackHandler - 33 START:INVOKE /pz_mysql_cluster/INSTANCES/pg-stage2-11125-207100785/MESSAGES listener:org.apache.helix.controller.GenericHelixController
01:09:57.284 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.helix.manager.zk.CallbackHandler.subscribeChildChange 236 o.a.helix.manager.zk.CallbackHandler - controller_330929239 subscribes child-change. path: /pz_mysql_cluster/INSTANCES/pg-stage2-11125-207100785/MESSAGES, listener: org.apache.helix.controller.GenericHelixController@1c39bf12
01:09:57.286 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.GenericHelixController.onMessage 336 o.a.h.c.GenericHelixController - START: GenericClusterController.onMessage()
01:09:57.287 [ZkClient-EventThread-33-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.c.stages.ReadClusterDataStage.process 41 o.a.h.c.stages.ReadClusterDataStage - START ReadClusterDataStage.process()
01:09:57.328 [ZkClient-EventThread-28-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.m.handling.HelixTaskExecutor.scheduleTask 229 o.a.h.m.handling.HelixTaskExecutor - Scheduling message: 6c1c138c-b333-4f81-921f-96321026748d
01:09:57.329 [ZkClient-EventThread-28-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.m.handling.HelixTaskExecutor.scheduleTask 257 o.a.h.m.handling.HelixTaskExecutor - Message: 6c1c138c-b333-4f81-921f-96321026748d handling task scheduled
01:09:57.329 [pool-6-thread-7] INFO o.a.h.messaging.handling.HelixTask.call 78 o.a.h.messaging.handling.HelixTask - handling task: 6c1c138c-b333-4f81-921f-96321026748d begin, at: 1403725197329
01:09:57.329 [ZkClient-EventThread-28-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.m.handling.HelixTaskExecutor.scheduleTask 229 o.a.h.m.handling.HelixTaskExecutor - Scheduling message: dad1b33a-0f61-48fe-b770-72639e9439e5
01:09:57.330 [ZkClient-EventThread-28-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.m.handling.HelixTaskExecutor.scheduleTask 257 o.a.h.m.handling.HelixTaskExecutor - Message: dad1b33a-0f61-48fe-b770-72639e9439e5 handling task scheduled
01:09:57.330 [pool-6-thread-8] INFO o.a.h.messaging.handling.HelixTask.call 78 o.a.h.messaging.handling.HelixTask - handling task: dad1b33a-0f61-48fe-b770-72639e9439e5 begin, at: 1403725197330
01:09:57.330 [ZkClient-EventThread-28-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.m.handling.HelixTaskExecutor.scheduleTask 229 o.a.h.m.handling.HelixTaskExecutor - Scheduling message: f1878c09-bcc2-4d3a-a515-54b03f81094c
01:09:57.331 [pool-6-thread-7] INFO c.l.d.c.r.DatabusV2ClusterRegistrationImpl.onLostPartitionOwnership 865 c.l.d.c.r.D.pz_mysql_cluster - Partition (3) getting removed !!
01:09:57.331 [pool-6-thread-9] INFO o.a.h.messaging.handling.HelixTask.call 78 o.a.h.messaging.handling.HelixTask - handling task: f1878c09-bcc2-4d3a-a515-54b03f81094c begin, at: 1403725197331
01:09:57.331 [ZkClient-EventThread-28-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.h.m.handling.HelixTaskExecutor.scheduleTask 257 o.a.h.m.handling.HelixTaskExecutor - Message: f1878c09-bcc2-4d3a-a515-54b03f81094c handling task scheduled
01:09:57.331 [pool-6-thread-7] INFO c.l.d.c.DatabusSourcesConnection.stop 513 databus_sandbox_alltypes_payment_TransactionAuditSummary_payment_TransactionCardDetail_payment_TransactionFingerprint_payment_TransactionFraudStatus_payment_TransactionMaster_payment_TransactionPGResponse_payment_TransactionSaleDetail_payment_TransactionUserDetail_ClusterId_3 - Stopping ... :true
01:09:57.331 [ZkClient-EventThread-28-pg-stage1.nm.flipkart.com:2181,pg-stage2.nm.flipkart.com:2181,pg-stage3.nm.flipkart.com:2181] INFO o.a.helix.manager.zk.CallbackHandler.invoke 227 o.a.helix.manager.zk.CallbackHandler - 28 END:INVOKE /pz_mysql_cluster/INSTANCES/pg-stage1-11125-416586186/MESSAGES listener:org.apache.helix.messaging.handling.HelixTaskExecutor Took: 63ms
01:09:57.332 [pool-6-thread-7] INFO c.l.d.c.DatabusSourcesConnection$SourcesConnectionStatus.shutdown 749 c.l.d.c.DatabusComponentStatus_conn[AnyPPart_alltypes_AnyPPart_TransactionMaster_AnyPPart_TransactionUserDetail_AnyPPart_TransactionSaleDetail_AnyPPart_TransactionAuditSummary_AnyPPart_TransactionFingerprint_AnyPPart_TransactionCardDetail_AnyPPart_TransactionFraudStatus_AnyPPart_TransactionPGResponse]_ClusterId_3 - shutting down connection ...
01:09:57.333 [pool-6-thread-7] INFO c.l.databus.client.BasePullThread.shutdown 263 databus_sandbox_alltypes_payment_TransactionAuditSummary_payment_TransactionCardDetail_payment_TransactionFingerprint_payment_TransactionFraudStatus_payment_TransactionMaster_payment_TransactionPGResponse_payment_TransactionSaleDetail_payment_TransactionUserDetail_ClusterId_3 - mbean unregistered
01:09:57.333 [pool-6-thread-7] INFO c.l.d.c.a.AbstractActorMessageQueue.shutdown 375 databus_sandbox_alltypes_payment_TransactionAuditSummary_payment_TransactionCardDetail_payment_TransactionFingerprint_payment_TransactionFraudStatus_payment_TransactionMaster_payment_TransactionPGResponse_payment_TransactionSaleDetail_payment_TransactionUserDetail_ClusterId_3 - ClusterId_3-RelayPuller: shutdown requested.
01:09:57.334 [ClusterId_3-RelayPuller] WARN c.l.d.c.a.AbstractActorMessageQueue.enqueueMessage 343 databus_sandbox_alltypes_payment_TransactionAuditSummary_payment_TransactionCardDetail_payment_TransactionFingerprint_payment_TransactionFraudStatus_payment_TransactionMaster_payment_TransactionPGResponse_payment_TransactionSaleDetail_payment_TransactionUserDetail_ClusterId_3 - ClusterId_3-RelayPuller: shutdown requested: ignoring REQUEST_STREAM
01:09:57.334 [pool-6-thread-7] INFO c.l.d.c.a.AbstractActorMessageQueue.shutdown 375 databus_sandbox_alltypes_payment_TransactionAuditSummary_payment_TransactionCardDetail_payment_TransactionFingerprint_payment_TransactionFraudStatus_payment_TransactionMaster_payment_TransactionPGResponse_payment_TransactionSaleDetail_payment_TransactionUserDetail_ClusterId_3 - ClusterId_3-RelayDispatcher: shutdown requested.
01:09:57.334 [ClusterId_3-RelayPuller] INFO c.l.databus.client.RelayPullThread.onShutdown 176 databus_sandbox_alltypes_payment_TransactionAuditSummary_payment_TransactionCardDetail_payment_TransactionFingerprint_payment_TransactionFraudStatus_payment_TransactionMaster_payment_TransactionPGResponse_payment_TransactionSaleDetail_payment_TransactionUserDetail_ClusterId_3 - closing open connection during onShutdown()
01:09:57.335 [pool-6-thread-7] INFO c.l.d.c.DatabusSourcesConnection$SourcesConnectionStatus.shutdown 772 c.l.d.c.DatabusComponentStatus_conn[AnyPPart_alltypes_AnyPPart_TransactionMaster_AnyPPart_TransactionUserDetail_AnyPPart_TransactionSaleDetail_AnyPPart_TransactionAuditSummary_AnyPPart_TransactionFingerprint_AnyPPart_TransactionCardDetail_AnyPPart_TransactionFraudStatus_AnyPPart_TransactionPGResponse]_ClusterId_3 - connection shut down.
01:09:57.335 [ClusterId_3-RelayDispatcher] WARN c.l.d.c.DbusEventBuffer$DbusEventIterator.await 720 c.l.databus.core.DbusEventBuffer - DbusEventIterator: {identifier: ClusterId_3-RelayDispatcher.iter-2048003251, currentPosition: 1629711:[GenId=0;Index=0(lim=8388608,cap=8388608);Offset=1629711], iteratorTail: 1629711:[GenId=0;Index=0(lim=8388608,cap=8388608);Offset=1629711], lockToken={ownerName:ClusterId_3-RelayDispatcher.iter-2048003251, range:Range [start=1629711, end=1629711], created:1403725050480, lastUpdated:1403725099090}}: await/refresh interrupted
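The newMapping lines above come from Helix's AutoRebalancer. As a rough illustration of what a clean rebalance should produce (a toy sketch, not Helix's actual algorithm; the class and method names here are made up), spreading 6 partitions round-robin over the sorted live nodes yields exactly the alternating pg-stage1/pg-stage2 assignment seen in the second newMapping line:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy sketch of partition redistribution: assign each partition to a live
// node round-robin. Helix's real AutoRebalancer also minimizes movement
// relative to the current mapping; this only shows the even-spread outcome.
public class RebalanceSketch {
    static Map<String, String> assign(int numPartitions, List<String> liveNodes) {
        Map<String, String> mapping = new LinkedHashMap<>();
        for (int p = 0; p < numPartitions; p++) {
            mapping.put("default-resource_" + p, liveNodes.get(p % liveNodes.size()));
        }
        return mapping;
    }

    public static void main(String[] args) {
        List<String> nodes = new ArrayList<>();
        nodes.add("pg-stage1-11125-416586186");
        nodes.add("pg-stage2-11125-207100785");
        // With both clients live, partitions alternate between the two nodes:
        // default-resource_0 -> pg-stage1..., default-resource_1 -> pg-stage2..., etc.
        assign(6, nodes).forEach((part, node) -> System.out.println(part + " -> " + node));
    }
}
```

When the rebalance does not fire (the failing runs above), the first client is told to go OFFLINE for partitions 3-5 but no node ever transitions them back ONLINE, which would explain why neither client receives those events.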

databus2 example client exception: null

When I run tail -f logs/databus2-client-person.out, I see:

2014-03-27 15:48:00,652 +364546 [-RelayPuller] (INFO) {person_Person_null} picked a relay:[server=DatabusServerCoordinates [_name=localhost, _address=localhost/127.0.0.1:11115, _state=ONLINE], subs=[[ps=[uri=databus:physical-source:ANY;role=ANY;rk=], pp=*:*, ls=[name=com.linkedin.events.example.person.Person]]]]
2014-03-27 15:48:00,654 +364548 [-RelayPuller] (INFO) {person_Person_null} Relay Puller switching to request sources
2014-03-27 15:48:00,682 +364576 [-RelayPuller] (ERROR) {person_Person_null} Source not found on server: com.linkedin.events.example.person.Person
2014-03-27 15:48:00,682 +364576 [-RelayPuller] (INFO) {person_Person_null} picking a relay; retries left:2147483647, Backoff Timer :BackoffTimer [_config=BackoffTimerStaticConfig [_initSleep=1, _maxSleep=60000, _sleepIncFactor=1.1, _sleepIncDelta=1, _maxRetryNum=-1], _name=-RelayPuller.errorRetries, _retrySleepMs=32894, _retriesNum=89, _retryStartTs=1395906116783], Are we retrying because of SCNNotFoundException : false
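"Source not found on server" usually means the logical source name the client subscribes to is not registered on the relay it picked. The subscription name must match an entry in the relay's sources configuration; a minimal fragment might look like the following (field names recalled from the example configs and possibly different in your version):

```json
{
  "name": "person",
  "id": 1,
  "sources": [
    {
      "name": "com.linkedin.events.example.person.Person",
      "id": 40,
      "uri": "person.person",
      "partitionFunction": "constant:1"
    }
  ]
}
```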

Change capture based on triggers or on database logging?

Hello,

May I ask whether I can get data-source change events based on the database log? Getting change events via triggers adds performance load on the source database.

Could not find group:com.oracle, module:ojdbc6, version:11.2.0.2.0, but the jar file exists

Hi all,

I am building databus on Windows XP with JDK 1.6 and Gradle 1.0. When I execute the command gradle -Dopen_source=true assemble, I get the error below:

Building > :databus2-relay:databus2-event-producer-mock:compileJava > Resolving
:databus2-relay:databus2-event-producer-mock:compileJava

FAILURE: Build failed with an exception.

  • What went wrong:
    Could not resolve all dependencies for configuration ':databus2-relay:databus2-event-producer-mock:compile'.

    Could not find group:com.oracle, module:ojdbc6, version:11.2.0.2.0.
    Required by:
    databus-master.databus2-relay:databus2-event-producer-mock:2.0.0 > databus-master.databus-util-cmdline:databus-util-cmdline-impl:2.0.0

  • Try:
    Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

BUILD FAILED

I did add the ojdbc6 jar to my environment; its location is:

F:\work\component\databus-master\sandbox-repo\com\oracle\ojdbc6\11.2.0.2.0\ojdbc6-11.2.0.2.0.jar

Can anyone give some advice?
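Since the jar sits under sandbox-repo in a Maven-style layout, one thing worth checking (a guess, not a verified fix) is whether the build script actually declares that directory as a Maven repository, e.g. in the root build.gradle:

```groovy
// Assumption: sandbox-repo sits at the repository root, as in the path above.
// A file-based Maven repo lets Gradle resolve com.oracle:ojdbc6:11.2.0.2.0
// from sandbox-repo/com/oracle/ojdbc6/11.2.0.2.0/ojdbc6-11.2.0.2.0.jar.
allprojects {
    repositories {
        maven { url "${rootDir}/sandbox-repo" }
        mavenCentral()
    }
}
```

On Windows, also make sure the build is run from the databus-master directory so that rootDir resolves to the folder containing sandbox-repo.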

Can't generate the Avro schema file for MySQL.

I just created a table in MySQL:

CREATE TABLE devloper (
id int(11) NOT NULL AUTO_INCREMENT,
name varchar(255) NOT NULL DEFAULT '',
PRIMARY KEY (id)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

But I can't generate the Avro schema file; it seems the int data type cannot be handled.
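For reference, a hand-written Avro schema for this table would map the MySQL int column to Avro's int type; something like the following (the namespace is chosen arbitrarily here, and the record name keeps the table's spelling):

```json
{
  "type": "record",
  "name": "Devloper",
  "namespace": "com.example.events",
  "fields": [
    { "name": "id", "type": "int" },
    { "name": "name", "type": "string" }
  ]
}
```

If the generator chokes specifically on int(11), it may help to know that the (11) display width carries no type information, so a manual schema like the above should be equivalent.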

Can't run the mysql example!

Hi,

I'm a new user of databus, and I followed all the steps in Databus for MySQL. I ran the relay successfully, but when I update the DB, no events are found in the relay. I use all the configuration from the example package and see no messages in the log.

The last log in relay.log is:
2014-02-14 16:15:54,478 +1566 OpenReplicator_person {OpenReplicator_person} Open Replicator starting from mysql-bin.000001@4
2014-02-14 16:15:54,552 +1640 OpenReplicator_person {OpenReplicator_person} Event Producer Thread done

The last log in databus2-relay-or_person.out is:
2014-02-14 16:15:53,435 +523 main {DbusEventV1Factory} BIG_ENDIAN
2014-02-14 16:15:53,436 +524 main {DbusEventV1Factory}
java.lang.Thread.getStackTrace(Thread.java:1436)
com.linkedin.databus.core.DbusEventV1Factory.&lt;init&gt;(DbusEventV1Factory.java:48)
com.linkedin.databus.core.DbusEventV1Factory.&lt;init&gt;(DbusEventV1Factory.java:34)
com.linkedin.databus.container.netty.HttpRelay.&lt;init&gt;(HttpRelay.java:131)
com.linkedin.databus2.relay.DatabusRelayMain.&lt;init&gt;(DatabusRelayMain.java:91)
com.linkedin.databus.relay.example.PersonRelayServer.main(PersonRelayServer.java:74)
2014-02-14 16:15:54,496 +1584 OpenReplicator_person {TransportImpl} connecting to host: 10.1.8.208, port: 33066
2014-02-14 16:15:54,529 +1617 OpenReplicator_person {TransportImpl} connected to host: 10.1.8.208, port: 33066, context: AbstractTransport.Context[threadId=68,scramble=
X*hN/>)SEC'I|Ur`ipw,protocolVersion=10,serverHost=10.1.8.208,serverPort=33066,serverStatus=2,serverCollation=33,serverVersion=5.1.58-log,serverCapabilities=63487]
2014-02-14 16:15:54,529 +1617 OpenReplicator_person {AuthenticatorImpl} start to login, user: or_test, host: 10.1.8.208, port: 33066
2014-02-14 16:15:54,535 +1623 OpenReplicator_person {AuthenticatorImpl} login successfully, user: or_test, detail: OKPacket[packetMarker=0,affectedRows=0,insertId=0,ser
verStatus=2,warningCount=0,message=]

Is there any configuration I need to change?
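In case it helps whoever triages this: the relay tails the binlog through Open Replicator, which can only decode row-level change events, so the MySQL server must write its binlog in ROW format. A my.cnf along these lines (values illustrative, an assumption on my part since the logs above don't show the server variables) is what the example expects:

```ini
# my.cnf fragment -- values are illustrative
[mysqld]
server_id     = 1
log_bin       = mysql-bin   # binlog base name; matches the mysql-bin prefix in the relay URI
binlog_format = ROW         # required: STATEMENT-format events carry no row data
```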

Thanks!

Not able to connect to the DB for the example project.

Hello,

I have updated sources-person.json with my DB details, but from the logs I can see that it is not able to find the table, although all the tables/views/procedures required for databusification are created in the DB, including the table Person. Please find below the error snippet:

2015-02-22 20:07:03,823 +3371 EventProducerThread_person {person} JDBC Version is: 11.2.0.2.0
2015-02-22 20:07:06,437 +5985 EventProducerThread_person {OracleEventProducer_person} DatabusException occurred while reading events from person. This error may be due to a transient issue (database is down?):java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist

com.linkedin.databus2.core.DatabusException: java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist

Help on this will be highly appreciated. Thanks.

Build Failed

Building databus on Mac OS.
JVM:
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)

gradle 1.10

The exception stack is as follows:
10:13:31.991 [ERROR] [org.gradle.BuildExceptionReporter]
10:13:31.992 [ERROR] [org.gradle.BuildExceptionReporter] FAILURE: Build failed with an exception.
10:13:31.992 [ERROR] [org.gradle.BuildExceptionReporter]
10:13:31.993 [ERROR] [org.gradle.BuildExceptionReporter] * What went wrong:
10:13:31.993 [ERROR] [org.gradle.BuildExceptionReporter] Could not resolve all dependencies for configuration ':databus2-relay:databus2-event-producer-mock:compile'.
10:13:31.994 [ERROR] [org.gradle.BuildExceptionReporter] > Could not download artifact 'com.oracle:ojdbc6:11.2.0.2.0:ojdbc6.jar'
10:13:31.994 [ERROR] [org.gradle.BuildExceptionReporter] > Artifact 'com.oracle:ojdbc6:11.2.0.2.0:ojdbc6.jar' not found.
10:13:31.995 [ERROR] [org.gradle.BuildExceptionReporter]
10:13:31.996 [ERROR] [org.gradle.BuildExceptionReporter] * Exception is:
10:13:31.998 [ERROR] [org.gradle.BuildExceptionReporter] org.gradle.api.artifacts.ResolveException: Could not resolve all dependencies for configuration ':databus2-relay:databus2-event-producer-mock:compile'.
10:13:31.999 [ERROR] [org.gradle.BuildExceptionReporter] at org.gradle.api.internal.artifacts.ivyservice.ErrorHandlingArtifactDependencyResolver.wrapException(ErrorHandlingArtifactDependencyResolver.java:57)
10:13:31.999 [ERROR] [org.gradle.BuildExceptionReporter] at org.gradle.api.internal.artifacts.ivyservice.ErrorHandlingArtifactDependencyResolver.access$000(ErrorHandlingArtifactDependencyResolver.java:34)
10:13:32.000 [ERROR] [org.gradle.BuildExceptionReporter] at org.gradle.api.internal.artifacts.ivyservice.ErrorHandlingArtifactDependencyResolver$ErrorHandlingResolvedConfiguration.getFiles(ErrorHandlingArtifactDependencyResolver.java:186)
10:13:32.000 [ERROR] [org.gradle.BuildExceptionReporter] at org.gradle.api.internal.artifacts.configurations.DefaultConfiguration$ConfigurationFileCollection.getFiles(DefaultConfiguration.java:467)
10:13:32.000 [ERROR] [org.gradle.BuildExceptionReporter] at org.gradle.api.internal.artifacts.configurations.DefaultConfiguration.getFiles(DefaultConfiguration.java:202)
10:13:32.001 [ERROR] [org.gradle.BuildExceptionReporter]

Delete Event

Hi,

I am new to databus and trying to test it. Are delete events supported by databus? E.g., if any record gets deleted in a source, is it captured in the relay and then in the databus client?

Thanks
Roshan

resubmitted: Trying to build on Mac OS 10.7.5, unit test failure

Somehow I closed the other open issue and couldn't see a way to open it again.
No problem, thanks for helping. It's a workstation, so the ethernet adapter is always on and always has an IP address. I tried again; this time I got 3 errors (see below). I don't see from my view of the web page how to upload a zip file -- I put the 3 logs from /tmp in a zip file. I also ran gradle test with --debug and put that in a zip file; not sure it helps much.

keith

Running test: test method testPullerRetriesExhausted(com.linkedin.databus.client.TestDatabusHttpClient)

Gradle test > com.linkedin.databus.client.TestDatabusHttpClient.testPullerRetriesExhausted FAILED
java.lang.AssertionError at TestDatabusHttpClient.java:494

Running test: test method testRequestError(com.linkedin.databus.client.netty.TestGenericHttpResponseHandler)

Gradle test > com.linkedin.databus.client.netty.TestGenericHttpResponseHandler.testRequestError FAILED
java.lang.AssertionError at TestGenericHttpResponseHandler.java:743

Running test: test method testServerRandomScenario(com.linkedin.databus.client.netty.TestNettyHttpDatabusRelayConnection)

Gradle test > com.linkedin.databus.client.netty.TestNettyHttpDatabusRelayConnection.testServerRandomScenario FAILED
java.lang.AssertionError at TestNettyHttpDatabusRelayConnection.java:1803

How to make databus support SQL Server 2005 as the primary DB server

Hello everyone, I am unfamiliar with databus. I want to use the project for capturing data in real time from a SQL Server DB; is there any adapter or component for this? If I want to do it based on databus, what knowledge do I need, and how should I go about it? Any advice is appreciated.

Thanks
larry

How could I do the base and increment through databus?

I now use databus to capture the changes of MySQL, and I've hit one problem:
Every day my DBA backs up the DB, and when we need to rebuild the data, we can do it from the latest day. But I don't know where to start the offset, since databus always starts from the last point. Could I save the position at some time every day? Or does databus have a feature to do so?
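Databus itself resumes from its last checkpoint, so one workaround (a sketch of my own, not a built-in databus feature) is to record the binlog position, in the same file@offset form the relay logs (e.g. mysql-bin.000001@4), next to each nightly backup, and start the relay from that saved position when rebuilding:

```java
// Sketch: encode/decode a binlog position (file@offset) so it can be
// stored next to a nightly backup and fed back to the relay on rebuild.
public class BinlogPosition {
    final String binlogFile;
    final long offset;

    BinlogPosition(String binlogFile, long offset) {
        this.binlogFile = binlogFile;
        this.offset = offset;
    }

    // Serialize in the same form the relay logs, e.g. "mysql-bin.000007@193482".
    String encode() {
        return binlogFile + "@" + offset;
    }

    static BinlogPosition decode(String s) {
        int at = s.lastIndexOf('@');
        return new BinlogPosition(s.substring(0, at), Long.parseLong(s.substring(at + 1)));
    }

    public static void main(String[] args) {
        BinlogPosition p = new BinlogPosition("mysql-bin.000007", 193482L);
        System.out.println(p.encode());                      // mysql-bin.000007@193482
        System.out.println(BinlogPosition.decode(p.encode()).offset);  // 193482
    }
}
```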

Thanks
Hopecheng

Databus gets an assert error when reading the binlog after some time

I use databus to read the binlog, but after some time it gets an assert error on the target table records. I found the last position before the assert error. After I restart databus from that position, it reads the binlog records as normal.

I know the assert error means the transaction status is not right. But I can't understand why it happens after some position and disappears after a restart.

Could anyone help me?

Thanks

Exception when reading events from multiple DBs.

On one relay, I set up 2 database configurations (2 physical databases with different IPs), and I saw the right events in relay.log. But after I run a client that wants to get the events from both DBs, I get this exception on the client:
ERROR) {NettyHttpDatabusRelayConnection} server error detected class=com.linkedin.databus.core.ScnNotFoundException message=No message provided
ERROR) {GenericHttpResponseHandler} <2040402337_WAIT_FOR_CHUNK>got closed channel while waiting for response
(ERROR) {StreamHttpResponseProcessor} Exception during /stream response: . Exception message = java.nio.channels.ClosedChannelException. Exception cause = null

and on the relay, it throws this exception:
(ERROR) {person:1} sinceScn is less than minScn and prevScn : sinceScn=30064776074 minScn=30064776868 PrevScn= 30064776586

Is there any configuration I need to do? When I get only one source on the client, it works fine.

Thanks!
Gavin Chen

How to capture one table in a DB with many tables?

Hi,
I just want to use databus to capture one table of a DB. But the DB has many tables, and some tables don't even have a primary key. When I run databus, I find many exceptions like:

java.lang.NullPointerException
at com.linkedin.databus2.producers.ORListener.frameAvroRecord(ORListener.java:470)

or:

com.linkedin.databus.core.DatabusRuntimeException: Could not find a matching logical source for table Uri (evt.keyflowlog)
at com.linkedin.databus2.producers.ORListener.startSource(ORListener.java:392)
at com.linkedin.databus2.producers.ORListener.processTableMapEvent(ORListener.java:290)
at com.linkedin.databus2.producers.ORListener.onEvents(ORListener.java:262)
at com.google.code.or.binlog.impl.AbstractBinlogParser$Context.onEvents(AbstractBinlogParser.java:313)

I think these exceptions should not stop databus, but after some of them the replicator stopped with:

2014-04-17 15:42:55,199 +1751 binlog-parser-1 {TransportImpl} disconnected from 10.1.1.174:3306
2014-04-17 15:42:55,199 +1751 binlog-parser-1 {XThreadFactory} unhandled exception in thread: 18:binlog-parser-1
java.lang.AssertionError
at com.linkedin.databus.core.DbusEventBuffer.startEvents(DbusEventBuffer.java:1670)
at com.linkedin.databus2.producers.OpenReplicatorEventProducer$EventProducerThread.addTxnToBuffer(OpenReplicatorEventProducer.java:420)
at com.linkedin.databus2.producers.OpenReplicatorEventProducer$EventProducerThread.onEndTransaction(OpenReplicatorEventProducer.java:366)
at com.linkedin.databus2.producers.ORListener.endXtion(ORListener.java:350)
at com.linkedin.databus2.producers.ORListener.onEvents(ORListener.java:197)

Is this a bug in the window status handling? I found the state was already STARTED.

Thanks
Gavin Chen

Build error with CentOS 6 and Gradle 2.1

My operating system is CentOS release 6.4.
I cloned the master branch from GitHub.
But I get the following errors when I run gradle -Dopen_source=true assemble, even though I have downloaded ojdbc6.jar and renamed it to ojdbc6-11.2.0.2.0.jar.

gradle -Dopen_source=true assemble
Configuration on demand is an incubating feature.

FAILURE: Build failed with an exception.

* Where:
Script '/home/wangjian/databus-master/subprojects.gradle' line: 60

* What went wrong:
Could not compile script '/home/wangjian/databus-master/subprojects.gradle'.
> startup failed:
  script '/home/wangjian/databus-master/subprojects.gradle': 60: unable to resolve class Compile
   @ line 60, column 29.
     tasks.withType(Compile).all { Compile compile ->
                                 ^

  1 error


* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

BUILD FAILED

Total time: 1.573 secs

Error during build with gradle 1.11

Hello,

I have an issue during my build process. I am using Gradle v1.11, and it seems that the build fails due to Javadoc compilation.

screenshot

I guess I missed something but cannot find out what's going wrong...
Could you help me please?
Thanks in advance,

Damien

databus2 example relay using MySQL won't run

The sources-person.json:

{
    "name" : "person",
    "id"  : 1,
    "uri" : "mysql://or_test%2For_test@localhost:33066/33066/mysql-bin",
    "slowSourceQueryThreshold" : 2000,
    "sources" :
    [
        {
        "id" : 40,
        "name" : "com.linkedin.events.example.or_test.Person",
        "uri": "or_test.person",
        "partitionFunction" : "constant:1"
         }
    ]
}

Logging in to MySQL from bash succeeds:

[oracle@localhost logs]$ mysql -uor_test -por_test
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 5.5.28-log Source distribution

Copyright (c) 2000, 2012, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

But the relay fails with:

Exception in thread "OpenReplicator_person" com.linkedin.databus.core.DatabusRuntimeException: failed to start open replicator: Access denied for user 'or_test'@'localhost' (using password: YES)
        at com.linkedin.databus2.producers.OpenReplicatorEventProducer$EventProducerThread.run(OpenReplicatorEventProducer.java:341)
Caused by: com.google.code.or.net.TransportException: Access denied for user 'or_test'@'localhost' (using password: YES)
        at com.google.code.or.net.impl.AuthenticatorImpl.login(AuthenticatorImpl.java:89)
        at com.google.code.or.net.impl.TransportImpl.connect(TransportImpl.java:105)
        at com.google.code.or.OpenReplicator.start(OpenReplicator.java:91)
        at com.linkedin.databus2.producers.OpenReplicatorEventProducer$EventProducerThread.run(OpenReplicatorEventProducer.java:338)

Multiple Physical Sources and Clustered Clients

Hi Phani

I had a couple of questions:

1. The relay server supports multiple physical sources, so I can configure my relay to connect to different (sharded) databases. These would be different physical sources with different IPs, and I can give a different id to each physical source. In this case I am assuming there would be 2 event buffers on the server with separate SCNs. Please let me know if this is right.

2. On the client, how do I specify which physical source to connect to? I thought the id specified in runtime.relay might be the relay id, but that does not seem to be the case: I am able to give any number there (different from the id specified on the server) and it works. Does the client get events from all physical sources on the relay server it registers to? In that case, is the SCN checkpoint done on the client for both physical sources?

3. From the config code, it appears that a client can register to multiple relay servers. Is this possible?

4. In the case of clustered clients, I am assuming the id specified is just to identify clusters and has nothing to do with the id of the physical source on the relay server. In that case, can more than one cluster be defined on the same client connecting to the same relay server? The consumers in each cluster would be different in my case, each writing to a separate destination.

5. When using clustered clients, is checkpointing done on Zookeeper by default? It did not seem to be the case. When I tried clustered clients, load balancing was working fine, and partitions were coming up on a client when another died. However, the checkpoint for the partition was not put into Zookeeper. Because of this, when a partition was activated on another client, it would read from the beginning. Do I need to use the "Shared State Cluster Configuration" or "Checkpoint Persistent Provider Configuration" section from the wiki? Is there any example of this?

I tried to go through the code to figure out the answers to these, but got lost along the way. :-)
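On the checkpointing question, a persistence provider has to be configured explicitly or checkpoints stay in memory. The fragment below is a sketch of the file-system variant; the key names are my reading of the client's checkpointPersistence config and should be verified against your build (I cannot confirm the exact keys for the Zookeeper-backed provider):

```properties
# Hypothetical sketch -- verify key names against your databus version
databus.client.checkpointPersistence.type=FILE_SYSTEM
databus.client.checkpointPersistence.fileSystem.rootDirectory=./databus2-checkpoints
databus.client.checkpointPersistence.clearBeforeUse=false
```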

Error loading index.schema_registry

When I run the test case schemas.TestFileSystemSchemaRegistryService, a null pointer exception appears, and maybe some resources are not found. How can I fix it?

Exception in thread "main" java.lang.NullPointerException
at java.io.Reader.&lt;init&gt;(Reader.java:61)
at java.io.InputStreamReader.&lt;init&gt;(InputStreamReader.java:55)
at com.linkedin.databus2.schemas.ResourceVersionedSchemaSetProvider.loadSchemas(ResourceVersionedSchemaSetProvider.java:61)
at com.linkedin.databus2.schemas.FileSystemSchemaRegistryService.initializeSchemaSet(FileSystemSchemaRegistryService.java:133)
at com.linkedin.databus2.schemas.FileSystemSchemaRegistryService.build(FileSystemSchemaRegistryService.java:56)
at com.linkedin.databus2.schemas.StandardSchemaRegistryFactory.createSchemaRegistry(StandardSchemaRegistryFactory.java:50)
at com.linkedin.databus.container.netty.HttpRelay.&lt;init&gt;(HttpRelay.java:117)
at com.linkedin.databus2.relay.DatabusRelayMain.&lt;init&gt;(DatabusRelayMain.java:101)
at com.linkedin.databus.relay.example.PersonRelayServer.&lt;init&gt;(PersonRelayServer.java:66)
at com.linkedin.databus.relay.example.PersonRelayServer.main(PersonRelayServer.java:109)
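For what it's worth, the NPE happens because getResourceAsStream() returns null, i.e. the schema registry files are not on the test's classpath. The provider reads an index.schema_registry resource that lists the schema files to load; the layout below is a sketch based on the bundled example (directory and file names are illustrative):

```text
schemas_registry/
    index.schema_registry                              # lists one schema file name per line
    com.linkedin.events.example.person.Person.1.avsc   # referenced from the index
```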

Has anyone met this assert error?

I registered one table, and after running for some time I find this error in the logs:
java.lang.AssertionError
at com.linkedin.databus.core.DbusEventBuffer.startEvents(DbusEventBuffer.java:1671)
at com.linkedin.databus2.producers.OpenReplicatorEventProducer$EventProducerThread.addTxnToBuffer(OpenReplicatorEventProducer.java:620)
at com.linkedin.databus2.producers.OpenReplicatorEventProducer$EventProducerThread.onEndTransaction(OpenReplicatorEventProducer.java:566)
at com.linkedin.databus2.producers.ORListener.endXtion(ORListener.java:360)
at com.linkedin.databus2.producers.ORListener.onEvents(ORListener.java:206)
at com.google.code.or.binlog.impl.AbstractBinlogParser$Context.onEvents(AbstractBinlogParser.java:313)
at com.google.code.or.binlog.impl.parser.QueryEventParser.parse(QueryEventParser.java:76)
at com.google.code.or.binlog.impl.ReplicationBasedBinlogParser.doParse(ReplicationBasedBinlogParser.java:129)
at com.google.code.or.binlog.impl.AbstractBinlogParser$Task.run(AbstractBinlogParser.java:244)
at java.lang.Thread.run(Thread.java:636)

What does this error mean? After this error, I can't capture any changes from MySQL.

Fault Tolerant Clients

How would you deploy clients in a fault-tolerant fashion, where at least 2 clients are subscribed to the same event stream without both processing the same event?

From the diagrams it looks like they would at least need to use a shared checkpoint store. I was able to find a SharedCheckpointPersistenceProvider that seems to use ZK under the hood. But that would only allow them to see how far the stream has been processed. How would I configure them so that only one processes an event, while also accounting for failure during processing so that the other instance picks it up?

Thank you

GET Info with curl

Would you please give me an example of this request, especially the checkPoint field?

GET /stream?sources=src_id,...&streamFromLatestScn=<true_or_false>&checkPoint=&size=&output=json/binary&filters=
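A request of that shape, issued with curl against the example relay (port 11115 in the bundled scripts; adjust as needed), would look roughly like the sketch below. The checkPoint value is URL-encoded JSON; the field names are my reading of the Checkpoint class and should be verified, so treat this as an illustration rather than an authoritative answer:

```
# Illustrative only; host, port, and checkpoint field names are assumptions.
# checkPoint decodes to: {"consumption_mode":"ONLINE_CONSUMPTION","windowScn":123,"windowOffset":-1}
curl 'http://localhost:11115/stream?sources=40&streamFromLatestScn=false&size=10240&output=json&checkPoint=%7B%22consumption_mode%22%3A%22ONLINE_CONSUMPTION%22%2C%22windowScn%22%3A123%2C%22windowOffset%22%3A-1%7D'
```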

Thanks~

Hang in ":databus-cluster:databus-cluster-manager-impl"

Hi,
When building the system, the process hangs in ":databus-cluster:databus-cluster-manager-impl" without any log output, as follows:

Download http://maven.restlet.org/org/restlet/org.restlet/1.1.10/org.restlet-1.1.10.pom
Download http://repo1.maven.org/maven2/org/apache/camel/camel-core/2.5.0/camel-core-2.5.0.pom
> Building 2% > :databus-cluster:databus-cluster-manager-impl:compileJava > Resolving dependencies ':databus-cluster:databus-cluster-manager-impl:co

Could you help me out?

Thank you~
Best Regards.

Does databus support AWS RDS?

Hi all,

I'm a total newbie with MySQL. I know there are limits on AWS RDS; I just don't know whether those limitations affect what Databus requires.

Clustered load-balanced clients not working

I followed the steps in "Databus Load Balancing Client". I initially set up the client with numPartitions=1; at this stage everything was working fine, and the client was connected to the relay server and getting events.
I then changed numPartitions=2, and the client no longer started. I got the following exception:

Exception in thread "main" com.linkedin.databus.client.pub.DatabusClientException: com.linkedin.databus.cluster.DatabusCluster$DatabusClusterException: Cannot create DatabusCluster! Cluster exists with num partitions=1. Tried to join with 2 partitions
at com.linkedin.databus.client.registration.DatabusV2ClusterRegistrationImpl.start(DatabusV2ClusterRegistrationImpl.java:273)
at com.linkedin.databus.client.example.PersonRegisterAndStart.registerDatabus2ClientAndStart(PersonRegisterAndStart.java:36)
at com.linkedin.databus.client.example.PersonRegisterAndStart.main(PersonRegisterAndStart.java:56)
Caused by: com.linkedin.databus.cluster.DatabusCluster$DatabusClusterException: Cannot create DatabusCluster! Cluster exists with num partitions=1. Tried to join with 2 partitions
at com.linkedin.databus.cluster.DatabusCluster.&lt;init&gt;(DatabusCluster.java:108)
at com.linkedin.databus.client.registration.DatabusV2ClusterRegistrationImpl.createCluster(DatabusV2ClusterRegistrationImpl.java:992)
at com.linkedin.databus.client.registration.DatabusV2ClusterRegistrationImpl.start(DatabusV2ClusterRegistrationImpl.java:269)
... 2 more

The settings are as follows:
databus.client.clientCluster(1).clusterName=Person_Cluster
databus.client.clientCluster(1).zkAddr=localhost:2181
databus.client.clientCluster(1).numPartitions=2
databus.client.clientCluster(1).quorum=1
databus.client.clientCluster(1).zkSessionTimeoutMs=3000
databus.client.clientCluster(1).zkConnectionTimeoutMs=3000
databus.client.clientCluster(1).checkpointIntervalMs=5

I then tried to see if just the filtering works for the client. That too was not working.
The settings used were:

serversidefilter.filter(40).type=MOD
serversidefilter.filter(40).mod.numBuckets=4
serversidefilter.filter(40).mod.buckets=[0-3]
serversidefilter.filter(41).type=MOD
serversidefilter.filter(41).mod.numBuckets=4
serversidefilter.filter(41).mod.buckets=[0-3]

Have I missed something? Is there an example for clustered clients that I can use?
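On the filter settings themselves, a sketch of the MOD semantics (my reading, not the databus implementation, which may hash the key first): with numBuckets=4, buckets=[0-3] covers every bucket, so both registrations receive all events and nothing is partitioned. Disjoint ranges such as [0-1] and [2-3] are what actually split the stream:

```java
// Sketch of MOD-style server-side filtering: a key belongs to bucket
// key % numBuckets, and a client only sees keys whose bucket falls in
// its configured range.
public class ModFilterSketch {
    static long bucket(long key, int numBuckets) {
        return Math.abs(key % numBuckets);
    }

    static boolean accepts(long key, int numBuckets, int rangeStart, int rangeEnd) {
        long b = bucket(key, numBuckets);
        return b >= rangeStart && b <= rangeEnd;
    }

    public static void main(String[] args) {
        // buckets=[0-3] with numBuckets=4 accepts everything:
        System.out.println(accepts(11L, 4, 0, 3));  // true
        // disjoint ranges split the key space between two clients:
        System.out.println(accepts(11L, 4, 0, 1));  // 11 % 4 = 3 -> false
        System.out.println(accepts(11L, 4, 2, 3));  // true
    }
}
```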

Exception while running test

Hi,
I am getting the error below while running the command below. Is there something I am missing?

gradle -Dopen_source=true test

23:44:30.238 [DEBUG] [org.gradle.process.internal.DefaultWorkerProcess] Received connection org.gradle.messaging.remote.internal.hub.MessageHubBackedObjectConnection@361e5287 from Gradle Worker 1
23:44:30.330 [DEBUG] [TestEventLogger]
23:44:30.332 [DEBUG] [TestEventLogger] Gradle Worker 1 STARTED
23:44:30.374 [DEBUG] [TestEventLogger]
23:44:30.375 [DEBUG] [TestEventLogger] Gradle Worker 1 FAILED
23:44:30.386 [DEBUG] [TestEventLogger] org.gradle.api.internal.tasks.testing.TestSuiteExecutionException: Could not complete execution for test process 'Gradle Worker 1'.
23:44:30.389 [DEBUG] [TestEventLogger] at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.stop(SuiteTestClassProcessor.java:60)
23:44:30.390 [DEBUG] [TestEventLogger] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
23:44:30.392 [DEBUG] [TestEventLogger] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
23:44:30.407 [DEBUG] [TestEventLogger] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
23:44:30.409 [DEBUG] [TestEventLogger] at java.lang.reflect.Method.invoke(Method.java:597)
23:44:30.410 [DEBUG] [TestEventLogger] at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
23:44:30.411 [DEBUG] [TestEventLogger] at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
23:44:30.414 [DEBUG] [TestEventLogger] at org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
23:44:30.415 [DEBUG] [TestEventLogger] at org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
23:44:30.416 [DEBUG] [TestEventLogger] at $Proxy2.stop(Unknown Source)
23:44:30.418 [DEBUG] [TestEventLogger] at org.gradle.api.internal.tasks.testing.worker.TestWorker.stop(TestWorker.java:113)
23:44:30.420 [DEBUG] [TestEventLogger] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
23:44:30.421 [DEBUG] [TestEventLogger] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
23:44:30.423 [DEBUG] [TestEventLogger] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
23:44:30.424 [DEBUG] [TestEventLogger] at java.lang.reflect.Method.invoke(Method.java:597)
23:44:30.427 [DEBUG] [TestEventLogger] at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
23:44:30.428 [DEBUG] [TestEventLogger] at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
23:44:30.429 [DEBUG] [TestEventLogger] at org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:355)
23:44:30.431 [DEBUG] [TestEventLogger] at org.gradle.internal.concurrent.DefaultExecutorFactory$StoppableExecutorImpl$1.run(DefaultExecutorFactory.java:66)
23:44:30.433 [DEBUG] [TestEventLogger] at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
23:44:30.434 [DEBUG] [TestEventLogger] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
23:44:30.436 [DEBUG] [TestEventLogger] at java.lang.Thread.run(Thread.java:662)
23:44:30.448 [DEBUG] [TestEventLogger]
23:44:30.450 [DEBUG] [TestEventLogger] Caused by:
23:44:30.451 [DEBUG] [TestEventLogger] java.lang.NoClassDefFoundError: com/beust/jcommander/ParameterException
23:44:30.452 [DEBUG] [TestEventLogger] at org.gradle.api.internal.tasks.testing.testng.TestNGTestClassProcessor.stop(TestNGTestClassProcessor.java:72)
23:44:30.455 [DEBUG] [TestEventLogger] at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.stop(SuiteTestClassProcessor.java:58)
23:44:30.456 [DEBUG] [TestEventLogger] ... 21 more
23:44:30.457 [DEBUG] [TestEventLogger]
23:44:30.458 [DEBUG] [TestEventLogger] Caused by:
23:44:30.459 [DEBUG] [TestEventLogger] java.lang.ClassNotFoundException: com.beust.jcommander.ParameterException
23:44:30.462 [DEBUG] [TestEventLogger] at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
23:44:30.463 [DEBUG] [TestEventLogger] at java.security.AccessController.doPrivileged(Native Method)
23:44:30.464 [DEBUG] [TestEventLogger] at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
23:44:30.466 [DEBUG] [TestEventLogger] at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
23:44:30.468 [DEBUG] [TestEventLogger] at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
23:44:30.469 [DEBUG] [TestEventLogger] at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
23:44:30.475 [DEBUG] [TestEventLogger] ... 23 more
23:44:30.478 [QUIET] [system.out] 23:44:30.477 [INFO] [org.gradle.api.internal.tasks.testing.worker.TestWorker] Gradle Worker 1 finished executing tests.
23:44:30.485 [QUIET] [system.out] 23:44:30.485 [DEBUG] [org.gradle.process.internal.child.ActionExecutionWorker] Completed Gradle Worker 1.
23:44:30.493 [QUIET] [system.out] 23:44:30.493 [DEBUG] [org.gradle.process.internal.child.ActionExecutionWorker] Stopping client connection.
23:44:30.500 [DEBUG] [org.gradle.process.internal.DefaultExecHandle] Changing state to: SUCCEEDED
23:44:30.504 [INFO] [org.gradle.process.internal.DefaultExecHandle] Process 'Gradle Worker 1' finished with exit value 0 (state: SUCCEEDED)
23:44:30.507 [DEBUG] [TestEventLogger]
23:44:30.507 [DEBUG] [TestEventLogger] Test Run FAILED
23:44:30.511 [INFO] [org.gradle.api.internal.tasks.testing.junit.result.Binary2JUnitXmlReportGenerator] Finished generating test XML results (0.0 secs)
23:44:30.512 [INFO] [org.gradle.api.internal.tasks.testing.junit.report.DefaultTestReport] Generating HTML test report...
23:44:30.527 [INFO] [org.gradle.api.internal.tasks.testing.junit.report.DefaultTestReport] Finished generating test html results (0.015 secs)
23:44:30.527 [DEBUG] [org.gradle.logging.internal.DefaultLoggingConfigurer] Finished configuring with level: DEBUG, configurers: [org.gradle.logging.internal.OutputEventRenderer@2705d88a, org.gradle.logging.internal.logback.LogbackLoggingConfigurer@70cb6009, org.gradle.logging.internal.JavaUtilLoggingConfigurer@380e28b9]
23:44:30.527 [DEBUG] [org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter] Finished executing task ':databus-bootstrap-server:databus-bootstrap-server-impl:test'
23:44:30.527 [LIFECYCLE] [org.gradle.TaskExecutionLogger] :databus-bootstrap-server:databus-bootstrap-server-impl:test FAILED
23:44:30.528 [INFO] [org.gradle.execution.taskgraph.AbstractTaskPlanExecutor] :databus-bootstrap-server:databus-bootstrap-server-impl:test (Thread[main,5,main]) - complete
23:44:30.528 [DEBUG] [org.gradle.execution.taskgraph.AbstractTaskPlanExecutor] Task worker [Thread[main,5,main]] finished, busy: 15.146 secs, idle: 0.369 secs
23:44:30.569 [ERROR] [org.gradle.BuildExceptionReporter]
23:44:30.569 [ERROR] [org.gradle.BuildExceptionReporter] FAILURE: Build failed with an exception.
23:44:30.569 [ERROR] [org.gradle.BuildExceptionReporter]
23:44:30.570 [ERROR] [org.gradle.BuildExceptionReporter] * What went wrong:
23:44:30.570 [ERROR] [org.gradle.BuildExceptionReporter] Execution failed for task ':databus-bootstrap-server:databus-bootstrap-server-impl:test'.
23:44:30.570 [ERROR] [org.gradle.BuildExceptionReporter] > There were failing tests. See the report at: file:///home/weblogic/databus-master/build/databus-bootstrap-server-impl/reports/tests/index.html
23:44:30.571 [ERROR] [org.gradle.BuildExceptionReporter]

Databus distribution jars

Hi

Are the databus distribution jars maintained and pushed to some public repository? That would help users who want to use databus without having to build it.

Thanks
Jagadeesh

Is there any milestone to open source databus v3?

I noticed that LinkedIn has been using databus v3 internally, per an introductory article on the Espresso databus, and the databus v2 code has been stalled for a while. So I'm curious: what features have been developed in v3, and when will v3 be open sourced? Thanks for the good work.

Compile error with Gradle 1.11

On Windows 7 with Java 1.6 and Gradle 1.11; the same error occurs on Ubuntu 12.04 and CentOS 5.9 with Java 1.6 and Gradle 1.11.

D:\project\databus>gradle -Dopen_source clean
Configuration on demand is an incubating feature.

FAILURE: Build failed with an exception.

  • Where:
    Script 'D:\project\databus\subprojects.gradle' line: 100

  • What went wrong:
    A problem occurred evaluating project ':databus-bootstrap-client:databus-bootstrap-client-impl'.

    cannot get property 'externalDependency' on extra properties extension as it does not exist

  • Try:
    Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

BUILD FAILED

Total time: 22.809 secs

Error running example

Hey, I'm trying to run the example relay server and it's throwing me an error when I do bin/start-example-relay.sh person:

databus2-example-relay-pkg $ Exception in thread "main" java.lang.NullPointerException
    at java.io.Reader.<init>(Reader.java:61)
    at java.io.InputStreamReader.<init>(InputStreamReader.java:55)
    at com.linkedin.databus2.schemas.ResourceVersionedSchemaSetProvider.loadSchemas(ResourceVersionedSchemaSetProvider.java:61)
    at com.linkedin.databus2.schemas.FileSystemSchemaRegistryService.initializeSchemaSet(FileSystemSchemaRegistryService.java:133)
    at com.linkedin.databus2.schemas.FileSystemSchemaRegistryService.build(FileSystemSchemaRegistryService.java:56)
    at com.linkedin.databus2.schemas.StandardSchemaRegistryFactory.createSchemaRegistry(StandardSchemaRegistryFactory.java:50)
    at com.linkedin.databus.container.netty.HttpRelay.<init>(HttpRelay.java:117)
    at com.linkedin.databus2.relay.DatabusRelayMain.<init>(DatabusRelayMain.java:101)
    at com.linkedin.databus.relay.example.PersonRelayServer.<init>(PersonRelayServer.java:66)
    at com.linkedin.databus.relay.example.PersonRelayServer.main(PersonRelayServer.java:109)

Any ideas about what I'm doing wrong?
Thanks!

Client cluster partitioned consumers with multiple consumers not working as expected

Hi,

I am new to LinkedIn Databus. Until now, we were using partitioned consumers with only one consumer per partition. We changed this to have multiple (2) consumers for each partition, under the impression that, for any event, both consumers of a partition would be called in parallel. But looking at the code, we found that the onDataEvent function of MultipleConsumerCallback actually picks just one of the partition's consumers at random.

Can multiple consumers of a partition be called in parallel for an event? If yes, what config changes would we have to make? If not, do multiple consumers exist only for load-balancing purposes?

We tried setting consumerParallelism to 2 (using the key databus.client.connectionDefaults.consumerParallelism), but that didn't seem to work. Does that setting serve a different purpose, or is the key we used wrong?
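To make the question concrete, here is a minimal sketch (illustrative Python, not the actual Databus classes) of the difference between the single-dispatch behavior we observed in the code and the parallel fan-out we expected:

```python
import random

class SingleDispatchCallback:
    """Mimics the observed behavior: each event goes to ONE consumer,
    chosen at random, so extra consumers spread load rather than fan out."""
    def __init__(self, consumers):
        self.consumers = list(consumers)

    def on_data_event(self, event):
        consumer = random.choice(self.consumers)  # only one consumer sees it
        consumer(event)

class FanOutCallback:
    """What parallel delivery to all consumers of a partition would look like."""
    def __init__(self, consumers):
        self.consumers = list(consumers)

    def on_data_event(self, event):
        for consumer in self.consumers:  # every consumer sees every event
            consumer(event)

seen_a, seen_b = [], []
single = SingleDispatchCallback([seen_a.append, seen_b.append])
for e in range(100):
    single.on_data_event(e)
# Each event was delivered exactly once across the two consumers.
print(len(seen_a) + len(seen_b))  # 100
```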

test case com.linkedin.databus.client.TestDatabusHttpClient.testPullerRetriesExhausted failed

I am running the build on Java 1.8.0_40.

Test stack trace -
java.lang.AssertionError: wait for error expected: but was:
at org.testng.Assert.fail(Assert.java:89)
at org.testng.Assert.failNotEquals(Assert.java:489)
at org.testng.Assert.assertTrue(Assert.java:37)
at com.linkedin.databus2.test.TestUtil.assertWithBackoff(TestUtil.java:148)
at com.linkedin.databus.client.TestDatabusHttpClient.testPullerRetriesExhausted(TestDatabusHttpClient.java:494)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80)
at org.testng.internal.Invoker.invokeMethod(Invoker.java:701)
at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:893)
at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1218)
at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
at org.testng.TestRunner.privateRun(TestRunner.java:758)
at org.testng.TestRunner.run(TestRunner.java:613)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:329)
at org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
at org.testng.SuiteRunner.run(SuiteRunner.java:240)
at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:53)
at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:87)
at org.testng.TestNG.runSuitesSequentially(TestNG.java:1170)
at org.testng.TestNG.runSuitesLocally(TestNG.java:1095)
at org.testng.TestNG.run(TestNG.java:1007)
at org.gradle.api.internal.tasks.testing.testng.TestNGTestClassProcessor.stop(TestNGTestClassProcessor.java:115)
at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.stop(SuiteTestClassProcessor.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.stop(Unknown Source)
at org.gradle.api.internal.tasks.testing.worker.TestWorker.stop(TestWorker.java:115)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:355)
at org.gradle.internal.concurrent.DefaultExecutorFactory$StoppableExecutorImpl$1.run(DefaultExecutorFactory.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

stderr -

Jun 24, 2015 3:44:35 PM org.jboss.netty.channel.SimpleChannelUpstreamHandler
WARNING: EXCEPTION, please implement com.linkedin.databus2.test.container.SimpleObjectCaptureHandler.exceptionCaught() for proper handling.
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:321)
at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:280)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:200)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Jun 24, 2015 3:44:35 PM org.jboss.netty.channel.SimpleChannelUpstreamHandler
WARNING: EXCEPTION, please implement com.linkedin.databus2.test.container.SimpleObjectCaptureHandler.exceptionCaught() for proper handling.
java.nio.channels.ClosedChannelException
at org.jboss.netty.channel.socket.nio.NioWorker.cleanUpWriteBuffer(NioWorker.java:616)
at org.jboss.netty.channel.socket.nio.NioWorker.close(NioWorker.java:592)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:355)
at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:280)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:200)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

How to generate a MySQL table schema?

Dear all
I want to use Databus with MySQL, but the com.linkedin.databus.util.InteractiveSchemaGenerator tool and related classes can only generate schemas for Oracle DB tables. There seems to be no tool to generate schemas for MySQL. Is there an alternative way to do this with MySQL?
Thanks.
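In the meantime, one workaround is to write the Avro schema by hand, or script it from the columns reported by `DESCRIBE <table>`. A rough sketch; the MySQL-to-Avro type mapping here is an assumption for illustration, not official tooling:

```python
import json

# Assumed mapping from MySQL column types to Avro primitive types.
MYSQL_TO_AVRO = {
    "bigint": "long", "int": "int", "varchar": "string",
    "text": "string", "datetime": "long", "double": "double",
}

def avro_schema(namespace, table, columns):
    """columns: list of (name, mysql_type) pairs, e.g. taken from DESCRIBE output."""
    return json.dumps({
        "type": "record",
        "name": table.capitalize(),
        "namespace": namespace,
        "fields": [
            # Nullable union so NULL column values are representable.
            {"name": col, "type": ["null", MYSQL_TO_AVRO[typ]]}
            for col, typ in columns
        ],
    }, indent=2)

schema = avro_schema("com.example.events", "person",
                     [("id", "bigint"), ("first_name", "varchar")])
print(schema)
```

The generated JSON can then be dropped into the schemas_registry directory the relay loads from.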

How to distinguish events by type?

I ran the MySQL example and noticed that whether I add, update, or delete a record, the client only receives the record itself and has no indication of whether the operation was an add, an update, or a delete. Can I get the event type the way OpenReplicator exposes it?
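For illustration, one possible workaround (a sketch, not built-in Databus behavior) is to stamp the DML type into the event payload while parsing the binlog, keyed off the OpenReplicator row-event class that produced the row:

```python
# OpenReplicator surfaces row changes as distinct event classes; mapping the
# class name to an opcode and carrying it in the payload preserves the
# operation type end-to-end. The "_opcode" field name is an assumption.
BINLOG_EVENT_TO_OPCODE = {
    "WriteRowsEvent": "INSERT",
    "UpdateRowsEvent": "UPDATE",
    "DeleteRowsEvent": "DELETE",
}

def tag_with_opcode(binlog_event_class, row):
    """Return the row payload augmented with the DML operation type."""
    opcode = BINLOG_EVENT_TO_OPCODE.get(binlog_event_class, "UNKNOWN")
    return dict(row, _opcode=opcode)

event = tag_with_opcode("UpdateRowsEvent", {"id": 7, "name": "alice"})
print(event["_opcode"])  # UPDATE
```

Doing this would require adding a field to the Avro schema in the producer, since the default payload carries only the row columns.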

Open replicator thread exits when it catches an exception

I found that the open replicator thread exits when it catches an exception. For example, if a connection error occurs, the open replicator thread catches the exception and exits, which stops the stream.
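A sketch of the kind of supervision wrapper that would avoid this (illustrative only; the `connect` callable and the retry policy are assumptions, not existing Databus code):

```python
import time

def run_with_retries(connect, max_retries=5, base_delay=0.01):
    """Call `connect()` until it succeeds or retries are exhausted,
    instead of letting the reader thread die on the first IOError."""
    for attempt in range(max_retries):
        try:
            return connect()
        except IOError:
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    raise RuntimeError("binlog connection failed after %d retries" % max_retries)

# Simulated connection that fails twice, then succeeds.
attempts = []
def flaky_connect():
    attempts.append(1)
    if len(attempts) < 3:
        raise IOError("connection reset")
    return "connected"

print(run_with_retries(flaky_connect))  # connected
```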

Patch to improve the MySQL open replicator event producer's performance

I want to make a patch to improve the MySQL open replicator event producer's performance.

The design is very simple: the binlog parser just puts each binlog event on a queue and returns immediately. A separate thread then fetches binlog events from the queue, transforms them into Databus events, and puts them into the Databus buffer.

With this patch, binlog processing is divided into two stages, so I think it can improve performance significantly.
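The two-stage design above can be sketched like this (illustrative Python; the real patch would presumably use a Java BlockingQueue between the parser and the transformer thread):

```python
import queue
import threading

event_queue = queue.Queue(maxsize=1000)  # bounded, so the parser backpressures
databus_buffer = []                      # stand-in for the real Databus buffer

def parser(raw_events):
    """Stage 1: the binlog parser only enqueues and returns immediately."""
    for ev in raw_events:
        event_queue.put(ev)
    event_queue.put(None)  # sentinel marking end of stream

def transformer():
    """Stage 2: dequeue, transform to a Databus-style event, append to buffer."""
    while True:
        ev = event_queue.get()
        if ev is None:
            break
        databus_buffer.append({"payload": ev})  # stand-in for the real transform

worker = threading.Thread(target=transformer)
worker.start()
parser(["ev1", "ev2", "ev3"])
worker.join()
print(len(databus_buffer))  # 3
```

The bounded queue is the key design point: when the transformer falls behind, the parser blocks on `put`, so memory stays bounded while the two stages overlap.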

Trying to build on Mac OS 10.7.5, unit test failure

I am trying to build, test, and develop on my Mac running OS X 10.7.5. The assemble step works fine, but when I run the unit tests I get two failures that stop Gradle (see messages below). I am not familiar enough with the software's internals to know how to correct this, or whether the software fundamentally can't run on a Mac.

  1. Has anyone got this running on a Mac? I can go back to Linux, but was hoping to work on my local workstation.
  2. If it does work on a Mac, is there anything special I need to do?

thanks
keith

Running test: test method testInStreamError(com.linkedin.databus.client.TestDatabusHttpClient)

Gradle test > com.linkedin.databus.client.TestDatabusHttpClient.testInStreamError FAILED
java.lang.AssertionError at TestDatabusHttpClient.java:825
Running test: test method testInStreamTimeOut(com.linkedin.databus.client.TestDatabusHttpClient)
Running test: test method testInStreamTimeOut2(com.linkedin.databus.client.TestDatabusHttpClient)
Running test: test method testInStreamTimeOut3(com.linkedin.databus.client.TestDatabusHttpClient)
Running test: test method testListGenerics(com.linkedin.databus.client.TestDatabusHttpClient)
Running test: test method testPullerRetriesExhausted(com.linkedin.databus.client.TestDatabusHttpClient)

Gradle test > com.linkedin.databus.client.TestDatabusHttpClient.testPullerRetriesExhausted FAILED
java.lang.AssertionError at TestDatabusHttpClient.java:494
