Comments (18)

dbuschman7 commented on August 15, 2024

I also looked at the log:

burrow@d44fde07d1b1:/var/log/burrow$ cat burrow.out
Started Burrow at March 12, 2016 at 2:26am (UTC)
2016-03-12 02:26:58 [INFO] Starting Zookeeper client
2016-03-12 02:26:58 [INFO] Starting Offsets Storage module
2016-03-12 02:26:58 [INFO] Starting HTTP server
2016-03-12 02:26:58 [INFO] Starting Zookeeper client for cluster kafka1
2016-03-12 02:26:58 [INFO] Starting Kafka client for cluster kafka1
2016-03-12 02:26:58 [INFO] Starting consumers for 50 partitions of __consumer_offsets in cluster kafka1
2016-03-12 02:27:15 [INFO] Acquired Zookeeper notifier lock

pcallewaert commented on August 15, 2024

I have the same problem, also using Kafka 0.9. However, I am not sure Burrow supports Kafka 0.9, as the consumer offsets were changed a lot.

pcallewaert commented on August 15, 2024

I am working on a version that can read the Kafka consumer groups. It still needs some work...

toddpalino commented on August 15, 2024

Burrow knows about consumer groups as they commit offsets. This means that Burrow does not know about all consumer groups, just the ones that are actively committing offsets. Additionally, if you are using ZooKeeper-committed offsets (with the older consumer), Burrow will pick up consumer groups from there as well.

Unless there are also error messages about not being able to decode offset commits, there should not be an issue strictly with 0.9. The version I'm using internally is somewhere close to 0.9 (it's an intermediate version, but I'm not sure exactly where between 0.8.2.2 and 0.9 we are right now).
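
(A quick way to see which groups Burrow currently knows about is the consumer-list endpoint that comes up later in this thread; the hostname, port, and cluster name below are placeholders matching your own [httpserver] and [kafka] config sections:)

curl http://localhost:8080/v2/kafka/mycluster/consumer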

stringaling commented on August 15, 2024

Having the same issue on a 0.9.0.1 Kafka cluster. Seeing the following in the Burrow log:

2016-03-16 13:38:38 [WARN] Failed to decode __consumer_offsets:15 offset 7: keyver
2016-03-16 13:38:38 [WARN] Failed to decode __consumer_offsets:3 offset 14: keyver
...
2016-03-16 13:39:59 [WARN] Failed to decode __consumer_offsets:46 offset 18: keyver

(not using ZooKeeper-committed offsets)

toddpalino commented on August 15, 2024

OK, keyver errors almost certainly mean a failure to decode the offset message keys. We just finished building and testing a new 0.9 build internally, which should be close to trunk. I'll take a look at what's in there shortly and try to get a fix together, unless someone else gets to a PR first.

travisjeffery commented on August 15, 2024

Any updates?

icebourg commented on August 15, 2024

I've noticed something similar in my 0.9 Kafka cluster. We dual-publish commits to ZK and __consumer_offsets, and the consumer group list for the cluster in Burrow is always empty.

I'm noticing some errors trying to decode, related to valver. I'm trying to understand the code, but it doesn't make much sense to me. Any suggestions would be great!

1457703171866275948 [Warn] Failed to decode __consumer_offsets:0 offset 5087563782: valver
1457703171866281487 [Warn] Failed to decode __consumer_offsets:0 offset 5087563781: valver
1457703171866286435 [Warn] Failed to decode __consumer_offsets:0 offset 5087563784: valver
1457703171866296049 [Warn] Failed to decode __consumer_offsets:0 offset 5087563783: valver
1457703171866301288 [Warn] Failed to decode __consumer_offsets:0 offset 5087563786: valver
1457703171866306383 [Warn] Failed to decode __consumer_offsets:0 offset 5087563785: valver
1457703171866311416 [Warn] Failed to decode __consumer_offsets:0 offset 5087563788: valver
1457703171866316584 [Warn] Failed to decode __consumer_offsets:0 offset 5087563787: valver

toddpalino commented on August 15, 2024

I just got a chance to take a quick look at this. It does not look like there should be any problem from the keyver decode failures: a new key type was added for group metadata messages, but Burrow does not need to know about those. It's worth clarifying the logging (printing the key version that was received and cannot be decoded) and making sure Burrow correctly discards group metadata messages without an error.

Likewise, it doesn't appear on the surface that there should be any problems with decoding the message values for the two known key types (0 and 1). Those messages are unchanged from when Burrow was initially released. The group metadata messages will never get to having the value of the message decoded due to the key failure.
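
(For anyone following along: the discard decision comes down to the first two bytes of the message key. Here is a minimal illustrative sketch of that key layout in Java; Burrow itself is written in Go, so this is not its actual code:)

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    public class OffsetKeySketch {
        // __consumer_offsets message keys begin with a 2-byte, big-endian version.
        // Versions 0 and 1 are offset commit keys; version 2 (new in 0.9) is a
        // group metadata key, which carries no offset and can be skipped.
        static void classifyKey(byte[] keyBytes) {
            ByteBuffer key = ByteBuffer.wrap(keyBytes);
            short version = key.getShort();
            if (version == 0 || version == 1) {
                String group = readString(key);  // consumer group name
                String topic = readString(key);  // topic name
                int partition = key.getInt();    // partition number
                System.out.printf("offset commit: %s %s/%d%n", group, topic, partition);
            } else if (version == 2) {
                // Group metadata message: discard quietly (debug-level log at most).
            } else {
                System.err.println("unknown key version " + version + ", skipping");
            }
        }

        // Kafka serializes strings as an int16 length followed by UTF-8 bytes.
        static String readString(ByteBuffer buf) {
            byte[] bytes = new byte[buf.getShort()];
            buf.get(bytes);
            return new String(bytes, StandardCharsets.UTF_8);
        }
    }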

I'm running a test against one of our internal clusters that has been upgraded to trunk (past 0.9), and I'm not able to duplicate this, as far as not getting consumer offsets is concerned. However, I do not have any new consumer clients running against this cluster either.

obaidsalikeen commented on August 15, 2024

Hi,
I am experiencing the same issue. I have a Storm cluster and a Kafka cluster. The consumers array is always empty (http://:8888/v2/kafka/prod/consumer).
The znode path for Storm (zookeeper-path=/storm) seems correct; however, I am not sure I am using the correct znode for Kafka (zookeeper-path=/). Looking at ZooKeeper, I see consumers and brokers directories, so / seems to be a valid choice. The Kafka version we are using is 0.8.2.X.
Should this configuration work, or am I misconfiguring it? Thanks

[general]
pidfile=burrow.pid
client-id=burrow-lagchecker
group-blacklist=^(console-consumer-|python-kafka-consumer-).*$

[zookeeper]
hostname=kafkazoo001
hostname=kafkazoo002
hostname=kafkazoo003
port=2181
timeout=6
lock-path=/burrow/notifier

[kafka "prodd"]
broker=kafka001
...
broker=kafka024

broker-port=9092

zookeeper=kafkazoo1
zookeeper=kafkazoo2
zookeeper=kafkazoo3
zookeeper-port=2181
zookeeper-path=/
offsets-topic=__consumer_offsets

[storm "prodd"]
zookeeper=zk1
zookeeper=zk2
zookeeper=zk3
zookeeper-port=2181
zookeeper-path=/storm

[tickers]
broker-offsets=60

[lagcheck]
intervals=10
expire-group=604800
min-distance=1
zookeeper-interval=60
zk-group-refresh=300

[httpserver]
server=on
port=8080
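
(To sanity-check the ZK side of a config like this: with ZooKeeper-committed offsets, the old consumer writes under /consumers/<group>/offsets/<topic>/<partition> relative to the chroot, so listing that path from a ZooKeeper shell shows whether / is the right base. The host below is illustrative:)

bin/zookeeper-shell.sh kafkazoo001:2181 ls /consumers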

kmuthupa commented on August 15, 2024

Any word on this issue? Does this have to do with the new consumer model?

toddpalino commented on August 15, 2024

I have not been able to duplicate this with a recent (0.9+) version of the Kafka broker. I did add some additional logging to the latest version, both at the warn and debug levels, to clarify problems with the message decoding. But this still doesn't reveal any problems in my internal testing; Burrow works properly.

It's possible that this is related to the new consumer, but it shouldn't be. There was a new message type added (with the key version == 2) for group metadata. Previous versions of Burrow should have discarded this message, but would throw the keyver error message when doing so. The current version discards the messages and logs the discard at debug level.

icebourg commented on August 15, 2024

Just as an update to this, it turns out that my issue was the result of an old, broken topic that was assigned to broker IDs that didn't exist any more. This topic was in a constant state of trying to elect a leader but couldn't, and that prevented Burrow from populating the topic list. PR #70 adds error logging around this state, so I would encourage others to try a version of Burrow with #70 applied and see if they have a broken topic somewhere.
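
(To spot a topic in that state, kafka-topics.sh can describe only the partitions whose leader is unavailable; the ZooKeeper address here is illustrative:)

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --unavailable-partitions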

Anyway, that was my issue; I thought it might be helpful for others as well.

toddpalino commented on August 15, 2024

As there hasn't been any activity on this, and I haven't been able to duplicate it, I'm going to close it out. If you still have a problem, please open a separate issue with your configuration and details from the logs. Preferably, try running master.

rkling01 commented on August 15, 2024

@icebourg @toddpalino 👍 This was/is my issue as well, so thanks for providing the visibility! I am seeing the "Topic leader error", so that definitely helped. I will work on fixing up the broken topic (basically a topic that no longer had a valid broker ID assigned).

pavansai8 commented on August 15, 2024

I'm also facing the same issue. The consumer list is returned, but it is empty.
Can anyone help me with this?

petergdoyle commented on August 15, 2024

I am missing a consumer group as well. I think it is a new consumer / old consumer issue. Here is the code used to create a consumer group that loads data from a Kafka topic into Couchbase: https://github.com/petergdoyle/StreamWorks/blob/master/couchbase/CouchbaseKafkaLoader/src/main/java/com/cleverfishsoftware/streaming/couchbase/kafkaloader/KafkaCouchbaseLoader.java

When I run ConsumerGroupCommand --list using the "new consumer" form of the command, the consumer group is missing:

bin/kafka-run-class.sh kafka.admin.ConsumerGroupCommand --list --new-consumer --bootstrap-server localhost:9092

When I run ConsumerGroupCommand --list using the "old consumer" form of the command, the missing consumer group is listed:

$KAFKA_HOME/bin/kafka-run-class.sh kafka.admin.ConsumerGroupCommand --list --zookeeper localhost:2181

This is the same whether the library dependency is version 0.9.0.1 or version 0.8.2.1:

        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.10</artifactId>
            <version>0.9.0.1</version> <!-- or 0.8.2.1 -->
        </dependency>

So I am guessing Burrow is using the "new client" polling mechanism.

petergdoyle commented on August 15, 2024

Actually, the issue (that I was facing, anyhow) was related to where consumer offsets were being stored: ZK or Kafka.

I needed to set the dual.commit.enabled and offsets.storage config params and follow Todd's advice here (#7) to bounce the consumers, and my missing consumer group appeared:

        Properties config = new Properties();
        config.put("zookeeper.connect", defaultKafkaZk);
        config.put("zookeeper.connectiontimeout.ms", defaultKafkaZkTimeout);
        config.put("group.id", defaultKafkaConsumerGroup);
...
        // this is required in order for Burrow to see the offsets
        config.put("dual.commit.enabled", "false");
        config.put("offsets.storage", "kafka");
