
kafka-stack-docker-compose's People

Contributors

aaabramov, aureliemarcuzzo, baresse, charlescd, devshawn, diablo2050, donhuvy, guizmaii, jackriv, jamesw1, mitchell-h, nitomartinez, oleksandrbelonozhkin, polomarcus, pomber, qboileau, raghav2211, retorres, rsunder10, sai3010, simplesteph


kafka-stack-docker-compose's Issues

Unable to resolve address: zoo1:2181

[main-SendThread(zoo1:2181)] WARN org.apache.zookeeper.ClientCnxn - Session 0x0 for server zoo1:2181, unexpected error, closing socket connection and attempting reconnect
kafka1_1 | java.lang.IllegalArgumentException: Unable to canonicalize address zoo1:2181 because it's not resolvable
kafka1_1 | at org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:66)
kafka1_1 | at org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:39)
kafka1_1 | at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1087)
kafka1_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139)
kafka1_1 | [main-SendThread(zoo1:2181)] ERROR org.apache.zookeeper.client.StaticHostProvider - Unable to resolve address: zoo1:2181
kafka1_1 | java.net.UnknownHostException: zoo1
kafka1_1 | at java.net.InetAddress.getAllByName0(InetAddress.java:1281)
kafka1_1 | at java.net.InetAddress.getAllByName(InetAddress.java:1193)
kafka1_1 | at java.net.InetAddress.getAllByName(InetAddress.java:1127)
kafka1_1 | at org.apache.zookeeper.client.StaticHostProvider$1.getAllByName(StaticHostProvider.java:92)
kafka1_1 | at org.apache.zookeeper.client.StaticHostProvider.resolve(StaticHostProvider.java:147)
kafka1_1 | at org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:375)
kafka1_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137)
kafka1_1 | [main-SendThread(zoo1:2181)] WARN org.apache.zookeeper.ClientCnxn - Session 0x0 for server zoo1:2181, unexpected error, closing socket connection and attempting reconnect
kafka1_1 | java.lang.IllegalArgumentException: Unable to canonicalize address zoo1:2181 because it's not resolvable
kafka1_1 | at org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:66)
kafka1_1 | at org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:39)
kafka1_1 | at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1087)
kafka1_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139)
kafka1_1 | [main] ERROR io.confluent.admin.utils.ClusterStatus - Timed out waiting for connection to Zookeeper server [zoo1:2181].
kafka1_1 | [main-SendThread(zoo1:2181)] ERROR org.apache.zookeeper.client.StaticHostProvider - Unable to resolve address: zoo1:2181
kafka1_1 | java.net.UnknownHostException: zoo1
kafka1_1 | at java.net.InetAddress.getAllByName0(InetAddress.java:1281)
kafka1_1 | at java.net.InetAddress.getAllByName(InetAddress.java:1193)
kafka1_1 | at java.net.InetAddress.getAllByName(InetAddress.java:1127)
kafka1_1 | at org.apache.zookeeper.client.StaticHostProvider$1.getAllByName(StaticHostProvider.java:92)
kafka1_1 | at org.apache.zookeeper.client.StaticHostProvider.resolve(StaticHostProvider.java:147)
kafka1_1 | at org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:375)
kafka1_1 | at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137)
kafka1_1 | [main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x0 closed
kafka1_1 | [main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x0
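The UnknownHostException above means the broker container cannot resolve the zoo1 service name, which usually indicates the containers are not attached to the same Docker network (or zoo1 never started). As a minimal sketch, a resolvability probe like the one below, run from inside the kafka1 container, shows whether Docker's embedded DNS knows the name (zoo1 is the service name from the compose file; everything else is generic stdlib):

```python
import socket

def resolvable(host: str) -> bool:
    """Return True if DNS (or /etc/hosts) can resolve `host` to an address."""
    try:
        socket.getaddrinfo(host, None)
        return True
    except socket.gaierror:
        return False

# From inside the kafka1 container, resolvable("zoo1") should be True once
# both services share a Docker network; when it is False, the broker fails
# with exactly the UnknownHostException shown in the log above.
print(resolvable("localhost"))
```

If the name does not resolve, check that both services are defined in the same compose file (or joined to the same external network) and that `docker-compose ps` shows zoo1 running.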

Kafka is not coming up

Hi, I am using Docker Toolbox for Windows and trying to bring up a single Zookeeper and a single broker.
I ran the following command from the source folder:
docker-compose -f zk-single-kafka-single.yml up
I can see the servers come up, but when I open another Docker terminal, go to the source folder, and try to create a topic, it says kafka-topics is not a recognized command. Not sure why?

My folder structure
Here are the configuration files I am using, which you provided in the Udemy course:
KafkaFail
I ran the command from 'C:\Program Files\Docker Toolbox\kafka-stack-docker-compose-master'. Please let me know what I am doing wrong.

WARN Ignoring version conflicts for items: [Key{deposit_transactions/87114812}] (io.confluent.connect.elasticsearch.jest.JestElasticsearchClient)

Hello,

I have set this up with multiple brokers and multiple Connect workers for redundancy; however, when I use the Elasticsearch sink connector I get a lot of 'WARN Ignoring version conflicts for items' messages.

Please can you advise on how I can eliminate these warnings.

Kind regards.

see sample error logs below:

kafka-connect1_1 | [2021-04-29 07:08:56,556] WARN Ignoring version conflicts for items: [Key{moniepoint.monnify_agency_banking.deposit_transactions/deposit_transactions/87115999}, Key{moniepoint.monnify_agency_banking.deposit_transactions/deposit_transactions/87115980}] (io.confluent.connect.elasticsearch.jest.JestElasticsearchClient)
kafka-connect1_1 | [2021-04-29 07:08:57,056] WARN Ignoring version conflicts for items: [Key{moniepoint.monnify_agency_banking.deposit_transactions/deposit_transactions/87115986}] (io.confluent.connect.elasticsearch.jest.JestElasticsearchClient)
kafka-connect1_1 | [2021-04-29 07:08:58,056] WARN Ignoring version conflicts for items: [Key{moniepoint.monnify_agency_banking.deposit_transactions/deposit_transactions/87116014}] (io.confluent.connect.elasticsearch.jest.JestElasticsearchClient)
kafka-connect1_1 | [2021-04-29 07:09:01,052] WARN Ignoring version conflicts for items: [Key{moniepoint.monnify_agency_banking.deposit_transactions/deposit_transactions/87116014}] (io.confluent.connect.elasticsearch.jest.JestElasticsearchClient)
kafka-connect1_1 | [2021-04-29 07:09:03,553] WARN Ignoring version conflicts for items: [Key{moniepoint.monnify_agency_banking.deposit_transactions/deposit_transactions/87116045}] (io.confluent.connect.elasticsearch.jest.JestElasticsearchClient)
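These warnings come from Elasticsearch's optimistic concurrency control: when the connector writes documents with external versioning (I believe it derives the version from the record's Kafka offset when the document ID comes from the record key), Elasticsearch rejects any write whose version is not strictly greater than the stored one, and the connector logs the conflict and moves on. With at-least-once delivery, retried batches and multiple tasks upserting the same key make such conflicts normal. A toy model of that rule, not the connector's actual code:

```python
def apply_external_versioned_write(store, key, version, doc):
    """Toy model of Elasticsearch external versioning: a write is accepted
    only when its version is strictly greater than the stored version."""
    current = store.get(key)
    if current is not None and version <= current[0]:
        return False  # conflict: the sink connector logs a WARN and skips it
    store[key] = (version, doc)
    return True

store = {}
apply_external_versioned_write(store, "deposit_transactions/87115999", 2, {"amt": 10})
# A replayed write with an older-or-equal version is silently ignored:
print(apply_external_versioned_write(store, "deposit_transactions/87115999", 2, {"amt": 10}))
```

In other words, the warnings are usually benign duplicates being deduplicated; they only signal real trouble if genuinely newer data for a key is arriving with a lower version.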

Changing ports

Hi there—thanks for this Compose stack!

I am trying to change the host ports used for Kafka as I will be using this stack multiple times concurrently.

This is my compose file. For some reason I can connect to Kafka but I can't send messages. This works when using the default config. What's the correct way to change the ports?

version: '2.1'
services:
  zoo1:
    image: zookeeper:3.4.9
    hostname: zoo1
    ports:
      - '20021:2181'
    environment:
      ZOO_MY_ID: 1
      ZOO_PORT: 20021
      ZOO_SERVERS: server.1=zoo1:2888:3888
    volumes:
      - ./.data/zoo1/data:/data
      - ./.data/zoo1/datalog:/datalog

  kafka1:
    image: confluentinc/cp-kafka:5.0.0
    hostname: kafka1
    ports:
      - '20031:9092'
    environment:
      KAFKA_ADVERTISED_LISTENERS:
        LISTENER_DOCKER_INTERNAL://kafka1:9092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:20031
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP:
        LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: 'zoo1:20021'
      KAFKA_BROKER_ID: 1
      KAFKA_LOG4J_LOGGERS:
        'kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO'
    volumes:
      - ./.data/kafka1/data:/var/lib/kafka/data
    depends_on:
      - zoo1

  kafka2:
    image: confluentinc/cp-kafka:5.0.0
    hostname: kafka2
    ports:
      - '20032:9093'
    environment:
      KAFKA_ADVERTISED_LISTENERS:
        LISTENER_DOCKER_INTERNAL://kafka2:9093,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:20032
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP:
        LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: 'zoo1:20021'
      KAFKA_BROKER_ID: 2
      KAFKA_LOG4J_LOGGERS:
        'kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO'
    volumes:
      - ./.data/kafka2/data:/var/lib/kafka/data
    depends_on:
      - zoo1

  kafka3:
    image: confluentinc/cp-kafka:5.0.0
    hostname: kafka3
    ports:
      - '20033:9094'
    environment:
      KAFKA_ADVERTISED_LISTENERS:
        LISTENER_DOCKER_INTERNAL://kafka3:9094,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:20033
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP:
        LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: 'zoo1:20021'
      KAFKA_BROKER_ID: 3
      KAFKA_LOG4J_LOGGERS:
        'kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO'
    volumes:
      - ./.data/kafka3/data:/var/lib/kafka/data
    depends_on:
      - zoo1
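One thing worth checking with custom ports: a Kafka client first contacts the bootstrap address, then reconnects to whatever host:port the matching advertised listener declares. So the advertised EXTERNAL address must be exactly what is reachable from the host (here 127.0.0.1:20031, matching the 20031:9092 mapping), or produce requests will fail even though the initial connection works. A minimal sketch of that indirection, using the listener names from the compose file above:

```python
def advertised_address(advertised_listeners: str, listener_name: str) -> str:
    """Pick the host:port a broker advertises for a given listener name.
    Clients reconnect to this address after the initial bootstrap."""
    for entry in advertised_listeners.split(","):
        name, _, addr = entry.partition("://")
        if name == listener_name:
            return addr
    raise KeyError(listener_name)

adv = ("LISTENER_DOCKER_INTERNAL://kafka1:9092,"
       "LISTENER_DOCKER_EXTERNAL://127.0.0.1:20031")
# A host-side client that bootstrapped via 127.0.0.1:20031 is told to come
# back to the EXTERNAL address, so that address must be reachable from the host:
print(advertised_address(adv, "LISTENER_DOCKER_EXTERNAL"))  # 127.0.0.1:20031
```

If the connection succeeds but sends fail, comparing each broker's advertised EXTERNAL address against its actual host port mapping is usually the first thing to verify.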

ERROR: Invalid interpolation format - docker-compose v2.0

[root@localhost kafka-stack-docker-compose]# docker-compose -f full-stack.yml up -d
ERROR: Invalid interpolation format for "environment" option in service "kafka1": "LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092"

I have already exported the environment variable: export DOCKER_HOST_IP=127.0.0.1

I had to edit the .yml file and remove the ':-127.0.0.1' part for it to work.

edited line:
"kafka1": "LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP}:9092"
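For reference, ${VAR:-default} is Compose's "use the default when the variable is unset or empty" substitution; it requires a docker-compose binary new enough to understand Compose file format 2.1+, which is why older binaries report "Invalid interpolation format". A rough model of the substitution rule, under the assumption that only the :- operator is in play:

```python
import re

def interpolate(value: str, env: dict) -> str:
    """Rough model of Compose's ${VAR:-default} substitution: use the
    environment value when set and non-empty, otherwise the default."""
    def repl(match):
        var, _, default = match.group(1).partition(":-")
        got = env.get(var, "")
        return got if got else default
    return re.sub(r"\$\{([^}]*)\}", repl, value)

line = "LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092"
print(interpolate(line, {}))                              # falls back to 127.0.0.1
print(interpolate(line, {"DOCKER_HOST_IP": "10.0.0.5"}))  # uses the exported value
```

So removing ':-127.0.0.1' works around the parser of an old docker-compose version, at the cost of losing the fallback; upgrading docker-compose keeps both behaviors.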

kafka-connect-ui connectivity error

Hello simplesteph, thanks for creating these useful docker-compose files.

I just finished the kafka cluster setup course and I'm trying to use the full-stack.yml docker-compose file but I'm getting a connectivity error on the web page.

I found this answer in the course Q&A, but removing network_mode: host is no longer an option, as the full-stack.yml file doesn't include it.

I also tried running the command below, because another student said it worked for them, but even this doesn't work for me.

sudo docker run --rm -p 8003:8000 -e "CONNECT_URL=http://[different-options]:8083" landoop/kafka-connect-ui:0.9.4

Different options for the CONNECT_URL: public-ec2-ip, kafka-connect, my-public-dns

I also looked for answers to this issue in the Q&A section of the course and here in Github but I can't find anything related to the connectivity error.

Thanks again for taking the time to suggest a solution

Monitor Data Status

I am new to Docker and Kafka Connect. I am creating my own Kafka cluster and Kafka Connect cluster on my local machine with Docker Compose.

Here is the scenario: I am using the JDBC connector to sync data from a DB2 database to an Oracle database. The volume can be more than 10 million records, so after creating the source and sink connectors, how do I check/monitor/verify that all the data (e.g. 10 million records) moved into the topics properly and out to the sink DB properly, without any error?

If there is an error, for example some data is missing, how can I identify that data?

I know these might be very easy questions, but again, I am very new here, so please share your ideas. Thanks
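A common approach is to verify completeness via offsets and consumer-group lag rather than inspecting individual rows: the source side is caught up when the topic's latest offsets stop growing, and the sink side when the connector's consumer group has zero lag. The offsets themselves can be fetched with, e.g., kafka-python's KafkaConsumer.end_offsets() and committed() (treat the exact calls as an assumption; the names here are illustrative). The lag arithmetic is just:

```python
def total_lag(end_offsets: dict, committed: dict) -> int:
    """Sum of per-partition lag: latest offset minus committed offset.
    Zero means the sink has processed everything produced so far."""
    return sum(end - committed.get(tp, 0) for tp, end in end_offsets.items())

# Hypothetical offsets for a two-partition topic; keys are (topic, partition):
end = {("deposits", 0): 5_000_000, ("deposits", 1): 5_000_000}
done = {("deposits", 0): 5_000_000, ("deposits", 1): 4_999_990}
print(total_lag(end, done))  # 10 records still to be written to the sink
```

For pinpointing missing rows, comparing COUNT(*) and a checksum over key ranges on both databases narrows down which range diverged; the connector's Connect REST status endpoint also reports failed tasks and their errors.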

All services listen to 0.0.0.0 instead of 127.0.0.1

Hi, I have /etc/hosts like this


127.0.0.1     kafka1
127.0.0.1     kafka2
127.0.0.1     kafka3
127.0.0.1     zoo1
127.0.0.1     zoo2
127.0.0.1     zoo3
...

However, docker-compose -f full-stack.yml up still starts all services listening on 0.0.0.0.

Question about ymls

Hi all

thanks for the awesome Docker Compose configs, they are really cool.

But I had to change ${DOCKER_HOST_IP:-127.0.0.1} (link) to my virtual machine's exact IP and set version to '2' (link) to make it work.

I'm not sure that ${DOCKER_HOST_IP:-127.0.0.1} is valid. I saw errors saying this value could not be parsed.

Can you please confirm that my mentioned changes are correct?

KafkaConnectionError: 111 ECONNREFUSED

I have an Airflow DAG calling a kafka-python producer. I am getting the error below when trying to send a message from the Airflow container to the Kafka container.

I am using Docker Desktop 3.3.3 on a MacBook Pro. Airflow and Kafka are separate Docker Compose applications. Both are using the same Docker network, which is specified in the compose files:

networks:
  default:
    external: true
    name: ti-network

I am able to connect to the Kafka container using the Conduktor client on my Mac, pointing to 127.0.0.1:9092.

Here is the error message I see in Airflow:

[2021-07-09 17:27:20,322] {conn.py:381} INFO - <BrokerConnection node_id=bootstrap-0 host=kafka1:9092 <connecting> [IPv4 ('172.26.0.9', 9092)]>: connecting to kafka1:9092 [('172.26.0.9', 9092) IPv4]
[2021-07-09 17:27:20,323] {conn.py:1205} INFO - Probing node bootstrap-0 broker version
[2021-07-09 17:27:20,324] {conn.py:410} INFO - <BrokerConnection node_id=bootstrap-0 host=kafka1:9092 <connecting> [IPv4 ('172.26.0.9', 9092)]>: Connection complete.
[2021-07-09 17:27:20,432] {conn.py:1267} INFO - Broker version identified as 2.5.0
[2021-07-09 17:27:20,433] {conn.py:1269} INFO - Set configuration api_version=(2, 5, 0) to skip auto check_version requests on startup
[2021-07-09 17:27:20,589] {conn.py:381} INFO - <BrokerConnection node_id=1 host=127.0.0.1:9092 <connecting> [IPv4 ('127.0.0.1', 9092)]>: connecting to 127.0.0.1:9092 [('127.0.0.1', 9092) IPv4]
[2021-07-09 17:27:20,590] {conn.py:419} ERROR - Connect attempt to <BrokerConnection node_id=1 host=127.0.0.1:9092 <connecting> [IPv4 ('127.0.0.1', 9092)]> returned error 111. Disconnecting.
[2021-07-09 17:27:20,591] {conn.py:919} INFO - <BrokerConnection node_id=1 host=127.0.0.1:9092 <connecting> [IPv4 ('127.0.0.1', 9092)]>: Closing connection. KafkaConnectionError: 111 ECONNREFUSED

Here is the Kafka compose file:

version: '2.1'

services:
  zoo1:
    image: zookeeper:3.4.9
    hostname: zoo1
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zoo1:2888:3888
    volumes:
      - ./zk-single-kafka-single/zoo1/data:/data
      - ./zk-single-kafka-single/zoo1/datalog:/datalog

  kafka1:
    image: confluentinc/cp-kafka:5.5.1
    hostname: kafka1
    ports:
      - "9092:9092"
      - "9999:9999"
    environment:
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_JMX_PORT: 9999
      KAFKA_JMX_HOSTNAME: ${DOCKER_HOST_IP:-127.0.0.1}
      KAFKA_CREATE_TOPICS: "TopicCrypto:1:1"
      # For testing small segments 16MB and retention of 128MB
      KAFKA_LOG_SEGMENT_BYTES: 16777216
      KAFKA_LOG_RETENTION_BYTES: 134217728
    volumes:
      - ./zk-single-kafka-single/kafka1/data:/var/lib/kafka/data
    depends_on:
      - zoo1
      
networks:
  default:
    external: true
    name: ti-network
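As a quick sanity check of the byte values in the compose file's "testing small segments" comment, both are exact binary megabytes:

```python
# The compose file's test values are exact MiB counts:
assert 16 * 1024 ** 2 == 16777216      # KAFKA_LOG_SEGMENT_BYTES: 16 MiB segments
assert 128 * 1024 ** 2 == 134217728    # KAFKA_LOG_RETENTION_BYTES: 128 MiB retention
print("16 MiB segments, 128 MiB retention")
```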

Here is the kafka-python code:

from kafka import KafkaProducer
import logging
from json import dumps

TOPIC = 'TopicCrypto'
SERVER = 'kafka1:9092'

def generate_message(**kwargs):
    message = kwargs['message']
    logging.info("Message: %s", message)

    producer = KafkaProducer(
        bootstrap_servers=[SERVER],
        api_version=(2, 5, 0),
        value_serializer=lambda x: dumps(x).encode('utf-8'),
    )

    logging.info("Partitions: %s", producer.partitions_for(TOPIC))

    producer.send(TOPIC, value=message)
    logging.info("Sent message")

    producer.close()

Confluent metrics

Would be great if there was an env var that allowed us to disable the metrics being sent to Confluent :)

You are the best!!

Thank you guys, it's so easy to deploy and use from outside the containers!!

Full Stack Kafka1 Node Crashes on Startup

I git cloned the repo and ran

export DOCKER_HOST_IP=$(docker-machine ip dev)
docker-compose -f full-stack.yml up

The stack runs but then crashes. It looks like something to do with the logs not initializing correctly.

===> ENV Variables ...
ALLOW_UNSIGNED=false
COMPONENT=kafka
CONFLUENT_DEB_VERSION=1
CONFLUENT_MAJOR_VERSION=5
CONFLUENT_MINOR_VERSION=1
CONFLUENT_MVN_LABEL=
CONFLUENT_PATCH_VERSION=0
CONFLUENT_PLATFORM_LABEL=
CONFLUENT_VERSION=5.1.0
CUB_CLASSPATH=/etc/confluent/docker/docker-utils.jar
HOME=/root
HOSTNAME=kafka1
KAFKA_ADVERTISED_LISTENERS=LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL://192.168.99.100:9092
KAFKA_BROKER_ID=1
KAFKA_INTER_BROKER_LISTENER_NAME=LISTENER_DOCKER_INTERNAL
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
KAFKA_LOG4J_LOGGERS=kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
KAFKA_VERSION=2.1.0
KAFKA_ZOOKEEPER_CONNECT=zoo1:2181
LANG=C.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
PYTHON_PIP_VERSION=8.1.2
PYTHON_VERSION=2.7.9-1
SCALA_VERSION=2.11
SHLVL=1
ZULU_OPENJDK_VERSION=8=8.30.0.1
_=/usr/bin/env
===> User
uid=0(root) gid=0(root) groups=0(root)
===> Configuring ...
===> Running preflight checks ... 
===> Check if /var/lib/kafka/data is writable ...
===> Check if Zookeeper is healthy ...
===> Launching ... 
===> Launching kafka ... 
[2019-02-16 19:28:34,619] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2019-02-16 19:28:35,572] INFO KafkaConfig values: 
	advertised.host.name = null
	advertised.listeners = LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL://192.168.99.100:9092
	advertised.port = null
	alter.config.policy.class.name = null
	alter.log.dirs.replication.quota.window.num = 11
	alter.log.dirs.replication.quota.window.size.seconds = 1
	authorizer.class.name = 
	auto.create.topics.enable = true
	auto.leader.rebalance.enable = true
	background.threads = 10
	broker.id = 1
	broker.id.generation.enable = true
	broker.interceptor.class = class org.apache.kafka.server.interceptor.DefaultBrokerInterceptor
	broker.rack = null
	client.quota.callback.class = null
	compression.type = producer
	connection.failed.authentication.delay.ms = 100
	connections.max.idle.ms = 600000
	controlled.shutdown.enable = true
	controlled.shutdown.max.retries = 3
	controlled.shutdown.retry.backoff.ms = 5000
	controller.socket.timeout.ms = 30000
	create.topic.policy.class.name = null
	default.replication.factor = 1
	delegation.token.expiry.check.interval.ms = 3600000
	delegation.token.expiry.time.ms = 86400000
	delegation.token.master.key = null
	delegation.token.max.lifetime.ms = 604800000
	delete.records.purgatory.purge.interval.requests = 1
	delete.topic.enable = true
	fetch.purgatory.purge.interval.requests = 1000
	group.initial.rebalance.delay.ms = 3000
	group.max.session.timeout.ms = 300000
	group.min.session.timeout.ms = 6000
	host.name = 
	inter.broker.listener.name = LISTENER_DOCKER_INTERNAL
	inter.broker.protocol.version = 2.1-IV2
	kafka.metrics.polling.interval.secs = 10
	kafka.metrics.reporters = []
	leader.imbalance.check.interval.seconds = 300
	leader.imbalance.per.broker.percentage = 10
	listener.security.protocol.map = LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
	listeners = LISTENER_DOCKER_INTERNAL://0.0.0.0:19092,LISTENER_DOCKER_EXTERNAL://0.0.0.0:9092
	log.cleaner.backoff.ms = 15000
	log.cleaner.dedupe.buffer.size = 134217728
	log.cleaner.delete.retention.ms = 86400000
	log.cleaner.enable = true
	log.cleaner.io.buffer.load.factor = 0.9
	log.cleaner.io.buffer.size = 524288
	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
	log.cleaner.min.cleanable.ratio = 0.5
	log.cleaner.min.compaction.lag.ms = 0
	log.cleaner.threads = 1
	log.cleanup.policy = [delete]
	log.dir = /tmp/kafka-logs
	log.dirs = /var/lib/kafka/data
	log.flush.interval.messages = 9223372036854775807
	log.flush.interval.ms = null
	log.flush.offset.checkpoint.interval.ms = 60000
	log.flush.scheduler.interval.ms = 9223372036854775807
	log.flush.start.offset.checkpoint.interval.ms = 60000
	log.index.interval.bytes = 4096
	log.index.size.max.bytes = 10485760
	log.message.downconversion.enable = true
	log.message.format.version = 2.1-IV2
	log.message.timestamp.difference.max.ms = 9223372036854775807
	log.message.timestamp.type = CreateTime
	log.preallocate = false
	log.retention.bytes = -1
	log.retention.check.interval.ms = 300000
	log.retention.hours = 168
	log.retention.minutes = null
	log.retention.ms = null
	log.roll.hours = 168
	log.roll.jitter.hours = 0
	log.roll.jitter.ms = null
	log.roll.ms = null
	log.segment.bytes = 1073741824
	log.segment.delete.delay.ms = 60000
	max.connections.per.ip = 2147483647
	max.connections.per.ip.overrides = 
	max.incremental.fetch.session.cache.slots = 1000
	message.max.bytes = 1000012
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	min.insync.replicas = 1
	num.io.threads = 8
	num.network.threads = 3
	num.partitions = 1
	num.recovery.threads.per.data.dir = 1
	num.replica.alter.log.dirs.threads = null
	num.replica.fetchers = 1
	offset.metadata.max.bytes = 4096
	offsets.commit.required.acks = -1
	offsets.commit.timeout.ms = 5000
	offsets.load.buffer.size = 5242880
	offsets.retention.check.interval.ms = 600000
	offsets.retention.minutes = 10080
	offsets.topic.compression.codec = 0
	offsets.topic.num.partitions = 50
	offsets.topic.replication.factor = 1
	offsets.topic.segment.bytes = 104857600
	password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
	password.encoder.iterations = 4096
	password.encoder.key.length = 128
	password.encoder.keyfactory.algorithm = null
	password.encoder.old.secret = null
	password.encoder.secret = null
	port = 9092
	principal.builder.class = null
	producer.purgatory.purge.interval.requests = 1000
	queued.max.request.bytes = -1
	queued.max.requests = 500
	quota.consumer.default = 9223372036854775807
	quota.producer.default = 9223372036854775807
	quota.window.num = 11
	quota.window.size.seconds = 1
	replica.fetch.backoff.ms = 1000
	replica.fetch.max.bytes = 1048576
	replica.fetch.min.bytes = 1
	replica.fetch.response.max.bytes = 10485760
	replica.fetch.wait.max.ms = 500
	replica.high.watermark.checkpoint.interval.ms = 5000
	replica.lag.time.max.ms = 10000
	replica.socket.receive.buffer.bytes = 65536
	replica.socket.timeout.ms = 30000
	replication.quota.window.num = 11
	replication.quota.window.size.seconds = 1
	request.timeout.ms = 30000
	reserved.broker.max.id = 1000
	sasl.client.callback.handler.class = null
	sasl.enabled.mechanisms = [GSSAPI]
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.principal.to.local.rules = [DEFAULT]
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism.inter.broker.protocol = GSSAPI
	sasl.server.callback.handler.class = null
	security.inter.broker.protocol = PLAINTEXT
	socket.receive.buffer.bytes = 102400
	socket.request.max.bytes = 104857600
	socket.send.buffer.bytes = 102400
	ssl.cipher.suites = []
	ssl.client.auth = none
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
	transaction.max.timeout.ms = 900000
	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
	transaction.state.log.load.buffer.size = 5242880
	transaction.state.log.min.isr = 2
	transaction.state.log.num.partitions = 50
	transaction.state.log.replication.factor = 3
	transaction.state.log.segment.bytes = 104857600
	transactional.id.expiration.ms = 604800000
	unclean.leader.election.enable = false
	zookeeper.connect = zoo1:2181
	zookeeper.connection.timeout.ms = null
	zookeeper.max.in.flight.requests = 10
	zookeeper.session.timeout.ms = 6000
	zookeeper.set.acl = false
	zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2019-02-16 19:28:35,763] WARN The package io.confluent.support.metrics.collectors.FullCollector for collecting the full set of support metrics could not be loaded, so we are reverting to anonymous, basic metric collection. If you are a Confluent customer, please refer to the Confluent Platform documentation, section Proactive Support, on how to activate full metrics collection. (io.confluent.support.metrics.KafkaSupportConfig)
[2019-02-16 19:28:35,831] WARN Please note that the support metrics collection feature ("Metrics") of Proactive Support is enabled.  With Metrics enabled, this broker is configured to collect and report certain broker and cluster metadata ("Metadata") about your use of the Confluent Platform (including without limitation, your remote internet protocol address) to Confluent, Inc. ("Confluent") or its parent, subsidiaries, affiliates or service providers every 24hours.  This Metadata may be transferred to any country in which Confluent maintains facilities.  For a more in depth discussion of how Confluent processes such information, please read our Privacy Policy located at http://www.confluent.io/privacy. By proceeding with `confluent.support.metrics.enable=true`, you agree to all such collection, transfer, storage and use of Metadata by Confluent.  You can turn the Metrics feature off by setting `confluent.support.metrics.enable=false` in the broker configuration and restarting the broker.  See the Confluent Platform documentation for further information. (io.confluent.support.metrics.SupportedServerStartable)
[2019-02-16 19:28:35,845] INFO starting (kafka.server.KafkaServer)
[2019-02-16 19:28:35,847] INFO Connecting to zookeeper on zoo1:2181 (kafka.server.KafkaServer)
[2019-02-16 19:28:35,898] INFO [ZooKeeperClient] Initializing a new session to zoo1:2181. (kafka.zookeeper.ZooKeeperClient)
[2019-02-16 19:28:35,923] INFO Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT (org.apache.zookeeper.ZooKeeper)
[2019-02-16 19:28:35,924] INFO Client environment:host.name=kafka1 (org.apache.zookeeper.ZooKeeper)
[2019-02-16 19:28:35,924] INFO Client environment:java.version=1.8.0_172 (org.apache.zookeeper.ZooKeeper)
[2019-02-16 19:28:35,924] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
[2019-02-16 19:28:35,924] INFO Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre (org.apache.zookeeper.ZooKeeper)
[2019-02-16 19:28:35,924] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/javax.annotation-api-1.2.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.7.2.jar:/usr/bin/../share/java/kafka/lz4-java-1.5.0.jar:/usr/bin/../share/java/kafka/javax.inject-1.jar:/usr/bin/../share/java/kafka/jersey-server-2.27.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/httpmime-4.5.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.25.jar:/usr/bin/../share/java/kafka/common-utils-5.1.0.jar:/usr/bin/../share/java/kafka/connect-runtime-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/support-metrics-common-5.1.0.jar:/usr/bin/../share/java/kafka/netty-3.10.6.Final.jar:/usr/bin/../share/java/kafka/kafka_2.11-2.1.0-cp1-test-sources.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.12.v20180830.jar:/usr/bin/../share/java/kafka/kafka_2.11-2.1.0-cp1-test.jar:/usr/bin/../share/java/kafka/connect-api-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/httpclient-4.5.2.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.12.v20180830.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.27.jar:/usr/bin/../share/java/kafka/connect-json-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/javassist-3.22.0-CR2.jar:/usr/bin/../share/java/kafka/scala-library-2.11.12.jar:/usr/bin/../share/java/kafka/guava-20.0.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.12.v20180830.jar:/usr/bin/../share/java/kafka/kafka-clients-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/reflections-0.9.11.jar:/usr/bin/../share/java/kafka/commons-compress-1.8.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.1.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/connect-file-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/kafka_2.11-2.1.0-cp1-javadoc.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.1.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.8.3.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/commons-codec-1.9.jar:/usr/bin/../share/java/kafka/jackson-databind-2.9.7.jar:/usr/bin/../share/java/kafka/xz-1.5.jar:/usr/bin/../share/java/kafka/hk2-utils-2.5.0-b42.jar:/usr/bin/../share/java/kafka/zkclient-0.10.jar:/usr/bin/../share/java/kafka/scala-reflect-2.11.12.jar:/usr/bin/../share/java/kafka/log4j-1.2.17.jar:/usr/bin/../share/java/kafka/commons-validator-1.4.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.4.13.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/jackson-core-asl-1.9.13.jar:/usr/bin/../share/java/kafka/kafka_2.11-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/jackson-mapper-asl-1.9.13.jar:/usr/bin/../share/java/kafka/kafka_2.11-2.1.0-cp1-scaladoc.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.12.v20180830.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.0.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/kafka_2.11-2.1.0-cp1-sources.jar:/usr/bin/../share/java/kafka/javax.inject-2.5.0-b42.jar:/usr/bin/../share/java/kafka/paranamer-2.7.jar:/usr/bin/../share/java/kafka/jline-0.9.94.jar:/usr/bin/../share/java/kafka/hk2-api-2.5.0-b42.jar:/usr/bin/../share/java/kafka/plexus-utils-3.1.0.jar:/usr/bin/../share/java/kafka/jersey-common-2.27.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/httpcore-4.4.4.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.12.v20180830.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/audience-annotations-0.5.0.jar:/usr/bin/../share/java/kafka/kafka-streams-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/scala-logging_2.11-3.9.0.jar:/usr/bin/../share/java/kafka/support-metrics-client-5.1.0.jar:/usr/bin/../share/java/kafka/avro-1.8.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.5.jar:/usr/bin/../share/java/kafka/jackson-core-2.9.7.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.9.7.jar:/usr/bin/../share/java/kafka/maven-artifact-3.5.4.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.27.jar:/usr/bin/../share/java/kafka/validation-api-1.1.0.Final.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.12.v20180830.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.12.v20180830.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.27.jar:/usr/bin/../share/java/kafka/kafka-tools-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/rocksdbjni-5.14.2.jar:/usr/bin/../share/java/kafka/slf4j-log4j12-1.7.25.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.12.v20180830.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.9.7.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.12.v20180830.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.5.0-b42.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.9.7.jar:/usr/bin/../share/java/kafka/jersey-media-jaxb-2.27.jar:/usr/bin/../share/java/kafka/jersey-client-2.27.jar:/usr/bin/../share/java/kafka/hk2-locator-2.5.0-b42.jar:/usr/bin/../share/java/kafka/connect-transforms-2.1.0-cp1.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.9.7.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.3.5-4.jar:/usr/bin/../share/java/kafka/commons-lang3-3.1.jar:/usr/bin/../share/java/kafka/commons-digester-1.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.11-2.1.0-cp1.jar:/usr/bin/../share/java/confluent-support-metrics/*:/usr/share/java/confluent-support-metrics/* (org.apache.zookeeper.ZooKeeper)
[2019-02-16 19:28:35,927] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2019-02-16 19:28:35,928] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2019-02-16 19:28:35,928] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2019-02-16 19:28:35,928] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2019-02-16 19:28:35,929] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2019-02-16 19:28:35,929] INFO Client environment:os.version=4.4.89-boot2docker (org.apache.zookeeper.ZooKeeper)
[2019-02-16 19:28:35,929] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[2019-02-16 19:28:35,929] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2019-02-16 19:28:35,929] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
[2019-02-16 19:28:35,931] INFO Initiating client connection, connectString=zoo1:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@dd05255 (org.apache.zookeeper.ZooKeeper)
[2019-02-16 19:28:35,979] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-02-16 19:28:35,985] INFO Opening socket connection to server zoo1/172.18.0.2:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2019-02-16 19:28:35,997] INFO Socket connection established to zoo1/172.18.0.2:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-02-16 19:28:36,013] INFO Session establishment complete on server zoo1/172.18.0.2:2181, sessionid = 0x168f7c7d2160001, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2019-02-16 19:28:36,016] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-02-16 19:28:36,880] INFO Cluster ID = c7GS8FhATlqZfshc6U3_2g (kafka.server.KafkaServer)
[2019-02-16 19:28:36,886] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2019-02-16 19:28:37,034] INFO KafkaConfig values: 
	advertised.host.name = null
	advertised.listeners = LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL://192.168.99.100:9092
	advertised.port = null
	alter.config.policy.class.name = null
	alter.log.dirs.replication.quota.window.num = 11
	alter.log.dirs.replication.quota.window.size.seconds = 1
	authorizer.class.name = 
	auto.create.topics.enable = true
	auto.leader.rebalance.enable = true
	background.threads = 10
	broker.id = 1
	broker.id.generation.enable = true
	broker.interceptor.class = class org.apache.kafka.server.interceptor.DefaultBrokerInterceptor
	broker.rack = null
	client.quota.callback.class = null
	compression.type = producer
	connection.failed.authentication.delay.ms = 100
	connections.max.idle.ms = 600000
	controlled.shutdown.enable = true
	controlled.shutdown.max.retries = 3
	controlled.shutdown.retry.backoff.ms = 5000
	controller.socket.timeout.ms = 30000
	create.topic.policy.class.name = null
	default.replication.factor = 1
	delegation.token.expiry.check.interval.ms = 3600000
	delegation.token.expiry.time.ms = 86400000
	delegation.token.master.key = null
	delegation.token.max.lifetime.ms = 604800000
	delete.records.purgatory.purge.interval.requests = 1
	delete.topic.enable = true
	fetch.purgatory.purge.interval.requests = 1000
	group.initial.rebalance.delay.ms = 3000
	group.max.session.timeout.ms = 300000
	group.min.session.timeout.ms = 6000
	host.name = 
	inter.broker.listener.name = LISTENER_DOCKER_INTERNAL
	inter.broker.protocol.version = 2.1-IV2
	kafka.metrics.polling.interval.secs = 10
	kafka.metrics.reporters = []
	leader.imbalance.check.interval.seconds = 300
	leader.imbalance.per.broker.percentage = 10
	listener.security.protocol.map = LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
	listeners = LISTENER_DOCKER_INTERNAL://0.0.0.0:19092,LISTENER_DOCKER_EXTERNAL://0.0.0.0:9092
	log.cleaner.backoff.ms = 15000
	log.cleaner.dedupe.buffer.size = 134217728
	log.cleaner.delete.retention.ms = 86400000
	log.cleaner.enable = true
	log.cleaner.io.buffer.load.factor = 0.9
	log.cleaner.io.buffer.size = 524288
	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
	log.cleaner.min.cleanable.ratio = 0.5
	log.cleaner.min.compaction.lag.ms = 0
	log.cleaner.threads = 1
	log.cleanup.policy = [delete]
	log.dir = /tmp/kafka-logs
	log.dirs = /var/lib/kafka/data
	log.flush.interval.messages = 9223372036854775807
	log.flush.interval.ms = null
	log.flush.offset.checkpoint.interval.ms = 60000
	log.flush.scheduler.interval.ms = 9223372036854775807
	log.flush.start.offset.checkpoint.interval.ms = 60000
	log.index.interval.bytes = 4096
	log.index.size.max.bytes = 10485760
	log.message.downconversion.enable = true
	log.message.format.version = 2.1-IV2
	log.message.timestamp.difference.max.ms = 9223372036854775807
	log.message.timestamp.type = CreateTime
	log.preallocate = false
	log.retention.bytes = -1
	log.retention.check.interval.ms = 300000
	log.retention.hours = 168
	log.retention.minutes = null
	log.retention.ms = null
	log.roll.hours = 168
	log.roll.jitter.hours = 0
	log.roll.jitter.ms = null
	log.roll.ms = null
	log.segment.bytes = 1073741824
	log.segment.delete.delay.ms = 60000
	max.connections.per.ip = 2147483647
	max.connections.per.ip.overrides = 
	max.incremental.fetch.session.cache.slots = 1000
	message.max.bytes = 1000012
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	min.insync.replicas = 1
	num.io.threads = 8
	num.network.threads = 3
	num.partitions = 1
	num.recovery.threads.per.data.dir = 1
	num.replica.alter.log.dirs.threads = null
	num.replica.fetchers = 1
	offset.metadata.max.bytes = 4096
	offsets.commit.required.acks = -1
	offsets.commit.timeout.ms = 5000
	offsets.load.buffer.size = 5242880
	offsets.retention.check.interval.ms = 600000
	offsets.retention.minutes = 10080
	offsets.topic.compression.codec = 0
	offsets.topic.num.partitions = 50
	offsets.topic.replication.factor = 1
	offsets.topic.segment.bytes = 104857600
	password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
	password.encoder.iterations = 4096
	password.encoder.key.length = 128
	password.encoder.keyfactory.algorithm = null
	password.encoder.old.secret = null
	password.encoder.secret = null
	port = 9092
	principal.builder.class = null
	producer.purgatory.purge.interval.requests = 1000
	queued.max.request.bytes = -1
	queued.max.requests = 500
	quota.consumer.default = 9223372036854775807
	quota.producer.default = 9223372036854775807
	quota.window.num = 11
	quota.window.size.seconds = 1
	replica.fetch.backoff.ms = 1000
	replica.fetch.max.bytes = 1048576
	replica.fetch.min.bytes = 1
	replica.fetch.response.max.bytes = 10485760
	replica.fetch.wait.max.ms = 500
	replica.high.watermark.checkpoint.interval.ms = 5000
	replica.lag.time.max.ms = 10000
	replica.socket.receive.buffer.bytes = 65536
	replica.socket.timeout.ms = 30000
	replication.quota.window.num = 11
	replication.quota.window.size.seconds = 1
	request.timeout.ms = 30000
	reserved.broker.max.id = 1000
	sasl.client.callback.handler.class = null
	sasl.enabled.mechanisms = [GSSAPI]
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.principal.to.local.rules = [DEFAULT]
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism.inter.broker.protocol = GSSAPI
	sasl.server.callback.handler.class = null
	security.inter.broker.protocol = PLAINTEXT
	socket.receive.buffer.bytes = 102400
	socket.request.max.bytes = 104857600
	socket.send.buffer.bytes = 102400
	ssl.cipher.suites = []
	ssl.client.auth = none
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
	transaction.max.timeout.ms = 900000
	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
	transaction.state.log.load.buffer.size = 5242880
	transaction.state.log.min.isr = 2
	transaction.state.log.num.partitions = 50
	transaction.state.log.replication.factor = 3
	transaction.state.log.segment.bytes = 104857600
	transactional.id.expiration.ms = 604800000
	unclean.leader.election.enable = false
	zookeeper.connect = zoo1:2181
	zookeeper.connection.timeout.ms = null
	zookeeper.max.in.flight.requests = 10
	zookeeper.session.timeout.ms = 6000
	zookeeper.set.acl = false
	zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2019-02-16 19:28:37,145] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-02-16 19:28:37,145] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-02-16 19:28:37,154] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-02-16 19:28:37,236] INFO Loading logs. (kafka.log.LogManager)
[2019-02-16 19:28:37,317] INFO Logs loading complete in 81 ms. (kafka.log.LogManager)
[2019-02-16 19:28:37,411] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2019-02-16 19:28:37,414] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2019-02-16 19:28:37,421] INFO Starting the log cleaner (kafka.log.LogCleaner)
[2019-02-16 19:28:38,196] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
[2019-02-16 19:28:39,219] INFO Awaiting socket connections on 0.0.0.0:19092. (kafka.network.Acceptor)
[2019-02-16 19:28:39,336] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2019-02-16 19:28:39,429] INFO [SocketServer brokerId=1] Started 2 acceptor threads (kafka.network.SocketServer)
[2019-02-16 19:28:39,531] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-02-16 19:28:39,536] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-02-16 19:28:39,550] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-02-16 19:28:39,667] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2019-02-16 19:28:39,891] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
[2019-02-16 19:28:39,898] INFO Result of znode creation at /brokers/ids/1 is: OK (kafka.zk.KafkaZkClient)
[2019-02-16 19:28:39,908] INFO Registered broker 1 at path /brokers/ids/1 with addresses: ArrayBuffer(EndPoint(kafka1,19092,ListenerName(LISTENER_DOCKER_INTERNAL),PLAINTEXT), EndPoint(192.168.99.100,9092,ListenerName(LISTENER_DOCKER_EXTERNAL),PLAINTEXT)) (kafka.zk.KafkaZkClient)
[2019-02-16 19:28:39,921] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2019-02-16 19:28:40,392] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
[2019-02-16 19:28:40,410] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-02-16 19:28:40,447] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-02-16 19:28:40,452] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-02-16 19:28:40,471] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
[2019-02-16 19:28:40,541] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
[2019-02-16 19:28:40,542] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
[2019-02-16 19:28:40,564] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
[2019-02-16 19:28:40,578] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
[2019-02-16 19:28:40,579] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2019-02-16 19:28:40,582] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
[2019-02-16 19:28:40,585] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2019-02-16 19:28:40,621] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 39 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-02-16 19:28:40,681] INFO [ProducerId Manager 1]: Acquired new producerId block (brokerId:1,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2019-02-16 19:28:40,822] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2019-02-16 19:28:40,856] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2019-02-16 19:28:40,856] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2019-02-16 19:28:40,928] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
[2019-02-16 19:28:40,931] INFO [Controller id=1] Partitions being reassigned: Map() (kafka.controller.KafkaController)
[2019-02-16 19:28:40,936] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
[2019-02-16 19:28:40,936] INFO [Controller id=1] Currently shutting brokers in the cluster: Set() (kafka.controller.KafkaController)
[2019-02-16 19:28:40,936] INFO [Controller id=1] Current list of topics in the cluster: Set() (kafka.controller.KafkaController)
[2019-02-16 19:28:40,937] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
[2019-02-16 19:28:40,951] INFO [Controller id=1] List of topics to be deleted:  (kafka.controller.KafkaController)
[2019-02-16 19:28:40,954] INFO [Controller id=1] List of topics ineligible for deletion:  (kafka.controller.KafkaController)
[2019-02-16 19:28:40,956] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
[2019-02-16 19:28:40,957] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
[2019-02-16 19:28:40,993] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ReplicaStateMachine)
[2019-02-16 19:28:41,001] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ReplicaStateMachine)
[2019-02-16 19:28:41,012] INFO [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> Map() (kafka.controller.ReplicaStateMachine)
[2019-02-16 19:28:41,021] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka1:19092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
[2019-02-16 19:28:41,022] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.PartitionStateMachine)
[2019-02-16 19:28:41,035] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.PartitionStateMachine)
[2019-02-16 19:28:41,039] INFO [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> Map() (kafka.controller.PartitionStateMachine)
[2019-02-16 19:28:41,044] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
[2019-02-16 19:28:41,047] INFO [Controller id=1] Removing partitions Set() from the list of reassigned partitions in zookeeper (kafka.controller.KafkaController)
[2019-02-16 19:28:41,061] INFO [Controller id=1] No more partitions need to be reassigned. Deleting zk path /admin/reassign_partitions (kafka.controller.KafkaController)
[2019-02-16 19:28:41,054] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2019-02-16 19:28:41,096] INFO [Controller id=1] Partitions undergoing preferred replica election:  (kafka.controller.KafkaController)
[2019-02-16 19:28:41,102] INFO [Controller id=1] Partitions that completed preferred replica election:  (kafka.controller.KafkaController)
[2019-02-16 19:28:41,102] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion:  (kafka.controller.KafkaController)
[2019-02-16 19:28:41,102] INFO [Controller id=1] Resuming preferred replica election for partitions:  (kafka.controller.KafkaController)
[2019-02-16 19:28:41,102] INFO [Controller id=1] Starting preferred replica leader election for partitions  (kafka.controller.KafkaController)
[2019-02-16 19:28:41,110] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
[2019-02-16 19:28:41,123] INFO [SocketServer brokerId=1] Started processors for 2 acceptors (kafka.network.SocketServer)
[2019-02-16 19:28:41,126] INFO Kafka version : 2.1.0-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-16 19:28:41,137] INFO Kafka commitId : bda8715f42a1a3db (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-16 19:28:41,163] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
[2019-02-16 19:28:41,183] INFO Waiting until monitored service is ready for metrics collection (io.confluent.support.metrics.BaseMetricsReporter)
[2019-02-16 19:28:41,196] INFO Monitored service is now ready (io.confluent.support.metrics.BaseMetricsReporter)
[2019-02-16 19:28:41,201] INFO Attempting to collect and submit metrics (io.confluent.support.metrics.BaseMetricsReporter)
[2019-02-16 19:28:42,183] WARN The replication factor of topic __confluent.support.metrics will be set to 1, which is less than the desired replication factor of 3 (reason: this cluster contains only 1 brokers).  If you happen to add more brokers to this cluster, then it is important to increase the replication factor of the topic to eventually 3 to ensure reliable and durable metrics collection. (io.confluent.support.metrics.common.kafka.KafkaUtilities)
[2019-02-16 19:28:42,193] INFO Attempting to create topic __confluent.support.metrics with 1 replicas, assuming 1 total brokers (io.confluent.support.metrics.common.kafka.KafkaUtilities)
[2019-02-16 19:28:42,359] INFO Topic creation Map(__confluent.support.metrics-0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
[2019-02-16 19:28:42,448] INFO [Controller id=1] New topics: [Set(__confluent.support.metrics)], deleted topics: [Set()], new partition replica assignment [Map(__confluent.support.metrics-0 -> Vector(1))] (kafka.controller.KafkaController)
[2019-02-16 19:28:42,455] INFO [Controller id=1] New partition creation callback for __confluent.support.metrics-0 (kafka.controller.KafkaController)
[2019-02-16 19:28:42,808] INFO ProducerConfig values: 
	acks = 1
	batch.size = 16384
	bootstrap.servers = []
	buffer.memory = 33554432
	client.dns.lookup = default
	client.id = 
	compression.type = none
	connections.max.idle.ms = 540000
	delivery.timeout.ms = 120000
	enable.idempotence = false
	interceptor.classes = []
	key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
	linger.ms = 0
	max.block.ms = 10000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 2147483647
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
 (org.apache.kafka.clients.producer.ProducerConfig)
[2019-02-16 19:28:42,960] INFO [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 0 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-02-16 19:28:42,972] ERROR Could not submit metrics to Kafka topic __confluent.support.metrics: Failed to construct kafka producer (io.confluent.support.metrics.BaseMetricsReporter)
[2019-02-16 19:28:42,978] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(__confluent.support.metrics-0) (kafka.server.ReplicaFetcherManager)
[2019-02-16 19:28:43,786] ERROR Error while creating log for __confluent.support.metrics-0 in dir /var/lib/kafka/data (kafka.server.LogDirFailureChannel)
java.io.IOException: Invalid argument
	at sun.nio.ch.FileChannelImpl.map0(Native Method)
	at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:926)
	at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:126)
	at kafka.log.OffsetIndex.<init>(OffsetIndex.scala:54)
	at kafka.log.LogSegment$.open(LogSegment.scala:634)
	at kafka.log.Log.loadSegments(Log.scala:542)
	at kafka.log.Log.<init>(Log.scala:276)
	at kafka.log.Log$.apply(Log.scala:2071)
	at kafka.log.LogManager$$anonfun$getOrCreateLog$1.apply(LogManager.scala:691)
	at kafka.log.LogManager$$anonfun$getOrCreateLog$1.apply(LogManager.scala:659)
	at scala.Option.getOrElse(Option.scala:121)
	at kafka.log.LogManager.getOrCreateLog(LogManager.scala:659)
	at kafka.cluster.Partition$$anonfun$getOrCreateReplica$1.apply(Partition.scala:199)
	at kafka.cluster.Partition$$anonfun$getOrCreateReplica$1.apply(Partition.scala:195)
	at kafka.utils.Pool$$anon$2.apply(Pool.scala:61)
	at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
	at kafka.utils.Pool.getAndMaybePut(Pool.scala:60)
	at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:194)
	at kafka.cluster.Partition$$anonfun$5$$anonfun$7.apply(Partition.scala:373)
	at kafka.cluster.Partition$$anonfun$5$$anonfun$7.apply(Partition.scala:373)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at kafka.cluster.Partition$$anonfun$5.apply(Partition.scala:373)
	at kafka.cluster.Partition$$anonfun$5.apply(Partition.scala:367)
	at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
	at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:259)
	at kafka.cluster.Partition.makeLeader(Partition.scala:367)
	at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:1162)
	at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:1160)
	at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:130)
	at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:130)
	at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236)
	at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
	at scala.collection.mutable.HashMap.foreach(HashMap.scala:130)
	at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:1160)
	at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:1072)
	at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:192)
	at kafka.server.KafkaApis.handle(KafkaApis.scala:117)
	at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
	at java.lang.Thread.run(Thread.java:748)
[2019-02-16 19:28:43,852] INFO [ReplicaManager broker=1] Stopping serving replicas in dir /var/lib/kafka/data (kafka.server.ReplicaManager)
[2019-02-16 19:28:43,857] ERROR [Broker id=1] Skipped the become-leader state change with correlation id 1 from controller 1 epoch 1 for partition __confluent.support.metrics-0 (last update controller epoch 1) since the replica for the partition is offline due to disk error org.apache.kafka.common.errors.KafkaStorageException: Error while creating log for __confluent.support.metrics-0 in dir /var/lib/kafka/data (state.change.logger)
[2019-02-16 19:28:43,869] ERROR [ReplicaManager broker=1] Error while making broker the leader for partition Topic: __confluent.support.metrics; Partition: 0; Leader: None; AllReplicas: ; InSyncReplicas:  in dir None (kafka.server.ReplicaManager)
org.apache.kafka.common.errors.KafkaStorageException: Error while creating log for __confluent.support.metrics-0 in dir /var/lib/kafka/data
Caused by: java.io.IOException: Invalid argument
	at sun.nio.ch.FileChannelImpl.map0(Native Method)
	at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:926)
	at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:126)
	at kafka.log.OffsetIndex.<init>(OffsetIndex.scala:54)
	at kafka.log.LogSegment$.open(LogSegment.scala:634)
	at kafka.log.Log.loadSegments(Log.scala:542)
	at kafka.log.Log.<init>(Log.scala:276)
	at kafka.log.Log$.apply(Log.scala:2071)
	at kafka.log.LogManager$$anonfun$getOrCreateLog$1.apply(LogManager.scala:691)
	at kafka.log.LogManager$$anonfun$getOrCreateLog$1.apply(LogManager.scala:659)
	at scala.Option.getOrElse(Option.scala:121)
	at kafka.log.LogManager.getOrCreateLog(LogManager.scala:659)
	at kafka.cluster.Partition$$anonfun$getOrCreateReplica$1.apply(Partition.scala:199)
	at kafka.cluster.Partition$$anonfun$getOrCreateReplica$1.apply(Partition.scala:195)
	at kafka.utils.Pool$$anon$2.apply(Pool.scala:61)
	at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
	at kafka.utils.Pool.getAndMaybePut(Pool.scala:60)
	at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:194)
	at kafka.cluster.Partition$$anonfun$5$$anonfun$7.apply(Partition.scala:373)
	at kafka.cluster.Partition$$anonfun$5$$anonfun$7.apply(Partition.scala:373)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.Iterator$class.foreach(Iterator.scala:891)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at kafka.cluster.Partition$$anonfun$5.apply(Partition.scala:373)
	at kafka.cluster.Partition$$anonfun$5.apply(Partition.scala:367)
	at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
	at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:259)
	at kafka.cluster.Partition.makeLeader(Partition.scala:367)
	at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:1162)
	at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:1160)
	at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:130)
	at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:130)
	at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236)
	at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
	at scala.collection.mutable.HashMap.foreach(HashMap.scala:130)
	at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:1160)
	at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:1072)
	at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:192)
	at kafka.server.KafkaApis.handle(KafkaApis.scala:117)
	at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
	at java.lang.Thread.run(Thread.java:748)
[2019-02-16 19:28:43,955] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set() (kafka.server.ReplicaFetcherManager)
[2019-02-16 19:28:43,976] INFO [ReplicaAlterLogDirsManager on broker 1] Removed fetcher for partitions Set() (kafka.server.ReplicaAlterLogDirsManager)
[2019-02-16 19:28:43,992] INFO [Controller id=1] Mark replicas __confluent.support.metrics-0 on broker 1 as offline (state.change.logger)
[2019-02-16 19:28:44,059] INFO [ReplicaManager broker=1] Broker 1 stopped fetcher for partitions  and stopped moving logs for partitions  because they are in the failed log directory /var/lib/kafka/data. (kafka.server.ReplicaManager)
[2019-02-16 19:28:44,108] INFO Stopping serving logs in dir /var/lib/kafka/data (kafka.log.LogManager)
[2019-02-16 19:28:44,126] ERROR Shutdown broker because all log dirs in /var/lib/kafka/data have failed (kafka.log.LogManager)

Leaving this here while I debug the issue in case someone else has run into this.

➜  kafka-stack-docker-compose git:(master) ✗ docker-machine --version
docker-machine version 0.16.1, build cce350d7
➜  kafka-stack-docker-compose git:(master) ✗ docker-compose --version
docker-compose version 1.23.2, build 1110ad01
➜  kafka-stack-docker-compose git:(master) ✗ docker --version
Docker version 18.09.2, build 6247962

Hosts file changes

@simplesteph could you explain why I have to change the hosts file on the machine where I run the app that connects to Kafka? I would have guessed it's only required on the HOST where Kafka is running.
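A hedged answer: after the initial bootstrap connection, it is the *client* that must resolve whatever hostname the broker advertises in its metadata, so the hosts entry belongs on the client machine as well. Assuming the broker advertises `kafka1` and `192.0.2.10` stands in for the Docker host's address:

```
# /etc/hosts on the CLIENT machine (addresses here are hypothetical)
192.0.2.10  kafka1
```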

kafka.zookeeper.ZooKeeperClientTimeoutException

Steps to reproduce:

  1. Run docker-compose -f zk-single-kafka-single.yml up
  2. Run docker exec -it kafka-stack-docker-compose_kafka1_1 bash
  3. Run kafka-topics --zookeeper 127.0.0.1:2181 --list or try to create a topic

Actual result:

Exception in thread "main" kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING

OS: Windows
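A likely cause (an assumption, since the full logs aren't shown): inside the kafka1 container, ZooKeeper is not listening on 127.0.0.1 — it runs in the separate zoo1 container. Using the compose service name instead usually works:

```shell
# from inside the kafka1 container
kafka-topics --zookeeper zoo1:2181 --list
```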

Single Zookeeper / Single Kafka fails

I am trying to fire up Kafka as described in the README, but I'm getting the error below.

  • OS: Linux (kernel 5.4.0-66-generic)
$ docker-compose -f zk-single-kafka-single.yml up
Creating gr-z_zoo1_1 ... done
Creating gr-z_kafka1_1 ... done
Attaching to gr-z_zoo1_1, gr-z_kafka1_1
zoo1_1    | ZooKeeper JMX enabled by default
kafka1_1  | ===> User
zoo1_1    | Using config: /conf/zoo.cfg
zoo1_1    | 2021-03-08 18:37:16,207 [myid:] - INFO  [main:QuorumPeerConfig@124] - Reading configuration from: /conf/zoo.cfg
kafka1_1  | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
zoo1_1    | 2021-03-08 18:37:16,219 [myid:] - INFO  [main:QuorumPeer$QuorumServer@149] - Resolved hostname: zoo1 to address: zoo1/172.18.0.2
zoo1_1    | 2021-03-08 18:37:16,219 [myid:] - ERROR [main:QuorumPeerConfig@301] - Invalid configuration, only one server specified (ignoring)
zoo1_1    | 2021-03-08 18:37:16,220 [myid:] - INFO  [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
zoo1_1    | 2021-03-08 18:37:16,220 [myid:] - INFO  [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0
zoo1_1    | 2021-03-08 18:37:16,220 [myid:] - INFO  [main:DatadirCleanupManager@101] - Purge task is not scheduled.
zoo1_1    | 2021-03-08 18:37:16,220 [myid:] - WARN  [main:QuorumPeerMain@113] - Either no config or no quorum defined in config, running  in standalone mode
kafka1_1  | ===> Configuring ...
zoo1_1    | 2021-03-08 18:37:16,229 [myid:] - INFO  [main:QuorumPeerConfig@124] - Reading configuration from: /conf/zoo.cfg
zoo1_1    | 2021-03-08 18:37:16,229 [myid:] - INFO  [main:QuorumPeer$QuorumServer@149] - Resolved hostname: zoo1 to address: zoo1/172.18.0.2
zoo1_1    | 2021-03-08 18:37:16,229 [myid:] - ERROR [main:QuorumPeerConfig@301] - Invalid configuration, only one server specified (ignoring)
zoo1_1    | 2021-03-08 18:37:16,229 [myid:] - INFO  [main:ZooKeeperServerMain@96] - Starting server
zoo1_1    | 2021-03-08 18:37:16,233 [myid:] - INFO  [main:Environment@100] - Server environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT
zoo1_1    | 2021-03-08 18:37:16,233 [myid:] - INFO  [main:Environment@100] - Server environment:host.name=zoo1
zoo1_1    | 2021-03-08 18:37:16,233 [myid:] - INFO  [main:Environment@100] - Server environment:java.version=1.8.0_121
zoo1_1    | 2021-03-08 18:37:16,233 [myid:] - INFO  [main:Environment@100] - Server environment:java.vendor=Oracle Corporation
zoo1_1    | 2021-03-08 18:37:16,233 [myid:] - INFO  [main:Environment@100] - Server environment:java.home=/usr/lib/jvm/java-1.8-openjdk/jre
zoo1_1    | 2021-03-08 18:37:16,234 [myid:] - INFO  [main:Environment@100] - Server environment:java.class.path=/zookeeper-3.4.9/bin/../build/classes:/zookeeper-3.4.9/bin/../build/lib/*.jar:/zookeeper-3.4.9/bin/../lib/slf4j-log4j12-1.6.1.jar:/zookeeper-3.4.9/bin/../lib/slf4j-api-1.6.1.jar:/zookeeper-3.4.9/bin/../lib/netty-3.10.5.Final.jar:/zookeeper-3.4.9/bin/../lib/log4j-1.2.16.jar:/zookeeper-3.4.9/bin/../lib/jline-0.9.94.jar:/zookeeper-3.4.9/bin/../zookeeper-3.4.9.jar:/zookeeper-3.4.9/bin/../src/java/lib/*.jar:/conf:
zoo1_1    | 2021-03-08 18:37:16,234 [myid:] - INFO  [main:Environment@100] - Server environment:java.library.path=/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
zoo1_1    | 2021-03-08 18:37:16,234 [myid:] - INFO  [main:Environment@100] - Server environment:java.io.tmpdir=/tmp
zoo1_1    | 2021-03-08 18:37:16,234 [myid:] - INFO  [main:Environment@100] - Server environment:java.compiler=<NA>
zoo1_1    | 2021-03-08 18:37:16,234 [myid:] - INFO  [main:Environment@100] - Server environment:os.name=Linux
zoo1_1    | 2021-03-08 18:37:16,235 [myid:] - INFO  [main:Environment@100] - Server environment:os.arch=amd64
zoo1_1    | 2021-03-08 18:37:16,235 [myid:] - INFO  [main:Environment@100] - Server environment:os.version=5.4.0-66-generic
zoo1_1    | 2021-03-08 18:37:16,235 [myid:] - INFO  [main:Environment@100] - Server environment:user.name=zookeeper
zoo1_1    | 2021-03-08 18:37:16,235 [myid:] - INFO  [main:Environment@100] - Server environment:user.home=/home/zookeeper
zoo1_1    | 2021-03-08 18:37:16,235 [myid:] - INFO  [main:Environment@100] - Server environment:user.dir=/zookeeper-3.4.9
zoo1_1    | 2021-03-08 18:37:16,240 [myid:] - INFO  [main:ZooKeeperServer@815] - tickTime set to 2000
zoo1_1    | 2021-03-08 18:37:16,240 [myid:] - INFO  [main:ZooKeeperServer@824] - minSessionTimeout set to -1
zoo1_1    | 2021-03-08 18:37:16,240 [myid:] - INFO  [main:ZooKeeperServer@833] - maxSessionTimeout set to -1
zoo1_1    | 2021-03-08 18:37:16,247 [myid:] - INFO  [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181
kafka1_1  | ===> Running preflight checks ... 
kafka1_1  | ===> Check if /var/lib/kafka/data is writable ...
kafka1_1  | Command [/usr/local/bin/dub path /var/lib/kafka/data writable] FAILED !
gr-z_kafka1_1 exited with code 1

In case it helps:

$ uname -r
5.4.0-66-generic

$ docker-compose --version
docker-compose version 1.26.2, build eefe0d31

$ cd zk-single-kafka-single
$ tree
.
├── kafka1
│   └── data
└── zoo1
    ├── data
    │   ├── myid
    │   └── version-2
    └── datalog
        └── version-2

7 directories, 1 file

$ ll
drwxr-xr-x - root  8 mars  19:37  zoo1
drwxr-xr-x - root  8 mars  19:37  kafka1
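A sketch of a likely fix, based on the `uid=1000(appuser)` line in the log above and on the assumption that the root-owned data directories were left behind by an earlier run: hand them to uid 1000 (or delete them, which discards local topic data), then start again:

```shell
sudo chown -R 1000:1000 zk-single-kafka-single
docker-compose -f zk-single-kafka-single.yml up
```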

Kafka Fails. Socket error occurred: zoo1/172.28.0.2:2181

I've tried all of the docker-compose files given in this repository.
After spinning up the containers I get

Socket error occurred: zoo1/172.28.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.28.0.2:2181. Will not attempt to authenticate using SASL (unknown error)

Docker compose

version: '2.1'

services:
  zoo1:
    image: zookeeper:3.4.9
    hostname: zoo1
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zoo1:2888:3888
    volumes:
      - ./zk-single-kafka-single/zoo1/data:/data
      - ./zk-single-kafka-single/zoo1/datalog:/datalog

  kafka1:
    image: confluentinc/cp-kafka:5.5.1
    hostname: kafka1
    ports:
      - "9092:9092"
      - "9999:9999"
    environment:
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_JMX_PORT: 9999
      KAFKA_JMX_HOSTNAME: ${DOCKER_HOST_IP:-127.0.0.1}
    volumes:
      - ./zk-single-kafka-single/kafka1/data:/var/lib/kafka/data
    depends_on:
      - zoo1

Docker log

docker compose up --build
[+] Running 3/2
 ⠿ Network logflow_default     Created                                                                                                                                                                                                   3.8s
 ⠿ Container logflow_zoo1_1    Created                                                                                                                                                                                                   0.0s
 ⠿ Container logflow_kafka1_1  Created                                                                                                                                                                                                   0.0s
Attaching to kafka1_1, zoo1_1
zoo1_1    | ZooKeeper JMX enabled by default
zoo1_1    | Using config: /conf/zoo.cfg
kafka1_1  | ===> User
kafka1_1  | uid=0(root) gid=0(root) groups=0(root)
kafka1_1  | ===> Configuring ...
kafka1_1  | ===> Running preflight checks ... 
kafka1_1  | ===> Check if /var/lib/kafka/data is writable ...
kafka1_1  | ===> Check if Zookeeper is healthy ...
kafka1_1  | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.5.8-f439ca583e70862c3068a1f2a7d4d068eec33315, built on 05/04/2020 15:53 GMT
kafka1_1  | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=kafka1
kafka1_1  | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_212
kafka1_1  | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Azul Systems, Inc.
kafka1_1  | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/zulu-8-amd64/jre
kafka1_1  | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/etc/confluent/docker/docker-utils.jar
kafka1_1  | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
kafka1_1  | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
kafka1_1  | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
kafka1_1  | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
kafka1_1  | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
kafka1_1  | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=5.10.25-linuxkit
kafka1_1  | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=root
kafka1_1  | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/root
kafka1_1  | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/
kafka1_1  | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=28MB
kafka1_1  | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=443MB
kafka1_1  | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=31MB
kafka1_1  | [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=zoo1:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@cc34f4d
kafka1_1  | [main] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
kafka1_1  | [main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes
kafka1_1  | [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket error occurred: zoo1/172.19.0.2:2181: Connection refused
kafka1_1  | [main] ERROR io.confluent.admin.utils.ClusterStatus - Timed out waiting for connection to Zookeeper server [zoo1:2181].
kafka1_1  | [main-SendThread(zoo1:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server zoo1/172.19.0.2:2181. Will not attempt to authenticate using SASL (unknown error)
kafka1_1  | [main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x0 closed
kafka1_1  | [main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x0

Using Docker Engine 20.10.6, Docker Client 20.10.6, on an Apple MacBook Air M1.
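A possible workaround, under the assumption that the zookeeper:3.4.9 image has no arm64 build and therefore never comes up on the M1 (which would explain the connection-refused loop above): force x86 emulation for that one service.

```yaml
  zoo1:
    image: zookeeper:3.4.9
    platform: linux/amd64   # run under emulation on Apple Silicon
```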

How to add other connectors?

Hi, how can I add connectors like Cassandra, MQTT, InfluxDB, and MongoDB? The list only shows a limited set of connectors. Also, is there any way to secure this page with a username and password?


Setup works only in localhost

When I set up the broker and ZooKeeper on a remote machine, any message produced throws the error below. The same happens for single and multi broker/ZK arrangements.

$ bin/kafka-console-producer.sh --broker-list server-url:9092,server-url:9093,server-url:9094 --topic test

message1

ERROR Error when sending message to topic test with key: null, value: 5 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test-1: 1511 ms has passed since batch creation plus linger time

Note: I have an entry in /etc/hosts for kafka1
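A sketch of the usual fix, assuming 203.0.113.10 stands in for the remote machine's address as reachable by clients: export DOCKER_HOST_IP on the remote host before starting the stack, so the brokers advertise that address instead of 127.0.0.1.

```shell
# on the remote machine, before bringing the stack up
export DOCKER_HOST_IP=203.0.113.10   # hypothetical public address
docker-compose -f zk-multiple-kafka-multiple.yml up -d
```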

Unavailability of DOCKER_HOST_IP gives invalid receive exception

I forgot to set the DOCKER_HOST_IP environment variable before launching the stack with docker-compose -f zk-single-kafka-single.yml up -d. It did launch, and everything seemed alright until I wrote a consumer in PHP using librdkafka and php-rdkafka.

The PHP code wasn't able to connect to the Kafka instance. Upon checking the Kafka logs for the stack (docker-compose -f zk-single-kafka-single.yml), I saw this:

[2019-02-02 14:10:13,267] WARN [SocketServer brokerId=1] Unexpected error from /172.18.0.1; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = -196097)
	at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:102)
	at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:390)
	at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:351)
	at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:609)
	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:541)
	at org.apache.kafka.common.network.Selector.poll(Selector.java:467)
	at kafka.network.Processor.poll(SocketServer.scala:689)
	at kafka.network.Processor.run(SocketServer.scala:594)
	at java.lang.Thread.run(Thread.java:748)

I couldn't figure this out for a couple of hours, until I stopped the stack, set the DOCKER_HOST_IP env var, and started the stack up again. It worked perfectly after that.

IMO, you could provide an entrypoint script that halts startup if the env var is unavailable, or, if possible, fall back to DNS when one is provided.
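A minimal sketch of such a guard, under the stated assumption about how the image's real entrypoint is invoked (the `/etc/confluent/docker/run` path is an assumption, not verified against the image):

```shell
#!/bin/sh
# Hypothetical entrypoint guard: fail fast when DOCKER_HOST_IP is unset,
# instead of letting the broker advertise a wrong address.
require_docker_host_ip() {
  if [ -z "${DOCKER_HOST_IP}" ]; then
    echo "ERROR: DOCKER_HOST_IP is not set; refusing to start." >&2
    return 1
  fi
  return 0
}

# In the real entrypoint one would then do:
#   require_docker_host_ip || exit 1
#   exec /etc/confluent/docker/run "$@"   # path is an assumption
```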

Kafka Connect plugins are not working.

I am not able to create a source/sink connector using Elasticsearch or S3; only FileStream is available (in full-stack.yml). As seen via the Landoop UI for Kafka Connect, only FileStream shows up, and all the other source and sink connector plugins are missing.
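One way to get extra plugins into the Connect worker — a sketch, assuming the Connect container is based on confluentinc/cp-kafka-connect (which ships the confluent-hub CLI) and that `<connect-container>` is your container's name:

```shell
docker exec -it <connect-container> \
  confluent-hub install --no-prompt confluentinc/kafka-connect-elasticsearch:latest
# restart the container so the worker rescans its plugin.path
docker restart <connect-container>
```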

Fails to get access to /data in Fedora

[sai@sai-pc kafka-stack-docker-compose]$ sudo docker-compose -f zk-single-kafka-multiple.yml up --remove-orphans
Removing orphan container "kafka-stack-docker-compose_zoo3_1"
Removing orphan container "kafka-stack-docker-compose_zoo2_1"
Starting kafka-stack-docker-compose_zoo1_1 ... done
Starting kafka-stack-docker-compose_kafka2_1 ... done
Starting kafka-stack-docker-compose_kafka3_1 ... done
Starting kafka-stack-docker-compose_kafka1_1 ... done
Attaching to kafka-stack-docker-compose_zoo1_1, kafka-stack-docker-compose_kafka2_1, kafka-stack-docker-compose_kafka1_1, kafka-stack-docker-compose_kafka3_1
kafka1_1  | ===> User
kafka2_1  | ===> User
kafka1_1  | uid=0(root) gid=0(root) groups=0(root)
kafka1_1  | ===> Configuring ...
kafka3_1  | ===> User
kafka3_1  | uid=0(root) gid=0(root) groups=0(root)
kafka2_1  | uid=0(root) gid=0(root) groups=0(root)
kafka3_1  | ===> Configuring ...
kafka2_1  | ===> Configuring ...
zoo1_1    | chown: /data: Permission denied
zoo1_1    | chown: /datalog: Permission denied
kafka-stack-docker-compose_zoo1_1 exited with code 1
kafka2_1  | ===> Running preflight checks ... 
kafka2_1  | ===> Check if /var/lib/kafka/data is writable ...
kafka3_1  | ===> Running preflight checks ... 
kafka3_1  | ===> Check if /var/lib/kafka/data is writable ...
kafka1_1  | ===> Running preflight checks ... 
kafka1_1  | ===> Check if /var/lib/kafka/data is writable ...
kafka2_1  | Command [/usr/local/bin/dub path /var/lib/kafka/data writable] FAILED !
kafka3_1  | Command [/usr/local/bin/dub path /var/lib/kafka/data writable] FAILED !
kafka1_1  | Command [/usr/local/bin/dub path /var/lib/kafka/data writable] FAILED !
kafka-stack-docker-compose_kafka2_1 exited with code 1
kafka-stack-docker-compose_kafka3_1 exited with code 1
kafka-stack-docker-compose_kafka1_1 exited with code 1

I even tried creating those folders manually:

[sai@sai-pc /]$ ls -al
total 2172
dr-xr-xr-x.  20 root root    4096 Aug 11 18:47 .
dr-xr-xr-x.  20 root root    4096 Aug 11 18:47 ..
lrwxrwxrwx.   1 root root       7 Jan 28  2020 bin -> usr/bin
dr-xr-xr-x.   7 root root    4096 Aug  5 09:16 boot
drwxr-xr-x.   2 sai  sai     4096 Aug 11 18:47 data
drwxr-xr-x.   2 sai  sai     4096 Aug 11 18:47 datalog

I tried running it both with and without sudo. Any suggestions on what I could be missing?
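On Fedora, "Permission denied" on a bind mount even when the host directory is owned by your user is often SELinux rather than plain file ownership (an assumption; the logs don't confirm it). Appending the `:Z` option asks Docker to relabel the host directory for the container. A sketch against the ZooKeeper service, with host paths that are illustrative:

```yaml
# Hypothetical: add the :Z SELinux relabel option to the ZooKeeper bind mounts
zoo1:
  image: zookeeper:3.4.9
  volumes:
    - ./zk-single-kafka-multiple/zoo1/data:/data:Z
    - ./zk-single-kafka-multiple/zoo1/datalog:/datalog:Z
```

Running `sudo setenforce 0` temporarily is a quick way to confirm whether SELinux is the culprit before changing the compose file.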

No resolvable bootstrap urls given in bootstrap.servers

Hi, I have followed your instructions, but I am unable to bring Connect up. It exits with this error:
Removing server monitoring-kafka:29092 from bootstrap.servers as DNS resolution failed for monitoring-kafka.
monitoring-kafka is my Kafka advertised host name. Could you suggest what is wrong?

version: '3.4'
services:
  kafka-connect:
    image: confluentinc/cp-kafka-connect:4.4.0
    hostname: connect
    networks:
      - monitoring
    ports:
      - "8083:80"
    environment:
      KAFKA_HEAP_OPTS: "-Xms512m -Xmx1g"
      CONNECT_BOOTSTRAP_SERVERS: monitoring-kafka:9092
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: connect.group
      CONNECT_CONFIG_STORAGE_TOPIC: connect-config
      CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-status
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_REST_ADVERTISED_HOST_NAME: "myconnect"
      CONNECT_PLUGIN_PATH: /usr/share/java/kafka-connect-jdbc/
      CONNECT_OFFSET_FLUSH_TIMEOUT_MS: 500000
    links:
      - kafka

  kafka:
    image: confluentinc/cp-kafka:4.0.0
    container_name: kafka
    networks:
      - monitoring
    ports:
      - "9092:9092"
    links:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: monitoring-kafka
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://monitoring-kafka:29092,LISTENER_DOCKER_EXTERNAL://IP_ADDR:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_ZOOKEEPER_CONNECT: IP_ADDR:2181
      KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS: 16000
      KAFKA_ZOOKEEPER_SYNC_TIMEOUT_MS: 20000
    volumes:
      - ./full-stack/kafka1/data:/var/lib/kafka/data

  zookeeper:
    image: confluentinc/cp-zookeeper:4.0.0
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 20000
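A likely cause (my reading of the error, not confirmed): the broker advertises `monitoring-kafka:29092`, but no container on the `monitoring` network is actually named `monitoring-kafka` (the kafka container is named `kafka`), so the client's DNS lookup fails. One hedged fix is to make that name resolvable with a network alias and bootstrap against the advertised internal listener:

```yaml
# Hypothetical fix: make "monitoring-kafka" resolvable on the monitoring network
kafka:
  image: confluentinc/cp-kafka:4.0.0
  networks:
    monitoring:
      aliases:
        - monitoring-kafka   # matches the name in KAFKA_ADVERTISED_LISTENERS

kafka-connect:
  image: confluentinc/cp-kafka-connect:4.4.0
  environment:
    # use the internal listener port that the broker actually advertises
    CONNECT_BOOTSTRAP_SERVERS: monitoring-kafka:29092
```

Setting `container_name: monitoring-kafka` on the kafka service would have the same effect, since compose service/container names are resolvable on user-defined networks.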

Export more options and security

Dear simplesteph,

How can I export more environment variables, such as KAFKA_OPTS, and also configure security with SASL_PLAINTEXT and SCRAM-SHA-512? I don't see anything in the compose files related to this.

Thank you!
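For reference, the `confluentinc/cp-kafka` images translate environment variables with a `KAFKA_` prefix into `server.properties` keys, so extra options can be passed straight through the compose file. The listener names, file path, and exact values below are illustrative assumptions, not this repo's documented configuration:

```yaml
# Hypothetical sketch: SASL_PLAINTEXT with SCRAM-SHA-512 on a cp-kafka broker
kafka1:
  image: confluentinc/cp-kafka:5.5.1
  environment:
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
    KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-512
    KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: SCRAM-SHA-512
    # arbitrary JVM/Kafka options go through KAFKA_OPTS
    KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
  volumes:
    - ./kafka_server_jaas.conf:/etc/kafka/kafka_server_jaas.conf:ro
```

The SCRAM credentials themselves would still need to be created in ZooKeeper with `kafka-configs` before brokers can authenticate.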

License

Please add a license file

ubuntu_kafka1_1_fa91fb287f60 exited with code 137

Hello,

I'm trying to use it in a NAT network with VirtualBox. Steps:

docker-compose -f zk-single-kafka-single.yml down
export DOCKER_HOST_IP=10.0.2.8
rm -rf zk-single-kafka-single
docker-compose -f zk-single-kafka-single.yml up

...
icaStateMachine)
kafka1_1_fa91fb287f60 | [2018-12-20 11:42:56,658] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.PartitionStateMachine)
kafka1_1_fa91fb287f60 | [2018-12-20 11:42:56,703] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.PartitionStateMachine)
kafka1_1_fa91fb287f60 | [2018-12-20 11:42:56,749] INFO [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> Map() (kafka.controller.PartitionStateMachine)
kafka1_1_fa91fb287f60 | [2018-12-20 11:42:56,756] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
kafka1_1_fa91fb287f60 | [2018-12-20 11:42:56,813] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka1:19092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
kafka1_1_fa91fb287f60 | [2018-12-20 11:42:56,835] INFO [Controller id=1] Removing partitions Set() from the list of reassigned partitions in zookeeper (kafka.controller.KafkaController)
kafka1_1_fa91fb287f60 | [2018-12-20 11:42:56,855] INFO [Controller id=1] No more partitions need to be reassigned. Deleting zk path /admin/reassign_partitions (kafka.controller.KafkaController)
zoo1_1_b1281ae2f3c5 | 2018-12-20 11:42:57,099 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@596] - Got user-level KeeperException when processing sessionid:0x167cb6c42c90001 type:multi cxid:0x36 zxid:0x1e txntype:-1 reqpath:n/a aborting remaining multi ops. Error Path:/admin/reassign_partitions Error:KeeperErrorCode = NoNode for /admin/reassign_partitions
kafka1_1_fa91fb287f60 | [2018-12-20 11:42:57,531] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
kafka1_1_fa91fb287f60 | [2018-12-20 11:42:57,762] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
kafka1_1_fa91fb287f60 | [2018-12-20 11:42:57,764] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
kafka1_1_fa91fb287f60 | [2018-12-20 11:42:57,803] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
kafka1_1_fa91fb287f60 | [2018-12-20 11:42:57,822] INFO [Controller id=1] Starting preferred replica leader election for partitions (kafka.controller.KafkaController)
zoo1_1_b1281ae2f3c5 | 2018-12-20 11:42:57,956 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@596] - Got user-level KeeperException when processing sessionid:0x167cb6c42c90001 type:multi cxid:0x38 zxid:0x1f txntype:-1 reqpath:n/a aborting remaining multi ops. Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
kafka1_1_fa91fb287f60 | [2018-12-20 11:42:58,213] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
kafka1_1_fa91fb287f60 | [2018-12-20 11:44:14,887] WARN Client session timed out, have not heard from server in 4631ms for sessionid 0x167cb6c42c90001 (org.apache.zookeeper.ClientCnxn)
kafka1_1_fa91fb287f60 | [2018-12-20 11:45:42,158] INFO Client session timed out, have not heard from server in 4631ms for sessionid 0x167cb6c42c90001, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
zoo1_1_b1281ae2f3c5 | 2018-12-20 11:44:51,281 [myid:] - INFO [SessionTracker:ZooKeeperServer@358] - Expiring session 0x167cb6c42c90001, timeout of 6000ms exceeded
zoo1_1_b1281ae2f3c5 | 2018-12-20 11:46:44,050 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x167cb6c42c90001
zoo1_1_b1281ae2f3c5 | 2018-12-20 11:46:44,075 [myid:] - INFO [SyncThread:0:NIOServerCnxn@1008] - Closed socket connection for client /172.20.0.3:48598 which had sessionid 0x167cb6c42c90001
zoo1_1_b1281ae2f3c5 | 2018-12-20 11:46:44,088 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@213] - Ignoring unexpected runtime exception
zoo1_1_b1281ae2f3c5 | java.nio.channels.CancelledKeyException
zoo1_1_b1281ae2f3c5 | at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73)
zoo1_1_b1281ae2f3c5 | at sun.nio.ch.SelectionKeyImpl.readyOps(SelectionKeyImpl.java:87)
zoo1_1_b1281ae2f3c5 | at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:182)
zoo1_1_b1281ae2f3c5 | at java.lang.Thread.run(Thread.java:745)
ubuntu_kafka1_1_fa91fb287f60 exited with code 137

ifconfig:

br-7904a60fc3b9: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.20.0.1 netmask 255.255.0.0 broadcast 172.20.255.255
inet6 fe80::42:c6ff:fea5:a632 prefixlen 64 scopeid 0x20
ether 02:42:c6:a5:a6:32 txqueuelen 0 (Ethernet)
RX packets 1 bytes 28 (28.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 12 bytes 936 (936.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:77:11:f2:5e txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.2.8 netmask 255.255.255.0 broadcast 10.0.2.255
inet6 fe80::a00:27ff:fe2e:488c prefixlen 64 scopeid 0x20
ether 08:00:27:2e:48:8c txqueuelen 1000 (Ethernet)
RX packets 282686 bytes 388713407 (388.7 MB)
RX errors 1795 dropped 0 overruns 0 frame 0
TX packets 32242 bytes 2453771 (2.4 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 9 base 0xd020

Thank you in advance
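Exit code 137 means the process received SIGKILL (128 + 9), which inside a small VirtualBox VM is frequently the kernel OOM killer; the ZooKeeper session timeouts just before the exit also point to resource starvation. One hedged mitigation (the heap values are guesses sized for a small VM, not recommended settings) is to cap the broker's JVM heap:

```yaml
# Hypothetical: shrink the JVM heap so the broker fits in a small VM
kafka1:
  image: confluentinc/cp-kafka:5.5.1
  environment:
    KAFKA_HEAP_OPTS: "-Xms256m -Xmx512m"
```

Running `docker inspect --format '{{.State.OOMKilled}}' <container>` on the dead container confirms whether the OOM killer was responsible; if it prints `false`, look elsewhere (e.g. the VM being paused or `docker stop` timeouts).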

Can't connect App on docker to Kafka

I run the docker-compose file with multiple ZooKeeper nodes and multiple brokers. I use KafkaJS and try to send messages. Subscribing a consumer from an external application works fine, but when I send messages from an app running in Docker on the same network as Kafka, I can't connect.

This is docker-compose file:

https://drive.google.com/file/d/1WvwELcouEd9n5mZr1ns2w8G6whfndTnz/view?usp=sharing

After that, I get the error shown in the attached screenshot (Screen Shot 2021-03-23 at 09 25 45).

Please help me fix that. Thank you!
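A producer running in another container on the same Docker network must use the brokers' internal listener addresses (e.g. `kafka1:19092`, as advertised in this repo's multi-broker compose files and visible in the controller logs above), not `localhost:9092`. A sketch, where the app service name, network, and the environment variable the app reads are all assumptions:

```yaml
# Hypothetical: attach the app to the same network and use internal listeners
my-app:
  build: .
  networks:
    - default   # the same network the kafka brokers are on
  environment:
    # broker list handed to KafkaJS by the app (variable name is illustrative)
    KAFKA_BROKERS: kafka1:19092,kafka2:19093,kafka3:19094
```

From outside Docker, the external listeners (`localhost:9092` etc.) remain the right addresses; the two sets are not interchangeable.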
