
cp-all-in-one's Introduction

Confluent Platform All-In-One

This repository contains a set of Docker Compose files for running Confluent Platform. It is organized as follows:

  1. cp-all-in-one: Confluent Enterprise License version of Confluent Platform, including Confluent Server (and ZooKeeper), Schema Registry, a Kafka Connect worker with the Datagen Source connector plugin installed, Confluent Control Center, REST Proxy, and ksqlDB.
  2. cp-all-in-one-kraft: Confluent Enterprise License version of Confluent Platform, including Confluent Server using KRaft (no ZooKeeper), Schema Registry, a Kafka Connect worker with the Datagen Source connector plugin installed, Confluent Control Center, REST Proxy, and ksqlDB.
  3. cp-all-in-one-flink: Confluent Enterprise License version of Confluent Platform, including Confluent Server using KRaft (no ZooKeeper), Schema Registry, a Kafka Connect worker with the Datagen Source connector plugin installed, Confluent Control Center, REST Proxy, ksqlDB, Flink Job Manager, Flink Task Manager, and Flink SQL CLI.
  4. cp-all-in-one-community: Confluent Community License version of Confluent Platform, including the Kafka broker, Schema Registry, a Kafka Connect worker with the Datagen Source connector plugin installed, Confluent Control Center, REST Proxy, and ksqlDB.
  5. cp-all-in-one-cloud: Docker Compose files that can be used to run Confluent Platform components (Schema Registry, a Kafka Connect worker with the Datagen Source connector plugin installed, Confluent Control Center, REST Proxy, or ksqlDB) against Confluent Cloud.

Please refer to the instructions in each example folder's README.
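For example, a typical way to bring up one of these stacks looks like the following (a generic sketch; the exact directory and any version-specific steps are in each folder's README):

```
git clone https://github.com/confluentinc/cp-all-in-one.git
cd cp-all-in-one/cp-all-in-one      # or cp-all-in-one-kraft, cp-all-in-one-community, etc.
docker compose up -d                # start every service in the background
docker compose ps                   # verify all containers report "Up"
```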

cp-all-in-one's People

Contributors

aesteve, andrewegel, awalther28, confluentjenkins, confluenttools, davetroiano, javabrett, jimbethancourt, joel-hamill, laser, nhaq-confluent, rspurgeon, sdandu-gh, smithjohntaylor, stejani-cflt, ybyzek


cp-all-in-one's Issues

org.apache.kafka.clients.NetworkClient Error connecting to node broker:29092 (id: -1 rack: null) java.net.UnknownHostException: broker: Name or service not known

control-center | [kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (broker/172.18.0.3:29092) could not be established. Broker may not be available.
rest-proxy | [kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=adminclient-1] Metadata update failed
rest-proxy | org.apache.kafka.common.errors.TimeoutException: Call(callName=fetchMetadata, deadlineMs=1606501070911, tries=1, nextAllowedTryMs=1606501071012) timed out at 1606501070912 after 1 attempt(s)
rest-proxy | Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting to send the call.
ksqldb-server | [kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=adminclient-1] Metadata update failed
ksqldb-server | org.apache.kafka.common.errors.TimeoutException: Call(callName=fetchMetadata, deadlineMs=1606501071531, tries=1, nextAllowedTryMs=1606501071632) timed out at 1606501071532 after 1 attempt(s)
ksqldb-server | Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting to send the call.
schema-registry | [kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=adminclient-1] Metadata update failed
schema-registry | org.apache.kafka.common.errors.TimeoutException: Call(callName=fetchMetadata, deadlineMs=1606501072795, tries=1, nextAllowedTryMs=1606501072897) timed out at 1606501072797 after 1 attempt(s)
schema-registry | Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting to send the call.
control-center | [kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Error connecting to node broker:29092 (id: -1 rack: null)
control-center | java.net.UnknownHostException: broker: Name or service not known
control-center | at java.base/java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
control-center | at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929)
control-center | at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1515)
control-center | at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848)
control-center | at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1505)
control-center | at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1364)
control-center | at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1298)
control-center | at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:110)
control-center | at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:403)
control-center | at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:363)
control-center | at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:151)
control-center | at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:958)
control-center | at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:294)
control-center | at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:1039)
control-center | at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1281)
control-center | at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1224)
control-center | at java.base/java.lang.Thread.run(Thread.java:834)
control-center | [kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Error connecting to node broker:29092 (id: -1 rack: null)
control-center | java.net.UnknownHostException: broker
control-center | at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797)
control-center | at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1505)
control-center | at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1364)
control-center | at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1298)
control-center | at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:110)
control-center | at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:403)
control-center | at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:363)
control-center | at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:151)
control-center | at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:958)
control-center | at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:294)
control-center | at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:1039)
control-center | at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1281)
control-center | at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1224)
control-center | at java.base/java.lang.Thread.run(Thread.java:834)
control-center | [kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=adminclient-1] Metadata update failed
control-center | org.apache.kafka.common.errors.TimeoutException: Call(callName=fetchMetadata, deadlineMs=1606501079587, tries=1, nextAllowedTryMs=1606501079698) timed out at 1606501079598 after 1 attempt(s)
control-center | Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
rest-proxy | [main] ERROR io.confluent.admin.utils.ClusterStatus - Error while getting broker list.
rest-proxy | java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Call(callName=listNodes, deadlineMs=1606501080901, tries=1, nextAllowedTryMs=1606501081005) timed out at 1606501080905 after 1 attempt(s)
rest-proxy | at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
rest-proxy | at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
rest-proxy | at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
rest-proxy | at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
rest-proxy | at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:149)
rest-proxy | at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:150)
rest-proxy | Caused by: org.apache.kafka.common.errors.TimeoutException: Call(callName=listNodes, deadlineMs=1606501080901, tries=1, nextAllowedTryMs=1606501081005) timed out at 1606501080905 after 1 attempt(s)
rest-proxy | Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.

Replace cnfldemos/kafka-connect-datagen with confluentinc/cp-kafka-connect

I replaced the cnfldemos/kafka-connect-datagen:0.3.2-5.5.0 image in cp-all-in-one/cp-all-in-one-community/docker-compose.yml with confluentinc/cp-kafka-connect:5.5.1, and added a command to install the Datagen package:

  connect:
    image: confluentinc/cp-kafka-connect:5.5.1
    hostname: connect
    container_name: connect
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - "8083:8083"
    command:
      - sh
      - -exc
      - |
        confluent-hub install --no-prompt --component-dir /usr/share/confluent-hub-components/ confluentinc/kafka-connect-datagen:latest
        exec /etc/confluent/docker/run

This way the community cluster runs the stock cp-kafka-connect image, with the Datagen connector installed at startup. I successfully exercised all the steps in the tutorial at https://docs.confluent.io/current/quickstart/cos-docker-quickstart.html.
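As a sanity check after startup (a hypothetical verification step, not from the original report), the Connect REST API should list the Datagen plugin once confluent-hub has finished installing it:

```
# assumes Connect is mapped to localhost:8083 as in the compose snippet above
curl -s http://localhost:8083/connector-plugins | grep -i datagen
```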

KRaft mode Docker won't start

When running in kraft mode via docker, it gives the following error:
(screenshot of the update_run error omitted)

It basically says that the update_run.sh script is not a directory, which is correct.

This is my structure:
(screenshot of the directory structure omitted)

@ybyzek What am I missing here?

enable.auto.commit

Hello there,
I couldn't find how to set enable.auto.commit to false in the docker-compose file.
Thank you
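For context, enable.auto.commit is a Kafka consumer (client) property, so it is normally set in the consuming application's configuration rather than in docker-compose.yml. For the services in this stack that embed consumers, the Confluent images translate environment variables into properties (service prefix, dots replaced by underscores), so a sketch like the following may work — the variable name follows that convention but is an assumption, not verified here:

```yaml
  connect:
    environment:
      # hypothetical override; maps to the worker property consumer.enable.auto.commit
      CONNECT_CONSUMER_ENABLE_AUTO_COMMIT: "false"
```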

How to run the contents of update_run and /etc/confluent/docker/run inline without file

Description
I want to run the commands from the update_run.sh file and /etc/confluent/docker/run inline, without a separate file.

Troubleshooting
I tried:

    entrypoint: [ '/bin/sh', '-c' ]
    command: |
      "
        #!/bin/sh
        
        # Docker workaround: Remove check for KAFKA_ZOOKEEPER_CONNECT parameter
        sed -i '/KAFKA_ZOOKEEPER_CONNECT/d' /etc/confluent/docker/configure
        
        # Docker workaround: Ignore cub zk-ready
        sed -i 's/cub zk-ready/echo ignore zk-ready/' /etc/confluent/docker/ensure
        
        # KRaft required step: Format the storage directory with a new cluster ID
        echo "kafka-storage format --ignore-formatted --cluster-id=$(kafka-storage random-uuid) -c /etc/kafka/kafka.properties" >> /etc/confluent/docker/ensure


       /etc/confluent/docker/run
      "
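One likely problem with the attempt above (an assumption, since the error output isn't shown): the whole command is wrapped in double quotes while the embedded echo also uses double quotes, so the shell string terminates early. A sketch that drops the outer quotes and lets the YAML block scalar carry the script:

```yaml
    entrypoint: [ "/bin/sh", "-c" ]
    command: |
      # Docker workaround: remove the check for the KAFKA_ZOOKEEPER_CONNECT parameter
      sed -i '/KAFKA_ZOOKEEPER_CONNECT/d' /etc/confluent/docker/configure
      # Docker workaround: ignore cub zk-ready
      sed -i 's/cub zk-ready/echo ignore zk-ready/' /etc/confluent/docker/ensure
      # KRaft required step: format the storage directory with a new cluster ID
      echo "kafka-storage format --ignore-formatted --cluster-id=$(kafka-storage random-uuid) -c /etc/kafka/kafka.properties" >> /etc/confluent/docker/ensure
      exec /etc/confluent/docker/run
```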

Environment

  • GitHub branch: main
  • Operating System: MacOS
  • Version of Docker: 20.10.22
  • Version of Docker Compose: 2.15.1

ksqldb-cli cannot connect to ksqldb-server

Description
Hi,

I'm trying to use the docker-compose file community version to execute some ksql statements.
When I connect to the ksqldb-cli container via docker exec -it ksqldb-cli bash, there is an error connecting to ksqldb-server. The configuration seems to be correct, but I'm not able to use ksql.

Any idea?

Thanks

Troubleshooting

  • Logs from ksqldb-server container (The ksqldb server seems to be up and running on the correct port)
[2021-03-25 10:31:22,311] INFO ksqlDB API server listening on http://0.0.0.0:8088 (io.confluent.ksql.rest.server.KsqlRestApplication)
[2021-03-25 10:31:22,312] INFO Waiting until monitored service is ready for metrics collection (io.confluent.support.metrics.BaseMetricsReporter)
[2021-03-25 10:31:22,313] INFO Monitored service is now ready (io.confluent.support.metrics.BaseMetricsReporter)
[2021-03-25 10:31:22,313] INFO Attempting to collect and submit metrics (io.confluent.support.metrics.BaseMetricsReporter)
[2021-03-25 10:31:22,313] INFO Server up and running (io.confluent.ksql.rest.server.KsqlServerMain)
[2021-03-25 10:31:24,092] INFO Successfully submitted metrics to Confluent via secure endpoint (io.confluent.support.metrics.submitters.ConfluentSubmitter)
  • Error message from ksqldb-cli container
*************************************ERROR**************************************
Remote server at http://localhost:8088 does not appear to be a valid KSQL
server. Please ensure that the URL provided is for an active KSQL server.

The server responded with the following error: 
Error issuing GET to KSQL server. path:/info
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException:
	Connection refused: localhost/127.0.0.1:8088
Caused by: Could not connect to the server. Please check the server details are
	correct and that the server is running.
********************************************************************************

Environment

  • GitHub branch: 6.1.0-post
  • Operating System: macOS BigSur 11.1
  • Version of Docker Engine: v20.10.5
  • Version of Docker Compose: 2
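For reference, one likely cause (an assumption, not confirmed in this thread): inside the ksqldb-cli container, localhost refers to the CLI container itself, not to ksqldb-server. Launching the CLI against the Compose service name instead usually resolves this:

```
# hypothetical invocation; assumes the Compose service is named ksqldb-server on port 8088
docker exec -it ksqldb-cli ksql http://ksqldb-server:8088
```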

Unable to query the Streams

I have setup the Cluster using the cp-all-in-one source via docker.
But while querying the streams I get the error "An unknown error has occurred. Check the connection settings." Can you advise what could be wrong?

(screenshot omitted)

Please find the dump of the cluster settings: broker_config.txt (renamed to .txt because a file with a .json extension couldn't be uploaded; the contents are unmodified).

broker_config.txt

A few more Control Center screenshots were attached for reference (not reproduced here).

Starting environment fails on Mac M1

Description
When starting the environment running docker compose up -d, all the containers start and after a while they start to exit one by one:

NAME                COMMAND                  SERVICE             STATUS              PORTS
broker              "/etc/confluent/dock…"   broker              exited (1)
connect             "/etc/confluent/dock…"   connect             exited (2)
control-center      "/etc/confluent/dock…"   control-center      exited (1)
ksql-datagen        "bash -c 'echo Waiti…"   ksql-datagen        exited (1)
ksqldb-cli          "/bin/sh"                ksqldb-cli          running
ksqldb-server       "/etc/confluent/dock…"   ksqldb-server       exited (255)
rest-proxy          "/etc/confluent/dock…"   rest-proxy          running             0.0.0.0:8082->8082/tcp
schema-registry     "/etc/confluent/dock…"   schema-registry     running             0.0.0.0:8081->8081/tcp
zookeeper           "/etc/confluent/dock…"   zookeeper           running             0.0.0.0:2181->2181/tcp

Only ksqldb-cli, rest-proxy, schema-registry and zookeeper remain running. If I restart the containers that had exited, broker stays up, but the others fail:

NAME                COMMAND                  SERVICE             STATUS              PORTS
broker              "/etc/confluent/dock…"   broker              running             0.0.0.0:9092->9092/tcp, 0.0.0.0:9101->9101/tcp
connect             "/etc/confluent/dock…"   connect             exited (1)
control-center      "/etc/confluent/dock…"   control-center      exited (1)
ksql-datagen        "bash -c 'echo Waiti…"   ksql-datagen        exited (1)
ksqldb-cli          "/bin/sh"                ksqldb-cli          running
ksqldb-server       "/etc/confluent/dock…"   ksqldb-server       exited (1)
rest-proxy          "/etc/confluent/dock…"   rest-proxy          running             0.0.0.0:8082->8082/tcp
schema-registry     "/etc/confluent/dock…"   schema-registry     running             0.0.0.0:8081->8081/tcp
zookeeper           "/etc/confluent/dock…"   zookeeper           running             0.0.0.0:2181->2181/tcp

Environment

  • GitHub branch: 7.0.1-post
  • Operating System: macOS Monterey 12.1, Apple M1 Pro
  • Version of Docker: 20.10.12
  • Version of Docker Compose: v2.2.3
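One hedged workaround for Apple Silicon (an assumption; Confluent images of this vintage shipped amd64-only): pin the platform per service in docker-compose.yml so the amd64 image runs under emulation:

```yaml
  broker:
    image: confluentinc/cp-server:7.0.1
    platform: linux/amd64   # force the amd64 image under emulation on M1
```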

Docker-compose for kafka and ksql quickstart fails for version 6.2

I am trying to run the docker-compose.yml as presented on the official site using the 6.2-post version, and I keep getting this error:

ERROR: Get https://registry-1.docker.io/v2/confluentinc/ksqldb-examples/manifests/sha256:9f57cfdcd917df8b34a426bbd59632e0eaa31dd3b6181ab7a74692801296a323: EOF

Full Log

PS C:\WOkr\Adrian> docker-compose up -d
Pulling ksqldb-cli (confluentinc/cp-ksqldb-cli:6.2.0)...
6.2.0: Pulling from confluentinc/cp-ksqldb-cli
96965a3a8424: Already exists
4d0d850cd4ad: Already exists
7a73120408f4: Already exists
eb918a808c15: Already exists
5c2fffeabbf7: Already exists
68bcf2239ce4: Already exists
b479bf09eedc: Already exists
2e31c2ab64ea: Already exists
e5161e1fdbdc: Already exists
45a9c148af2c: Pull complete
8ab499c456f7: Pull complete
9a3ef141025b: Pull complete
84b746eb89a2: Pull complete
b2126e655619: Pull complete
119191acb3d3: Pull complete
909ec2a566a9: Pull complete
a18510bc355f: Pull complete
Digest: sha256:245ab3c0bde2c9abe0251c29c08aa0dafdd6c1806f6894a9c61a64f749ca3e4b
Status: Downloaded newer image for confluentinc/cp-ksqldb-cli:6.2.0
Pulling ksql-datagen (confluentinc/ksqldb-examples:6.2.0)...
ERROR: Get https://registry-1.docker.io/v2/confluentinc/ksqldb-examples/manifests/sha256:9f57cfdcd917df8b34a426bbd59632e0eaa31dd3b6181ab7a74692801296a323: EOF

replicationFactor error while adding a new topic

Description
I downloaded and launched the cp-all-in-one docker-compose file and followed the guide to start using Confluent locally by creating a new topic. However, I got the following error on the web page:

(screenshot omitted)

And this is the log coming from docker-compose:

broker | [2021-05-25 14:19:45,748] INFO [Admin Manager on Broker 1]: Error processing create topic request CreatableTopic(name='pageviews', numPartitions=1, replicationFactor=3, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='confluent.value.schema.validation', value='false'), CreateableTopicConfig(name='leader.replication.throttled.replicas', value=''), CreateableTopicConfig(name='confluent.key.subject.name.strategy', value='io.confluent.kafka.serializers.subject.TopicNameStrategy'), CreateableTopicConfig(name='message.downconversion.enable', value='true'), CreateableTopicConfig(name='min.insync.replicas', value='2'), CreateableTopicConfig(name='segment.jitter.ms', value='0'), CreateableTopicConfig(name='cleanup.policy', value='delete'), CreateableTopicConfig(name='flush.ms', value='9223372036854775807'), CreateableTopicConfig(name='confluent.tier.local.hotset.ms', value='86400000'), CreateableTopicConfig(name='follower.replication.throttled.replicas', value=''), CreateableTopicConfig(name='confluent.tier.local.hotset.bytes', value='-1'), CreateableTopicConfig(name='confluent.value.subject.name.strategy', value='io.confluent.kafka.serializers.subject.TopicNameStrategy'), CreateableTopicConfig(name='segment.bytes', value='1073741824'), CreateableTopicConfig(name='retention.ms', value='604800000'), CreateableTopicConfig(name='flush.messages', value='9223372036854775807'), CreateableTopicConfig(name='confluent.tier.enable', value='false'), CreateableTopicConfig(name='confluent.tier.segment.hotset.roll.min.bytes', value='104857600'), CreateableTopicConfig(name='message.format.version', value='2.7-IV2'), CreateableTopicConfig(name='confluent.segment.speculative.prefetch.enable', value='false'), CreateableTopicConfig(name='file.delete.delay.ms', value='60000'), CreateableTopicConfig(name='max.compaction.lag.ms', value='9223372036854775807'), CreateableTopicConfig(name='max.message.bytes', 
value='1048588'), CreateableTopicConfig(name='min.compaction.lag.ms', value='0'), CreateableTopicConfig(name='message.timestamp.type', value='CreateTime'), CreateableTopicConfig(name='preallocate', value='false'), CreateableTopicConfig(name='min.cleanable.dirty.ratio', value='0.5'), CreateableTopicConfig(name='index.interval.bytes', value='4096'), CreateableTopicConfig(name='unclean.leader.election.enable', value='false'), CreateableTopicConfig(name='retention.bytes', value='-1'), CreateableTopicConfig(name='delete.retention.ms', value='86400000'), CreateableTopicConfig(name='confluent.prefer.tier.fetch.ms', value='-1'), CreateableTopicConfig(name='segment.ms', value='604800000'), CreateableTopicConfig(name='confluent.key.schema.validation', value='false'), CreateableTopicConfig(name='message.timestamp.difference.max.ms', value='9223372036854775807'), CreateableTopicConfig(name='segment.index.bytes', value='10485760')], linkName=null, mirrorTopic=null) (kafka.server.AdminManager)

What can I do?

Troubleshooting
N/A

Environment

  • GitHub branch: 6.1.1-post
  • Operating System: Mac OS X Big Sur (11.3.1 (20E241))
  • Version of Docker: 3.3.3 (64133)
  • Version of Docker Compose: 1.29.1
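For context, the log above shows the topic being created with replicationFactor=3 against a single-broker stack, which cannot succeed. A sketch of a creation that should work, assuming the container and internal listener names used elsewhere in this repo:

```
# hypothetical command; assumes a container named "broker" with an internal listener on broker:29092
docker exec broker kafka-topics --bootstrap-server broker:29092 \
  --create --topic pageviews --partitions 1 --replication-factor 1
```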

Datagen for Personal Topics

Hello,

Is it possible to run Datagen against my own topics? If so, any guide would be greatly appreciated.
I tried, but I'm unable to get it running.
(screenshot omitted)

Is the schema.filename setting required to be filled in? If so, what should the location be, i.e., in which container?
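For what it's worth, a sketch of a custom-schema Datagen connector config (the connector name, topic, and schema path are assumptions; the schema file has to exist inside the connect container, since that is where the connector runs):

```json
{
  "name": "datagen-custom",
  "config": {
    "connector.class": "io.confluent.kafka.connect.datagen.DatagenConnector",
    "kafka.topic": "my-topic",
    "schema.filename": "/tmp/my_schema.avsc",
    "schema.keyfield": "id",
    "max.interval": 1000,
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://schema-registry:8081"
  }
}
```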

Clearing persistent data?

Description
I've been using the cp-all-in-one docker compose environment for a few days and it is now crashing with the following error
"Disk error when trying to access log file on disk"

I do notice that the environment persists topics between "docker-compose up" and "docker-compose down" so wherever it's writing to, it's probably filling up.

How can I clear it? docker system prune does not seem to help, and I don't see any volumes mounted in the docker-compose.yaml.

Environment

  • GitHub branch: 6.0.1-post
  • Operating System: macOS 10.15.7
  • Version of Docker: docker 20.10.2
  • Version of Docker Compose: 1.27.4
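A hedged cleanup sequence (assumes no named volumes, so topic data lives in the containers' writable filesystem layers and is discarded when the containers themselves are removed):

```
docker compose down       # stop AND remove the containers, discarding their writable layers
docker compose up -d      # recreate the stack from scratch
docker container prune -f # if stale containers from older compose runs are still holding data
```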

Docker Compose - ERROR Broker

Description

After docker compose up

  • ERROR broker=1 is storing logs in /tmp/kraft-combined-logs, Kafka expects to store log data in a persistent location (io.confluent.controlcenter.healthcheck.AllHealthCheck)

  • broker | [2024-01-09 09:28:19,376] INFO [Controller 1] CreateTopics result(s): CreatableTopic(name='_confluent-link-metadata', numPartitions=50, replicationFactor=3, assignments=[], configs=[CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='min.insync.replicas', value='2')], linkName=null, mirrorTopic=null, sourceTopicId=AAAAAAAAAAAAAAAAAAAAAA, mirrorStartOffsetSpec=-9223372036854775808, mirrorStartOffsets=[]): INVALID_REPLICATION_FACTOR (Unable to replicate the partition 3 time(s): The target replication factor of 3 cannot be reached because only 1 broker(s) are registered.) (org.apache.kafka.controller.ReplicationControlManager)

Is it free to use confluentinc/cp-all-in-one images on work laptops?

I'm new to Kafka. I checked out the docker base branch of the confluentinc/cp-all-in-one GitHub repo and started using the broker, ZooKeeper, and Control Center images on my work laptop for local development. A friend told me that Confluent Kafka is not free and that we should not use it on our work laptops (especially the control-center image); he said we should use Apache Kafka locally instead.

Is it free to use the confluentinc/cp-all-in-one Docker images on work laptops, or do we need a paid license to use them?

control-center problem

Description
I brought the docker-compose.yml file up on my server, and all containers are running successfully on my machine. When I use Control Center and try to query a stream, it prints the following error message: "An unknown error has occurred. Check the connection settings."

I want to note that when I use ksqldb-cli and run the same query, it works well.

Troubleshooting
None.

If applicable, please include the output of:

  • operties=%7B%22auto.offset.reset%22%3A%22latest%22%7D HTTP/1.1" 200 1208 8 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:14,355] INFO 10.132.223.169 - - [07/Oct/2021:09:01:14 +0000] "GET /dist/bootstrap-local.6dc3200.js HTTP/1.1" 200 48638 11 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:15,026] INFO 10.132.223.169 - - [07/Oct/2021:09:01:15 +0000] "GET /2.0/feature/flags HTTP/1.1" 200 340 3 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:15,026] INFO 10.132.223.169 - - [07/Oct/2021:09:01:15 +0000] "GET /3.0/license/payload HTTP/1.1" 200 171 2 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:15,088] INFO 10.132.223.169 - - [07/Oct/2021:09:01:15 +0000] "GET /dist/favicon.ico HTTP/1.1" 200 33310 3 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:17,110] INFO 10.132.223.169 - - [07/Oct/2021:09:01:17 +0000] "GET /2.0/feature/flags HTTP/1.1" 200 340 3 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:17,276] INFO 10.132.223.169 - - [07/Oct/2021:09:01:17 +0000] "GET /2.0/feature/flags HTTP/1.1" 200 340 3 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:17,748] INFO 10.132.223.169 - - [07/Oct/2021:09:01:17 +0000] "GET /2.0/health/status HTTP/1.1" 200 149 5 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:17,760] INFO 10.132.223.169 - - [07/Oct/2021:09:01:17 +0000] "GET /3.0/auth/principal HTTP/1.1" 200 30 3 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:17,760] INFO 10.132.223.169 - - [07/Oct/2021:09:01:17 +0000] "GET /2.0/clusters/kafka HTTP/1.1" 200 142 4 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:17,760] INFO 10.132.223.169 - - [07/Oct/2021:09:01:17 +0000] "GET /2.0/clusters/kafka/display/stream-monitoring HTTP/1.1" 200 110 4 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:17,761] INFO 10.132.223.169 - - [07/Oct/2021:09:01:17 +0000] "GET /2.0/clusters/kafka/display/cluster_management HTTP/1.1" 200 110 5 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:17,761] INFO 10.132.223.169 - - [07/Oct/2021:09:01:17 +0000] "GET /2.0/metrics/oE58IsrAT5iffZf8snZdKQ/maxtime HTTP/1.1" 200 27 4 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:17,926] INFO 10.132.223.169 - - [07/Oct/2021:09:01:17 +0000] "GET /2.0/clusters/ksql HTTP/1.1" 200 89 2 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:17,929] INFO 10.132.223.169 - - [07/Oct/2021:09:01:17 +0000] "GET /2.0/clusters/kafka/display/CLUSTER_MANAGEMENT HTTP/1.1" 200 110 3 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:17,930] INFO 10.132.223.169 - - [07/Oct/2021:09:01:17 +0000] "GET /2.0/clusters/connect HTTP/1.1" 200 86 2 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:17,961] INFO 10.132.223.169 - - [07/Oct/2021:09:01:17 +0000] "POST /api/ksql/ksqldb1/ksql HTTP/1.1" 200 2165 35 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:18,077] INFO 10.132.223.169 - - [07/Oct/2021:09:01:18 +0000] "GET /2.0/clusters/schema-registry HTTP/1.1" 200 143 3 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:18,077] INFO 10.132.223.169 - - [07/Oct/2021:09:01:18 +0000] "GET /3.0/license HTTP/1.1" 200 447 3 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:18,097] INFO 10.132.223.169 - - [07/Oct/2021:09:01:18 +0000] "GET /api/kafka-rest/oE58IsrAT5iffZf8snZdKQ/v1/metadata/schemaRegistryUrls HTTP/1.1" 403 481 1 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:18,099] INFO 10.132.223.169 - - [07/Oct/2021:09:01:18 +0000] "GET /2.0/clusters/kafka HTTP/1.1" 200 142 3 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:18,128] INFO 10.132.223.169 - - [07/Oct/2021:09:01:18 +0000] "GET /2.0/clusters/ksql HTTP/1.1" 200 89 2 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:18,240] INFO 10.132.223.169 - - [07/Oct/2021:09:01:18 +0000] "GET /api/ksql/ksqldb1/info HTTP/1.1" 200 133 5 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:18,277] INFO 10.132.223.169 - - [07/Oct/2021:09:01:18 +0000] "POST /api/ksql/ksqldb1/ksql HTTP/1.1" 200 2165 36 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:22,194] INFO 10.132.223.169 - - [07/Oct/2021:09:01:22 +0000] "GET /2.0/metrics/oE58IsrAT5iffZf8snZdKQ/broker/status HTTP/1.1" 200 246 9 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:22,576] INFO 10.132.223.169 - - [07/Oct/2021:09:01:22 +0000] "GET /2.0/kafka/oE58IsrAT5iffZf8snZdKQ/brokers/config HTTP/1.1" 200 6775 9 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:29,800] INFO 10.132.223.169 - - [07/Oct/2021:09:01:29 +0000] "POST /api/ksql/ksqldb1/ksql HTTP/1.1" 200 2165 34 (io.confluent.rest-utils.requests)
control-center | [2021-10-07 09:01:32,758] INFO 10.132.223.169 - - [07/Oct/2021:09:01:32 +0000] "GET /2.0/metrics/oE58IsrAT5iffZf8snZdKQ/maxtime HTTP/1.1" 200 27 3 (io.confluent.rest-utils.requests)
  • any other relevant commands
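
When pasting long Control Center access logs like the above, it can help to pre-filter for non-2xx responses (e.g. the 403 on the schemaRegistryUrls request). A small sketch, assuming a POSIX shell with grep available; the sample lines below are illustrative stand-ins for real log output:

```shell
# Keep only access-log lines whose HTTP status code is 4xx or 5xx.
filter_errors() {
  grep -E 'HTTP/1\.1" [45][0-9]{2} '
}

# Illustrative sample; in practice:
#   docker-compose logs control-center | filter_errors
printf '%s\n' \
  'control-center | INFO "GET /2.0/clusters/kafka HTTP/1.1" 200 142 4' \
  'control-center | INFO "GET /api/kafka-rest/v1/metadata/schemaRegistryUrls HTTP/1.1" 403 481 1' \
  | filter_errors   # prints only the 403 line
```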

Environment

  • GitHub branch: 6.2.1-post
  • Operating System: Ubuntu 18.04.5 LTS (Bionic Beaver)
  • Version of Docker: 20.10.8
  • Version of Docker Compose: 1.29.2

Zookeeper stops working after some time.

Description
I've been using the cp-all-in-one docker-compose environment for a few days now, with no changes made to the repo. Typically it works for a short period of time, less than two hours, and then Control Center loses its connection to the brokers and I have to restart the entire environment.
Now the entire stack is having trouble starting: Zookeeper reports ERROR Unable to access datadir, exiting abnormally

Troubleshooting
I've restarted Docker for Mac and restarted my computer with no success.
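
A common cause of the datadir error after repeated restarts is stale or corrupted broker/ZooKeeper state in the stack's Docker volumes. One possible recovery path, assuming this is a disposable dev environment where no data needs to survive, is to tear the stack down including volumes and start fresh:

```shell
# WARNING: -v removes the stack's named volumes, discarding all
# Kafka and ZooKeeper data. Only for throwaway dev environments.
docker-compose down -v
docker-compose up -d

# Then watch ZooKeeper come up and confirm it opens its datadir cleanly
docker-compose logs -f zookeeper
```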

If applicable, please include the output of:

  • docker-compose logs <container name>

Environment

  • GitHub branch: 6.0.1-post
  • Operating System: macOS Catalina
  • Version of Docker Desktop: 4.3.1
  • Version of Docker Compose: 1.27.4, build 40524192

Control-Center not able to connect to Kafka-Connect cluster

Description
Using the cp-all-in-one-kraft docker-compose file with confluentinc/cp-kafka-connect:7.1.1.amd64 and confluentinc/cp-enterprise-control-center:7.1.1.amd64, kafka-connect shows errors like the following in its logs:

[2022-06-04 14:38:15,210] ERROR Uncaught exception in REST call to /v1/metadata/id (org.apache.kafka.connect.runtime.rest.errors.ConnectExceptionMapper)
javax.ws.rs.NotFoundException: HTTP 404 Not Found
        at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:252)
        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
        at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
        at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
        at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
        at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
        at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
        at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:234)
        at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)
        at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394)
        at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
        at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:366)
        at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:319)
        at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
        at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:550)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1434)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1349)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:234)
        at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:179)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
        at org.eclipse.jetty.server.Server.handle(Server.java:516)
        at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:400)
        at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:645)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:392)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
        at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
        at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
        at java.base/java.lang.Thread.run(Thread.java:829)

It seems Control Center is sending these requests (i.e. to /v1/metadata/id) to kafka-connect; since the Connect worker returns 404 for that endpoint, Control Center concludes that no Kafka Connect clusters can be found.

Using confluentinc/cp-enterprise-control-center:6.2.0 does not exhibit this issue.
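
The 404 can be reproduced directly against the Connect worker's REST API, which confirms it originates from the worker itself rather than from Control Center. A sketch, assuming the default 8083 port mapping and curl on the host:

```shell
# The root path of the Connect REST API answers with worker/cluster info
curl -s http://localhost:8083/

# The endpoint Control Center probes is not served by the plain
# cp-kafka-connect worker, so this prints 404
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8083/v1/metadata/id
```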

Environment

  • GitHub branch: 7.1.1
  • Operating System: Windows
  • Version of Docker: 20.10.14, build a224086
  • Version of Docker Compose: 1.29.2, build 5becea4c

No Connect Clusters Found

Description

I'm following the steps from the documentation to add 3 connectors to the Docker Compose connect service found here. However, there appears to be an issue connecting to the Kafka Connect cluster. When I navigate to the Connect clusters page, I see the following message:

No Connect Clusters Found

Steps

  1. cd cp-all-in-one-kraft
  2. docker compose up -d

Troubleshooting
Identify any existing issues that seem related: https://github.com/confluentinc/cp-all-in-one/issues?q=is%3Aissue

#94

If applicable, please include the output of:

  • docker-compose logs connect
connect  | 2022-08-30T07:31:18.781689202Z [2022-08-30 07:31:18,781] ERROR Uncaught exception in REST call to /v1/metadata/id (org.apache.kafka.connect.runtime.rest.errors.ConnectExceptionMapper)
connect  | 2022-08-30T07:31:18.781764254Z javax.ws.rs.NotFoundException: HTTP 404 Not Found
connect  | 2022-08-30T07:31:18.781783166Z       at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:252)
connect  | 2022-08-30T07:31:18.781790492Z       at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
connect  | 2022-08-30T07:31:18.781793890Z       at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
connect  | 2022-08-30T07:31:18.781797862Z       at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
connect  | 2022-08-30T07:31:18.781801839Z       at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
connect  | 2022-08-30T07:31:18.781805012Z       at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
connect  | 2022-08-30T07:31:18.781808209Z       at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
connect  | 2022-08-30T07:31:18.781811355Z       at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:234)
connect  | 2022-08-30T07:31:18.781814517Z       at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)
connect  | 2022-08-30T07:31:18.781817695Z       at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394)
connect  | 2022-08-30T07:31:18.781820796Z       at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
connect  | 2022-08-30T07:31:18.781823874Z       at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:366)
connect  | 2022-08-30T07:31:18.781829775Z       at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:319)
connect  | 2022-08-30T07:31:18.781846887Z       at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
connect  | 2022-08-30T07:31:18.781850901Z       at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
connect  | 2022-08-30T07:31:18.781854014Z       at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:550)
connect  | 2022-08-30T07:31:18.781857015Z       at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
connect  | 2022-08-30T07:31:18.781860137Z       at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624)
connect  | 2022-08-30T07:31:18.781863256Z       at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
connect  | 2022-08-30T07:31:18.781866258Z       at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1434)
connect  | 2022-08-30T07:31:18.781869364Z       at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
connect  | 2022-08-30T07:31:18.781872414Z       at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501)
connect  | 2022-08-30T07:31:18.781875429Z       at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594)
connect  | 2022-08-30T07:31:18.781880331Z       at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
connect  | 2022-08-30T07:31:18.781883467Z       at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1349)
connect  | 2022-08-30T07:31:18.781886595Z       at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
connect  | 2022-08-30T07:31:18.781889596Z       at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:234)
connect  | 2022-08-30T07:31:18.781892659Z       at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:179)
connect  | 2022-08-30T07:31:18.781895687Z       at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
connect  | 2022-08-30T07:31:18.781898808Z       at org.eclipse.jetty.server.Server.handle(Server.java:516)
connect  | 2022-08-30T07:31:18.781901881Z       at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:400)
connect  | 2022-08-30T07:31:18.781904957Z       at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:645)
connect  | 2022-08-30T07:31:18.781907939Z       at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:392)
connect  | 2022-08-30T07:31:18.781911006Z       at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
connect  | 2022-08-30T07:31:18.781913982Z       at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
connect  | 2022-08-30T07:31:18.781917005Z       at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
connect  | 2022-08-30T07:31:18.781919984Z       at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
connect  | 2022-08-30T07:31:18.781922989Z       at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
connect  | 2022-08-30T07:31:18.781925979Z       at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
connect  | 2022-08-30T07:31:18.781932984Z       at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
connect  | 2022-08-30T07:31:18.781936289Z       at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
connect  | 2022-08-30T07:31:18.781939370Z       at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)
connect  | 2022-08-30T07:31:18.781942419Z       at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
connect  | 2022-08-30T07:31:18.781945433Z       at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
connect  | 2022-08-30T07:31:18.781948548Z       at java.base/java.lang.Thread.run(Thread.java:829)
connect  | 2022-08-30T07:31:35.574316438Z [2022-08-30 07:31:35,573] ERROR Uncaught exception in REST call to /v1/metadata/id (org.apache.kafka.connect.runtime.rest.errors.ConnectExceptionMapper)
connect  | 2022-08-30T07:31:35.574407775Z javax.ws.rs.NotFoundException: HTTP 404 Not Found
connect  | 2022-08-30T07:31:35.574434023Z       at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:252)
connect  | 2022-08-30T07:31:35.574442641Z       at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
connect  | 2022-08-30T07:31:35.574448324Z       at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
connect  | 2022-08-30T07:31:35.574452415Z       at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
connect  | 2022-08-30T07:31:35.574456533Z       at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
connect  | 2022-08-30T07:31:35.574460631Z       at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
connect  | 2022-08-30T07:31:35.574464777Z       at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
connect  | 2022-08-30T07:31:35.574468897Z       at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:234)
connect  | 2022-08-30T07:31:35.574472961Z       at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)
connect  | 2022-08-30T07:31:35.574477136Z       at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394)
connect  | 2022-08-30T07:31:35.574481224Z       at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
connect  | 2022-08-30T07:31:35.574485550Z       at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:366)
connect  | 2022-08-30T07:31:35.574489761Z       at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:319)
connect  | 2022-08-30T07:31:35.574493844Z       at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
connect  | 2022-08-30T07:31:35.574497898Z       at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
connect  | 2022-08-30T07:31:35.574501957Z       at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:550)
connect  | 2022-08-30T07:31:35.574506009Z       at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
connect  | 2022-08-30T07:31:35.574510163Z       at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624)
connect  | 2022-08-30T07:31:35.574527870Z       at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
connect  | 2022-08-30T07:31:35.574532953Z       at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1434)
connect  | 2022-08-30T07:31:35.574537022Z       at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
connect  | 2022-08-30T07:31:35.574541089Z       at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501)
connect  | 2022-08-30T07:31:35.574545104Z       at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594)
connect  | 2022-08-30T07:31:35.574550736Z       at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
connect  | 2022-08-30T07:31:35.574555220Z       at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1349)
connect  | 2022-08-30T07:31:35.574559400Z       at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
connect  | 2022-08-30T07:31:35.574563537Z       at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:234)
connect  | 2022-08-30T07:31:35.574567722Z       at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:179)
connect  | 2022-08-30T07:31:35.574571736Z       at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
connect  | 2022-08-30T07:31:35.574575766Z       at org.eclipse.jetty.server.Server.handle(Server.java:516)
connect  | 2022-08-30T07:31:35.574579763Z       at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:400)
connect  | 2022-08-30T07:31:35.574583790Z       at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:645)
connect  | 2022-08-30T07:31:35.574587807Z       at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:392)
connect  | 2022-08-30T07:31:35.574591908Z       at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
connect  | 2022-08-30T07:31:35.574595900Z       at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
connect  | 2022-08-30T07:31:35.574599999Z       at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
connect  | 2022-08-30T07:31:35.574604091Z       at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
connect  | 2022-08-30T07:31:35.574608184Z       at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
connect  | 2022-08-30T07:31:35.574612219Z       at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
connect  | 2022-08-30T07:31:35.574616262Z       at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
connect  | 2022-08-30T07:31:35.574620309Z       at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
connect  | 2022-08-30T07:31:35.574624461Z       at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)
connect  | 2022-08-30T07:31:35.574628520Z       at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
connect  | 2022-08-30T07:31:35.574632575Z       at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
connect  | 2022-08-30T07:31:35.574641104Z       at java.base/java.lang.Thread.run(Thread.java:829)
  • any other relevant commands

Dockerfile:

FROM confluentinc/cp-kafka-connect-base

RUN confluent-hub install --no-prompt jcustenborder/kafka-connect-twitter:0.3.34 && \
  confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:10.5.2 && \
  confluent-hub install --no-prompt confluentinc/kafka-connect-elasticsearch:14.0.0
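
Note that the Dockerfile above inherits an unpinned base image; pinning a tag (e.g. a 7.x tag matching the rest of the stack) would make builds reproducible. Assuming that Dockerfile, a quick way to confirm the connectors actually landed in the image (the image name below is illustrative):

```shell
# Build the custom Connect image from the Dockerfile above
docker build -t custom-kafka-connect .

# List the installed Confluent Hub plugins baked into the image
docker run --rm --entrypoint ls custom-kafka-connect /usr/share/confluent-hub-components
```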
---
# version: '2'
services:
  broker:
    image: confluentinc/cp-kafka:7.2.1
    hostname: broker
    container_name: broker
    ports:
      - '9092:9092'
      - '9101:9101'
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092'
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_PROCESS_ROLES: 'broker,controller'
      KAFKA_NODE_ID: 1
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@broker:29093'
      KAFKA_LISTENERS: 'PLAINTEXT://broker:29092,CONTROLLER://broker:29093,PLAINTEXT_HOST://0.0.0.0:9092'
      KAFKA_INTER_BROKER_LISTENER_NAME: 'PLAINTEXT'
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
    volumes:
      - ./update_run.sh:/tmp/update_run.sh
    command: 'bash -c ''if [ ! -f /tmp/update_run.sh ]; then echo "ERROR: Did you forget the update_run.sh file that came with this docker-compose.yml file?" && exit 1 ; else /tmp/update_run.sh && /etc/confluent/docker/run ; fi'''

  schema-registry:
    image: confluentinc/cp-schema-registry:7.2.1
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - '8081:8081'
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:29092'
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081

  connect:
    # the default image appears to be AMD64 and not arm64
    # image: cnfldemos/cp-server-connect-datagen:0.5.3-7.2.1
    image: conradwt/kafka-connectors:1.0.0
    hostname: connect
    container_name: connect
    depends_on:
      - broker
      - schema-registry
    ports:
      - '8083:8083'
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      # CLASSPATH required due to CC-2422
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-7.2.1.jar
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: 'io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor'
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: 'io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor'
      CONNECT_PLUGIN_PATH: '/usr/share/java,/usr/share/confluent-hub-components'
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR

  control-center:
    image: confluentinc/cp-enterprise-control-center:7.2.1
    hostname: control-center
    container_name: control-center
    depends_on:
      - broker
      - schema-registry
      - connect
      - ksqldb-server
    ports:
      - '9021:9021'
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'
      CONTROL_CENTER_CONNECT_CONNECT-DEFAULT_CLUSTER: 'connect:8083'
      CONTROL_CENTER_KSQL_KSQLDB1_URL: 'http://ksqldb-server:8088'
      CONTROL_CENTER_KSQL_KSQLDB1_ADVERTISED_URL: 'http://localhost:8088'
      CONTROL_CENTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
      CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
      CONFLUENT_METRICS_TOPIC_REPLICATION: 1
      PORT: 9021

  ksqldb-server:
    image: confluentinc/cp-ksqldb-server:7.2.1
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
      - broker
      - connect
    ports:
      - '8088:8088'
    environment:
      KSQL_CONFIG_DIR: '/etc/ksql'
      KSQL_BOOTSTRAP_SERVERS: 'broker:29092'
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: 'http://0.0.0.0:8088'
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
      KSQL_PRODUCER_INTERCEPTOR_CLASSES: 'io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor'
      KSQL_CONSUMER_INTERCEPTOR_CLASSES: 'io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor'
      KSQL_KSQL_CONNECT_URL: 'http://connect:8083'
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_REPLICATION_FACTOR: 1
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: 'true'
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: 'true'

  ksqldb-cli:
    image: confluentinc/cp-ksqldb-cli:7.2.1
    container_name: ksqldb-cli
    depends_on:
      - broker
      - connect
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true

  ksql-datagen:
    image: confluentinc/ksqldb-examples:7.2.1
    hostname: ksql-datagen
    container_name: ksql-datagen
    depends_on:
      - ksqldb-server
      - broker
      - schema-registry
      - connect
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
      cub kafka-ready -b broker:29092 1 40 && \
      echo Waiting for Confluent Schema Registry to be ready... && \
      cub sr-ready schema-registry 8081 40 && \
      echo Waiting a few seconds for topic creation to finish... && \
      sleep 11 && \
      tail -f /dev/null'"
    environment:
      KSQL_CONFIG_DIR: '/etc/ksql'
      STREAMS_BOOTSTRAP_SERVERS: broker:29092
      STREAMS_SCHEMA_REGISTRY_HOST: schema-registry
      STREAMS_SCHEMA_REGISTRY_PORT: 8081

  rest-proxy:
    image: confluentinc/cp-kafka-rest:7.2.1
    depends_on:
      - broker
      - schema-registry
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
      KAFKA_REST_LISTENERS: 'http://0.0.0.0:8082'
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'

  #
  # customizations
  #

  # we will use elasticsearch as one of our sinks.
  # This configuration allows you to start elasticsearch
  elasticsearch:
    image: itzg/elasticsearch:2.4.3
    environment:
      PLUGINS: appbaseio/dejavu
      OPTS: -Dindex.number_of_shards=1 -Dindex.number_of_replicas=0
    ports:
      - '9200:9200'

  # we will use postgres as one of our sinks.
  # This configuration allows you to start postgres
  postgres:
    image: postgres:9.5-alpine
    environment:
      POSTGRES_USER: postgres # define credentials
      POSTGRES_PASSWORD: postgres # define credentials
      POSTGRES_DB: postgres # define database
    ports:
      - 5432:5432 # Postgres port

Environment

  • GitHub branch: 7.2.1-post (cp-all-in-one/cp-all-in-one-kraft)
  • Operating System: macOS 12.5.1
  • Version of Docker: 20.10.17
  • Version of Docker Compose: Docker Compose version v2.7.0
  • Docker Desktop Resources: 8 GB RAM, 4 CPUs, 1 GB Swap, and 59.6 GB Storage

Hardware

  • Apple M1 Max MacBook Pro
  • 64 GB RAM
  • 4 TB SSD
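
On Apple Silicon, images published only for amd64 (as the comment on the connect service in the compose file above suggests for the datagen image) can be run under emulation by pinning the platform. A hedged sketch for the connect service; emulation is noticeably slower than a native arm64 image:

```yaml
  connect:
    image: cnfldemos/cp-server-connect-datagen:0.5.3-7.2.1
    platform: linux/amd64   # force amd64 emulation on arm64 hosts
```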

Use consistent image references for services in cp-all-in-one yaml

In cp-all-in-one/docker-compose.yml all services except connect use a consistent image reference of the form:

image: ${REPOSITORY}/<<SERVICE_NAME>>:${CONFLUENT_DOCKER_TAG}

Why connect uses cnfldemos/cp-server-connect-datagen:0.2.0-5.4.0 instead should be documented/clarified, or fixed to provide a consistent setup (i.e. to enable a version increment by simply changing CONFLUENT_DOCKER_TAG in the .env file).
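
For reference, the pattern used by the other services looks like the following; the .env values shown are illustrative:

```yaml
# .env (illustrative)
#   REPOSITORY=confluentinc
#   CONFLUENT_DOCKER_TAG=7.2.1

services:
  schema-registry:
    image: ${REPOSITORY}/cp-schema-registry:${CONFLUENT_DOCKER_TAG}
```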

Is it possible to use the docker-compose stack on a cloud CI server such as GitLab or Travis?

Hi,

I am a new Kafka user and have managed to get a docker-compose stack working locally to successfully run functional tests from an ASP.NET Core 3.1 test service. This service exists within the same docker-compose stack as the Kafka, Zookeeper and Rest-Proxy services, on the same network.

The SUT and tests use the .NET Core Client to create a topic at startup if it does not already exist.

As soon as I try to run this docker-compose stack on a remote GitLab CI server, the test hangs while creating the topic. The logs (see below) show that the .NET client is connecting to the correct internal service, kafka:19092, within the docker-compose stack. There is some activity from the kafka service as it starts to create the topic, and then it blocks.

How do I configure the kafka broker and rest proxy docker-compose services to run on an external CI server? Is this possible?

Details of the docker-compose stack and GitLab CI environment are included below...

create docker network

docker network create --gateway 172.19.0.1 --subnet 172.19.0.0/16 broker
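
Before starting the stack on the CI runner, it may help to confirm that the network exists and that a throwaway client container attached to it can reach the broker over the internal listener. A sketch, assuming the cp-kafka image's bundled CLI tools:

```shell
# Show the subnet of the user-defined network
docker network inspect broker --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'

# From inside the same network, list topics via the internal listener;
# if this hangs or fails, the problem is broker reachability, not the tests
docker run --rm --network broker confluentinc/cp-kafka:6.0.0 \
  kafka-topics --bootstrap-server kafka:19092 --list
```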

docker-compose stack with zookeeper, kafka, rest-proxy and ASP.NET Core 3.1 integration test services

---
version: "3.8"

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.0.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    networks:
      camnet:
        ipv4_address: 172.19.0.11
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:6.0.0
    hostname: kafka
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
    networks:
      camnet:
        ipv4_address: 172.19.0.21
    environment:
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka:19092 
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_BROKER_ID: 1
      KAFKA_DEFAULT_REPLICATION_FACTOR : 1
      KAFKA_NUM_PARTITIONS: 3

  rest-proxy:
    image: confluentinc/cp-kafka-rest:6.0.0
    hostname: rest-proxy
    container_name: rest-proxy
    depends_on:
      - kafka
    ports:
      - 8082:8082
    networks:
      camnet:
        ipv4_address: 172.19.0.31
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'kafka:19092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
    
  # ASP.NET Core 3 integration tests
  # uses kafka dotnet client https://docs.confluent.io/current/clients/dotnet.html
  # to create topics from an ASP.NET Core Test Server
  webapp:
    build:
      context: ../
      dockerfile: Docker/Test/Dockerfile
      target: test
    hostname: webapp
    image: dcs3spp/webapp
    container_name: webapp
    depends_on:
      - kafka
      - rest-proxy
    networks:
      camnet:
        ipv4_address: 172.19.0.61
    environment:
      - ASPNETCORE_ENVIRONMENT=Docker

networks:
  camnet:
    external:
      name: broker

GitLab.com CI Docker Network Environment

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
2b3286d21fee        bridge              bridge              local
a17bf57d1a86        host                host                local
0252525b2ca4        none                null                local
$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "2b3286d21fee076047e78188b67c2912dfd388a170de3e3cf2ba8d5238e1c6c7",
        "Created": "2020-11-16T14:53:35.574299006Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
$ docker network inspect host
[
    {
        "Name": "host",
        "Id": "a17bf57d1a865512bebd3f7f73e0fd761d40b1d4f87765edeac6099e86b94339",
        "Created": "2020-11-16T14:53:35.551372286Z",
        "Scope": "local",
        "Driver": "host",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
$ docker network inspect none
[
    {
        "Name": "none",
        "Id": "0252525b2ca4b28ddc0f950b472485167cfe18e003c62f3d09ce2a856880362a",
        "Created": "2020-11-16T14:53:35.536741983Z",
        "Scope": "local",
        "Driver": "null",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
$ docker network create --gateway 172.19.0.1 --subnet 172.19.0.0/16 broker
dbd923b4caacca225f52e8a82dfcad184a1652bde1b5976aa07bbddb2919126c

GitLab.com CI server logs

webapp        | A total of 1 test files matched the specified pattern.
webapp        | warn: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[60]
webapp        |       Storing keys in a directory '/root/.aspnet/DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.
webapp        | info: WebApp.S3.S3Service[0]
webapp        |       Minio client created for endpoint minio:9000
webapp        | info: WebApp.Kafka.ProducerService[0]
webapp        |       ProducerService constructor called
webapp        | info: WebApp.Kafka.SchemaRegistry.Serdes.JsonDeserializer[0]
webapp        |       Constructed
webapp        | info: WebApp.Kafka.ConsumerService[0]
webapp        |       Kafka consumer listening to camera topics =>
webapp        | info: WebApp.Kafka.ConsumerService[0]
webapp        |       Camera Topic :: shinobi/RHSsYfiV6Z/xi5cncrNK6/trigger
webapp        | info: WebApp.Kafka.ConsumerService[0]
webapp        |       Camera Topic :: shinobi/group/monitor/trigger
webapp        | warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
webapp        |       No XML encryptor configured. Key {47af6978-c38e-429f-9b34-455ca445c2d8} may be persisted to storage in unencrypted form.
webapp        | info: WebApp.Kafka.Admin.KafkaAdminService[0]
webapp        |       Admin service trying to create Kafka Topic...
webapp        | info: WebApp.Kafka.Admin.KafkaAdminService[0]
webapp        |       Topic::eventbus, ReplicationCount::1, PartitionCount::3
webapp        | info: WebApp.Kafka.Admin.KafkaAdminService[0]
webapp        |       Bootstrap Servers::kafka:19092
kafka         | [2020-11-16 14:59:32,335] INFO Creating topic eventbus with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka         | [2020-11-16 14:59:32,543] INFO [Controller id=1] New topics: [Set(eventbus)], deleted topics: [HashSet()], new partition replica assignment [Map(eventbus-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), eventbus-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), eventbus-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))] (kafka.controller.KafkaController)
kafka         | [2020-11-16 14:59:32,546] INFO [Controller id=1] New partition creation callback for eventbus-0,eventbus-1,eventbus-2 (kafka.controller.KafkaController)
kafka         | [2020-11-16 14:59:32,557] INFO [Controller id=1 epoch=1] Changed partition eventbus-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka         | [2020-11-16 14:59:32,558] INFO [Controller id=1 epoch=1] Changed partition eventbus-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka         | [2020-11-16 14:59:32,559] INFO [Controller id=1 epoch=1] Changed partition eventbus-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka         | [2020-11-16 14:59:32,560] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka         | [2020-11-16 14:59:32,613] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition eventbus-0 from NonExistentReplica to NewReplica (state.change.logger)
kafka         | [2020-11-16 14:59:32,614] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition eventbus-1 from NonExistentReplica to NewReplica (state.change.logger)
kafka         | [2020-11-16 14:59:32,615] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition eventbus-2 from NonExistentReplica to NewReplica (state.change.logger)
kafka         | [2020-11-16 14:59:32,616] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka         | [2020-11-16 14:59:32,770] INFO [Controller id=1 epoch=1] Changed partition eventbus-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), zkVersion=0) (state.change.logger)
kafka         | [2020-11-16 14:59:32,772] INFO [Controller id=1 epoch=1] Changed partition eventbus-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), zkVersion=0) (state.change.logger)
kafka         | [2020-11-16 14:59:32,773] INFO [Controller id=1 epoch=1] Changed partition eventbus-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), zkVersion=0) (state.change.logger)
kafka         | [2020-11-16 14:59:32,804] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='eventbus', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true) to broker 1 for partition eventbus-0 (state.change.logger)
kafka         | [2020-11-16 14:59:32,805] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='eventbus', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true) to broker 1 for partition eventbus-1 (state.change.logger)
kafka         | [2020-11-16 14:59:32,806] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='eventbus', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true) to broker 1 for partition eventbus-2 (state.change.logger)
kafka         | [2020-11-16 14:59:32,808] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 3 become-leader and 0 become-follower partitions (state.change.logger)
kafka         | [2020-11-16 14:59:32,818] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 3 partitions (state.change.logger)
kafka         | [2020-11-16 14:59:32,820] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='eventbus', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka         | [2020-11-16 14:59:32,821] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='eventbus', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka         | [2020-11-16 14:59:32,822] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 3 partitions (state.change.logger)
kafka         | [2020-11-16 14:59:32,823] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='eventbus', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka         | [2020-11-16 14:59:32,828] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition eventbus-0 from NewReplica to OnlineReplica (state.change.logger)
kafka         | [2020-11-16 14:59:32,829] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition eventbus-1 from NewReplica to OnlineReplica (state.change.logger)
kafka         | [2020-11-16 14:59:32,830] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition eventbus-2 from NewReplica to OnlineReplica (state.change.logger)
kafka         | [2020-11-16 14:59:32,832] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka         | [2020-11-16 14:59:32,869] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition eventbus-2 (state.change.logger)
kafka         | [2020-11-16 14:59:32,870] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition eventbus-1 (state.change.logger)
kafka         | [2020-11-16 14:59:32,871] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition eventbus-0 (state.change.logger)
kafka         | [2020-11-16 14:59:32,873] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(eventbus-2, eventbus-1, eventbus-0) (kafka.server.ReplicaFetcherManager)
kafka         | [2020-11-16 14:59:32,874] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 3 partitions (state.change.logger)
kafka         | [2020-11-16 14:59:33,345] INFO [Log partition=eventbus-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
kafka         | [2020-11-16 14:59:33,421] INFO Created log for partition eventbus-2 in /var/lib/kafka/data/eventbus-2 with properties {compression.type -> producer, min.insync.replicas -> 1, message.downconversion.enable -> true, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, retention.ms -> 604800000, segment.bytes -> 1073741824, flush.messages -> 9223372036854775807, message.format.version -> 2.6-IV0, max.compaction.lag.ms -> 9223372036854775807, file.delete.delay.ms -> 60000, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, index.interval.bytes -> 4096, min.cleanable.dirty.ratio -> 0.5, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
kafka         | [2020-11-16 14:59:33,424] INFO [Partition eventbus-2 broker=1] No checkpointed highwatermark is found for partition eventbus-2 (kafka.cluster.Partition)
kafka         | [2020-11-16 14:59:33,425] INFO [Partition eventbus-2 broker=1] Log loaded for partition eventbus-2 with initial high watermark 0 (kafka.cluster.Partition)
kafka         | [2020-11-16 14:59:33,429] INFO [Broker id=1] Leader eventbus-2 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [1] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger)
kafka         | [2020-11-16 14:59:33,462] INFO [Log partition=eventbus-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
kafka         | [2020-11-16 14:59:33,468] INFO Created log for partition eventbus-1 in /var/lib/kafka/data/eventbus-1 with properties {compression.type -> producer, min.insync.replicas -> 1, message.downconversion.enable -> true, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, retention.ms -> 604800000, segment.bytes -> 1073741824, flush.messages -> 9223372036854775807, message.format.version -> 2.6-IV0, max.compaction.lag.ms -> 9223372036854775807, file.delete.delay.ms -> 60000, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, index.interval.bytes -> 4096, min.cleanable.dirty.ratio -> 0.5, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
kafka         | [2020-11-16 14:59:33,469] INFO [Partition eventbus-1 broker=1] No checkpointed highwatermark is found for partition eventbus-1 (kafka.cluster.Partition)
kafka         | [2020-11-16 14:59:33,470] INFO [Partition eventbus-1 broker=1] Log loaded for partition eventbus-1 with initial high watermark 0 (kafka.cluster.Partition)
kafka         | [2020-11-16 14:59:33,470] INFO [Broker id=1] Leader eventbus-1 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [1] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger)
kafka         | [2020-11-16 14:59:33,484] INFO [Log partition=eventbus-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
kafka         | [2020-11-16 14:59:33,488] INFO Created log for partition eventbus-0 in /var/lib/kafka/data/eventbus-0 with properties {compression.type -> producer, min.insync.replicas -> 1, message.downconversion.enable -> true, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, retention.ms -> 604800000, segment.bytes -> 1073741824, flush.messages -> 9223372036854775807, message.format.version -> 2.6-IV0, max.compaction.lag.ms -> 9223372036854775807, file.delete.delay.ms -> 60000, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, index.interval.bytes -> 4096, min.cleanable.dirty.ratio -> 0.5, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
kafka         | [2020-11-16 14:59:33,489] INFO [Partition eventbus-0 broker=1] No checkpointed highwatermark is found for partition eventbus-0 (kafka.cluster.Partition)
kafka         | [2020-11-16 14:59:33,489] INFO [Partition eventbus-0 broker=1] Log loaded for partition eventbus-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka         | [2020-11-16 14:59:33,490] INFO [Broker id=1] Leader eventbus-0 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [1] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger)
kafka         | [2020-11-16 14:59:33,507] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition eventbus-2 (state.change.logger)
kafka         | [2020-11-16 14:59:33,508] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition eventbus-1 (state.change.logger)
kafka         | [2020-11-16 14:59:33,509] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition eventbus-0 (state.change.logger)
kafka         | [2020-11-16 14:59:33,549] TRACE [Controller id=1 epoch=1] Received response {error_code=0,partition_errors=[{topic_name=eventbus,partition_index=0,error_code=0,_tagged_fields={}},{topic_name=eventbus,partition_index=1,error_code=0,_tagged_fields={}},{topic_name=eventbus,partition_index=2,error_code=0,_tagged_fields={}}],_tagged_fields={}} for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:19092 (id: 1 rack: null) (state.change.logger)
kafka         | [2020-11-16 14:59:33,564] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='eventbus', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition eventbus-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka         | [2020-11-16 14:59:33,569] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='eventbus', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition eventbus-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka         | [2020-11-16 14:59:33,570] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='eventbus', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition eventbus-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka         | [2020-11-16 14:59:33,574] INFO [Broker id=1] Add 3 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka         | [2020-11-16 14:59:33,576] TRACE [Controller id=1 epoch=1] Received response {error_code=0,_tagged_fields={}} for request UPDATE_METADATA with correlation id 2 sent to broker kafka:19092 (id: 1 rack: null) (state.change.logger)
kafka         | [2020-11-16 15:01:52,665] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka         | [2020-11-16 15:01:52,666] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
kafka         | [2020-11-16 15:01:52,669] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 Map() (kafka.controller.KafkaController)
kafka         | [2020-11-16 15:01:52,675] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController)
kafka         | [2020-11-16 15:06:47,246] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka         | [2020-11-16 15:06:52,676] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka         | [2020-11-16 15:06:52,677] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
kafka         | [2020-11-16 15:06:52,678] DEBUG [Controller id=1] Topics not in preferred replica for broker 1 Map() (kafka.controller.KafkaController)
kafka         | [2020-11-16 15:06:52,679] TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController)

cp-all-in-one not stable

Description
I tried running the cp-all-in-one environment, but it usually crashes after running for a period of time. If I try producing messages into its topics, it generally fails more quickly.

Maybe the volume of messages is crashing the broker?

Troubleshooting
Once things go down, it cycles with the following error:

control-center     | [2021-01-25 16:20:38,300] WARN [Consumer clientId=_confluent-controlcenter-6-0-1-1-c0537858-0b1c-4184-b6a1-24e33cb06cb3-StreamThread-1-consumer, groupId=_confluent-controlcenter-6-0-1-1] Error connecting to node broker:29092 (id: 1 rack: null) (org.apache.kafka.clients.NetworkClient)
control-center     | java.net.UnknownHostException: broker: Name or service not known
control-center     |     at java.base/java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
control-center     |     at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929)
control-center     |     at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1515)
control-center     |     at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848)
control-center     |     at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1505)
control-center     |     at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1364)
control-center     |     at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1298)
control-center     |     at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:110)
control-center     |     at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:403)
control-center     |     at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:363)
control-center     |     at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:151)
control-center     |     at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:966)
control-center     |     at org.apache.kafka.clients.NetworkClient.access$700(NetworkClient.java:74)
control-center     |     at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1139)
control-center     |     at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1027)
control-center     |     at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549)
control-center     |     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)
control-center     |     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
control-center     |     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:227)
control-center     |     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:164)
control-center     |     at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:254)
control-center     |     at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:994)
control-center     |     at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1499)
control-center     |     at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1447)
control-center     |     at org.apache.kafka.streams.processor.internals.TaskManager.commitOffsetsOrTransaction(TaskManager.java:1008)
control-center     |     at org.apache.kafka.streams.processor.internals.TaskManager.commit(TaskManager.java:963)
control-center     |     at org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:851)
control-center     |     at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:714)
control-center     |     at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:551)
control-center     |     at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:510)

Environment

  • GitHub branch: 6.0.1-post
  • Operating System: Mac Catalina
  • Version of Docker: 20.10.0 (engine). 3.0.2 (desktop)
  • Version of Docker Compose: 1.27.4, build 40524192

How to configure log level

We are trying to reduce the logs when testing with other containers we are bringing up.
We get blasted with INFO level logging:

kafka_1 | [2020-06-11 18:56:22,052] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)

We have tried a number of ways to resolve this in the .yml:
```yaml
KAFKA_LOG4J_LOGGERS: 'kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO'
KAFKA_LOG4J_ROOT_LOGLEVEL: WARN
KAFKA_TOOLS_LOG4J_LOGLEVEL: WARN
```

None of that appears to work. How does one set the log level?
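A likely culprit worth noting: per-logger settings in `KAFKA_LOG4J_LOGGERS` override the root level, so leaving `kafka.controller=INFO` keeps exactly the logger that emits the message above at INFO. A minimal sketch for the kafka service's environment block, assuming the cp-kafka image's log4j environment-variable convention:

```yaml
environment:
  # Root level for everything not overridden below.
  KAFKA_LOG4J_ROOT_LOGLEVEL: WARN
  KAFKA_TOOLS_LOG4J_LOGLEVEL: ERROR
  # Per-logger overrides beat the root level, so these must also be WARN
  # (or higher) or the controller/state-change loggers stay at INFO.
  KAFKA_LOG4J_LOGGERS: "kafka.controller=WARN,state.change.logger=WARN,kafka.producer.async.DefaultEventHandler=WARN"
```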

KRaft - Kafka nodes continuously generating metadata deltas from snapshots

Description
As the Kafka cluster starts running in KRaft mode, the Kafka nodes continuously generate metadata deltas from the snapshots.

I don't know if this is an issue; I want to be sure that this is how often it should happen.

Troubleshooting

Check out partial logs:
2023-05-22 11:58:13 kafka1 | [2023-05-22 08:58:13,524] INFO [MetadataLoader 1] handleSnapshot: generated a metadata delta from a snapshot at offset 47311 in 7 us. (org.apache.kafka.image.loader.MetadataLoader)
2023-05-22 11:58:13 kafka2 | [2023-05-22 08:58:13,525] INFO [MetadataLoader 2] handleSnapshot: generated a metadata delta from a snapshot at offset 47312 in 8 us. (org.apache.kafka.image.loader.MetadataLoader)
2023-05-22 11:58:14 kafka1 | [2023-05-22 08:58:14,028] INFO [MetadataLoader 1] handleSnapshot: generated a metadata delta from a snapshot at offset 47312 in 15 us. (org.apache.kafka.image.loader.MetadataLoader)
2023-05-22 11:58:14 kafka3 | [2023-05-22 08:58:14,028] INFO [MetadataLoader 3] handleSnapshot: generated a metadata delta from a snapshot at offset 47312 in 19 us. (org.apache.kafka.image.loader.MetadataLoader)
2023-05-22 11:58:14 kafka2 | [2023-05-22 08:58:14,030] INFO [MetadataLoader 2] handleSnapshot: generated a metadata delta from a snapshot at offset 47313 in 9 us. (org.apache.kafka.image.loader.MetadataLoader)
2023-05-22 11:58:14 kafka1 | [2023-05-22 08:58:14,528] INFO [MetadataLoader 1] handleSnapshot: generated a metadata delta from a snapshot at offset 47313 in 35 us. (org.apache.kafka.image.loader.MetadataLoader)
2023-05-22 11:58:14 kafka3 | [2023-05-22 08:58:14,529] INFO [MetadataLoader 3] handleSnapshot: generated a metadata delta from a snapshot at offset 47313 in 46 us.

(screenshot: the same MetadataLoader log lines repeating across kafka1–kafka3)

Environment

  • The setup: here
  • Operating System: Windows 10
  • Version of Docker: v20.10.23
  • Version of Docker Compose: 3
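If the concern is mainly the log noise rather than the behavior itself, one option (a sketch, assuming the cp-kafka image's log4j environment-variable convention and that `org.apache.kafka.image.loader.MetadataLoader` is the logger name shown in the messages above) is to raise that one logger's level:

```yaml
environment:
  # Silence the periodic handleSnapshot INFO lines without
  # changing the root log level for everything else.
  KAFKA_LOG4J_LOGGERS: "org.apache.kafka.image.loader.MetadataLoader=WARN"
```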

Can't connect/consume from a different machine on the same network

I have installed Confluent-Kafka using Docker with the help of docker-compose.yml. The installation was successful, but I am unable to consume the data from a different machine on the same network. Here are the logs when I try to consume from a different machine on the same network.

%3|1693647055.081|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: localhost:9092: Connect to ipv6#[::1]:9092 failed: Unknown error (after 2048ms in state CONNECT)
%3|1693647065.071|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: localhost:9092: Connect to ipv6#[::1]:9092 failed: Unknown error (after 2038ms in state CONNECT, 1 identical error(s) suppressed)
%3|1693647087.132|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: localhost:9092: Connect to ipv4#127.0.0.1:9092 failed: Unknown error (after 2035ms in state CONNECT)
%3|1693647099.176|FAIL|rdkafka#consumer-1| [thrd:GroupCoordinator]: GroupCoordinator: localhost:9092: Connect to ipv4#127.0.0.1:9092 failed: Unknown error (after 2033ms in state CONNECT, 1 identical error(s) suppressed)

I am getting these lines repeatedly.

I have a hunch that this is a problem with the dockerized version of Kafka, so I installed it from the source binaries instead, and it worked: now I am able to consume from any machine on the same network.

But my project requires the dockerized version of Confluent-Kafka. Please help me troubleshoot this issue.
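For context, the logs above show the client being redirected to localhost:9092, which points at the broker's advertised listener rather than Docker itself: after bootstrap, clients reconnect to whatever address the broker advertises, so it must be reachable from the consuming machine. A sketch of the usual two-listener fix for the broker's environment block, where 192.168.1.10 is a placeholder for the Docker host's LAN address:

```yaml
environment:
  KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
  # Bind both listeners on all interfaces inside the container.
  KAFKA_LISTENERS: INTERNAL://0.0.0.0:29092,EXTERNAL://0.0.0.0:9092
  # Advertise the compose-network name internally and the host's
  # LAN-reachable address externally -- never "localhost" here.
  KAFKA_ADVERTISED_LISTENERS: INTERNAL://broker:29092,EXTERNAL://192.168.1.10:9092
  KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
```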

Containers not connecting with docker-compose

When I run docker-compose up -d, the containers don't connect. To explain:
control-center cannot connect to the connect, ksqldb and schema-registry containers.

In the control-center dashboard, the following message appears for all three services:

"You need to configure Control Center so it knows how to connect to your Connect-ksqldb-schema-registry cluster(s)."

log from control-center:

0-06-22 12:55:32,808] WARN The configuration 'consumer.session.timeout.ms' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)

[2020-06-22 12:55:32,808] WARN The configuration 'producer.max.block.ms' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)

[2020-06-22 12:55:32,808] WARN The configuration 'producer.retries' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)

[2020-06-22 12:55:32,808] WARN The configuration 'upgrade.from' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)

[2020-06-22 12:55:32,808] WARN The configuration 'producer.retry.backoff.ms' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)

[2020-06-22 12:55:32,809] WARN The configuration 'producer.linger.ms' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)

[2020-06-22 12:55:32,809] WARN The configuration 'producer.delivery.timeout.ms' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)

[2020-06-22 12:55:32,809] WARN The configuration 'cache.max.bytes.buffering' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)

[2020-06-22 12:55:32,809] WARN The configuration 'producer.compression.type' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)

[2020-06-22 12:55:32,809] WARN The configuration 'num.stream.threads' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)

[2020-06-22 12:55:32,809] INFO Kafka version: 5.5.0-ce (org.apache.kafka.common.utils.AppInfoParser)

[2020-06-22 12:55:32,809] INFO Kafka commitId: 6068e5d52c5e294e (org.apache.kafka.common.utils.AppInfoParser)

[2020-06-22 12:55:32,809] INFO Kafka startTimeMs: 1592830532809 (org.apache.kafka.common.utils.AppInfoParser)

[2020-06-22 12:55:32,832] INFO cluster 1TJGTVihQFaiVxDqoNfzLA registered with configuration controlcenter.cluster (io.confluent.controlcenter.kafka.ClusterManager)

[2020-06-22 12:55:32,850] INFO Starting Health Check (io.confluent.controlcenter.ControlCenter)

[2020-06-22 12:55:32,850] INFO Starting Alert Manager (io.confluent.controlcenter.ControlCenter)

[2020-06-22 12:55:32,851] INFO Starting Consumer Offsets Fetch (io.confluent.controlcenter.ControlCenter)

[2020-06-22 12:55:32,853] INFO current clusterId=1TJGTVihQFaiVxDqoNfzLA (io.confluent.controlcenter.healthcheck.HealthCheck)

[2020-06-22 12:55:32,855] INFO AdminClientConfig values:
    bootstrap.servers = [broker:29092]
    client.dns.lookup = default
    client.id =
    connections.max.idle.ms = 300000
    default.api.timeout.ms = 60000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
 (org.apache.kafka.clients.admin.AdminClientConfig)

[2020-06-22 12:55:32,857] WARN The configuration 'consumer.session.timeout.ms' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)

[2020-06-22 12:55:32,857] WARN The configuration 'producer.max.block.ms' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)

[2020-06-22 12:55:32,857] WARN The configuration 'producer.retries' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)

[2020-06-22 12:55:32,857] WARN The configuration 'upgrade.from' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)

[2020-06-22 12:55:32,857] WARN The configuration 'producer.retry.backoff.ms' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)

[2020-06-22 12:55:32,857] WARN The configuration 'producer.linger.ms' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)

[2020-06-22 12:55:32,857] WARN The configuration 'producer.delivery.timeout.ms' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)

[2020-06-22 12:55:32,857] WARN The configuration 'cache.max.bytes.buffering' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)

[2020-06-22 12:55:32,857] WARN The configuration 'producer.compression.type' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)

[2020-06-22 12:55:32,857] WARN The configuration 'num.stream.threads' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)

[2020-06-22 12:55:32,857] INFO Kafka version: 5.5.0-ce (org.apache.kafka.common.utils.AppInfoParser)

[2020-06-22 12:55:32,857] INFO Kafka commitId: 6068e5d52c5e294e (org.apache.kafka.common.utils.AppInfoParser)

[2020-06-22 12:55:32,857] INFO Kafka startTimeMs: 1592830532857 (org.apache.kafka.common.utils.AppInfoParser)

[2020-06-22 12:55:32,874] INFO broker id set has changed new={1=[broker:29092 (id: 1 rack: null)]} removed={} (io.confluent.controlcenter.healthcheck.HealthCheck)

[2020-06-22 12:55:32,877] INFO new controller=broker:29092 (id: 1 rack: null) (io.confluent.controlcenter.healthcheck.HealthCheck)

[2020-06-22 12:55:33,074] WARN DEPRECATION warning: listeners configuration is not configured. Falling back to the deprecated port configuration. (io.confluent.rest.ApplicationServer)

[2020-06-22 12:55:33,075] INFO Adding listener: http://0.0.0.0:9021 (io.confluent.rest.ApplicationServer)

[2020-06-22 12:55:33,568] INFO jetty-9.4.24.v20191120; built: 2019-11-20T21:37:49.771Z; git: 363d5f2df3a8a28de40604320230664b9c793c16; jvm 1.8.0_212-b04 (org.eclipse.jetty.server.Server)

[2020-06-22 12:55:33,654] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)

[2020-06-22 12:55:33,654] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)

[2020-06-22 12:55:33,656] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)

[2020-06-22 12:55:33,862] INFO name=monitoring-input-topic-progress-.count type=monitoring cluster= value=0.0 (io.confluent.controlcenter.util.StreamProgressReporter)

[2020-06-22 12:55:33,863] INFO name=monitoring-input-topic-progress-.rate type=monitoring cluster= value=NaN (io.confluent.controlcenter.util.StreamProgressReporter)

[2020-06-22 12:55:33,863] INFO name=monitoring-input-topic-progress-.timestamp type=monitoring cluster= value=1.7976931348623157E308 (io.confluent.controlcenter.util.StreamProgressReporter)

[2020-06-22 12:55:33,864] INFO name=monitoring-input-topic-progress-.min type=monitoring cluster= value=1.7976931348623157E308 (io.confluent.controlcenter.util.StreamProgressReporter)

Jun 22, 2020 12:55:34 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime

WARNING: A provider io.confluent.controlcenter.rest.FeatureFlagResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.controlcenter.rest.FeatureFlagResource will be ignored.

Jun 22, 2020 12:55:34 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime

WARNING: A provider io.confluent.controlcenter.rest.CachedConsumerOffsetsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.controlcenter.rest.CachedConsumerOffsetsResource will be ignored.

Jun 22, 2020 12:55:34 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime

WARNING: A provider io.confluent.controlcenter.rest.PermissionsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.controlcenter.rest.PermissionsResource will be ignored.

Jun 22, 2020 12:55:34 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime

WARNING: A provider io.confluent.controlcenter.rest.KafkaResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.controlcenter.rest.KafkaResource will be ignored.

Jun 22, 2020 12:55:34 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime

WARNING: A provider io.confluent.controlcenter.rest.MetricsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.controlcenter.rest.MetricsResource will be ignored.

Jun 22, 2020 12:55:34 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime

WARNING: A provider io.confluent.controlcenter.rest.ServiceHealthCheckResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.controlcenter.rest.ServiceHealthCheckResource will be ignored.

Jun 22, 2020 12:55:34 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime

WARNING: A provider io.confluent.controlcenter.rest.AlertsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.controlcenter.rest.AlertsResource will be ignored.

Jun 22, 2020 12:55:34 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime

WARNING: A provider io.confluent.controlcenter.rest.MessageDeliveryResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.controlcenter.rest.MessageDeliveryResource will be ignored.

Jun 22, 2020 12:55:34 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime

WARNING: A provider io.confluent.controlcenter.rest.ClusterResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.controlcenter.rest.ClusterResource will be ignored.

Jun 22, 2020 12:55:34 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime

WARNING: A provider io.confluent.controlcenter.rest.StatusResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.controlcenter.rest.StatusResource will be ignored.

Jun 22, 2020 12:55:34 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime

WARNING: A provider io.confluent.controlcenter.rest.LicenseResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.controlcenter.rest.LicenseResource will be ignored.

Jun 22, 2020 12:55:34 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime

WARNING: A provider io.confluent.controlcenter.rest.AuthResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.controlcenter.rest.AuthResource will be ignored.

Jun 22, 2020 12:55:34 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime

WARNING: A provider io.confluent.controlcenter.rest.HealthCheckResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.controlcenter.rest.HealthCheckResource will be ignored.

Jun 22, 2020 12:55:34 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime

WARNING: A provider io.confluent.controlcenter.rest.CommandResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.controlcenter.rest.CommandResource will be ignored.

[2020-06-22 12:55:35,291] INFO HV000001: Hibernate Validator null (org.hibernate.validator.internal.util.Version)

[2020-06-22 12:55:35,704] INFO {"confluentPlatformVersion":null,"informationForUser":null} (io.confluent.controlcenter.healthcheck.HealthCheckModule)

[2020-06-22 12:55:35,706] INFO Successfully submitted metrics to Confluent via secure endpoint (io.confluent.support.metrics.submitters.ConfluentSubmitter)

[2020-06-22 12:55:35,867] INFO JVM Runtime does not support Modules (org.eclipse.jetty.util.TypeUtil)

[2020-06-22 12:55:35,917] INFO Started o.e.j.s.ServletContextHandler@996a546{/,[jar:file:/usr/share/java/acl/acl-5.5.0.jar!/io/confluent/controlcenter/rest/static],AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)

[2020-06-22 12:55:35,962] INFO Started o.e.j.s.ServletContextHandler@750f64fe{/ws,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)

[2020-06-22 12:55:35,985] INFO Started NetworkTrafficServerConnector@4bdf{HTTP/1.1,[http/1.1]}{0.0.0.0:9021} (org.eclipse.jetty.server.AbstractConnector)

[2020-06-22 12:55:35,985] INFO Started @56406ms (org.eclipse.jetty.server.Server)

[2020-06-22 12:55:42,427] INFO 172.21.0.1 - - [22/Jun/2020:12:55:42 +0000] "GET /3.0/license HTTP/1.1" 200 433 222 (io.confluent.rest-utils.requests)

[2020-06-22 12:55:42,427] INFO 172.21.0.1 - - [22/Jun/2020:12:55:42 +0000] "GET /2.0/clusters/kafka HTTP/1.1" 200 144 222 (io.confluent.rest-utils.requests)

[2020-06-22 12:55:42,427] INFO 172.21.0.1 - - [22/Jun/2020:12:55:42 +0000] "GET /2.0/clusters/kafka/display/CLUSTER_MANAGEMENT HTTP/1.1" 200 110 222 (io.confluent.rest-utils.requests)

[2020-06-22 12:55:42,431] INFO 172.21.0.1 - - [22/Jun/2020:12:55:42 +0000] "GET /2.0/clusters/connect HTTP/1.1" 200 95 226 (io.confluent.rest-utils.requests)

[2020-06-22 12:55:42,431] INFO 172.21.0.1 - - [22/Jun/2020:12:55:42 +0000] "GET /2.0/clusters/ksql HTTP/1.1" 200 88 223 (io.confluent.rest-utils.requests)

[2020-06-22 12:55:42,433] INFO 172.21.0.1 - - [22/Jun/2020:12:55:42 +0000] "GET /2.0/clusters/schema-registry HTTP/1.1" 200 143 228 (io.confluent.rest-utils.requests)

[2020-06-22 12:55:42,583] INFO 172.21.0.1 - - [22/Jun/2020:12:55:42 +0000] "GET /2.0/metrics/1TJGTVihQFaiVxDqoNfzLA/maxtime HTTP/1.1" 200 27 146 (io.confluent.rest-utils.requests)

[2020-06-22 12:55:46,138] INFO 172.21.0.1 - - [22/Jun/2020:12:55:46 +0000] "GET /2.0/health/status HTTP/1.1" 200 149 14 (io.confluent.rest-utils.requests)


[2020-06-22 21:45:56,147] INFO 172.18.0.1 - - [22/Jun/2020:21:45:56 +0000] "GET /2.0/clusters/ksql HTTP/1.1" 200 94 1 (io.confluent.rest-utils.requests)

[2020-06-22 21:45:56,153] INFO 172.18.0.1 - - [22/Jun/2020:21:45:56 +0000] "GET /2.0/clusters/connect HTTP/1.1" 200 86 2 (io.confluent.rest-utils.requests)

[2020-06-22 21:45:56,156] INFO 172.18.0.1 - - [22/Jun/2020:21:45:56 +0000] "GET /2.0/clusters/kafka HTTP/1.1" 200 143 2 (io.confluent.rest-utils.requests)

[2020-06-22 21:45:56,163] INFO 172.18.0.1 - - [22/Jun/2020:21:45:56 +0000] "GET /2.0/clusters/schema-registry HTTP/1.1" 200 143 2 (io.confluent.rest-utils.requests)

[2020-06-22 21:45:56,165] INFO 172.18.0.1 - - [22/Jun/2020:21:45:56 +0000] "GET /2.0/clusters/kafka/display/CLUSTER_MANAGEMENT HTTP/1.1" 200 110 1 (io.confluent.rest-utils.requests)

[2020-06-22 21:45:56,182] INFO 172.18.0.1 - - [22/Jun/2020:21:45:56 +0000] "GET /3.0/license HTTP/1.1" 200 429 2 (io.confluent.rest-utils.requests)

[2020-06-22 21:45:56,206] INFO 172.18.0.1 - - [22/Jun/2020:21:45:56 +0000] "GET /2.0/metrics/maxtime HTTP/1.1" 200 60 2 (io.confluent.rest-utils.requests)

[2020-06-22 21:45:56,233] INFO 172.18.0.1 - - [22/Jun/2020:21:45:56 +0000] "GET /2.0/metrics/clusters/status HTTP/1.1" 200 287 9 (io.confluent.rest-utils.requests)

[2020-06-22 21:45:56,238] INFO 172.18.0.1 - - [22/Jun/2020:21:45:56 +0000] "GET /api/ksql/ksqldb1/info HTTP/1.1" 502 435 4 (io.confluent.rest-utils.requests)

[2020-06-22 21:45:56,238] INFO 172.18.0.1 - - [22/Jun/2020:21:45:56 +0000] "GET /api/connect/connect-default/ HTTP/1.1" 502 445 4 (io.confluent.rest-utils.requests)

[2020-06-22 21:46:12,156] INFO 172.18.0.1 - - [22/Jun/2020:21:46:12 +0000] "GET /2.0/health/status HTTP/1.1" 200 147 1 (io.confluent.rest-utils.requests)

How to configure security under a cp-all-in-one Docker installation?

Hi:
I have already successfully launched cp-all-in-one in the Docker environment. However, to make it more production-ready, I am trying to add a security configuration, but I have no idea how to do this. I tried to modify the configuration files inside the container directly, but with the RHEL-based image I don't know the root password, and there is no way to change it without sudo installed. It's kind of a dead circle; is there any way I could do this? Thanks so much!
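One starting point: the Confluent images are configured through environment variables in docker-compose.yml, which the image startup scripts translate into broker properties, so no root access inside the container is needed. Below is a rough, untested sketch of enabling SASL/PLAIN; the listener name SECURE and the admin/admin-secret credentials are placeholders:

```yaml
# Sketch only: environment variables of the form KAFKA_FOO_BAR become the
# broker property foo.bar, so security settings live in the compose file.
broker:
  environment:
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: SECURE:SASL_PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_ADVERTISED_LISTENERS: SECURE://broker:29092,PLAINTEXT_HOST://localhost:9092
    KAFKA_INTER_BROKER_LISTENER_NAME: SECURE
    KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
    KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
    # JAAS config for the SECURE listener; placeholder credentials.
    KAFKA_LISTENER_NAME_SECURE_PLAIN_SASL_JAAS_CONFIG: 'org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret" user_admin="admin-secret";'
```

Every other service (Schema Registry, Connect, Control Center, ksqlDB) would then need matching SASL client settings in its own environment block.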

Confluent Platform Quick Start (Docker): Datagen Connector not found

Running Docker for windows.

$ docker version
Client: Docker Engine - Community
 Version:           19.03.8
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        afacb8b
 Built:             Wed Mar 11 01:23:10 2020
 OS/Arch:           windows/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       afacb8b
  Built:            Wed Mar 11 01:29:16 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Windows Version 10.0.18362 Build 18362

I'm working through the Confluent Platform Quick Start (Docker).

Initially, step 2 shows all containers running, but eventually the connect container exits:

$ docker-compose ps
     Name                    Command                  State                         Ports
------------------------------------------------------------------------------------------------------------
broker            /etc/confluent/docker/run        Up             0.0.0.0:9092->9092/tcp
connect           /etc/confluent/docker/run        Exit 137
control-center    /etc/confluent/docker/run        Up             0.0.0.0:9021->9021/tcp
ksql-datagen      bash -c echo Waiting for K ...   Up
ksqldb-cli        /bin/sh                          Up
ksqldb-server     /etc/confluent/docker/run        Up (healthy)   0.0.0.0:8088->8088/tcp
rest-proxy        /etc/confluent/docker/run        Up             0.0.0.0:8082->8082/tcp
schema-registry   /etc/confluent/docker/run        Up             0.0.0.0:8081->8081/tcp
zookeeper         /etc/confluent/docker/run        Up             0.0.0.0:2181->2181/tcp, 2888/tcp, 3888/tcp

For step 3, the Datagen Connector isn't found. I consulted the referenced Issue: Cannot locate the Datagen Connector, but the resolutions do not produce the desired results.

Upon issuing the following command: docker-compose build --no-cache connect

The output is as follows:

connect uses an image, skipping

Upon issuing the following command: docker-compose logs connect | grep -i Datagen

There is no output.

Upon issuing the following command: docker-compose exec connect ls /usr/share/confluent-hub-components/confluentinc-kafka-connect-datagen/lib/

The output is as follows:

ERROR: No container found for connect_1

If I attempt to restart the connect container and issue the above command again before it exits, it does produce the following:

$ docker-compose exec connect ls /usr/share/confluent-hub-components/confluentinc-kafka-connect-datagen/lib/
automaton-1.11-8.jar             generex-1.0.2.jar                     jakarta.annotation-api-1.3.5.jar  jersey-client-2.30.jar             kafka-schema-registry-client-5.5.0.jar
avro-1.9.2.jar                   guava-18.0.jar                        jakarta.el-3.0.2.jar              jersey-common-2.30.jar             kafka-schema-serializer-5.5.0.jar
avro-random-generator-0.3.1.jar  hibernate-validator-6.0.17.Final.jar  jakarta.el-api-3.0.3.jar          jersey-media-jaxb-2.30.jar         osgi-resource-locator-1.0.3.jar
classmate-1.3.4.jar              jackson-annotations-2.10.2.jar        jakarta.inject-2.6.1.jar          jersey-server-2.30.jar             slf4j-api-1.7.26.jar
common-config-5.5.0.jar          jackson-core-2.10.2.jar               jakarta.validation-api-2.0.2.jar  joda-time-2.9.9.jar                snakeyaml-1.24.jar
common-utils-5.5.0.jar           jackson-databind-2.10.2.jar           jakarta.ws.rs-api-2.1.6.jar       kafka-avro-serializer-5.5.0.jar    swagger-annotations-1.5.22.jar
commons-compress-1.19.jar        jackson-dataformat-yaml-2.10.2.jar    jboss-logging-3.3.2.Final.jar     kafka-connect-avro-data-5.5.0.jar  swagger-core-1.5.3.jar
commons-lang3-3.2.1.jar          jackson-datatype-joda-2.10.2.jar      jersey-bean-validation-2.30.jar   kafka-connect-datagen-0.3.2.jar    swagger-models-1.5.3.jar
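Two notes that may help here. Exit 137 means the container was killed (128 + SIGKILL), which on Docker for Windows usually points at the Docker Desktop memory limit rather than at the connector itself, so raising the memory allocation is the usual fix. And since the connect service uses a prebuilt image, docker-compose build has nothing to build. As an alternative, a hedged sketch of installing the plugin at startup from the base Connect image (tags and plugin version are illustrative):

```yaml
# Sketch only: install the Datagen plugin at container startup instead of
# relying on a prebuilt image, then hand off to the normal entrypoint.
connect:
  image: confluentinc/cp-kafka-connect-base:5.5.0   # illustrative tag
  command:
    - bash
    - -c
    - |
      confluent-hub install --no-prompt confluentinc/kafka-connect-datagen:0.3.2
      /etc/confluent/docker/run
```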

README for cp-all-in-one-community has wrong link

Description
The cp-all-in-one-community/README.md file refers to https://docs.confluent.io/platform/current/tutorials/build-your-own-demos.html, but this now just redirects to the top-level https://developer.confluent.io/

The previous documentation was very good as can be seen at https://web.archive.org/web/20221003173448/https://docs.confluent.io/platform/current/tutorials/build-your-own-demos.html

Troubleshooting
To verify - click on link in readme. Observe where it goes. Weep for pages lost in the mists of time.

Environment

  • GitHub branch: 7.3.0-post
  • Operating System: all
  • Version of Docker: any
  • Version of Docker Compose: any

Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available

steps to replicate:

  1. git clone https://github.com/confluentinc/cp-all-in-one.git (branch 5.5.0-post)

  2. cd cp-all-in-one && docker-compose up -d

  3. confluent-hub install debezium/debezium-connector-mysql:latest (inside the connect Docker container)

  4. curl -i -X POST -H "Accept:application/json" \
       -H "Content-Type:application/json" http://localhost:8083/connectors/ \
       -d '{
         "name": "inventory-connector",
         "config": {
           "connector.class": "io.debezium.connector.mysql.MySqlConnector",
           "database.hostname": "mysq",
           "database.port": "3306",
           "database.user": "dbmaster",
           "database.password": "password",
           "database.server.id": "1",
           "database.server.name": "stage",
           "database.dbname": "test",
           "database.history.kafka.bootstrap.servers": "broker:9092",
           "database.history.kafka.topic": "dbtopic",
           "database.whitelist": "test_client",
           "include.schema.changes": "true"
         }
       }'

     The response said the connector was created successfully, but docker logs -f connect shows:
    [Producer clientId=inventory-connector-dbhistory] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
    INFO [AdminClient clientId=inventory-connector-dbhistory] Metadata update failed (org.apache.kafka.clients.admin.internals.AdminMetadataManager)
    org.apache.kafka.common.errors.TimeoutException: Call(callName=fetchMetadata, deadlineMs=1589028250610) timed out at 9223372036854775807 after 1 attempt(s)
    Caused by: org.apache.kafka.common.errors.TimeoutException: The AdminClient thread has exited.
    [2020-05-09 12:44:09,709] INFO Stopping down connector (io.debezium.connector.common.BaseSourceTask)
    [2020-05-09 12:44:09,709] INFO Stopping MySQL connector task (io.debezium.connector.mysql.MySqlConnectorTask)
    [2020-05-09 12:44:09,709] INFO WorkerSourceTask{id=inventory-connector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask)
    [2020-05-09 12:44:09,710] INFO WorkerSourceTask{id=inventory-connector-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask)
    [2020-05-09 12:44:09,710] ERROR WorkerSourceTask{id=inventory-connector-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
    org.apache.kafka.connect.errors.ConnectException: Creation of database history topic failed, please create the topic manually
    at io.debezium.relational.history.KafkaDatabaseHistory.initializeStorage(KafkaDatabaseHistory.java:365)
    at io.debezium.connector.mysql.MySqlSchema.intializeHistoryStorage(MySqlSchema.java:268)
    at io.debezium.connector.mysql.MySqlTaskContext.initializeHistoryStorage(MySqlTaskContext.java:195)
    at io.debezium.connector.mysql.MySqlConnectorTask.start(MySqlConnectorTask.java:144)
    at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:104)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:213)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
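For reference, in this compose file the broker advertises two listeners, and the Connection to node 0 (localhost/127.0.0.1:9092) error suggests the connector's internal clients ended up on the host-facing one. A sketch of the relevant broker setting, as I read the stock compose file:

```yaml
# Containers on the compose network should bootstrap via broker:29092;
# localhost:9092 is only advertised for clients on the Docker host.
broker:
  environment:
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
```

If that layout applies, database.history.kafka.bootstrap.servers would need to be broker:29092 rather than broker:9092.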

ksql-datagen exits. Can't connect to broker

Description
I followed the steps listed here (https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#) to start up a local Kafka instance on Docker. ksql-datagen exits after a couple of minutes due to no connection to the broker

Troubleshooting
I think this may be related to issues #53, #51, #25, #24, #12, and #10. I'm seeing the same behavior.

1. I run docker-compose up -d and I can see all of the containers up and running.
2. http://localhost:9021 shows no response.
3. Then I run docker-compose ps and see that ksql-datagen has exited.
4. Running docker logs ksql-datagen shows that it can't connect to the broker.

Connection to node -1 (broker/192.168.224.3:29092) could not be established. Broker may not be available.

I have made no changes to the docker-compose.yml

Environment

  • GitHub branch: 6.2.0-post
  • Operating System: MacOS Big Sur 11.5
  • Version of Docker: 3.5.2
  • Version of Docker Compose: 1.29.2

The container name "/zookeeper" already in use

Description
I followed the tutorial https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#ce-docker-quickstart to the letter.

It fails at the beginning, at step 4: docker-compose up -d produces:

Creating zookeeper ... error

ERROR: for zookeeper  Cannot create container for service zookeeper: Conflict. The container name "/zookeeper" is already in use by container "cb780502175d907a0d654f62dc5010410e5b2493a6bf475019164898ec13ef1e". You have to remove (or rename) that container to be able to reuse that name.

ERROR: for zookeeper  Cannot create container for service zookeeper: Conflict. The container name "/zookeeper" is already in use by container "cb780502175d907a0d654f62dc5010410e5b2493a6bf475019164898ec13ef1e". You have to remove (or rename) that container to be able to reuse that name.
ERROR: Encountered errors while bringing up the project.

But nothing is started; docker-compose ps and docker ps show nothing.

Troubleshooting

$ docker-compose logs cb780502175d907a0d654f62dc5010410e5b2493a6bf475019164898ec13ef1e

ERROR: No such service: cb780502175d907a0d654f62dc5010410e5b2493a6bf475019164898ec13ef1e

Environment

  • GitHub branch: 6.0.1-post
  • Operating System: macOS Big Sur version 11.0.1
  • Version of Docker: Docker version 19.03.13, build 4484c46d9d
  • Version of Docker Compose: docker-compose version 1.27.4, build 40524192

image given in cp-all-in-one-kraft is not found, docker-compose up fails

Description
When trying to use the Docker-compose example, the image given in https://github.com/confluentinc/cp-all-in-one/blob/master/cp-all-in-one-kraft/docker-compose.yml#L6, is not found

$ docker-compose up
[+] Running 1/1

Error response from daemon: failed to resolve reference "docker.io/confluentinc/cp-kafka:7.6.x-latest": docker.io/confluentinc/cp-kafka:7.6.x-latest: not found

Troubleshooting
On Docker Hub, there is no image with a 7.6 tag.
(Screenshot of the Docker Hub tag listing, taken 2023-07-26, omitted.)

Environment

  • GitHub branch: master
  • Operating System: MacOS 13.4.1
  • Version of Docker: 24.0.2
  • Version of Docker Compose: v2.19.1

Add a tools container for clients

I think the all-in-one should include a tools container for client code, so that people can volume-map their client code into the Docker Compose network and run it against CP. See for example this README. That README was written to fill a gap in cp-all-in-one, but I think it makes sense to consolidate effort onto cp-all-in-one as the go-to place for freeform sandbox development.
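A minimal sketch of what such a tools service could look like in the compose file (the image choice and paths are placeholders, not a concrete proposal):

```yaml
# Sketch only: a long-running container on the compose network that
# volume-maps local client code for ad-hoc runs against CP.
tools:
  image: confluentinc/cp-kafka:7.0.1   # any image with the Kafka CLI tools
  hostname: tools
  container_name: tools
  volumes:
    - ./client-code:/home/appuser/client-code
  entrypoint: /bin/sh
  tty: true
```

docker-compose exec tools /bin/sh would then give a shell on the compose network, where broker:29092 resolves.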

Docker compose fails on Fedora 35 with podman

Description

When using podman, pulling an image with a short name requires a TTY so it can ask which registry to pull from. This fails with docker-compose.

Troubleshooting

$ sudo docker-compose up
Pulling zookeeper (confluentinc/cp-zookeeper:7.0.1)...
ERROR: failed to resolve image name: short-name resolution enforced but cannot prompt without a TTY

This works after a sed -i 's#image: #image: docker.io/#g' docker-compose.yml that resulted in:

--- docker-compose.yml.1	2021-12-14 14:54:02.044197565 +0100
+++ docker-compose.yml	2021-12-14 14:52:57.079120280 +0100
@@ -5 +5 @@
-    image: confluentinc/cp-zookeeper:7.0.1
+    image: docker.io/confluentinc/cp-zookeeper:7.0.1
@@ -15 +15 @@
-    image: confluentinc/cp-kafka:7.0.1
+    image: docker.io/confluentinc/cp-kafka:7.0.1
@@ -37 +37 @@
-    image: confluentinc/cp-schema-registry:7.0.1
+    image: docker.io/confluentinc/cp-schema-registry:7.0.1
@@ -50 +50 @@
-    image: cnfldemos/kafka-connect-datagen:0.5.0-6.2.0
+    image: docker.io/cnfldemos/kafka-connect-datagen:0.5.0-6.2.0
@@ -77 +77 @@
-    image: confluentinc/cp-ksqldb-server:7.0.1
+    image: docker.io/confluentinc/cp-ksqldb-server:7.0.1
@@ -100 +100 @@
-    image: confluentinc/cp-ksqldb-cli:7.0.1
+    image: docker.io/confluentinc/cp-ksqldb-cli:7.0.1
@@ -110 +110 @@
-    image: confluentinc/ksqldb-examples:7.0.1
+    image: docker.io/confluentinc/ksqldb-examples:7.0.1
@@ -132 +132 @@
-    image: confluentinc/cp-kafka-rest:7.0.1
+    image: docker.io/confluentinc/cp-kafka-rest:7.0.1

Environment

  • GitHub branch: 7.0.1-post
  • Operating System: Fedora 35
  • Version of Docker: podman version 3.4.2 (output of docker --version)
  • Version of Docker Compose: podman version 3.4.2 (output of docker compose --version)

Containers stop with Exit code 1

Description
A few minutes after running docker-compose up, the containers stop:
Name Command State Ports

broker /etc/confluent/docker/run Exit 1
connect /etc/confluent/docker/run Exit 1
control-center /etc/confluent/docker/run Exit 1
ksql-datagen bash -c echo Waiting for K ... Exit 1
ksqldb-cli /bin/sh Up
ksqldb-server /etc/confluent/docker/run Exit 1
rest-proxy /etc/confluent/docker/run Exit 1
schema-registry /etc/confluent/docker/run Exit 1
zookeeper /etc/confluent/docker/run Up 0.0.0.0:2181->2181/tcp,:::2181->2181/tcp, 2888/tcp, 3888/tcp

Troubleshooting
Identify any existing issues that seem related: https://github.com/confluentinc/cp-all-in-one/issues?q=is%3Aissue

If applicable, please include the output of:

  • docker-compose logs <container name>
  • any other relevant commands
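A common cause of the broker, connect, and Control Center containers exiting with code 1 is the Docker VM being allocated too little memory (Confluent's quick start guidance commonly recommends around 6 GB of Docker memory for the full stack, while Docker Desktop defaults to 2 GB). A quick, hypothetical check of how much memory the containers actually see, run inside any of the containers or on a Linux host:

```python
import os

def total_memory_gib() -> float:
    """Approximate visible RAM in GiB. Inside Docker Desktop this
    reflects the VM's allocation, which is what the containers share."""
    pages = os.sysconf("SC_PHYS_PAGES")
    page_size = os.sysconf("SC_PAGE_SIZE")
    return pages * page_size / 2**30

# e.g. print(f"{total_memory_gib():.1f} GiB visible")
```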

Environment

  • GitHub branch: [e.g. 6.0.1-post, etc]
  • Operating System: Ubuntu 18.04.6 LTS
  • Version of Docker: Client:
    Version: 20.10.7
    API version: 1.41
    Go version: go1.13.8
    Git commit: 20.10.7-0ubuntu5~18.04.3
    Built: Mon Nov 1 01:04:14 2021
    OS/Arch: linux/amd64
    Context: default
    Experimental: true

Server:
Engine:
Version: 20.10.7
API version: 1.41 (minimum version 1.12)
Go version: go1.13.8
Git commit: 20.10.7-0ubuntu5~18.04.3
Built: Fri Oct 22 00:57:37 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.5.5-0ubuntu3~18.04.1
GitCommit:
runc:
Version: 1.0.1-0ubuntu2~18.04.1
GitCommit:
docker-init:
Version: 0.19.0
GitCommit:

  • Version of Docker Compose: docker-compose version 1.17.1, build unknown
    docker-py version: 2.5.1
    CPython version: 2.7.17
    OpenSSL version: OpenSSL 1.1.1 11 Sep 2018

Port 2181 bind error on Zookeeper startup

Description
Error starting zookeeper on docker-compose up.
May be Docker issue docker/compose#7188

Update
After upgrading Docker Desktop to 4.2.0 and rebooting the machine, I was able to get the instance to start; this could be a timing issue.

curl --silent --output docker-compose.yml https://raw.githubusercontent.com/confluentinc/cp-all-in-one/7.0.0-post/cp-all-in-one/docker-compose.yml

docker-compose up -d

Troubleshooting
Docker Issue: Port conflict with multiple "host::port" services

Output of compose log:
...
Creating zookeeper ... error

ERROR: for zookeeper Cannot start service zookeeper: Ports are not available: listen tcp 0.0.0.0:2181: bind: An attempt was made to access a socket in a way forbidden by its access permissions.

ERROR: for zookeeper Cannot start service zookeeper: Ports are not available: listen tcp 0.0.0.0:2181: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
ERROR: Encountered errors while bringing up the project.
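Before running docker-compose up, it can help to verify that the host ports the stack publishes are actually free; a small sketch (the port list is taken from the compose files and is not exhaustive):

```python
import socket

def port_available(port: int, host: str = "0.0.0.0") -> bool:
    """Return True if a TCP listener could bind the given port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# for p in (2181, 9092, 8081, 8082, 8083, 8088, 9021):
#     print(p, "free" if port_available(p) else "in use")
```

On Windows, the "forbidden by its access permissions" variant of this error often means the port falls inside a Hyper-V excluded port range rather than being held by a process; netsh interface ipv4 show excludedportrange protocol=tcp shows those ranges.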

Environment

  • GitHub branch: 7.0.0-post
  • Operating System: Windows 10
  • Version of Docker: Docker Desktop 3.0.0 / 4.2.0
  • Version of Docker Compose: version 1.27.4, build 40524192 / 1.29.2, build 5becea4c
    Updated Docker Desktop, same issue

A 3-broker version of "cp-all-in-one" docker-compose.yaml file ...

Dear Community Friends:

Can someone help me create a 3-broker version of the cp-all-in-one Docker configuration? It will all go on a DEV/TEST laptop or PC.

I have been using the configuration below (which I refactored and created), and while it appears to work, something is not quite right with it. For instance, producing just 20 messages to it from an IDE takes much longer than it should to complete (minutes, even when, for troubleshooting purposes, I set acks=0, retries=0, etc.). This hints at a network timeout issue or similar (but I could be wrong; it could be something else entirely).

Sidenote: For completeness, but not meant to distract you, you'll notice that I export some docker volumes just so that I can passively inspect the log directories.

Rather than try to repair the configuration below, if you might be kind enough to provide a 3-broker version, that would be grand. Although, if you're feeling adventurous, I won't stop you from pointing out my mistakes. It would be something to learn from.

=:)

Thank you in advance!

networks:
  verilabs:
    name: verilabs
    external: false
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.10.0/24
          gateway: 192.168.10.1

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.2.0
    hostname: zookeeper
    container_name: zookeeper
    privileged: true
    ports:
      - "2181:2181"
    networks:
      verilabs:
        ipv4_address: 192.168.10.20
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      KAFKA_OPTS: "-Dzookeeper.4lw.commands.whitelist=*"
    volumes:
      - "./data/confluent.d/zookeeper.d:/var/lib/zookeeper"

  broker01:
    image: confluentinc/cp-kafka:6.2.0
    hostname: broker01
    container_name: broker01
    privileged: true
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
      - "9092:9092"
      - "9101:9101"
    networks:
      verilabs:
        ipv4_address: 192.168.10.21
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker01:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
    volumes:
      - "./data/confluent.d/broker01.d:/var/lib/kafka/data"

  broker02:
    image: confluentinc/cp-kafka:6.2.0
    hostname: broker02
    container_name: broker02
    privileged: true
    depends_on:
      - zookeeper
    ports:
      - "29093:29092"
      - "9093:9092"
      - "9102:9101"
    networks:
      verilabs:
        ipv4_address: 192.168.10.22
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker02:29092,PLAINTEXT_HOST://localhost:9093
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
    volumes:
      - "./data/confluent.d/broker02.d:/var/lib/kafka/data"

  broker03:
    image: confluentinc/cp-kafka:6.2.0
    hostname: broker03
    container_name: broker03
    privileged: true
    depends_on:
      - zookeeper
    ports:
      - "29094:29092"
      - "9094:9092"
      - "9103:9101"
    networks:
      verilabs:
        ipv4_address: 192.168.10.23
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker03:29092,PLAINTEXT_HOST://localhost:9094
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
    volumes:
      - "./data/confluent.d/broker03.d:/var/lib/kafka/data"

  schema-registry:
    image: confluentinc/cp-schema-registry:6.2.0
    hostname: schema-registry
    container_name: schema-registry
    privileged: true
    depends_on:
      - broker01
      - broker02
      - broker03
    ports:
      - "8081:8081"
    networks:
      verilabs:
        ipv4_address: 192.168.10.24
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker01:29092'
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081

  connect:
    image: cnfldemos/kafka-connect-datagen:0.5.0-6.2.0
    hostname: connect
    container_name: connect
    privileged: true
    depends_on:
      - broker01
      - broker02
      - broker03
      - schema-registry
    ports:
      - "8083:8083"
    networks:
      verilabs:
        ipv4_address: 192.168.10.25
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'broker01:29092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR

  ksqldb-server:
    image: confluentinc/cp-ksqldb-server:6.2.0
    hostname: ksqldb-server
    container_name: ksqldb-server
    privileged: true
    depends_on:
      - broker01
      - broker02
      - broker03
      - connect
    ports:
      - "8088:8088"
    networks:
      verilabs:
        ipv4_address: 192.168.10.26
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_BOOTSTRAP_SERVERS: "broker01:29092"
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      KSQL_KSQL_CONNECT_URL: "http://connect:8083"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_REPLICATION_FACTOR: 1
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: 'true'
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: 'true'

  rest-proxy:
    image: confluentinc/cp-kafka-rest:6.2.0
    depends_on:
      - broker01
      - broker02
      - broker03
      - schema-registry
    ports:
      - 8082:8082
    networks:
      verilabs:
        ipv4_address: 192.168.10.27
    hostname: rest-proxy
    container_name: rest-proxy
    privileged: true
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'broker01:29092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
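Two things in the file above stand out and may explain the slow produces (offered as guesses, not verified against this setup). First, broker02/broker03 advertise PLAINTEXT_HOST on ports 9093/9094 but publish the host port to container port 9092; since the cp-kafka image derives its listeners from KAFKA_ADVERTISED_LISTENERS when KAFKA_LISTENERS is unset, nothing in those containers listens on 9092, so clients time out and retry. Second, the replication settings are still single-broker values. A sketch of the adjustments for broker02 (broker03 analogous with 9094):

```yaml
    ports:
      - "29093:29092"
      - "9093:9093"     # was "9093:9092": map to the port the container actually listens on
    environment:
      # with 3 brokers, the internal topics can be properly replicated
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 3
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 2
    # and list all brokers wherever a single bootstrap server is configured, e.g.
    #   SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker01:29092,broker02:29092,broker03:29092'
```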

Issue with running kafka (without zookeeper)

Description
Error connecting to broker when running kraft

cp-all-in-one/cp-all-in-one-kraft
https://github.com/confluentinc/cp-all-in-one/tree/6.2.0-post/cp-all-in-one-kraft

Troubleshooting
When I run the sample producer, I get an exception:

2021-09-22 12:39:35 WARN  NetworkClient:1060 - [Producer clientId=producer-1] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected

I checked the logs and the broker seems to be up, but I can't connect to it.

The other examples work fine with ZooKeeper, but not this one.

Environment

  • GitHub branch: 6.2.0-post
  • Operating System: mac os
  • Version of Docker: Version: 20.10.8
  • Version of Docker Compose: docker-compose version 1.29.2

Exit code 1 for multiple containers, missing Dockerfile

The following services end with exit code 1:

I tried adding a memory limit, but no luck. Should these settings be placed at a specific point in the docker-compose file?

deploy:
  resources:
    limits:
      cpus: xxx
      memory: xxx
    reservations:
      cpus: xxx
      memory: xxx

Also, I would suggest adding a Dockerfile alongside each docker-compose file.

java.lang.NoSuchFieldError: DEFAULT_SOCKET_CONNECTION_SETUP_TIMEOUT_MS

Kafka Connect does not start up; it dies when it throws the error: java.lang.NoSuchFieldError: DEFAULT_SOCKET_CONNECTION_SETUP_TIMEOUT_MS

Troubleshooting
Identify any existing issues that seem related: https://github.com/confluentinc/cp-all-in-one/issues?q=is%3Aissue

If applicable, please include the output of:
connect1 | [2021-08-08 00:23:22,041] ERROR Stopping due to error(org.apache.kafka.connect.cli.ConnectDistributed)
connect1 | java.lang.NoSuchFieldError: DEFAULT_SOCKET_CONNECTION_SETUP_TIMEOUT_MS
connect1 | at org.apache.kafka.connect.runtime.distributed.DistributedConfig.&lt;init&gt;(DistributedConfig.java:245)
connect1 | at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:95)
connect1 | at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:80)

Environment

  • cp-kafka-connect-6.2.0
  • Operating System: ubuntu
  • Version of Docker: 20.10.7
  • Version of Docker Compose: 1.21.2

Kafka Connect keeps dying after the error above is logged.
ZooKeeper and the broker are running fine.
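The DEFAULT_SOCKET_CONNECTION_SETUP_TIMEOUT_MS field was added to kafka-clients in Apache Kafka 2.7 (KIP-601), so this NoSuchFieldError usually means an older kafka-clients jar on the Connect plugin path or classpath is shadowing the one that ships with 6.2.0. A hypothetical helper to spot duplicates (the example paths are the usual CP image locations; adjust as needed):

```python
from pathlib import Path

def find_kafka_clients_jars(*roots: str) -> list:
    """Recursively list kafka-clients jars under the given roots; more
    than one distinct version on the effective classpath is the usual
    explanation for a NoSuchFieldError like the one above."""
    hits = []
    for root in roots:
        base = Path(root)
        if base.exists():
            hits.extend(str(p) for p in base.rglob("kafka-clients-*.jar"))
    return sorted(hits)

# e.g., inside the connect container:
# find_kafka_clients_jars("/usr/share/java", "/usr/share/confluent-hub-components")
```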

KRAFT mode does not work in cluster

I've got a cluster of 2 nodes:

---
version: '2'
services:

  broker:
    image: confluentinc/cp-kafka:7.0.0
    hostname: broker
    container_name: broker
    ports: 
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092'
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_PROCESS_ROLES: 'broker,controller'
      KAFKA_NODE_ID: 1
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@broker:29093'
      KAFKA_LISTENERS: 'PLAINTEXT://broker:29092,CONTROLLER://broker:29093,PLAINTEXT_HOST://0.0.0.0:9092'
      KAFKA_INTER_BROKER_LISTENER_NAME: 'PLAINTEXT'
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
    volumes:
      - ./update_run.sh:/tmp/update_run.sh
    command: "bash -c 'if [ ! -f /tmp/update_run.sh ]; then echo \"ERROR: Did you forget the update_run.sh file that came with this docker-compose.yml file?\" && exit 1 ; else /tmp/update_run.sh && /etc/confluent/docker/run ; fi'"

  broker2:
    image: confluentinc/cp-kafka:7.0.0
    hostname: broker2
    depends_on:
      - broker
    container_name: broker2
    ports:
      - "9093:9093"
      - "9104:9104"
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://broker2:29092,PLAINTEXT_HOST://localhost:9093'
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9104
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_PROCESS_ROLES: 'broker'
      KAFKA_NODE_ID: 2
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@broker:29093'
      KAFKA_LISTENERS: 'PLAINTEXT://broker2:29092,CONTROLLER://broker2:29093,PLAINTEXT_HOST://0.0.0.0:9093'
      KAFKA_INTER_BROKER_LISTENER_NAME: 'PLAINTEXT'
      KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
      KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
    volumes:
      - ./update_run.sh:/tmp/update_run.sh
    command: "bash -c 'if [ ! -f /tmp/update_run.sh ]; then echo \"ERROR: Did you forget the update_run.sh file that came with this docker-compose.yml file?\" && exit 1 ; else /tmp/update_run.sh && /etc/confluent/docker/run ; fi'"

docker-compose up fails, with INCONSISTENT_CLUSTER_ID errors in FETCH responses.

Any thoughts on why this is happening?
@aesteve @ybyzek any thoughts?
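INCONSISTENT_CLUSTER_ID in FETCH responses generally means the two nodes formatted their log directories with different KRaft cluster ids (for example, each container generated its own id, or a stale volume from an earlier run was reused). Comparing the cluster.id field of each node's meta.properties, under the KAFKA_LOG_DIRS path configured above, confirms it; a minimal parser sketch (file path and location are assumptions based on the compose file above):

```python
def cluster_id(meta_properties: str) -> str:
    """Extract cluster.id from the body of a KRaft meta.properties file
    (e.g. /tmp/kraft-combined-logs/meta.properties in the setup above)."""
    for line in meta_properties.splitlines():
        if line.startswith("cluster.id="):
            return line.split("=", 1)[1].strip()
    raise ValueError("no cluster.id entry found")

# docker exec broker  cat /tmp/kraft-combined-logs/meta.properties
# docker exec broker2 cat /tmp/kraft-combined-logs/meta.properties
# -> the cluster.id values must match for replication fetches to succeed
```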

docker-compose fails to bring up Kafka Connect

Tried to bring up Kafka Connect by running docker-compose up on cp-all-in-one-community

It seems to fail with the compose file provided in this repo.

% docker ps          
CONTAINER ID        IMAGE                                         COMMAND                  CREATED              STATUS                             PORTS                                              NAMES
ca00113eda2a        confluentinc/ksqldb-examples:5.5.0            "bash -c 'echo Waiti…"   About a minute ago   Up 51 seconds                                                                         ksql-datagen
3e00003c6873        confluentinc/cp-ksqldb-cli:5.5.0              "/bin/sh"                About a minute ago   Up 51 seconds                                                                         ksqldb-cli
3e82bd1debf3        confluentinc/cp-ksqldb-server:5.5.0           "/etc/confluent/dock…"   About a minute ago   Up 52 seconds (health: starting)   0.0.0.0:8088->8088/tcp                             ksqldb-server
a83c8c044a74        cnfldemos/kafka-connect-datagen:0.3.2-5.5.0   "/etc/confluent/dock…"   About a minute ago   Up 53 seconds (health: starting)   0.0.0.0:8083->8083/tcp, 9092/tcp                   connect
6646e957d46c        confluentinc/cp-kafka-rest:5.5.0              "/etc/confluent/dock…"   About a minute ago   Up 53 seconds                      0.0.0.0:8082->8082/tcp                             rest-proxy
0b27d5828cc5        confluentinc/cp-schema-registry:5.5.0         "/etc/confluent/dock…"   About a minute ago   Up 53 seconds                      0.0.0.0:8081->8081/tcp                             schema-registry
04e1dca419cd        confluentinc/cp-kafka:5.5.0                   "/etc/confluent/dock…"   About a minute ago   Up 53 seconds                      0.0.0.0:9092->9092/tcp, 0.0.0.0:29092->29092/tcp   broker
324792fec66e        confluentinc/cp-zookeeper:5.5.0               "/etc/confluent/dock…"   About a minute ago   Up 54 seconds                      2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp         zookeeper

% docker ps
CONTAINER ID        IMAGE                                   COMMAND                  CREATED              STATUS                        PORTS                                              NAMES
ca00113eda2a        confluentinc/ksqldb-examples:5.5.0      "bash -c 'echo Waiti…"   About a minute ago   Up About a minute                                                                ksql-datagen
3e00003c6873        confluentinc/cp-ksqldb-cli:5.5.0        "/bin/sh"                About a minute ago   Up About a minute                                                                ksqldb-cli
3e82bd1debf3        confluentinc/cp-ksqldb-server:5.5.0     "/etc/confluent/dock…"   About a minute ago   Up About a minute (healthy)   0.0.0.0:8088->8088/tcp                             ksqldb-server
6646e957d46c        confluentinc/cp-kafka-rest:5.5.0        "/etc/confluent/dock…"   About a minute ago   Up About a minute             0.0.0.0:8082->8082/tcp                             rest-proxy
0b27d5828cc5        confluentinc/cp-schema-registry:5.5.0   "/etc/confluent/dock…"   About a minute ago   Up About a minute             0.0.0.0:8081->8081/tcp                             schema-registry
04e1dca419cd        confluentinc/cp-kafka:5.5.0             "/etc/confluent/dock…"   About a minute ago   Up About a minute             0.0.0.0:9092->9092/tcp, 0.0.0.0:29092->29092/tcp   broker
324792fec66e        confluentinc/cp-zookeeper:5.5.0         "/etc/confluent/dock…"   About a minute ago   Up About a minute             2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp         zookeeper

The logs for the connect container aren't showing anything out of the ordinary, but the application inside is crashing and the container shuts down...
