
kafka-images's Introduction

Docker images for Apache Kafka

This repo provides build files for the Apache Kafka and Confluent Docker images. The images are published on Docker Hub, and sample Docker Compose files are also provided.

Docker Image reference

Information on using the Docker images is available in the documentation.

Build files

Properties

Properties are inherited from a top-level POM. Properties may be overridden on the command line (-Ddocker.registry=testing.example.com:8080/), or in a subproject's POM.

  • docker.skip-build: (Optional) Set to false to include Docker images as part of the build. Default is 'false'.
  • docker.skip-test: (Optional) Set to false to include Docker image integration tests as part of the build. Requires Python 2.7 and tox. Default is 'true'.
  • docker.registry: (Optional) Specify a registry other than placeholder/. Used as DOCKER_REGISTRY during docker build and testing. Trailing / is required. Defaults to placeholder/.
  • docker.tag: (Optional) Tag for built images. Used as DOCKER_TAG during docker build and testing. Defaults to the value of project.version.
  • docker.upstream-registry: (Optional) Registry to pull base images from. Trailing / is required. Used as DOCKER_UPSTREAM_REGISTRY during docker build. Defaults to the value of docker.registry.
  • docker.upstream-tag: (Optional) Use the given tag when pulling base images. Used as DOCKER_UPSTREAM_TAG during docker build. Defaults to the value of docker.tag.
  • docker.test-registry: (Optional) Registry to pull test dependency images from. Trailing / is required. Used as DOCKER_TEST_REGISTRY during testing. Defaults to the value of docker.upstream-registry.
  • docker.test-tag: (Optional) Use the given tag when pulling test dependency images. Used as DOCKER_TEST_TAG during testing. Defaults to the value of docker.upstream-tag.
  • docker.os_type: (Optional) Specify which operating system to use as the base image by using the Dockerfile with this extension. Valid values are ubi8. Default value is ubi8.
  • CONFLUENT_PACKAGES_REPO: (Required) Location of the Confluent Platform packages repository. Depending on the OS of the image you are building, you will need to provide either a Debian or an RPM repository. For example, the repository for the 5.4.0 release of the Debian packages is https://s3-us-west-2.amazonaws.com/confluent-packages-5.4.0/deb/5.4, and the repository for the 5.4.0 release of the RPMs is https://s3-us-west-2.amazonaws.com/confluent-packages-5.4.0/rpm/5.4.
  • CONFLUENT_VERSION: (Required) Specify the full Confluent Platform release version. Example: 5.4.0
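As an illustration of why the trailing / on docker.registry (and the other registry properties) is required, note that the registry value is prepended directly to the image name. A minimal sketch, with a hypothetical registry host and tag:

```shell
# Hypothetical values; DOCKER_REGISTRY keeps its required trailing slash.
DOCKER_REGISTRY="testing.example.com:8080/"
DOCKER_TAG="6.1.1"

# The registry prefix is concatenated directly with the repository name,
# so omitting the trailing slash would corrupt the image reference.
IMAGE="${DOCKER_REGISTRY}confluentinc/cp-kafka:${DOCKER_TAG}"
echo "${IMAGE}"
```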

Building

This project uses maven-assembly-plugin and dockerfile-maven-plugin to build Docker images via Maven.

To build SNAPSHOT images, configure .m2/settings.xml for SNAPSHOT dependencies. These must be available at build time.
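A sketch of what such a settings.xml could look like; the profile id and repository URL below are placeholders, not official Confluent endpoints:

```xml
<!-- ~/.m2/settings.xml: hypothetical profile enabling a SNAPSHOT repository.
     Substitute your actual snapshot repository id and URL. -->
<settings>
  <profiles>
    <profile>
      <id>snapshots</id>
      <repositories>
        <repository>
          <id>snapshot-repo</id>
          <url>https://repository.example.com/snapshots</url>
          <releases><enabled>false</enabled></releases>
          <snapshots><enabled>true</enabled></snapshots>
        </repository>
      </repositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>snapshots</activeProfile>
  </activeProfiles>
</settings>
```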

mvn clean package -Pdocker -DskipTests # Build local images

kafka-images's People

Contributors

amitkgupta, andrewegel, angg98, confluentjenkins, confluentsemaphore, cprovencher, davetroiano, dongnuo123, elismaga, gracechensd, haeuserd, im-pratham, janjwerner-confluent, jkao97, kagarwal06, kc596, kkonstantine, lyoung-confluent, mikebin, patrick-premont, rigelbm, samehtawfik, sdandu-gh, shivsundarr, stanislavkozlovski, stejani-cflt, ujjwalkalia, utkarsh5474, vdesabou, xli1996


kafka-images's Issues

[Feature Request] Add support for reading from file to `KAFKA_CONFLUENT_LICENSE` variable

The Confluent license is passed to cp-server via the KAFKA_CONFLUENT_LICENSE variable. However, unlike with the cp-control-center image, the license cannot be read from a file, e.g. a file set up using secrets.

This forces us to create a custom image based on cp-server with a simple entrypoint file that just reads the variable contents from a specified file.
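The workaround described above can be sketched as a small wrapper entrypoint. The KAFKA_CONFLUENT_LICENSE_FILE variable name is an assumption following the common *_FILE convention used by many official images; it is not an existing cp-server feature:

```shell
# Write a hypothetical wrapper entrypoint that reads the license from a
# file (if KAFKA_CONFLUENT_LICENSE_FILE is set) before handing off.
cat > /tmp/license-entrypoint.sh <<'EOF'
#!/usr/bin/env bash
set -e
if [ -n "${KAFKA_CONFLUENT_LICENSE_FILE:-}" ]; then
  export KAFKA_CONFLUENT_LICENSE="$(cat "${KAFKA_CONFLUENT_LICENSE_FILE}")"
fi
# In the real image this would be: exec /etc/confluent/docker/run "$@"
exec "$@"
EOF
chmod +x /tmp/license-entrypoint.sh

# Demo: the license value lands in the environment of the wrapped process.
echo "demo-license-key" > /tmp/license.txt
KAFKA_CONFLUENT_LICENSE_FILE=/tmp/license.txt \
  /tmp/license-entrypoint.sh printenv KAFKA_CONFLUENT_LICENSE
```

In a custom image, the script would be COPYed in, marked executable, and set as the ENTRYPOINT wrapping the stock /etc/confluent/docker/run.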

It would be really nice if Confluent could support reading the license from a file as well as from a string for cp-server, as is already done for cp-control-center. Ideally, every component that requires a license would behave the same way.

Thanks!

Permission denied on custom configuration management

Hi. I'm having an issue running custom configuration management as described in the documentation. In the following I used CP 5.5.3. My Dockerfile looks like this:

ARG CONFLUENT_DOCKER_TAG

FROM confluentinc/cp-kafka-connect-base:${CONFLUENT_DOCKER_TAG}

COPY include/etc/confluent/docker /etc/confluent/docker

RUN wget -qO- --content-disposition https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-olingo2-kafka-connector/0.9.0/camel-olingo2-kafka-connector-0.9.0-package.tar.gz | tar xvzf - -C /usr/share/java

RUN confluent-hub install --no-prompt castorm/kafka-connect-http:0.7.6 && \
    confluent-hub install --no-prompt confluentinc/kafka-connect-http:1.3.0 && \
    confluent-hub install --no-prompt jcustenborder/kafka-connect-transform-xml:0.1.0.18 && \
    confluent-hub install --no-prompt mmolimar/kafka-connect-fs:1.3.0

The connect service in my docker-compose file is this:

connect:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        CONFLUENT_DOCKER_TAG: ${CONFLUENT_DOCKER_TAG}
    image: mediation/kafka-connect:${CONFLUENT_DOCKER_TAG}
    hostname: connect
    container_name: connect
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - "8083:8083"
    environment:
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
      CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_ZOOKEEPER_CONNECT: "zookeeper:2181"
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR

When I run docker-compose up --build, I get this:

connect            | ===> User
connect            | uid=0(root) gid=0(root) groups=0(root)
connect            | ===> Configuring ...
connect            | /etc/confluent/docker/run: line 26: /etc/confluent/docker/configure: Permission denied

If I run the same with the latest CP version, I get this:

connect            | ===> User
connect            | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
connect            | ===> Configuring ...
connect            | /etc/confluent/docker/run: line 26: /etc/confluent/docker/configure: Permission denied

I saw in both the latest and 5.5.3 kafka-connect-base Dockerfiles that the COPY is done the same way:

COPY --chown=appuser:appuser include/etc/confluent/docker /etc/confluent/docker

Out of curiosity I tried the same line in my Connect Dockerfile (using uid:gid) with the latest version, but I still get:

connect            | ===> User
connect            | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
connect            | ===> Configuring ...
connect            | /etc/confluent/docker/run: line 26: /etc/confluent/docker/configure: Permission denied

Which is weird, because if I override the CMD directive in my Dockerfile with an ls, I can see:

connect            | -rwxr-xr-x 1 appuser appuser 1199 Mar 17 02:50 apply-mesos-overrides
connect            | -rw-r--r-- 1 appuser appuser  775 Mar 17 02:36 bash-config
connect            | -rw-rw-r-- 1 appuser appuser 2160 Apr 19 09:02 configure
connect            | drwxrwxr-x 2 appuser appuser   53 Apr 19 10:27 connectors
connect            | -rwxr-xr-x 1 appuser appuser 1161 Mar 17 02:50 ensure
connect            | -rwxr-xr-x 1 appuser appuser 1008 Mar 17 02:50 healthcheck.sh
connect            | -rw-rw-r-- 1 appuser appuser  140 Apr 19 08:37 kafka-connect.properties.template
connect            | -rw-r--r-- 1 appuser appuser 1159 Mar 17 02:50 kafka.properties.template
connect            | -rw-rw-r-- 1 appuser appuser 1839 Apr 19 09:19 launch
connect            | -rw-r--r-- 1 appuser appuser  812 Mar 17 02:50 log4j.properties.template
connect            | -rw-r--r-- 1 appuser appuser 1121 Mar 17 02:36 mesos-setup.sh
connect            | -rwxr-xr-x 1 appuser appuser  935 Mar 17 02:50 run
connect            | -rw-r--r-- 1 appuser appuser  305 Mar 17 02:50 tools-log4j.properties.template

For the record, in my include/etc/confluent/docker I just changed some lines in the configure and launch scripts.

So I'm confused: am I missing something? How am I supposed to do this exactly? Thanks
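One plausible reading of the directory listing above is that the overridden configure and launch scripts were copied without the execute bit (-rw-rw-r--), since COPY preserves the source files' permissions. A hedged sketch of a fix, restoring the bit in the custom Dockerfile:

```dockerfile
# Sketch: COPY keeps host-side permissions, so restore the execute bit
# on the overridden scripts after copying them in.
COPY --chown=appuser:appuser include/etc/confluent/docker /etc/confluent/docker
RUN chmod +x /etc/confluent/docker/configure /etc/confluent/docker/launch
```

Running chmod +x on the host copies of the scripts before building should have the same effect.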

Not able to pass CONFLUENT_LICENSE as env var to Connect worker

I'm not sure that's the right repo to report it, but based on the latest docs for the 6.0 Platform, workers should be able to pass license config to their connectors. However, I haven't found a way to get it to work. No matter what env vars and prefixes I added to my confluentinc/cp-kafka-connect:6.0.0 container, I was not able to see them merged into the final config inside the container and subsequently stored to the _confluent.command topic.

Latest Image for CP-Kafka-Connect has multiple vulnerabilities

While performing a container scan of the kafka-connect image using Twistlock, 11 vulnerabilities were found, similar to the one mentioned in #84.

I have attached below an image showing the issues, along with a table giving the current package version, a description, and the version to upgrade to. Please look into fixing these issues.

(Screenshot of the Twistlock scan results was attached here.)

Findings (condensed from the scan output; the affected layers are the cp-kafka-connect-base, cp-kafka, and cp-base-new 6.1.1 install layers):

  • CVE-2020-28491 (java, high, CVSS 7.5): com.fasterxml.jackson.dataformat:jackson-dataformat-cbor 2.10.5, in the cp-kafka-connect-base layer. Fixed in 2.11.4 / 2.12.1. Unchecked allocation of a byte buffer can cause a java.lang.OutOfMemoryError exception.
  • CVE-2019-20916 (python, high, CVSS 7.5): pip 9.0.3, in the cp-base-new layer. Fixed in 19.2. Directory traversal when a URL is given in an install command, because a Content-Disposition header can contain ../ in a filename (demonstrated by overwriting /root/.ssh/authorized_keys; occurs in _download_http_url in _internal/download.py).
  • CVE-2021-21290 (java, medium, CVSS 5.5): io.netty:netty-codec 4.1.50.Final, in the cp-kafka-connect-base layer. Fixed in 4.1.59.Final. Insecure temp file on Unix-like systems: when netty's multipart decoders store uploads on disk, File.createTempFile creates world-readable files in the shared temporary directory, allowing local information disclosure. Workaround: set java.io.tmpdir or DefaultHttpDataFactory.setBaseDir(...) to a user-private directory.
  • CVE-2021-21295 (java, medium, CVSS 5.9): io.netty:netty-all 4.1.59.Final and io.netty:netty-codec 4.1.59.Final / 4.1.47.Final, across the cp-kafka-connect-base and cp-kafka layers. Fixed in 4.1.60.Final. HTTP/2-to-HTTP/1.1 request smuggling: a Content-Length header in the original HTTP/2 request is not validated by Http2MultiplexHandler, so a request converted via Http2StreamFrameToHttpObjectCodec and proxied to a backend as HTTP/1.1 can smuggle requests inside the body.
  • CVE-2021-21409 (java, medium, CVSS 5.9): io.netty:netty-all 4.1.59.Final and io.netty:netty-codec 4.1.59.Final / 4.1.47.Final, across the cp-kafka-connect-base and cp-kafka layers. Fixed in 4.1.61.Final. A follow-up to CVE-2021-21295: the content-length header is not validated when the request uses a single Http2HeaderFrame with endStream set to true, again enabling request smuggling when proxied as HTTP/1.1.
  • CVE-2018-20200 (java, medium, CVSS 5.9, disputed): com.squareup.okhttp3:okhttp 3.9.0, in the cp-kafka-connect-base layer. CertificatePinner.java in OkHttp 3.x through 3.12.0 allows man-in-the-middle attackers to bypass certificate pinning by changing SSLContext and boolean values while hooking the application. Disputed; see square/okhttp#4967.
{"created":1619741471,"instruction":"RUN |6 ARTIFACT_ID=cp-kafka-connect-base BUILD_NUMBER=4 CONFLUENT_PACKAGES_REPO=https://s3-us-west-2.amazonaws.com/staging-confluent-packages-6.1.1/rpm/6.1 CONFLUENT_VERSION=6.1.1 GIT_COMMIT=a7218c9a PROJECT_VERSION=6.1.1 /bin/sh -c echo "===\u003e Installing ${COMPONENT}..." \u0026\u0026 echo "===\u003e Installing Schema Registry (for Avro jars) ..." \u0026\u0026 yum install -y confluent-schema-registry-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Installing Controlcenter for monitoring interceptors ..." \u0026\u0026 yum install -y confluent-control-center-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Installing Confluent Hub client ..." \u0026\u0026 yum install -y confluent-hub-client-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Cleaning up ..." \u0026\u0026 yum clean all \u0026\u0026 rm -rf /tmp/* \u0026\u0026 echo "===\u003e Setting up ${COMPONENT} dirs ..." \u0026\u0026 mkdir -p /etc/${COMPONENT} /etc/${COMPONENT}/secrets /etc/${COMPONENT}/jars \u0026\u0026 chown appuser:appuser -R /etc/${COMPONENT} \u0026\u0026 chmod -R ag+w /etc/${COMPONENT} /etc/${COMPONENT}/secrets /etc/${COMPONENT}/jars \u0026\u0026 chmod -R ag+w /etc/schema-registry \u0026\u0026 mkdir -p /usr/share/confluent-hub-components \u0026\u0026 chown appuser:appuser -R /usr/share/confluent-hub-components","sizeBytes":495575130,"id":"\u003cmissing\u003e"} CVE-2021-21295 java medium io.netty_netty-codec 4.1.50.Final 5.9 fixed in 4.1.60 Netty is an open-source, asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. In Netty (io.netty:netty-codec-http2) before version 4.1.60.Final there is a vulnerability that enables request smuggling. If a Content-Length header is present in the original HTTP/2 request, the field is not validated by Http2MultiplexHandler as it is propagated up. This is fine as long as the request is not proxied through as HTTP/1.1. 
If the request comes in as an HTTP/2 stream, gets converted into the HTTP/1.1 domain objects (HttpRequest, HttpContent, etc.) via Http2StreamFrameToHttpObjectCodec and then sent up to the child channel's pipeline and proxied through a remote peer as HTTP/1.1 this may result in request smuggling. In a proxy case, users may assume the content-length is validated somehow, which is not the case. If the request is forwarded to a backend channel that is a HTTP/1.1 connection, the Content-Length now has meaning and needs to be checked. An attacker can smuggle requests inside the body as it gets downgraded from HTTP/2 to HTTP/1.1. For an example attack refer to the linked GitHub Advisory. Users are only affected if all of this is true: HTTP2MultiplexCodec or Http2FrameCodec is used, Http2StreamFrameToHttpObjectCodec is used to convert to HTTP/1.1 objects, and these HTTP/1.1 ob
{"created":1619741471,"instruction":"RUN |6 ARTIFACT_ID=cp-kafka-connect-base BUILD_NUMBER=4 CONFLUENT_PACKAGES_REPO=https://s3-us-west-2.amazonaws.com/staging-confluent-packages-6.1.1/rpm/6.1 CONFLUENT_VERSION=6.1.1 GIT_COMMIT=a7218c9a PROJECT_VERSION=6.1.1 /bin/sh -c echo "===\u003e Installing ${COMPONENT}..." \u0026\u0026 echo "===\u003e Installing Schema Registry (for Avro jars) ..." \u0026\u0026 yum install -y confluent-schema-registry-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Installing Controlcenter for monitoring interceptors ..." \u0026\u0026 yum install -y confluent-control-center-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Installing Confluent Hub client ..." \u0026\u0026 yum install -y confluent-hub-client-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Cleaning up ..." \u0026\u0026 yum clean all \u0026\u0026 rm -rf /tmp/* \u0026\u0026 echo "===\u003e Setting up ${COMPONENT} dirs ..." \u0026\u0026 mkdir -p /etc/${COMPONENT} /etc/${COMPONENT}/secrets /etc/${COMPONENT}/jars \u0026\u0026 chown appuser:appuser -R /etc/${COMPONENT} \u0026\u0026 chmod -R ag+w /etc/${COMPONENT} /etc/${COMPONENT}/secrets /etc/${COMPONENT}/jars \u0026\u0026 chmod -R ag+w /etc/schema-registry \u0026\u0026 mkdir -p /usr/share/confluent-hub-components \u0026\u0026 chown appuser:appuser -R /usr/share/confluent-hub-components","sizeBytes":495575130,"id":"\u003cmissing\u003e"} CVE-2021-21409 java medium io.netty_netty-codec 4.1.50.Final 5.9 fixed in 4.1.61 Netty is an open-source, asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. In Netty (io.netty:netty-codec-http2) before version 4.1.61.Final there is a vulnerability that enables request smuggling. The content-length header is not correctly validated if the request only uses a single Http2HeaderFrame with the endStream set to to true. 
This could lead to request smuggling if the request is proxied to a remote peer and translated to HTTP/1.1. This is a followup of GHSA-wm47-8v5p-wjpj/CVE-2021-21295 which did miss to fix this one case. This was fixed as part of 4.1.61.Final.
{"created":1619741471,"instruction":"RUN |6 ARTIFACT_ID=cp-kafka-connect-base BUILD_NUMBER=4 CONFLUENT_PACKAGES_REPO=https://s3-us-west-2.amazonaws.com/staging-confluent-packages-6.1.1/rpm/6.1 CONFLUENT_VERSION=6.1.1 GIT_COMMIT=a7218c9a PROJECT_VERSION=6.1.1 /bin/sh -c echo "===\u003e Installing ${COMPONENT}..." \u0026\u0026 echo "===\u003e Installing Schema Registry (for Avro jars) ..." \u0026\u0026 yum install -y confluent-schema-registry-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Installing Controlcenter for monitoring interceptors ..." \u0026\u0026 yum install -y confluent-control-center-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Installing Confluent Hub client ..." \u0026\u0026 yum install -y confluent-hub-client-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Cleaning up ..." \u0026\u0026 yum clean all \u0026\u0026 rm -rf /tmp/* \u0026\u0026 echo "===\u003e Setting up ${COMPONENT} dirs ..." \u0026\u0026 mkdir -p /etc/${COMPONENT} /etc/${COMPONENT}/secrets /etc/${COMPONENT}/jars \u0026\u0026 chown appuser:appuser -R /etc/${COMPONENT} \u0026\u0026 chmod -R ag+w /etc/${COMPONENT} /etc/${COMPONENT}/secrets /etc/${COMPONENT}/jars \u0026\u0026 chmod -R ag+w /etc/schema-registry \u0026\u0026 mkdir -p /usr/share/confluent-hub-components \u0026\u0026 chown appuser:appuser -R /usr/share/confluent-hub-components","sizeBytes":495575130,"id":"\u003cmissing\u003e"} CVE-2021-21290 java medium io.netty_netty-codec 4.1.48.Final 5.5 fixed in 4.1.59 Netty is an open-source, asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. In Netty before version 4.1.59.Final there is a vulnerability on Unix-like systems involving an insecure temp file. When netty's multipart decoders are used local information disclosure can occur via the local system temporary directory if temporary storing uploads on the disk is enabled. 
On unix-like systems, the temporary directory is shared between all user. As such, writing to this directory using APIs that do not explicitly set the file/directory permissions can lead to information disclosure. Of note, this does not impact modern MacOS Operating Systems. The method "File.createTempFile" on unix-like systems creates a random file, but, by default will create this file with the permissions "-rw-r--r--". Thus, if sensitive information is written to this file, other local users can read this information. This is the case in netty's "AbstractDiskHttpData" is vulnerable. This has been fixed in version 4.1.59.Final. As a workaround, one may specify your own "java.io.tmpdir" when you start the JVM or use "DefaultHttpDataFactory.setBaseDir(...)" to set the directory to something that is only readable by the current user.
{"created":1619741471,"instruction":"RUN |6 ARTIFACT_ID=cp-kafka-connect-base BUILD_NUMBER=4 CONFLUENT_PACKAGES_REPO=https://s3-us-west-2.amazonaws.com/staging-confluent-packages-6.1.1/rpm/6.1 CONFLUENT_VERSION=6.1.1 GIT_COMMIT=a7218c9a PROJECT_VERSION=6.1.1 /bin/sh -c echo "===\u003e Installing ${COMPONENT}..." \u0026\u0026 echo "===\u003e Installing Schema Registry (for Avro jars) ..." \u0026\u0026 yum install -y confluent-schema-registry-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Installing Controlcenter for monitoring interceptors ..." \u0026\u0026 yum install -y confluent-control-center-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Installing Confluent Hub client ..." \u0026\u0026 yum install -y confluent-hub-client-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Cleaning up ..." \u0026\u0026 yum clean all \u0026\u0026 rm -rf /tmp/* \u0026\u0026 echo "===\u003e Setting up ${COMPONENT} dirs ..." \u0026\u0026 mkdir -p /etc/${COMPONENT} /etc/${COMPONENT}/secrets /etc/${COMPONENT}/jars \u0026\u0026 chown appuser:appuser -R /etc/${COMPONENT} \u0026\u0026 chmod -R ag+w /etc/${COMPONENT} /etc/${COMPONENT}/secrets /etc/${COMPONENT}/jars \u0026\u0026 chmod -R ag+w /etc/schema-registry \u0026\u0026 mkdir -p /usr/share/confluent-hub-components \u0026\u0026 chown appuser:appuser -R /usr/share/confluent-hub-components","sizeBytes":495575130,"id":"\u003cmissing\u003e"} CVE-2021-21295 java medium io.netty_netty-codec 4.1.48.Final 5.9 fixed in 4.1.60 Netty is an open-source, asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. In Netty (io.netty:netty-codec-http2) before version 4.1.60.Final there is a vulnerability that enables request smuggling. If a Content-Length header is present in the original HTTP/2 request, the field is not validated by Http2MultiplexHandler as it is propagated up. This is fine as long as the request is not proxied through as HTTP/1.1. 
If the request comes in as an HTTP/2 stream, gets converted into the HTTP/1.1 domain objects (HttpRequest, HttpContent, etc.) via Http2StreamFrameToHttpObjectCodec and then sent up to the child channel's pipeline and proxied through a remote peer as HTTP/1.1 this may result in request smuggling. In a proxy case, users may assume the content-length is validated somehow, which is not the case. If the request is forwarded to a backend channel that is a HTTP/1.1 connection, the Content-Length now has meaning and needs to be checked. An attacker can smuggle requests inside the body as it gets downgraded from HTTP/2 to HTTP/1.1. For an example attack refer to the linked GitHub Advisory. Users are only affected if all of this is true: HTTP2MultiplexCodec or Http2FrameCodec is used, Http2StreamFrameToHttpObjectCodec is used to convert to HTTP/1.1 objects, and these HTTP/1.1 ob
{"created":1619741471,"instruction":"RUN |6 ARTIFACT_ID=cp-kafka-connect-base BUILD_NUMBER=4 CONFLUENT_PACKAGES_REPO=https://s3-us-west-2.amazonaws.com/staging-confluent-packages-6.1.1/rpm/6.1 CONFLUENT_VERSION=6.1.1 GIT_COMMIT=a7218c9a PROJECT_VERSION=6.1.1 /bin/sh -c echo "===\u003e Installing ${COMPONENT}..." \u0026\u0026 echo "===\u003e Installing Schema Registry (for Avro jars) ..." \u0026\u0026 yum install -y confluent-schema-registry-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Installing Controlcenter for monitoring interceptors ..." \u0026\u0026 yum install -y confluent-control-center-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Installing Confluent Hub client ..." \u0026\u0026 yum install -y confluent-hub-client-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Cleaning up ..." \u0026\u0026 yum clean all \u0026\u0026 rm -rf /tmp/* \u0026\u0026 echo "===\u003e Setting up ${COMPONENT} dirs ..." \u0026\u0026 mkdir -p /etc/${COMPONENT} /etc/${COMPONENT}/secrets /etc/${COMPONENT}/jars \u0026\u0026 chown appuser:appuser -R /etc/${COMPONENT} \u0026\u0026 chmod -R ag+w /etc/${COMPONENT} /etc/${COMPONENT}/secrets /etc/${COMPONENT}/jars \u0026\u0026 chmod -R ag+w /etc/schema-registry \u0026\u0026 mkdir -p /usr/share/confluent-hub-components \u0026\u0026 chown appuser:appuser -R /usr/share/confluent-hub-components","sizeBytes":495575130,"id":"\u003cmissing\u003e"} CVE-2021-21409 java medium io.netty_netty-codec 4.1.48.Final 5.9 fixed in 4.1.61 Netty is an open-source, asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. In Netty (io.netty:netty-codec-http2) before version 4.1.61.Final there is a vulnerability that enables request smuggling. The content-length header is not correctly validated if the request only uses a single Http2HeaderFrame with the endStream set to to true. 
This could lead to request smuggling if the request is proxied to a remote peer and translated to HTTP/1.1. This is a followup of GHSA-wm47-8v5p-wjpj/CVE-2021-21295 which did miss to fix this one case. This was fixed as part of 4.1.61.Final.
{"created":1619741471,"instruction":"RUN |6 ARTIFACT_ID=cp-kafka-connect-base BUILD_NUMBER=4 CONFLUENT_PACKAGES_REPO=https://s3-us-west-2.amazonaws.com/staging-confluent-packages-6.1.1/rpm/6.1 CONFLUENT_VERSION=6.1.1 GIT_COMMIT=a7218c9a PROJECT_VERSION=6.1.1 /bin/sh -c echo "===\u003e Installing ${COMPONENT}..." \u0026\u0026 echo "===\u003e Installing Schema Registry (for Avro jars) ..." \u0026\u0026 yum install -y confluent-schema-registry-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Installing Controlcenter for monitoring interceptors ..." \u0026\u0026 yum install -y confluent-control-center-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Installing Confluent Hub client ..." \u0026\u0026 yum install -y confluent-hub-client-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Cleaning up ..." \u0026\u0026 yum clean all \u0026\u0026 rm -rf /tmp/* \u0026\u0026 echo "===\u003e Setting up ${COMPONENT} dirs ..." \u0026\u0026 mkdir -p /etc/${COMPONENT} /etc/${COMPONENT}/secrets /etc/${COMPONENT}/jars \u0026\u0026 chown appuser:appuser -R /etc/${COMPONENT} \u0026\u0026 chmod -R ag+w /etc/${COMPONENT} /etc/${COMPONENT}/secrets /etc/${COMPONENT}/jars \u0026\u0026 chmod -R ag+w /etc/schema-registry \u0026\u0026 mkdir -p /usr/share/confluent-hub-components \u0026\u0026 chown appuser:appuser -R /usr/share/confluent-hub-components","sizeBytes":495575130,"id":"\u003cmissing\u003e"} CVE-2021-21290 java medium io.netty_netty-codec 4.1.47.Final 5.5 fixed in 4.1.59 Netty is an open-source, asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. In Netty before version 4.1.59.Final there is a vulnerability on Unix-like systems involving an insecure temp file. When netty's multipart decoders are used local information disclosure can occur via the local system temporary directory if temporary storing uploads on the disk is enabled. 
On unix-like systems, the temporary directory is shared between all user. As such, writing to this directory using APIs that do not explicitly set the file/directory permissions can lead to information disclosure. Of note, this does not impact modern MacOS Operating Systems. The method "File.createTempFile" on unix-like systems creates a random file, but, by default will create this file with the permissions "-rw-r--r--". Thus, if sensitive information is written to this file, other local users can read this information. This is the case in netty's "AbstractDiskHttpData" is vulnerable. This has been fixed in version 4.1.59.Final. As a workaround, one may specify your own "java.io.tmpdir" when you start the JVM or use "DefaultHttpDataFactory.setBaseDir(...)" to set the directory to something that is only readable by the current user.
{"created":1619741471,"instruction":"RUN |6 ARTIFACT_ID=cp-kafka-connect-base BUILD_NUMBER=4 CONFLUENT_PACKAGES_REPO=https://s3-us-west-2.amazonaws.com/staging-confluent-packages-6.1.1/rpm/6.1 CONFLUENT_VERSION=6.1.1 GIT_COMMIT=a7218c9a PROJECT_VERSION=6.1.1 /bin/sh -c echo "===\u003e Installing ${COMPONENT}..." \u0026\u0026 echo "===\u003e Installing Schema Registry (for Avro jars) ..." \u0026\u0026 yum install -y confluent-schema-registry-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Installing Controlcenter for monitoring interceptors ..." \u0026\u0026 yum install -y confluent-control-center-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Installing Confluent Hub client ..." \u0026\u0026 yum install -y confluent-hub-client-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Cleaning up ..." \u0026\u0026 yum clean all \u0026\u0026 rm -rf /tmp/* \u0026\u0026 echo "===\u003e Setting up ${COMPONENT} dirs ..." \u0026\u0026 mkdir -p /etc/${COMPONENT} /etc/${COMPONENT}/secrets /etc/${COMPONENT}/jars \u0026\u0026 chown appuser:appuser -R /etc/${COMPONENT} \u0026\u0026 chmod -R ag+w /etc/${COMPONENT} /etc/${COMPONENT}/secrets /etc/${COMPONENT}/jars \u0026\u0026 chmod -R ag+w /etc/schema-registry \u0026\u0026 mkdir -p /usr/share/confluent-hub-components \u0026\u0026 chown appuser:appuser -R /usr/share/confluent-hub-components","sizeBytes":495575130,"id":"\u003cmissing\u003e"} PRISMA-2021-0055 java low commons-codec_commons-codec 1.11 0 fixed in 1.13 Versions <1.13 of this package are vulnerable to Information Exposure. When there is no byte array value that can be encoded into a string, the Base32 implementation does not reject it, and instead decodes it into an arbitrary value which can be re-encoded again using the same implementation. This allows for information exposure exploits such as tunneling additional information via seemingly valid base 32 strings.
{"created":1619741471,"instruction":"RUN |6 ARTIFACT_ID=cp-kafka-connect-base BUILD_NUMBER=4 CONFLUENT_PACKAGES_REPO=https://s3-us-west-2.amazonaws.com/staging-confluent-packages-6.1.1/rpm/6.1 CONFLUENT_VERSION=6.1.1 GIT_COMMIT=a7218c9a PROJECT_VERSION=6.1.1 /bin/sh -c echo "===\u003e Installing ${COMPONENT}..." \u0026\u0026 echo "===\u003e Installing Schema Registry (for Avro jars) ..." \u0026\u0026 yum install -y confluent-schema-registry-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Installing Controlcenter for monitoring interceptors ..." \u0026\u0026 yum install -y confluent-control-center-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Installing Confluent Hub client ..." \u0026\u0026 yum install -y confluent-hub-client-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Cleaning up ..." \u0026\u0026 yum clean all \u0026\u0026 rm -rf /tmp/* \u0026\u0026 echo "===\u003e Setting up ${COMPONENT} dirs ..." \u0026\u0026 mkdir -p /etc/${COMPONENT} /etc/${COMPONENT}/secrets /etc/${COMPONENT}/jars \u0026\u0026 chown appuser:appuser -R /etc/${COMPONENT} \u0026\u0026 chmod -R ag+w /etc/${COMPONENT} /etc/${COMPONENT}/secrets /etc/${COMPONENT}/jars \u0026\u0026 chmod -R ag+w /etc/schema-registry \u0026\u0026 mkdir -p /usr/share/confluent-hub-components \u0026\u0026 chown appuser:appuser -R /usr/share/confluent-hub-components","sizeBytes":495575130,"id":"\u003cmissing\u003e"} CVE-2021-28163 java low org.eclipse.jetty_jetty-io 9.4.38.v20210224 2.7 fixed in 9.4.39 In Eclipse Jetty 9.4.32 to 9.4.38, 10.0.0.beta2 to 10.0.1, and 11.0.0.beta2 to 11.0.1, if a user uses a webapps directory that is a symlink, the contents of the webapps directory is deployed as a static webapp, inadvertently serving the webapps themselves and anything else that might be in that directory.
{"created":1619741471,"instruction":"RUN |6 ARTIFACT_ID=cp-kafka-connect-base BUILD_NUMBER=4 CONFLUENT_PACKAGES_REPO=https://s3-us-west-2.amazonaws.com/staging-confluent-packages-6.1.1/rpm/6.1 CONFLUENT_VERSION=6.1.1 GIT_COMMIT=a7218c9a PROJECT_VERSION=6.1.1 /bin/sh -c echo "===\u003e Installing ${COMPONENT}..." \u0026\u0026 echo "===\u003e Installing Schema Registry (for Avro jars) ..." \u0026\u0026 yum install -y confluent-schema-registry-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Installing Controlcenter for monitoring interceptors ..." \u0026\u0026 yum install -y confluent-control-center-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Installing Confluent Hub client ..." \u0026\u0026 yum install -y confluent-hub-client-${CONFLUENT_VERSION} \u0026\u0026 echo "===\u003e Cleaning up ..." \u0026\u0026 yum clean all \u0026\u0026 rm -rf /tmp/* \u0026\u0026 echo "===\u003e Setting up ${COMPONENT} dirs ..." \u0026\u0026 mkdir -p /etc/${COMPONENT} /etc/${COMPONENT}/secrets /etc/${COMPONENT}/jars \u0026\u0026 chown appuser:appuser -R /etc/${COMPONENT} \u0026\u0026 chmod -R ag+w /etc/${COMPONENT} /etc/${COMPONENT}/secrets /etc/${COMPONENT}/jars \u0026\u0026 chmod -R ag+w /etc/schema-registry \u0026\u0026 mkdir -p /usr/share/confluent-hub-components \u0026\u0026 chown appuser:appuser -R /usr/share/confluent-hub-components","sizeBytes":495575130,"id":"\u003cmissing\u003e"} CVE-2020-8908 java low com.google.guava_guava 28.1-jre 3.3 fixed in 30.0 A temp directory creation vulnerability exists in all versions of Guava, allowing an attacker with access to the machine to potentially access data in a temporary directory created by the Guava API com.google.common.io.Files.createTempDir(). By default, on unix-like systems, the created directory is world-readable (readable by an attacker with access to the system). The method in question has been marked @deprecated in versions 30.0 and later and should not be used. 
For Android developers, we recommend choosing a temporary directory API provided by Android, such as context.getCacheDir(). For other Java developers, we recommend migrating to the Java 7 API java.nio.file.Files.createTempDirectory() which explicitly configures permissions of 700, or configuring the Java runtime's java.io.tmpdir system property to point to a location whose permissions are appropriately configured.

SSL is not enabled if advertised protocol map name doesn't end in SSL

The code assumes that the advertised listener name ends in SSL

https://github.com/confluentinc/kafka-images/blob/master/kafka/include/etc/confluent/docker/configure

# Set if ADVERTISED_LISTENERS has SSL:// or SASL_SSL:// endpoints.
if [[ $KAFKA_ADVERTISED_LISTENERS == *"SSL://"* ]]
then
  echo "SSL is enabled."

If I use
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:SASL_PLAINTEXT,EXTERNAL:SASL_SSL
KAFKA_ADVERTISED_LISTENERS: INTERNAL://broker-4:9092,EXTERNAL://broker-4:9093

SSL is not enabled.

Can this be changed to look at KAFKA_LISTENER_SECURITY_PROTOCOL_MAP and check whether the protocol after the : is SSL or SASL_SSL?

I think that would allow for such a scenario, or am I missing something?
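A sketch of what that check could look like (hypothetical; the real configure script may need to handle more cases, and the map value below is just the example from this report):

```shell
#!/bin/sh
# Example from this report: no SSL:// endpoint appears in the advertised
# listeners, but the EXTERNAL listener maps to SASL_SSL.
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP="INTERNAL:SASL_PLAINTEXT,EXTERNAL:SASL_SSL"

# Proposed check: inspect the protocol after each ':' in the map instead of
# looking for SSL:// in KAFKA_ADVERTISED_LISTENERS.
case ",$KAFKA_LISTENER_SECURITY_PROTOCOL_MAP," in
  *:SSL,*|*:SASL_SSL,*) echo "SSL is enabled." ;;
  *)                    echo "SSL is not enabled." ;;
esac
```

With the map above this prints "SSL is enabled.", whereas the current check against KAFKA_ADVERTISED_LISTENERS does not trigger.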

Thanks,

Neil

Starting container process caused: chdir to cwd (\"/home/appuser\") set in config.json failed: permission denied"

Image: confluentinc/cp-kafka:6.1.0

We are migrating from version 5.5.1 to 6.1.0.

During pod startup, creating the container process fails with the error message below:

Error: container create failed: time="2021-03-10T07:30:19Z" level=error msg="container_linux.go:366: starting container process caused: chdir to cwd ("/home/appuser") set in config.json failed: permission denied"

Can you investigate please? Thanks.
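For anyone hitting this while it is investigated, a derived image can relax the ownership/permissions of the home directory. This is only a sketch, under the assumption that the failure comes from /home/appuser not being accessible to the UID the pod runs as (e.g. a cluster-assigned UID); the actual root cause may differ:

```dockerfile
# Hypothetical workaround image, not an official fix: make /home/appuser
# traversable by containers that run with an arbitrary, non-appuser UID.
FROM confluentinc/cp-kafka:6.1.0
USER root
RUN chgrp -R root /home/appuser && chmod -R g+rwX /home/appuser
USER appuser
```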

Container cp-zookeeper doesn't allow the use of custom node IDs

Introduction

I believe that I have found a bug related to the templates used to generate the Zookeeper configuration files, when it comes to clustering.

As stated in the Zookeeper's Admin documentation, under the server.x=[hostname]:nnnnn[:nnnnn] configuration key:

When the server starts up, it determines which server it is by looking for the file myid in the data directory. That file contains the server number, in ASCII, and it should match x in server.x in the left hand side of this setting.

Unfortunately, there is a major discrepancy that can be found in the Zookeeper templated config files of this project, which prevents users from using custom IDs for their Zookeeper nodes:

  1. In file zookeeper/include/etc/confluent/docker/configure at line 31 (link), we can see that the content of the environment variable ZOOKEEPER_SERVER_ID will be written to the myid file of the Zookeeper node. This ID can be any value between 1 and 255 and can be freely defined by the user. This is the expected behaviour.

  2. In file zookeeper/include/etc/confluent/docker/zookeeper.properties.template at line 84 (link), the section of the script that generates the server.x= configurations will automatically assign the loop.index as the server ID/number to each server listed in the environment variable ZOOKEEPER_SERVERS. This is WRONG.

Working example

In most cases, the current setup will work just fine, because users will tend to use node IDs that start at index 1.

Zookeeper node ZOOKEEPER_SERVER_ID ZOOKEEPER_SERVERS
zk1.example.com (10.0.0.1) 1 10.0.0.1:2888:3888,10.0.0.2:2888:3888,10.0.0.3:2888:3888
zk2.example.com (10.0.0.2) 2 10.0.0.1:2888:3888,10.0.0.2:2888:3888,10.0.0.3:2888:3888
zk3.example.com (10.0.0.3) 3 10.0.0.1:2888:3888,10.0.0.2:2888:3888,10.0.0.3:2888:3888

The generated configuration file will contain the following values:

# /etc/kafka/zookeeper.properties
server.1=10.0.0.1:2888:3888
server.2=10.0.0.2:2888:3888
server.3=10.0.0.3:2888:3888

Result: Zookeeper starts and runs just fine ✅

Non-functional example

Now, let's reuse the same situation as above with only a slight variation: let's start node IDs at 101.

Zookeeper node ZOOKEEPER_SERVER_ID ZOOKEEPER_SERVERS
zk1.example.com (10.0.0.1) 101 10.0.0.1:2888:3888,10.0.0.2:2888:3888,10.0.0.3:2888:3888
zk2.example.com (10.0.0.2) 102 10.0.0.1:2888:3888,10.0.0.2:2888:3888,10.0.0.3:2888:3888
zk3.example.com (10.0.0.3) 103 10.0.0.1:2888:3888,10.0.0.2:2888:3888,10.0.0.3:2888:3888

Because of the server.{{ loop.index }}={{ server }} statement, the generated configuration file will contain the following values (note that the node IDs don't match!!!):

# /etc/kafka/zookeeper.properties
server.1=10.0.0.1:2888:3888
server.2=10.0.0.2:2888:3888
server.3=10.0.0.3:2888:3888

Result: Zookeeper will NOT start properly and will never be able to establish quorum with the other nodes in the cluster ❌

Since cp-zookeeper allows users to define a custom node ID via ZOOKEEPER_SERVER_ID, the zookeeper.properties file should be generated with these custom IDs:

# /etc/kafka/zookeeper.properties
server.101=10.0.0.1:2888:3888
server.102=10.0.0.2:2888:3888
server.103=10.0.0.3:2888:3888

Conclusion

In summary, the core of the problem resides in /zookeeper/include/etc/confluent/docker/zookeeper.properties.template at line 84 (link). The loop.index variable should NOT be used to generate this file.

There should be a mechanism built into the syntax of ZOOKEEPER_SERVERS to define the node IDs of each member of the cluster. For instance:

ZOOKEEPER_SERVERS="10.0.0.1:2888:3888:101,10.0.0.2:2888:3888:102,10.0.0.3:2888:3888:103"
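For illustration, that first proposal could be parsed with something like the following (a hypothetical sketch, not the actual template logic; it assumes the node ID is appended as a fourth colon-separated field):

```shell
#!/bin/sh
ZOOKEEPER_SERVERS="10.0.0.1:2888:3888:101,10.0.0.2:2888:3888:102,10.0.0.3:2888:3888:103"

# Emit one server.<id>=<host>:<port1>:<port2> line per entry, taking the ID
# from the last field instead of from the loop index.
IFS=','
for server in $ZOOKEEPER_SERVERS; do
  id=${server##*:}        # text after the last ':' -> the node ID
  endpoint=${server%:*}   # everything before it    -> host:2888:3888
  echo "server.${id}=${endpoint}"
done
```

This prints server.101=10.0.0.1:2888:3888 and so on, matching the custom IDs.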

Or, a dedicated environment variable could be used to override this default ID mapping:

ZOOKEEPER_SERVERS_ID_MAPPING="10.0.0.1:101,10.0.0.2:102,10.0.0.3:103"

Let me know what you think about this, and whether you have encountered the same issue with other users in the past.
Thanks!

Allow override of Zookeeper dataDir and dataLogDir

Original issue here
confluentinc/cp-docker-images#608

But I think this repo is now the appropriate place for it.

Context:
Would like to use CP docker images to deploy on Azure and override ZOOKEEPER_DATA_DIR and ZOOKEEPER_DATA_LOG_DIR to mount to an Azure Persistent Volume.

This behavior (allowing overrides) would match zookeeper:3.5.7 image which appears to allow overrides for these two vars.

cp-kafka-connect: Custom ConfigProvider ClassNotFoundExceptions on startup

Hey, I just tried the cp-kafka-connect docker image and I want to install some custom ConfigProviders. The Confluent documentation clearly describes how to proceed with that:

To install the custom ConfigProvider implementation, add a new subdirectory containing the JAR files to the directory that is in Connect's plugin.path and (re)start the Connect workers. When the Connect worker starts up it instantiates all ConfigProvider implementations specified in the worker configuration.

Source: https://docs.confluent.io/platform/current/connect/security.html

Unfortunately I noticed that this does not work, even though the startup logs show that the .jar files are indeed loaded. I'm wondering whether this correlates with the issue described in confluentinc/cp-docker-images#815. I tried all the suggested solutions but none worked for me. The problem seems to be common in the community, as there are already blog posts describing the issues and workarounds (again, none worked for me):


To reproduce:

Dockerfile I used:

FROM confluentinc/cp-kafka-connect:6.2.0

ENV KUBERNETES_CONFIG_PROVIDER_VERSION=0.1.0
ENV KUBERNETES_CONFIG_PROVIDER_DOWNLOAD=https://github.com/strimzi/kafka-kubernetes-config-provider/releases/download/${KUBERNETES_CONFIG_PROVIDER_VERSION}/kafka-kubernetes-config-provider-${KUBERNETES_CONFIG_PROVIDER_VERSION}.tar.gz
ENV KUBERNETES_CONFIG_PROVIDER_TARGET_PATH=/usr/share/java/strimzi-kubernetes-config-provider

ENV ENV_CONFIG_PROVIDER_VERSION=0.1.0
ENV ENV_CONFIG_PROVIDER_DOWNLOAD=https://github.com/strimzi/kafka-env-var-config-provider/releases/download/${ENV_CONFIG_PROVIDER_VERSION}/kafka-env-var-config-provider-${ENV_CONFIG_PROVIDER_VERSION}.tar.gz
ENV ENV_CONFIG_PROVIDER_TARGET_PATH=/usr/share/java/strimzi-env-var-config-provider

RUN set -e; \
    echo "===> Installing Kubernetes Config provider"; \
    mkdir ${KUBERNETES_CONFIG_PROVIDER_TARGET_PATH}; \
    curl -sLS ${KUBERNETES_CONFIG_PROVIDER_DOWNLOAD} | tar -xvz -C ${KUBERNETES_CONFIG_PROVIDER_TARGET_PATH} --strip-components=2;

RUN set -e; \
    echo "===> Installing Env Config provider"; \
    mkdir ${ENV_CONFIG_PROVIDER_TARGET_PATH}; \
    curl -sLS ${ENV_CONFIG_PROVIDER_DOWNLOAD} | tar -xvz -C ${ENV_CONFIG_PROVIDER_TARGET_PATH} --strip-components=2;

Jar files end up where they are supposed to be:

[screenshot: the extracted jars in the target directories]

The worker is then started with something like command = ["/bin/connect-distributed", "/tmp/workers.properties"]

workers.properties:

bootstrap.servers=broker:29092
security.protocol=PLAINTEXT
sasl.mechanism=PLAIN
client.id=connect
producer.bootstrap.servers=broker:29092
producer.security.protocol=PLAINTEXT
producer.sasl.mechanism=PLAIN
producer.compression.type=gzip
group.id=connect
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
config.storage.topic=connect_configs
offset.storage.topic=connect_offsets
status.storage.topic=connect_statuses
config.providers=env
# config.providers.secrets.class=io.strimzi.kafka.KubernetesSecretConfigProvider
# config.providers.configmaps.class=io.strimzi.kafka.KubernetesConfigMapConfigProvider
config.providers.env.class=io.strimzi.kafka.EnvVarConfigProvider
plugin.path=/usr/share/java,/usr/share/confluent-hub-components

Relevant log messages during startup & failure:

[2021-09-10 19:39:22,503] INFO Loading plugin from: /usr/share/java/strimzi-kubernetes-config-provider (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:246)
[2021-09-10 19:39:22,902] INFO Registered loader: PluginClassLoader{pluginLocation=file:/usr/share/java/strimzi-kubernetes-config-provider/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:269)


ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed:86)
org.apache.kafka.common.config.ConfigException: Invalid value java.lang.ClassNotFoundException: io.strimzi.kafka.KubernetesConfigMapConfigProvider for configuration Invalid config:io.strimzi.kafka.KubernetesConfigMapConfigProvider ClassNotFoundException exception occurred
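For completeness, one commonly suggested workaround amounts to putting the provider jars on the worker's top-level classpath rather than relying only on plugin.path, along the lines of this sketch (this is among the approaches that did not work in my case):

```dockerfile
FROM confluentinc/cp-kafka-connect:6.2.0
# Expose the ConfigProvider jars on the worker's top-level classpath
# (kafka-run-class appends $CLASSPATH), bypassing plugin isolation.
ENV CLASSPATH="/usr/share/java/strimzi-env-var-config-provider/*"
```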

Env KAFKA_PARTITION_ASSIGNMENT_STRATEGY ignored by Kafka

We're trying to change the partition assignment strategy. We manage our Kafka cluster with Docker Compose like this:

  kafka1:
    image: confluentinc/cp-kafka:5.5.1
    networks:
      - test
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      # some more env here...
      KAFKA_PARTITION_ASSIGNMENT_STRATEGY: org.apache.kafka.clients.consumer.CooperativeStickyAssignor
    deploy:
      placement:
        constraints:
          - node.role == manager

First we tried the confluentinc/cp-kafka:5.3.1 image, but apparently it ships a Kafka version that doesn't support the partition assignment strategy we're interested in (org.apache.kafka.clients.consumer.CooperativeStickyAssignor). By the way, the version naming is confusing here; the image tags don't map obviously to Kafka versions.

Anyway, we switched to confluentinc/cp-kafka:5.5.1 and applied the partition assignment strategy as in the snippet above. Still no luck: the Kafka logs show the env var was recognized, but the config was not modified with the chosen strategy. Kafka prints its full config during startup, and we noticed our custom value is not there.

Is something wrong with the image? How to really apply org.apache.kafka.clients.consumer.CooperativeStickyAssignor partition assignment strategy?

If it's not a bug and changing the strategy is possible, quick instructions on how to do it would help us very much: which image to use, how to apply the strategy, and how to verify it's working. Looking forward to hearing from you.
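For reference, partition.assignment.strategy is a consumer-side configuration in Apache Kafka rather than a broker configuration, which would explain why it never appears in the broker's printed startup config. On a client it would be set along these lines (a sketch; the broker address and group id are placeholders):

```properties
# consumer.properties - passed to the consumer client, not the broker
bootstrap.servers=broker:29092
group.id=my-group
partition.assignment.strategy=org.apache.kafka.clients.consumer.CooperativeStickyAssignor
```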

Unable to set 'confluent.schema.registry.url' in the cp-server Docker image

Hi,

I am running a Confluent Platform setup on Docker. I am trying to enable schema validation on a topic, which requires setting the confluent.schema.registry.url property on the broker.

I have set the property on the broker and subsequently restarted it. However, the property isn't taking effect, and I am getting the following error when enabling schema validation on a topic:

confluent.key.schema.validation and / or confluent.value.schema.validation is enabled but there is no confluent.schema.registry.url specified at the broker side, will not add the corresponding validator
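In the cp-server image, broker properties are normally injected as environment variables: prefix the property with KAFKA_, uppercase it, and replace dots with underscores. So presumably the property would be set as in the sketch below (not verified against this exact version):

```yaml
broker:
  image: confluentinc/cp-server:6.1.0
  environment:
    # confluent.schema.registry.url -> KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL
    KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
```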

Zookeeper not starting if only `ZOOKEEPER_SECURE_CLIENT_PORT` but not `ZOOKEEPER_CLIENT_PORT` set

I want to run ZooKeeper with only the TLS port enabled and the non-TLS port disabled. However, if I specify only ZOOKEEPER_SECURE_CLIENT_PORT and not ZOOKEEPER_CLIENT_PORT, ZooKeeper startup fails with the following error:

===> User
uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
===> Configuring ...
ZOOKEEPER_CLIENT_PORT is required.
Command [/usr/local/bin/dub ensure ZOOKEEPER_CLIENT_PORT] FAILED !

I would suggest fixing this by ensuring that at least one of the two variables is set:

diff --git a/zookeeper/include/etc/confluent/docker/configure b/zookeeper/include/etc/confluent/docker/configure
index 742207d9..2c2af7d1 100755
--- a/zookeeper/include/etc/confluent/docker/configure
+++ b/zookeeper/include/etc/confluent/docker/configure
@@ -16,7 +16,7 @@
 
 . /etc/confluent/docker/bash-config
 
-dub ensure ZOOKEEPER_CLIENT_PORT
+dub ensure-atleast-one ZOOKEEPER_CLIENT_PORT ZOOKEEPER_SECURE_CLIENT_PORT
 
 dub path /etc/kafka/ writable
 

To make this work, a change is also required in the cp-base-new base image: confluent-docker-utils needs to be updated to v0.0.45, because the currently used version 0.0.43 suffers from a bug in dub ensure-atleast-one.
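For context, the intended deployment is roughly the following sketch, with only the TLS port configured (keystore/truststore settings elided):

```yaml
zookeeper:
  image: confluentinc/cp-zookeeper:6.2.0
  environment:
    # Only the TLS port is set; no plaintext ZOOKEEPER_CLIENT_PORT.
    ZOOKEEPER_SECURE_CLIENT_PORT: 2182
```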

Why Is the Kafka .NET Client Consumer Blocking With confluentinc/cp-kafka? Why Does It Consume Correctly With lensesio/fast-data-dev?

Hi,

Not sure if this is the correct place to ask this. Apologies if not...

I am running an xUnit functional test within a docker-compose stack based on the .NET Core 3.1-buster (Debian) image, with the Confluent Kafka .NET client v1.5.3 connecting to the Kafka broker confluentinc/cp-kafka:6.0.1.

I am fairly new to Kafka and am experiencing an issue with the consumer blocking. If I use the lensesio/fast-data-dev image the consumer does not block, and I am trying to understand why. I have also raised a query on the confluent-kafka-dotnet repo.

The architecture is illustrated below:

[architecture diagram]

I am testing with xUnit and have a class fixture that starts an in-process generic Kestrel host for the lifetime of the test collection/class. I am using an in-process generic host because I have an additional SignalR service that uses WebSockets. From what I understand, WebApplicationFactory is in-memory only and does not use network sockets.

The generic host contains a Kafka producer and consumer. The producer is a singleton service that produces using the Produce method. The consumer is a BackgroundService that runs a Consume loop with a cancellation token (see listing further below). The consumer has the following configuration:

  • EnableAutoCommit: true
  • EnableAutoOffsetStore: false
  • AutoOffsetReset: AutoOffsetReset.Latest

It is a single consumer with 3 partitions. group.initial.rebalance.delay.ms is configured as 1000 ms.

The test spawns a thread that sends an event to trigger the producer to post data onto the Kafka topic. The test then waits, via a ManualResetEvent, to allow the consumer time to process the topic data.

Problem: the Consumer Is Blocking

When I run the test within a docker-compose environment I can see from the logs (included below) that:

  • The producer and consumer are connected to the same broker and topic
  • The producer sends the data to the topic but the consumer is blocking

The xUnit tests and the in-process Kestrel host run in a docker-compose service on the same network as the kafka service. The Kafka producer successfully posts data onto the Kafka topic, as demonstrated by the logs below.

I have created an additional docker-compose service that runs a python client consumer. This uses a poll loop to consume data posted while running the test. Data is consumed by the Python client.

Does anyone have any ideas why the consumer would block in this environment, to assist with fault finding?
Could the wait performed in the xUnit test be blocking the in-process Kestrel host started by the xUnit fixture?

If I run the Kestrel host locally on macOS Catalina 10.15.7, connecting to Kafka (image lensesio/fast-data-dev:2.5.1-L0) in docker-compose, it produces and consumes successfully.

Update - Works with the lensesio image
The local docker-compose setup that works uses the lensesio/fast-data-dev:2.5.1-L0 image, which bundles Apache Kafka 2.5.1 and Confluent components 5.5.1. I have also tried:

  • Downgrading to Confluent Kafka images 5.5.1
  • Upgrading the .Net Confluent Client to 1.5.3

The result remains the same: the producer produces fine, but the consumer blocks.

What is the difference between the lensesio/fast-data-dev:2.5.1-L0 configuration and the confluentinc/cp images that would cause the blocking?

I have tagged the working docker-compose configuration onto the end of this query.

Update - Works with the confluentinc/cp-kafka image when group.initial.rebalance.delay.ms is 0 ms

Originally group.initial.rebalance.delay.ms was 1000 ms, the same as with the lensesio/fast-data-dev:2.5.1-L0 image. That setting on the confluentinc/cp-kafka image exhibits the blocking behaviour.

If I change group.initial.rebalance.delay.ms to 0 ms, then no blocking occurs with the confluentinc/cp-kafka image.

Does the lensesio/fast-data-dev:2.5.1-L0 image offer better performance in a docker-compose development environment when used with the confluent-kafka-dotnet client?

Test

[Fact]
public async Task MotionDetectionEvent_Processes_Data()
{
    var m = new ManualResetEvent(false);

    // spawn a thread to publish a message and wait for 14 seconds
    var thread = new Thread(async () =>
    {
        await _Fixture.Host.MqttClient.PublishAsync(_Fixture.Data.Message);

        // allow time for kafka to consume event and process
        Console.WriteLine($"TEST THREAD IS WAITING FOR 14 SECONDS");
        await Task.Run(() => Task.Delay(14000));
        Console.WriteLine($"TEST THREAD IS COMPLETED WAITING FOR 14 SECONDS");

        m.Set();
    });
    thread.Start();

    // wait for the thread to have completed
    await Task.Run(() => { m.WaitOne(); });

    // TO DO, ASSERT DATA AVAILABLE ON S3 STORAGE ETC.
}

Test Output - The producer has produced data onto the topic, but the consumer has not consumed

Test generic host example
SettingsFile::GetConfigMetaData ::: Directory for executing assembly :: /Users/simon/Development/Dotnet/CamFrontEnd/Tests/Temp/WebApp.Test.Host/bin/Debug/netcoreapp3.1
SettingsFile::GetConfigMetaData ::: Executing assembly :: WebApp.Testing.Utils, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
AutofacTestHost is using settings file /Users/simon/Development/Dotnet/CamFrontEnd/Tests/Temp/WebApp.Test.Host/bin/Debug/netcoreapp3.1/appsettings.Local.json
info: WebApp.Mqtt.MqttService[0]
      Mqtt Settings :: mqtt://mqtt:*********@localhost:1883
info: WebApp.Mqtt.MqttService[0]
      Mqtt Topic :: shinobi/+/+/trigger
info: WebApp.S3.S3Service[0]
      Minio client created for endpoint localhost:9000
info: WebApp.S3.S3Service[0]
      minio://accesskey:12345678abcdefgh@localhost:9000
info: Extensions.Hosting.AsyncInitialization.RootInitializer[0]
      Starting async initialization
info: Extensions.Hosting.AsyncInitialization.RootInitializer[0]
      Starting async initialization for WebApp.Kafka.Admin.KafkaAdminService
info: WebApp.Kafka.Admin.KafkaAdminService[0]
      Admin service trying to create Kafka Topic...
info: WebApp.Kafka.Admin.KafkaAdminService[0]
      Topic::eventbus, ReplicationCount::1, PartitionCount::3
info: WebApp.Kafka.Admin.KafkaAdminService[0]
      Bootstrap Servers::localhost:9092
info: WebApp.Kafka.Admin.KafkaAdminService[0]
      Admin service successfully created topic eventbus
info: WebApp.Kafka.Admin.KafkaAdminService[0]
      Kafka Consumer thread started
info: Extensions.Hosting.AsyncInitialization.RootInitializer[0]
      Async initialization for WebApp.Kafka.Admin.KafkaAdminService completed
info: Extensions.Hosting.AsyncInitialization.RootInitializer[0]
      Async initialization completed
info: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[0]
      User profile is available. Using '/Users/simon/.aspnet/DataProtection-Keys' as key repository; keys will not be encrypted at rest.
info: WebApp.Kafka.ProducerService[0]
      ProducerService constructor called
info: WebApp.Kafka.SchemaRegistry.Serdes.JsonDeserializer[0]
      Kafka Json Deserializer Constructed
info: WebApp.Kafka.ConsumerService[0]
      Kafka consumer listening to camera topics =>
info: WebApp.Kafka.ConsumerService[0]
      Camera Topic :: shinobi/RHSsYfiV6Z/xi5cncrNK6/trigger
info: WebApp.Kafka.ConsumerService[0]
      Camera Topic :: shinobi/group/monitor/trigger
%7|1607790673.462|INIT|rdkafka#consumer-3| [thrd:app]: librdkafka v1.5.3 (0x10503ff) rdkafka#consumer-3 initialized (builtin.features gzip,snappy,ssl,sasl,regex,lz4,sasl_gssapi,sasl_plain,sasl_scram,plugins,zstd,sasl_oauthbearer, STATIC_LINKING CC GXX PKGCONFIG OSXLD LIBDL PLUGINS ZLIB SSL SASL_CYRUS ZSTD HDRHISTOGRAM SNAPPY SOCKEM SASL_SCRAM SASL_OAUTHBEARER CRC32C_HW, debug 0x2000)
info: WebApp.Kafka.ConsumerService[0]
      Kafka consumer created => Name :: rdkafka#consumer-3
%7|1607790673.509|SUBSCRIBE|rdkafka#consumer-3| [thrd:main]: Group "consumer-group": subscribe to new subscription of 1 topics (join state init)
%7|1607790673.509|REBALANCE|rdkafka#consumer-3| [thrd:main]: Group "consumer-group" is rebalancing in state init (join-state init) without assignment: unsubscribe
info: WebApp.Kafka.ConsumerService[0]
      Kafka consumer has subscribed to topic eventbus
info: WebApp.Kafka.ConsumerService[0]
      Kafka is waiting to consume...
info: WebApp.Mqtt.MqttService[0]
      MQTT managed client connected
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: http://127.0.0.1:65212
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /Users/simon/Development/Dotnet/CamFrontEnd/Tests/Temp/WebApp.Test.Host/bin/Debug/netcoreapp3.1/
MQTT HAS PUBLISHED...SPAWNING TEST THREAD TO WAIT
TEST THREAD IS WAITING FOR 14 SECONDS
info: WebApp.S3.S3Service[0]
      Loading json into JSON DOM and updating 'img' property with key 2d8e2438-e674-4d71-94ac-e54df0143a29
info: WebApp.S3.S3Service[0]
      Extracting UTF8 bytes from base64
info: WebApp.S3.S3Service[0]
      Updated JSON payload with img: 2d8e2438-e674-4d71-94ac-e54df0143a29, now uploading 1.3053922653198242 MB to S3 storage
%7|1607790674.478|JOIN|rdkafka#consumer-3| [thrd:main]: Group "consumer-group": postponing join until up-to-date metadata is available
%7|1607790674.483|REJOIN|rdkafka#consumer-3| [thrd:main]: Group "consumer-group": subscription updated from metadata change: rejoining group
%7|1607790674.483|REBALANCE|rdkafka#consumer-3| [thrd:main]: Group "consumer-group" is rebalancing in state up (join-state init) without assignment: group rejoin
%7|1607790674.483|JOIN|rdkafka#consumer-3| [thrd:main]: 127.0.0.1:9092/1: Joining group "consumer-group" with 1 subscribed topic(s)
%7|1607790674.541|JOIN|rdkafka#consumer-3| [thrd:main]: 127.0.0.1:9092/1: Joining group "consumer-group" with 1 subscribed topic(s)
info: WebApp.S3.S3Service[0]
      Converting modified payload back to UTF8 bytes for Kafka processing
info: WebApp.Kafka.ProducerService[0]
      Produce topic : eventbus, key : shinobi/group/monitor/trigger, value : System.Byte[]
info: WebApp.Kafka.ProducerService[0]
      Delivered message to eventbus [[2]] @0
%7|1607790675.573|ASSIGNOR|rdkafka#consumer-3| [thrd:main]: Group "consumer-group": "range" assignor run for 1 member(s)
%7|1607790675.588|ASSIGN|rdkafka#consumer-3| [thrd:main]: Group "consumer-group": new assignment of 3 partition(s) in join state wait-sync
%7|1607790675.588|OFFSET|rdkafka#consumer-3| [thrd:main]: GroupCoordinator/1: Fetch committed offsets for 3/3 partition(s)
%7|1607790675.717|FETCH|rdkafka#consumer-3| [thrd:main]: Partition eventbus [0] start fetching at offset 0
%7|1607790675.719|FETCH|rdkafka#consumer-3| [thrd:main]: Partition eventbus [1] start fetching at offset 0
%7|1607790675.720|FETCH|rdkafka#consumer-3| [thrd:main]: Partition eventbus [2] start fetching at offset 1


        ** EXPECT SOME CONSUMER DATA HERE - INSTEAD IT IS BLOCKING WITH confluent/cp-kafka image **


TEST THREAD IS COMPLETED WAITING FOR 14 SECONDS
Timer Elapsed
Shutting down generic host
info: Microsoft.Hosting.Lifetime[0]
      Application is shutting down...
info: WebApp.Mqtt.MqttService[0]
      Mqtt managed client disconnected
info: WebApp.Kafka.ConsumerService[0]
      The Kafka consumer thread has been cancelled
info: WebApp.Kafka.ConsumerService[0]
      Kafka Consumer background service disposing
%7|1607790688.191|CLOSE|rdkafka#consumer-3| [thrd:app]: Closing consumer
%7|1607790688.191|CLOSE|rdkafka#consumer-3| [thrd:app]: Waiting for close events
%7|1607790688.191|REBALANCE|rdkafka#consumer-3| [thrd:main]: Group "consumer-group" is rebalancing in state up (join-state started) with assignment: unsubscribe
%7|1607790688.191|UNASSIGN|rdkafka#consumer-3| [thrd:main]: Group "consumer-group": unassigning 3 partition(s) (v5)
%7|1607790688.191|LEAVE|rdkafka#consumer-3| [thrd:main]: 127.0.0.1:9092/1: Leaving group
%7|1607790688.201|CLOSE|rdkafka#consumer-3| [thrd:app]: Consumer closed
%7|1607790688.201|DESTROY|rdkafka#consumer-3| [thrd:app]: Terminating instance (destroy flags NoConsumerClose (0x8))
%7|1607790688.201|CLOSE|rdkafka#consumer-3| [thrd:app]: Closing consumer
%7|1607790688.201|CLOSE|rdkafka#consumer-3| [thrd:app]: Disabling and purging temporary queue to quench close events
%7|1607790688.201|CLOSE|rdkafka#consumer-3| [thrd:app]: Consumer closed
%7|1607790688.201|DESTROY|rdkafka#consumer-3| [thrd:main]: Destroy internal
%7|1607790688.201|DESTROY|rdkafka#consumer-3| [thrd:main]: Removing all topics
info: WebApp.Mqtt.MqttService[0]
      Disposing Mqtt Client
info: WebApp.Kafka.ProducerService[0]
      Flushing remaining messages to produce...
info: WebApp.Kafka.ProducerService[0]
      Disposing Kafka producer...
info: WebApp.S3.S3Service[0]
      Disposing of resources
Stopping...

Kafka Consumer

using System;
using System.Threading;
using System.Threading.Tasks;

using Confluent.Kafka;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
using Microsoft.AspNetCore.SignalR;

using WebApp.Data;
using WebApp.Kafka.Config;
using WebApp.Realtime.SignalR;


namespace WebApp.Kafka
{
    public delegate IConsumer<string, MotionDetection> ConsumerFactory(
        KafkaConfig config,
        IAsyncDeserializer<MotionDetection> serializer
    );

    public class ConsumerService : BackgroundService, IDisposable
    {
        private KafkaConfig _config;
        private readonly IConsumer<string, MotionDetection> _kafkaConsumer;
        private ILogger<ConsumerService> _logger;
        private IHubContext<MotionHub, IMotion> _messagerHubContext;
        private IAsyncDeserializer<MotionDetection> _serializer { get; }

        public ConsumerFactory _factory { get; set; }


        // Using SignalR with background services:
        // https://docs.microsoft.com/en-us/aspnet/core/signalr/background-services?view=aspnetcore-2.2
        public ConsumerService(
            IOptions<KafkaConfig> config,
            ConsumerFactory factory,
            IHubContext<MotionHub, IMotion> messagerHubContext,
            IAsyncDeserializer<MotionDetection> serializer,
            ILogger<ConsumerService> logger
        )
        {
            if (config is null)
                throw new ArgumentNullException(nameof(config));

            _config = config.Value;
            _factory = factory ?? throw new ArgumentNullException(nameof(factory));
            _logger = logger ?? throw new ArgumentNullException(nameof(logger));
            _messagerHubContext = messagerHubContext ?? throw new ArgumentNullException(nameof(messagerHubContext));
            _serializer = serializer ?? throw new ArgumentNullException(nameof(serializer));

            // enforced configuration
            _config.Consumer.EnableAutoCommit = true; // allow consumer to autocommit offsets
            _config.Consumer.EnableAutoOffsetStore = false; // allow control over which offsets stored
            _config.Consumer.AutoOffsetReset = AutoOffsetReset.Latest; // if no offsets committed for topic for consumer group, default to latest   
            _config.Consumer.Debug = "consumer";

            _logger.LogInformation("Kafka consumer listening to camera topics =>");
            foreach (var topic in _config.MqttCameraTopics) { _logger.LogInformation($"Camera Topic :: {topic}"); }

            _kafkaConsumer = _factory(_config, _serializer);
            _logger.LogInformation($"Kafka consumer created => Name :: {_kafkaConsumer.Name}");
        }

        protected override Task ExecuteAsync(CancellationToken cancellationToken)
        {
            new Thread(() => StartConsumerLoop(cancellationToken)).Start();
            return Task.CompletedTask;
        }

        private void StartConsumerLoop(CancellationToken cancellationToken)
        {
            _kafkaConsumer.Subscribe(_config.Topic.Name);
            _logger.LogInformation($"Kafka consumer has subscribed to topic {_config.Topic.Name}");


            while (!cancellationToken.IsCancellationRequested)
            {
                try
                {
                    _logger.LogInformation("Kafka is waiting to consume...");
                    var consumerResult = _kafkaConsumer.Consume(cancellationToken);
                    _logger.LogInformation("Kafka Consumer consumed message => {}", consumerResult.Message.Value);

                    if (_config.MqttCameraTopics.Contains(consumerResult.Message.Key))
                    {
                        // we need to consider here security for auth, only want for user
                        // await _messagerHubContext.Clients.All.ReceiveMotionDetection(consumerResult.Message.Value);
                        _logger.LogInformation("Kafka Consumer dispatched message to SignalR");

                        // instruct background thread to commit this offset
                        _kafkaConsumer.StoreOffset(consumerResult);
                    }
                }
                catch (OperationCanceledException)
                {
                    _logger.LogInformation("The Kafka consumer thread has been cancelled");
                    break;
                }
                catch (ConsumeException ce)
                {
                    _logger.LogError($"Consume error: {ce.Error.Reason}");

                    if (ce.Error.IsFatal)
                    {
                        // https://github.com/edenhill/librdkafka/blob/master/INTRODUCTION.md#fatal-consumer-errors
                        _logger.LogError(ce, ce.Message);
                        break;
                    }
                }
                catch (Exception e)
                {
                    _logger.LogError(e, $"Unexpected exception while consuming motion detection {e}");
                    break;
                }
            }
        }


        public override void Dispose()
        {
            _logger.LogInformation("Kafka Consumer background service disposing");
            _kafkaConsumer.Close();
            _kafkaConsumer.Dispose();

            base.Dispose();
        }
    }
}

Kestrel Host Configuration

/// <summary>
/// Build the server, with Autofac IOC.
/// </summary>
protected override IHost BuildServer(HostBuilder builder)
{
    // build the host instance
    return new HostBuilder()
    .UseServiceProviderFactory(new AutofacServiceProviderFactory())
    .ConfigureLogging(logging =>
    {
        logging.ClearProviders();
        logging.AddConsole();
        logging.AddFilter("Microsoft.AspNetCore.SignalR", LogLevel.Information);
    })
    .ConfigureWebHost(webBuilder =>
    {
        webBuilder.ConfigureAppConfiguration((context, cb) =>
        {
            cb.AddJsonFile(ConfigMetaData.SettingsFile, optional: false)
            .AddEnvironmentVariables();
        })
        .ConfigureServices(services =>
        {
            services.AddHttpClient();
        })
        .UseStartup<TStartup>()
        .UseKestrel()
        .UseUrls("http://127.0.0.1:0");
    }).Build();
}

docker-compose stack

---
version: "3.8"

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.0.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    networks:
      - camnet
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_LOG4J_ROOT_LOGLEVEL: WARN

  kafka:
    image: confluentinc/cp-kafka:6.0.1
    hostname: kafka
    container_name: kafka
    depends_on:
      - zookeeper
    networks:
      - camnet
    ports:
      - "9092:9092"
      - "19092:19092"
    environment:
      CONFLUENT_METRICS_ENABLE: 'false'
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka:19092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092
      KAFKA_BROKER_ID: 1
      KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 1000
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_LOG4J_ROOT_LOGLEVEL: WARN
      KAFKA_LOG4J_LOGGERS: "org.apache.zookeeper=WARN,org.apache.kafka=WARN,kafka=WARN,kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      
  mqtt:
    container_name: mqtt
    image: eclipse-mosquitto:1.6.9
    ports:
      - "8883:8883"
      - "1883:1883"
      - "9901:9001"
    environment:
      - MOSQUITTO_USERNAME=${MQTT_USER}
      - MOSQUITTO_PASSWORD=${MQTT_PASSWORD}
    networks:
      - camnet
    volumes:
      - ./Mqtt/Config/mosquitto.conf:/mosquitto/config/mosquitto.conf
      - ./Mqtt/Certs/localCA.crt:/mosquitto/config/ca.crt
      - ./Mqtt/Certs/server.crt:/mosquitto/config/server.crt
      - ./Mqtt/Certs/server.key:/mosquitto/config/server.key

  minio:
    container_name: service-minio
    image: dcs3spp/minio:version-1.0.2
    ports:
      - "127.0.0.1:9000:9000"
    environment:
      - MINIO_BUCKET=images
      - MINIO_ACCESS_KEY=${MINIO_USER}
      - MINIO_SECRET_KEY=${MINIO_PASSWORD}
    networks:
      - camnet

networks:
  camnet:

Works with the lensesio/fast-data-dev image. Why?

version: "3.8"

services:
  kafka:
    image: lensesio/fast-data-dev:2.5.1-L0
    container_name: kafka
    networks:
      - camnet
    ports:
      - 2181:2181 # zookeeper
      - 3030:3030 # ui
      - 9092:9092 # broker
      - 8081:8081 # schema registry
      - 8082:8082 # rest proxy
      - 8083:8083 # kafka connect
    environment:
      - ADV_HOST=127.0.0.1
      - SAMPLEDATA=0
      - REST_PORT=8082
      - FORWARDLOGS=0
      - RUNTESTS=0
      - DISABLE_JMX=1
      - CONNECTORS=${CONNECTOR}
      - WEB_PORT=3030
      - DISABLE=hive-1.1

  mqtt:
    container_name: mqtt
    image: eclipse-mosquitto:1.6.9
    ports:
      - "8883:8883"
      - "1883:1883"
      - "9901:9001"
    environment:
      - MOSQUITTO_USERNAME=${MQTT_USER}
      - MOSQUITTO_PASSWORD=${MQTT_PASSWORD}
    networks:
      - camnet
    volumes:
      - ./Mqtt/Config/mosquitto.conf:/mosquitto/config/mosquitto.conf
      - ./Mqtt/Certs/localCA.crt:/mosquitto/config/ca.crt
      - ./Mqtt/Certs/server.crt:/mosquitto/config/server.crt
      - ./Mqtt/Certs/server.key:/mosquitto/config/server.key

  minio:
    container_name: service-minio
    image: dcs3spp/minio:version-1.0.2
    ports:
      - "127.0.0.1:9000:9000"
    environment:
      - MINIO_BUCKET=images
      - MINIO_ACCESS_KEY=${MINIO_USER}
      - MINIO_SECRET_KEY=${MINIO_PASSWORD}
    networks:
      - camnet

networks:
  camnet:

Request failed authentication : cp-kafka-connect:6.1.0-1-ubi8

Hi,

My Docker setup is as simple as https://github.com/mongodb/mongo-kafka/tree/master/docker. When I try to upgrade the base image to 6.1.0-1-ubi8, I see the debug messages (exceptions?) below spamming my Connect logs. Any help/suggestions are greatly appreciated. Thanks.

  • The broker image version is confluentinc/cp-enterprise-kafka:5.5.0.
  • I am not seeing this with previous versions; I've tried versions as recent as 5.5.3-3-ubi8.
  • This appears to be something to do with the base image, since I see it even without installing any connectors.
[2021-02-11 20:01:40,145] DEBUG Authenticating request (org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilter)
[2021-02-11 20:01:40,147] DEBUG Request failed authentication (org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilter)
javax.security.auth.login.LoginException: Login Failure: all modules ignored
	at java.base/javax.security.auth.login.LoginContext.invoke(LoginContext.java:871)
	at java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:665)
	at java.base/javax.security.auth.login.LoginContext$4.run(LoginContext.java:663)
	at java.base/java.security.AccessController.doPrivileged(Native Method)
	at java.base/javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:663)
	at java.base/javax.security.auth.login.LoginContext.login(LoginContext.java:574)
	at org.apache.kafka.connect.rest.basic.auth.extension.JaasBasicAuthFilter.filter(JaasBasicAuthFilter.java:64)
	at org.glassfish.jersey.server.ContainerFilteringStage.apply(ContainerFilteringStage.java:108)
	at org.glassfish.jersey.server.ContainerFilteringStage.apply(ContainerFilteringStage.java:44)
	at org.glassfish.jersey.process.internal.Stages.process(Stages.java:173)
	at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:245)
	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
	at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
	at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232)
	at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)

Kafka container cannot receive truststore credentials other than from mounted files

I am running a dockerized Kafka cluster which is set up with Ansible. I'd also like to use SSL to encrypt inter-broker communication and communication with clients. For this I am using a keystore and truststore, for which I need to provide the credentials, of course. I cannot, however, provide those credentials as an encrypted string (using Ansible Vault); instead I must provide the name of a credentials file that is mounted in the container under /etc/kafka/secrets/, as per the following snippet:

if [[ -n "${KAFKA_SSL_CLIENT_AUTH-}" ]] && ( [[ $KAFKA_SSL_CLIENT_AUTH == *"required"* ]] || [[ $KAFKA_SSL_CLIENT_AUTH == *"requested"* ]] )
then
    dub ensure KAFKA_SSL_TRUSTSTORE_FILENAME
    export KAFKA_SSL_TRUSTSTORE_LOCATION="/etc/kafka/secrets/$KAFKA_SSL_TRUSTSTORE_FILENAME"
    dub path "$KAFKA_SSL_TRUSTSTORE_LOCATION" exists
    dub ensure KAFKA_SSL_TRUSTSTORE_CREDENTIALS
    KAFKA_SSL_TRUSTSTORE_CREDENTIALS_LOCATION="/etc/kafka/secrets/$KAFKA_SSL_TRUSTSTORE_CREDENTIALS"
    dub path "$KAFKA_SSL_TRUSTSTORE_CREDENTIALS_LOCATION" exists
    export KAFKA_SSL_TRUSTSTORE_PASSWORD
    KAFKA_SSL_TRUSTSTORE_PASSWORD=$(cat "$KAFKA_SSL_TRUSTSTORE_CREDENTIALS_LOCATION")
fi

The truststore-password is then extracted from the file with the given name.

Now, as far as I can see, this requires me to mount the unencrypted credentials file(s) into the Kafka container. Since I use Ansible from a central machine, this means I have to copy the unencrypted file onto the target VM for each Kafka broker before it is mounted at container startup. That leaves unencrypted secrets on the VMs, which is an obvious security problem.

All in all, the above script seems like a very convoluted way of determining the password, and I would like to request the possibility of passing the password along as a plain string (which I would then provide through Ansible Vault). Since this is possible in non-dockerized Kafka anyway, I don't see a reason to limit the dockerized version to filenames only.
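A minimal sketch of the requested behavior (the function name is hypothetical; this is not part of the current configure script) could prefer a plain password variable and fall back to the mounted credentials file only when the password is not supplied directly:

```shell
# Hypothetical fallback: prefer a directly supplied password (e.g. injected
# by Ansible Vault), otherwise read it from the mounted credentials file as
# the current script does.
resolve_truststore_password() {
  if [[ -n "${KAFKA_SSL_TRUSTSTORE_PASSWORD-}" ]]; then
    echo "$KAFKA_SSL_TRUSTSTORE_PASSWORD"
  elif [[ -n "${KAFKA_SSL_TRUSTSTORE_CREDENTIALS-}" ]]; then
    cat "/etc/kafka/secrets/$KAFKA_SSL_TRUSTSTORE_CREDENTIALS"
  else
    return 1
  fi
}
```

With this, setting KAFKA_SSL_TRUSTSTORE_PASSWORD as a container environment variable would bypass the file-mount requirement entirely.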

Release note

I recently switched the kafka-connect image I use from 5.5.1 to 6.1.1 and noticed a breaking change: the JDBC source connector is no longer installed by default.
I do love the new confluent-hub tool for managing connectors now, though. Great change.

Now my question is: are there release notes somewhere that list the breaking changes from one major release to another?
I don't mean the releases of the Kafka application itself; I am talking about the releases of the Docker images.

KIP-500 support running a cluster without zookeeper

As Kafka 2.8.0 now supports running a cluster without ZooKeeper,
the KAFKA_ZOOKEEPER_CONNECT parameter should not be required when KAFKA_CONTROLLER_QUORUM_VOTERS or KAFKA_PROCESS_ROLES is set.
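A minimal sketch of such a guard, following the existing configure-script conventions (the helper name is hypothetical):

```shell
# Hypothetical guard: only insist on a ZooKeeper connection string when the
# broker is not configured for KRaft mode (KIP-500).
zookeeper_required() {
  [[ -z "${KAFKA_PROCESS_ROLES-}" && -z "${KAFKA_CONTROLLER_QUORUM_VOTERS-}" ]]
}

# In the configure script this could replace the unconditional check:
#   if zookeeper_required; then dub ensure KAFKA_ZOOKEEPER_CONNECT; fi
```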

Unable to create data directory /var/lib/zookeeper/log/version-2

I'm trying to launch an instance of Zookeeper through docker using the confluentinc/cp-zookeeper:latest image.

docker info

Client:
 Cloud integration: 1.0.17
 Version:           20.10.8
 API version:       1.41
 Go version:        go1.16.6
 Git commit:        3967b7d
 Built:             Fri Jul 30 19:55:20 2021
 OS/Arch:           darwin/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.8
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.6
  Git commit:       75249d8
  Built:            Fri Jul 30 19:52:10 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.9
  GitCommit:        e25210fe30a0a703442421b0f60afac609f950a3
 runc:
  Version:          1.0.1
  GitCommit:        v1.0.1-0-g4144b63
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

I've tried both of these approaches:

# docker-compose.yml
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    ports:
      - 2181:2181

and

mkdir -p /var/lib/zookeeper/data /var/lib/zookeeper/txn-logs /var/lib/kafka/data
chown -R 1000:1000 /var/lib/zookeeper /var/lib/kafka

docker run -d -v /var/lib/zookeeper/data:/var/lib/zookeeper/data -v /var/lib/zookeeper/txn-logs:/var/lib/zookeeper/log -e ZOOKEEPER_CLIENT_PORT=2181 -p 2181:2181 confluentinc/cp-zookeeper:latest

In both cases I get the following error:

zookeeper  | [2021-10-04 17:34:29,045] ERROR Unable to access datadir, exiting abnormally (org.apache.zookeeper.server.ZooKeeperServerMain)
zookeeper  | org.apache.zookeeper.server.persistence.FileTxnSnapLog$DatadirException: Unable to create data directory /var/lib/zookeeper/log/version-2
zookeeper  | 	at org.apache.zookeeper.server.persistence.FileTxnSnapLog.<init>(FileTxnSnapLog.java:127)
zookeeper  | 	at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:125)
zookeeper  | 	at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:107)
zookeeper  | 	at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:65)
zookeeper  | 	at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:128)
zookeeper  | 	at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:82)

I've come across others who have had this issue and they recommended attaching a volume, but I also tried that (as you can see above) and I get the same error.

Kafka Connect HEALTHCHECK startup catch-22

In Dockerfile.ubi8 the cp-kafka-connect image is configured with a HEALTHCHECK command that is used by docker to determine the health of the Kafka Connect container:

HEALTHCHECK --start-period=120s --interval=5s --timeout=10s --retries=96 \
CMD /etc/confluent/docker/healthcheck.sh

The healthcheck.sh script uses the container's configured CONNECT_REST_ADVERTISED_HOST_NAME variable to try to connect to the named host as an indication that the container is healthy. This variable should be some hostname that can be resolved.

if [[ $(curl -s -o /dev/null -w %{http_code} http://$CONNECT_REST_ADVERTISED_HOST_NAME:$CONNECT_REST_PORT/connectors) = 200 ]]; then

However, when one tries to set the hostname to the internal connect container name in a docker deployment, we run into a catch-22: Docker will wait for healthcheck.sh to report healthy before adding the container name to docker's internal network DNS, but connect will never be able to resolve that DNS name because the container is not yet marked as healthy. Thus it will stay in the "starting" state indefinitely, or until it is shut down or restarted by the docker daemon.

For example, the following Docker Compose config will never report a healthy container, even though "kafka-connect-01" would be a perfectly valid, resolvable name for kafka-connect-02 to use after it completes startup, etc.

  kafka-connect-01:
    image: confluentinc/cp-kafka-connect:5.5.1
    container_name: kafka-connect-01
    environment:
      CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect-01"
      ...

  kafka-connect-02:
    image: confluentinc/cp-kafka-connect:5.5.1
    container_name: kafka-connect-02
    environment:
      CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect-02"
      ...

To resolve this, I think healthcheck.sh should not use the $CONNECT_REST_ADVERTISED_HOST_NAME variable, and should use localhost instead, which will connect to the instance of Connect running locally inside the container. This would have the following benefits:

  • It will allow Worker communication to happen over the internal docker network (the motivation for this issue)
  • It will prevent misrepresenting the container status in the case where CONNECT_REST_ADVERTISED_HOST_NAME is misconfigured to point to some other instance of Connect
  • It represents a better separation of responsibilities, because the health status of the container would no longer depend on resolving the container's own hostname, which is a DNS concern
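The proposed localhost-based check could be sketched as follows (helper names are hypothetical; this assumes the Connect REST port defaults to 8083 when CONNECT_REST_PORT is unset):

```shell
# Sketch of the proposed change: probe the Connect REST API on localhost
# instead of the advertised hostname, so the healthcheck never depends on
# Docker's DNS having registered the container name.
health_url() {
  echo "http://localhost:${CONNECT_REST_PORT:-8083}/connectors"
}

connect_healthy() {
  [[ "$(curl -s -o /dev/null -w '%{http_code}' "$(health_url)")" == "200" ]]
}
```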

Publish images with JMX exporter agent attached

To expose metrics similar to what cp-ansible provisioning enables, would it be possible to release images with these agents installed by default?

Currently users either build a new image on top of the cp ones, or have to mount volumes or configMaps.

Would it make sense to consider image tags like confluentinc/cp-kafka:5.5.0-prom-jmx and confluentinc/cp-kafka:5.5.0-jolokia or something similar?

Custom log config file gets overwritten by 'dub'

The Confluent Kafka Connect docs reference the file "/etc/kafka/connect-log4j.properties" for configuring logging, but this doesn't work: the base image's configure command finishes with the following line, which overwrites my config:

dub template "/etc/confluent/docker/log4j.properties.template" "/etc/kafka/connect-log4j.properties"

How should I handle this situation? I'm extending the base image to create our own custom Kafka Connect image and just need to set my own config. Am I going about this the wrong way?
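One possible workaround, given that dub template renders the final file from /etc/confluent/docker/log4j.properties.template: replace that template (rather than the rendered file) in a derived image. A sketch, with a placeholder tag and a hypothetical template filename:

```dockerfile
FROM confluentinc/cp-kafka-connect:<your-tag>
# dub template renders /etc/kafka/connect-log4j.properties from this template
# at startup, so replacing the template lets the custom config survive the
# configure step.
COPY my-connect-log4j.properties.template /etc/confluent/docker/log4j.properties.template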

Root user/sudo on 6.x images

It seems that with the 6.x platform images everything runs as "appuser", and there is no sudo or similar configured on the images.

In our (Kubernetes, AWS MSK) environment we use the images not necessarily to run Kafka itself, but rather as very convenient images for getting the Kafka CLI tools. The usual flow in our documentation for any Kafka maintenance is:

  1. Use kubectl run ... --image confluentinc/cp-kafka to bring up a helper pod
  2. Connect to that running pod
  3. Install additional tools such as jq, vim, ...
  4. Run one or more Kafka CLI tools (kafka-topics, kafka-consumer-groups, kafka-reassign-partitions)

With the 6.x images I cannot see a way to make step 3 work, so it seems I would have to fork or clone the images, or build my own, to provide suitable means.

Assuming this is a reasonable use-case: What do you think would be the best way to support it?

Root owned writable files

Hi, my security system complains that there are files in /etc/kafka/ owned by root and writable by anyone.

~# docker run -it --rm confluentinc/cp-zookeeper /bin/bash
[appuser@50635fd4475d ~]$ ls -l /etc/kafka/
total 72
-rw-rw-rw- 1 root root  906 Feb  4 16:55 connect-console-sink.properties
-rw-rw-rw- 1 root root  909 Feb  4 16:55 connect-console-source.properties
..

Do you think it's a good idea to just run chmod o-w (or chown appuser.) on them?
I'm not familiar with Kafka and I'm not sure whether the user needs permission to alter them.

Mounting a volume of Docker for Kafka Data Directory is failing for >= 6.x (dub error)

The issue to focus on is the "volumes" part of the docker-compose file.

I'm using the Ubuntu 20.04 AMI from AWS to reproduce the issue:

ubuntu@ip-172-31-90-14:~$ cat /etc/lsb-release 
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.2 LTS"

Using this docker-compose file, things fail:

$ docker-compose up
kafka1_1  | ===> Running preflight checks ... 
kafka1_1  | ===> Check if /var/lib/kafka/data is writable ...
kafka1_1  | Command [/usr/local/bin/dub path /var/lib/kafka/data writable] FAILED !

Note that the image version used is confluentinc/cp-kafka:6.1.0:

version: '2.1'

services:
  zoo1:
    image: zookeeper:3.4.9
    hostname: zoo1
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zoo1:2888:3888
    volumes:
      - ./zk-single-kafka-single/zoo1/data:/data
      - ./zk-single-kafka-single/zoo1/datalog:/datalog

  kafka1:
    image: confluentinc/cp-kafka:6.1.0
    hostname: kafka1
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    volumes:
      - ./zk-single-kafka-single/kafka1/data:/var/lib/kafka/data
    depends_on:
      - zoo1

If I remove these two lines, things work:

volumes:
      - ./zk-single-kafka-single/kafka1/data:/var/lib/kafka/data

But obviously, these lines are needed to externalize the data directory of the Docker image.

If I use version 6.0.2, the outcome is the same.

If I use version 5.5.3, things work as expected:

version: '2.1'

services:
  zoo1:
    image: zookeeper:3.4.9
    hostname: zoo1
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zoo1:2888:3888
    volumes:
      - ./zk-single-kafka-single/zoo1/data:/data
      - ./zk-single-kafka-single/zoo1/datalog:/datalog

  kafka1:
    image: confluentinc/cp-kafka:5.5.3
    hostname: kafka1
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    volumes:
      - ./zk-single-kafka-single/kafka1/data:/var/lib/kafka/data
    depends_on:
      - zoo1

See the log output:

kafka1_1  | ===> Running preflight checks ... 
kafka1_1  | ===> Check if /var/lib/kafka/data is writable ...
kafka1_1  | ===> Check if Zookeeper is healthy ...
kafka1_1  | [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.5.8-f439ca583e70862c3068a1f2a7d4d068eec33315, built on 05/
04/2020 15:53 GMT

So something changed in 6.x and I don't know what or how to fix it. The goal is to externalize the data directory.
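One possible workaround, assuming the 6.x images run the broker as the non-root appuser (uid/gid 1000), so the bind-mounted host directory must be writable by that uid (the helper name is hypothetical):

```shell
# Make the bind-mount source exist and hand it to uid/gid 1000 (appuser in
# the cp images). chown requires root; when it fails, print the command to
# run manually instead of aborting.
prepare_kafka_data_dir() {
  local dir=$1
  mkdir -p "$dir"
  chown -R 1000:1000 "$dir" 2>/dev/null \
    || echo "run 'sudo chown -R 1000:1000 $dir' manually"
}
```

For example: `prepare_kafka_data_dir ./zk-single-kafka-single/kafka1/data` before `docker-compose up`.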

Cheers
Stephane

KAFKA_SSL_TRUSTSTORE_CREDENTIALS should be optional when KAFKA_SSL_TRUSTSTORE_TYPE is PEM

Hello everybody!

I'm trying to set a PEM file as a truststore, which seems to be supported here https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/DefaultSslEngineFactory.java#L311 as long as the password is null.

Checking https://github.com/confluentinc/kafka-images/blob/master/kafka/include/etc/confluent/docker/configure#L91, it looks like KAFKA_SSL_TRUSTSTORE_CREDENTIALS is mandatory if SSL is enabled, meaning the password will never be null.

Shouldn't we test whether KAFKA_SSL_TRUSTSTORE_TYPE is PEM before requiring KAFKA_SSL_TRUSTSTORE_CREDENTIALS?
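A sketch of such a guard (hypothetical helper; it mirrors DefaultSslEngineFactory's rule that PEM trust stores take a null password, and assumes the store type defaults to JKS when unset):

```shell
# Hypothetical helper for the configure script: decide whether a credentials
# file must be required. PEM trust stores need no password, so the
# KAFKA_SSL_TRUSTSTORE_CREDENTIALS check could be skipped for them.
truststore_needs_credentials() {
  [[ "${KAFKA_SSL_TRUSTSTORE_TYPE:-JKS}" != "PEM" ]]
}

# e.g.: truststore_needs_credentials && dub ensure KAFKA_SSL_TRUSTSTORE_CREDENTIALS
```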

Thanks

How to support graceful shutdown of cp-kafka

After the container is killed, a large number of partition directories need to be traversed when the container is rebuilt, and that process takes a long time. Could the cp-kafka container listen for the container stop signal and automatically execute kafka-server-stop?
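If this were implemented, a minimal wrapper entrypoint could look like the sketch below (the helper name is hypothetical; it assumes bash and that `docker stop` delivers SIGTERM to PID 1):

```shell
# Hypothetical wrapper: run the workload in the background and translate
# SIGTERM/SIGINT into a controlled stop command before waiting for exit.
run_with_graceful_stop() {
  local stop_cmd=$1; shift
  "$@" &                         # start the real workload (e.g. the broker)
  local pid=$!
  trap '$stop_cmd; wait "$pid"' TERM INT
  wait "$pid"
}

# e.g. as the container entrypoint:
#   run_with_graceful_stop kafka-server-stop /etc/confluent/docker/run
```

Note that `docker stop` only waits 10 seconds by default before sending SIGKILL, so a longer grace period (`docker stop -t`, or `stop_grace_period` in Compose) would still be needed for a clean broker shutdown.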

Current version:

Docker Kafka_Connect: CONVERTER params breaks container when JsonSchemaConverter is used

When using the kafka-connect Docker image (tested with 5.5.3) and configuring JsonSchemaConverter as follows, the container fails:

   -e "CONNECT_KEY_CONVERTER=io.confluent.connect.json.JsonSchemaConverter" \
   -e "CONNECT_VALUE_CONVERTER=io.confluent.connect.json.JsonSchemaConverter" \
   -e "CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL=http://<schemaregsirty_url>:8081" \
   -e "CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL=http://<schemaregsirty_url>:8081" \
[2021-02-04 09:24:14,637] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed)org.apache.kafka.common.config.ConfigException: Invalid value io.confluent.connect.json.JsonSchemaConverter for configuration key.converter: Class io.confluent.connect.json.JsonSchemaConverter could not be found.
        at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:727)
        at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:473)
        at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:466)
        at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:108)
        at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:129)
        at org.apache.kafka.connect.runtime.WorkerConfig.<init>(WorkerConfig.java:374)
        at org.apache.kafka.connect.runtime.distributed.DistributedConfig.<init>(DistributedConfig.java:316)
        at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:93)
        at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:78)

It seems the converter is not installed by default, but I haven't found this information in the documentation.
Am I doing something wrong here, or is this really a bug / missing part of the documentation?

cp-server spews errors on v6+ when only using zookeeper and cp-server

If you boot just zookeeper and cp-server (which is what most of our devs do locally), it spews these errors continuously (every minute or so):

kafka_1                    |  (org.apache.kafka.clients.admin.AdminClientConfig)
kafka_1                    | [2021-03-08 15:30:07,880] WARN The configuration 'compression.type' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
kafka_1                    | [2021-03-08 15:30:07,880] WARN The configuration 'acks' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
kafka_1                    | [2021-03-08 15:30:07,880] WARN The configuration 'key.serializer' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
kafka_1                    | [2021-03-08 15:30:07,880] WARN The configuration 'max.request.size' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
kafka_1                    | [2021-03-08 15:30:07,880] WARN The configuration 'value.serializer' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
kafka_1                    | [2021-03-08 15:30:07,880] WARN The configuration 'interceptor.classes' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
kafka_1                    | [2021-03-08 15:30:07,880] WARN The configuration 'max.in.flight.requests.per.connection' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
kafka_1                    | [2021-03-08 15:30:07,880] WARN The configuration 'linger.ms' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
kafka_1                    | [2021-03-08 15:30:07,880] INFO Kafka version: 6.1.0-ce (org.apache.kafka.common.utils.AppInfoParser)
kafka_1                    | [2021-03-08 15:30:07,880] INFO Kafka commitId: d19b95317d02f231 (org.apache.kafka.common.utils.AppInfoParser)
kafka_1                    | [2021-03-08 15:30:07,880] INFO Kafka startTimeMs: 1615217407880 (org.apache.kafka.common.utils.AppInfoParser)
kafka_1                    | [2021-03-08 15:30:07,885] INFO [Admin Manager on Broker 1]: Error processing create topic request CreatableTopic(name='_confluent-telemetry-metrics', numPartitions=12, replicationFactor=3, assignments=[], configs=[CreateableTopicConfig(name='max.message.bytes', value='10485760'), CreateableTopicConfig(name='message.timestamp.type', value='CreateTime'), CreateableTopicConfig(name='min.insync.replicas', value='1'), CreateableTopicConfig(name='retention.ms', value='259200000'), CreateableTopicConfig(name='segment.ms', value='14400000'), CreateableTopicConfig(name='retention.bytes', value='-1')], linkName=null, mirrorTopic=null) (kafka.server.AdminManager)
kafka_1                    | org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 1.
kafka_1                    | [2021-03-08 15:30:07,886] INFO App info kafka.admin.client for confluent-telemetry-reporter-local-producer unregistered (org.apache.kafka.common.utils.AppInfoParser)
kafka_1                    | [2021-03-08 15:30:07,887] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics)
kafka_1                    | [2021-03-08 15:30:07,887] INFO Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics)
kafka_1                    | [2021-03-08 15:30:07,887] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics)
kafka_1                    | [2021-03-08 15:30:07,887] ERROR Error checking or creating metrics topic (io.confluent.telemetry.exporter.kafka.KafkaExporter)
kafka_1                    | org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 1.

I've played with the following environment variables, but they don't seem to have any effect that I can find:

CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
CONFLUENT_METRICS_REPORTER_TOPIC_CREATE: 'false'
CONFLUENT_METRICS_ENABLE: 'false'
KAFKA_DEFAULT_REPLICATION_FACTOR: 1

Add images for ARM 64

The new Apple M1 machines are rolling out: fast, silent, and cool.
Your images do not work on them; a build for ARM64 would be very useful.

Kafka container does not set truststore variables when ssl.client.auth=none

When only enabling encryption, following the docs at https://docs.confluent.io/current/kafka/encryption.html#brokers, ssl.client.auth is not required, i.e. it is none.

But the truststore variables are set only when ssl.client.auth is set to requested or required:

if [[ -n "${KAFKA_SSL_CLIENT_AUTH-}" ]] && ( [[ $KAFKA_SSL_CLIENT_AUTH == *"required"* ]] || [[ $KAFKA_SSL_CLIENT_AUTH == *"requested"* ]] )
then
    dub ensure KAFKA_SSL_TRUSTSTORE_FILENAME
    export KAFKA_SSL_TRUSTSTORE_LOCATION="/etc/kafka/secrets/$KAFKA_SSL_TRUSTSTORE_FILENAME"
    dub path "$KAFKA_SSL_TRUSTSTORE_LOCATION" exists
    dub ensure KAFKA_SSL_TRUSTSTORE_CREDENTIALS
    KAFKA_SSL_TRUSTSTORE_CREDENTIALS_LOCATION="/etc/kafka/secrets/$KAFKA_SSL_TRUSTSTORE_CREDENTIALS"
    dub path "$KAFKA_SSL_TRUSTSTORE_CREDENTIALS_LOCATION" exists
    export KAFKA_SSL_TRUSTSTORE_PASSWORD
    KAFKA_SSL_TRUSTSTORE_PASSWORD=$(cat "$KAFKA_SSL_TRUSTSTORE_CREDENTIALS_LOCATION")
fi

Should we consider letting users define these variables in any case, regardless of the ssl.client.auth input?

kafka-cluster-sasl zookeeper exception

Hello,

I'm trying to use the example kafka-cluster-sasl/docker-compose.yml, but I get an exception in ZooKeeper.
I built the Kerberos image from the Kerberos Dockerfile.
I followed this documentation: clustered-deployment-sasl.html (I couldn't find anything more recent).

Exception:

[2021-07-23 08:18:47,625] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
Looking for keys for: zookeeper/[email protected]
[2021-07-23 08:18:47,642] WARN No password found for user: null (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2021-07-23 08:18:47,644] ERROR Unexpected exception, exiting abnormally (org.apache.zookeeper.server.quorum.QuorumPeerMain)
java.io.IOException: Could not configure server because SASL configuration did not allow the  ZooKeeper server to authenticate itself properly: javax.security.auth.login.LoginException: No password provided
        at org.apache.zookeeper.server.ServerCnxnFactory.configureSaslLogin(ServerCnxnFactory.java:243)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:646)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:148)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:123)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:82)

Thanks for your help

Quorum lost

What happens if we lose quorum because of server crashes and it is impossible to bring those servers back up? Is there any solution for that case? Can we somehow reach quorum again? I read about dynamic reconfiguration, but as I understand it, a quorum is needed for that to succeed: "a quorum of the old configuration is required to be available and connected for ZooKeeper to be able to make progress".

Phases of linear memory consumption increase without load

We are observing phases of linear memory consumption increase without load (~1 message every 2 minutes) and even without any producers.
The issue seems to be independent of the image version (i.e., it occurs with 5.5.3, 6.1.0, and 6.1.1).

It is also independent of any producers or consumers.
The following snapshot shows the Kafka memory consumption, with certain events (e.g., shutting down consumers or producers, switching image versions) tagged.

Link to the raintank Grafana snapshot
We are experiencing the issue on the following system.

Hardware: GCE VM e2-medium (2 vCPUs, 4 GB RAM)

OS: Ubuntu 18.04.3 LTS

Linux Kernel: 5.4.0-1038-gcp

Docker Version: 19.03.3, build a872fc2f86

Latest Image for CP-Zookeeper has multiple vulnerabilities

While performing a container scan of this image using Twistlock, 6 vulnerabilities were found.

  • - The pip package before 19.2 for Python allows Directory Traversal when a URL is given in an install command, because a Content-Disposition header can have ../ in a filename, as demonstrated by overwriting the /root/.ssh/authorized_keys file. This occurs in _download_http_url in _internal/download.py.

  • - Netty is an open-source, asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. In Netty (io.netty:netty-codec-http2) before version 4.1.61.Final there is a vulnerability that enables request smuggling. The content-length header is not correctly validated if the request only uses a single Http2HeaderFrame with the endStream set to to true. This could lead to request smuggling if the request is proxied to a remote peer and translated to HTTP/1.1. This is a followup of GHSA-wm47-8v5p-wjpj/CVE-2021-21295 which did miss to fix this one case. This was fixed as part of 4.1.61.Final.

  • - Netty is an open-source, asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. In Netty (io.netty:netty-codec-http2) before version 4.1.60.Final there is a vulnerability that enables request smuggling. If a Content-Length header is present in the original HTTP/2 request, the field is not validated by Http2MultiplexHandler as it is propagated up. This is fine as long as the request is not proxied through as HTTP/1.1. If the request comes in as an HTTP/2 stream, gets converted into the HTTP/1.1 domain objects (HttpRequest, HttpContent, etc.) via Http2StreamFrameToHttpObjectCodec and then sent up to the child channel's pipeline and proxied through a remote peer as HTTP/1.1 this may result in request smuggling. In a proxy case, users may assume the content-length is validated somehow, which is not the case. If the request is forwarded to a backend channel that is a HTTP/1.1 connection, the Content-Length now has meaning and needs to be checked. An attacker can smuggle requests inside the body as it gets downgraded from HTTP/2 to HTTP/1.1. For an example attack refer to the linked GitHub Advisory. Users are only affected if all of this is true: HTTP2MultiplexCodec or Http2FrameCodec is used, Http2StreamFrameToHttpObjectCodec is used to convert to HTTP/1.1 objects

  • - Netty is an open-source, asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. In Netty before version 4.1.59.Final there is a vulnerability on Unix-like systems involving an insecure temp file. When netty's multipart decoders are used local information disclosure can occur via the local system temporary directory if temporary storing uploads on the disk is enabled. On unix-like systems, the temporary directory is shared between all user. As such, writing to this directory using APIs that do not explicitly set the file/directory permissions can lead to information disclosure. Of note, this does not impact modern MacOS Operating Systems. The method "File.createTempFile" on unix-like systems creates a random file, but, by default will create this file with the permissions "-rw-r--r--". Thus, if sensitive information is written to this file, other local users can read this information. This is the case in netty's "AbstractDiskHttpData" is vulnerable. This has been fixed in version 4.1.59.Final. As a workaround, one may specify your own "java.io.tmpdir" when you start the JVM or use "DefaultHttpDataFactory.setBaseDir(...)" to set the directory to something that is only readable by the current user.

  • - Netty is an open-source, asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. In Netty (io.netty:netty-codec-http2) before version 4.1.61.Final there is a vulnerability that enables request smuggling. The content-length header is not correctly validated if the request only uses a single Http2HeaderFrame with the endStream set to to true. This could lead to request smuggling if the request is proxied to a remote peer and translated to HTTP/1.1. This is a followup of GHSA-wm47-8v5p-wjpj/CVE-2021-21295 which did miss to fix this one case. This was fixed as part of 4.1.61.Final.

  • - Netty is an open-source, asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. In Netty (io.netty:netty-codec-http2) before version 4.1.60.Final there is a vulnerability that enables request smuggling. If a Content-Length header is present in the original HTTP/2 request, the field is not validated by Http2MultiplexHandler as it is propagated up. This is fine as long as the request is not proxied through as HTTP/1.1. If the request comes in as an HTTP/2 stream, gets converted into the HTTP/1.1 domain objects (HttpRequest, HttpContent, etc.) via Http2StreamFrameToHttpObjectCodec and then sent up to the child channel's pipeline and proxied through a remote peer as HTTP/1.1 this may result in request smuggling. In a proxy case, users may assume the content-length is validated somehow, which is not the case. If the request is forwarded to a backend channel that is a HTTP/1.1 connection, the Content-Length now has meaning and needs to be checked. An attacker can smuggle requests inside the body as it gets downgraded from HTTP/2 to HTTP/1.1. For an example attack refer to the linked GitHub Advisory. Users are only affected if all of this is true: HTTP2MultiplexCodec or Http2FrameCodec is used, Http2StreamFrameToHttpObjectCodec is used to convert to HTTP/1.1 objects

KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS config rendered useless in cp-kafka by startup

The Kafka broker zookeeper.connection.timeout.ms configuration property and its environment variable equivalent allow the user to handle deployment situations where, for whatever reason, it may take some time to establish a connection to ZooKeeper. However, the initialization sequence inside the cp-kafka container prevents this configuration from having any effect. Before parsing the environment variables and launching Kafka at all, the ensure script is run, which in turn calls cub zk-ready "$KAFKA_ZOOKEEPER_CONNECT" "${KAFKA_CUB_ZK_TIMEOUT:-40}" to connect to ZooKeeper. This 40 seconds, or the (undocumented) KAFKA_CUB_ZK_TIMEOUT variable, takes precedence, and if the connection is not established within that time the container exits with an error.

I would propose that this script first check KAFKA_CUB_ZK_TIMEOUT and use it if present, then fall back to KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS, then to KAFKA_ZOOKEEPER_SESSION_TIMEOUT_MS (which is the documented Kafka behavior). If this is undesirable for some reason, it could instead be documented (e.g., here) that the latter two variables apply only to post-launch behavior and that the user must set KAFKA_CUB_ZK_TIMEOUT.
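The proposed fallback chain could be sketched as follows (hypothetical helper name; the *_MS variables are milliseconds and are converted to whole seconds for the pre-launch check):

```shell
# Hypothetical fallback chain for the pre-launch ZK readiness timeout,
# mirroring the order proposed above. Returns a value in seconds.
zk_ready_timeout() {
  if [[ -n "${KAFKA_CUB_ZK_TIMEOUT-}" ]]; then
    echo "$KAFKA_CUB_ZK_TIMEOUT"
  elif [[ -n "${KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS-}" ]]; then
    echo $(( KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS / 1000 ))
  elif [[ -n "${KAFKA_ZOOKEEPER_SESSION_TIMEOUT_MS-}" ]]; then
    echo $(( KAFKA_ZOOKEEPER_SESSION_TIMEOUT_MS / 1000 ))
  else
    echo 40   # current hard-coded default
  fi
}
```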

/var/lib/kafka permissions for non-root users

Original issue here:
confluentinc/cp-docker-images#692

We try to run Kafka as a non-root user (due to a strict security policy). It looks like the permissions are set incorrectly:

# ls -ld /var/lib/kafka
drwxr-x--- 3 root root 4096 Jun 19 00:27 /var/lib/kafka

As a result, non-root users cannot access /var/lib/kafka/data.

Quick fix: add chmod o+rx /var/lib/kafka to the Dockerfile.
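Until the base image is fixed, the workaround can be applied in a downstream image. A minimal sketch, assuming the 5.5.x images where this was observed (the base tag and the appuser account are illustrative):

```dockerfile
FROM confluentinc/cp-kafka:5.5.1

# Workaround: open /var/lib/kafka to non-root users, then drop privileges again.
USER root
RUN chmod o+rx /var/lib/kafka
USER appuser
```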

Publish SemVer container image tags

Would you be interested in introducing SemVer container image tags for the images published from this repository? I am suggesting an approach similar to the one proposed in this article.

It seems quite popular these days and would be an improvement for us, as it would allow us to automate our image patching more easily. WDYT?

Request delay reaches minute level after enabling SASL/PLAIN

Env:

Test Case

root@broker1:/# cat /kafka/consumer-sasl.properties
enable.auto.commit=false
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="???" \
  password="???";
root@broker1:/# time kafka-console-consumer --bootstrap-server localhost:9093 --property print.key=true --property print.timestamp=true --topic infra-minio-f6-aoi-notify --from-beginning --max-messages 1 --consumer.config /kafka/consumer-sasl.properties --timeout-ms 120000

LogAppendTime:1641866605124	pics/SMT/AOI FOV/S20B/LF2824222A02S2/20211118/20211118083657-GF1BPPA0HL.JPG	{"EventName":"s3:ObjectRemoved:Delete","Key":"pics/SMT/AOI FOV/S20B/LF2824222A02S2/20211118/20211118083657-GF1BPPA0HL.JPG","Records":[{"eventVersion":"2.0","eventSource":"minio:s3","awsRegion":"","eventTime":"2022-01-11T02:03:13.333Z","eventName":"s3:ObjectRemoved:Delete","userIdentity":{"principalId":"line"},"requestParameters":{"principalId":"line","region":"","sourceIPAddress":"192.168.16.42"},"responseElements":{"content-length":"44416","x-amz-request-id":"16C914F61DA210C8","x-minio-deployment-id":"efe9247b-005d-4407-a042-7b95430e00ce","x-minio-origin-endpoint":"https://f6-te-aoi-oss.ipt.inventec.net"},"s3":{"s3SchemaVersion":"1.0","configurationId":"Config","bucket":{"name":"pics","ownerIdentity":{"principalId":"line"},"arn":"arn:aws:s3:::pics"},"object":{"key":"SMT%2FAOI+FOV%2FS20B%2FLF2824222A02S2%2F20211118%2F20211118083657-GF1BPPA0HL.JPG","sequencer":"16C914F7B837491A"}},"source":{"host":"192.168.16.42","port":"","userAgent":"MinIO (linux; amd64) minio-go/v7.0.20 oss-archive/minio-tool"}}]}
Processed a total of 1 messages

real	1m25.094s
user	0m15.595s
sys	0m8.360s
root@broker1:/# 
root@broker1:/#
root@broker1:/# 
root@broker1:/# time kafka-console-consumer --bootstrap-server localhost:19092 --property print.key=true --property print.timestamp=true --topic infra-minio-f6-aoi-notify --from-beginning --max-messages 1 --timeout-ms 120000
LogAppendTime:1641866605124	pics/SMT/AOI FOV/S20B/LF2824222A02S2/20211118/20211118083657-GF1BPPA0HL.JPG	{"EventName":"s3:ObjectRemoved:Delete","Key":"pics/SMT/AOI FOV/S20B/LF2824222A02S2/20211118/20211118083657-GF1BPPA0HL.JPG","Records":[{"eventVersion":"2.0","eventSource":"minio:s3","awsRegion":"","eventTime":"2022-01-11T02:03:13.333Z","eventName":"s3:ObjectRemoved:Delete","userIdentity":{"principalId":"line"},"requestParameters":{"principalId":"line","region":"","sourceIPAddress":"192.168.16.42"},"responseElements":{"content-length":"44416","x-amz-request-id":"16C914F61DA210C8","x-minio-deployment-id":"efe9247b-005d-4407-a042-7b95430e00ce","x-minio-origin-endpoint":"https://f6-te-aoi-oss.ipt.inventec.net"},"s3":{"s3SchemaVersion":"1.0","configurationId":"Config","bucket":{"name":"pics","ownerIdentity":{"principalId":"line"},"arn":"arn:aws:s3:::pics"},"object":{"key":"SMT%2FAOI+FOV%2FS20B%2FLF2824222A02S2%2F20211118%2F20211118083657-GF1BPPA0HL.JPG","sequencer":"16C914F7B837491A"}},"source":{"host":"192.168.16.42","port":"","userAgent":"MinIO (linux; amd64) minio-go/v7.0.20 oss-archive/minio-tool"}}]}
Processed a total of 1 messages

real	0m5.097s
user	0m3.613s
sys	0m0.505s

defunct s3 debian source in older docker images

The 5.5.1 image has a now-defunct S3 Debian apt source, which stops apt from functioning.

$ cat Dockerfile
FROM confluentinc/cp-kafka:5.5.1

RUN apt-get update

$ docker build .
#1 [internal] load build definition from Dockerfile
#1 sha256:8ccc0aa66dad9f12969c939a214ab578d45de46dbe3932be09737ea9257d1949
#1 transferring dockerfile: 36B 0.0s done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 sha256:feed0528ac01efeb9edf43de7b26963e2b5a8e8301b09ba7eb9af78bfa522bc4
#2 transferring context: 35B done
#2 DONE 0.0s

#3 [internal] load metadata for docker.io/confluentinc/cp-kafka:5.5.1
#3 sha256:5217d0d3af9149126fcb5e21157ae544a9fec904eea0f849956264db192e6fad
#3 DONE 0.0s

#4 [1/2] FROM docker.io/confluentinc/cp-kafka:5.5.1
#4 sha256:4f3c35e152f64d739970862f721289246a2919bf066167211e1bf4ff483f7c8a
#4 CACHED

#5 [2/2] RUN apt-get update
#5 sha256:bbd772aed03dc13c6ef2b7a34644720555f40ecc0818b7e14311ca63c3981d17
#5 3.438 Get:1 http://security.debian.org jessie/updates InRelease [44.9 kB]
#5 3.456 Ign http://deb.debian.org jessie InRelease
#5 4.372 Get:2 http://deb.debian.org jessie-updates InRelease [16.3 kB]
#5 4.372 Get:3 https://s3-us-west-2.amazonaws.com stable InRelease
#5 4.372 Ign https://s3-us-west-2.amazonaws.com stable InRelease
#5 4.372 Ign http://repos.azulsystems.com stable InRelease
#5 4.411 Get:4 http://deb.debian.org jessie Release.gpg [1652 B]
#5 4.412 Get:5 http://repos.azulsystems.com stable Release.gpg [833 B]
#5 4.443 Get:6 http://deb.debian.org jessie Release [77.3 kB]
#5 4.475 Get:7 https://s3-us-west-2.amazonaws.com stable Release.gpg
#5 4.479 Ign https://s3-us-west-2.amazonaws.com stable Release.gpg
#5 4.490 Get:8 http://repos.azulsystems.com stable Release [8606 B]
#5 4.575 Get:9 https://s3-us-west-2.amazonaws.com stable Release
#5 4.576 Ign https://s3-us-west-2.amazonaws.com stable Release
#5 4.672 Get:10 https://s3-us-west-2.amazonaws.com stable/main amd64 Packages
#5 4.673 Err https://s3-us-west-2.amazonaws.com stable/main amd64 Packages
#5 4.673   
#5 4.770 Get:11 https://s3-us-west-2.amazonaws.com stable/main amd64 Packages
#5 4.771 Err https://s3-us-west-2.amazonaws.com stable/main amd64 Packages
#5 4.771   
#5 4.869 Get:12 https://s3-us-west-2.amazonaws.com stable/main amd64 Packages
#5 4.870 Err https://s3-us-west-2.amazonaws.com stable/main amd64 Packages
#5 4.870   
#5 4.968 Get:13 https://s3-us-west-2.amazonaws.com stable/main amd64 Packages
#5 4.969 Err https://s3-us-west-2.amazonaws.com stable/main amd64 Packages
#5 4.969   
#5 4.969 Get:14 http://security.debian.org jessie/updates/main amd64 Packages [992 kB]
#5 5.066 Get:15 https://s3-us-west-2.amazonaws.com stable/main amd64 Packages
#5 5.066 Err https://s3-us-west-2.amazonaws.com stable/main amd64 Packages
#5 5.066   HttpError301
#5 6.081 Get:16 http://repos.azulsystems.com stable/main amd64 Packages [25.7 kB]
#5 6.246 Get:17 http://deb.debian.org jessie-updates/main amd64 Packages [20 B]
#5 6.276 Get:18 http://deb.debian.org jessie/main amd64 Packages [9098 kB]
#5 7.291 Fetched 10.3 MB in 3s (2611 kB/s)
#5 7.291 W: Failed to fetch https://s3-us-west-2.amazonaws.com/staging-confluent-packages-5.5.1/deb/5.5/dists/stable/main/binary-amd64/Packages  HttpError301
#5 7.291 
#5 7.291 E: Some index files failed to download. They have been ignored, or old ones used instead.
#5 ERROR: executor failed running [/bin/sh -c apt-get update]: exit code: 100
------
 > [2/2] RUN apt-get update:
------
executor failed running [/bin/sh -c apt-get update]: exit code: 100

possible solutions:

  • upgrade to a newer version of cp-kafka. Though ideal, this can be tricky for multiple reasons (bureaucracy, pinned production Kafka versions, etc.).

  • another option is to comment out the repo before any apt calls (since it is most likely unused):

    RUN sed -r -i 's/^(.*amazonaws)/\#\1/' /etc/apt/sources.list
    

    the problem with this is that it requires everyone who bases an image off the Kafka image to patch the broken repo themselves:
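A minimal sketch of what each downstream image would have to repeat (the base tag is illustrative; the sed line is the workaround quoted above):

```dockerfile
FROM confluentinc/cp-kafka:5.5.1

# Workaround for the defunct S3 apt source: comment it out before any
# apt operation, otherwise apt-get update fails with exit code 100.
RUN sed -r -i 's/^(.*amazonaws)/#\1/' /etc/apt/sources.list \
    && apt-get update
```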

Is there a way the endpoint can be fixed?

Improve security by providing updated images more regularly

cp-kafka:6.2.0 is only 12 days old but already shows many missing system updates.
It would be good to publish rebuilt images under the same tag more regularly, just to pick up dependency updates, without waiting for the next patch release with functional changes.

Floating tags for major and minor releases would make it convenient for users to follow the latest stable versions; they would map to Confluent Platform versions like 6.1.x and 6.2.x:

  • 6 = 6.2 = 6.2.0
  • 6.1 = 6.1.2
  • 6.0 = 6.0.3
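The tag scheme above could be maintained by a small publishing step. A hypothetical sketch (the floating_tags helper is invented; the actual pipeline would retag and push after each patch release):

```shell
# Hypothetical helper: derive the floating tags for a given release version.
floating_tags() {
  version=$1
  echo "${version%%.*}"  # major tag, e.g. 6
  echo "${version%.*}"   # major.minor tag, e.g. 6.2
}

# A publish step could then retag and push, e.g.:
#   for t in $(floating_tags 6.2.0); do
#     docker tag confluentinc/cp-kafka:6.2.0 "confluentinc/cp-kafka:$t"
#     docker push "confluentinc/cp-kafka:$t"
#   done
```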

Suggestions to improve Dockerfile: https://github.com/goodwithtech/dockle

$ dockle confluentinc/cp-kafka:6.2.0
FATAL	- CIS-DI-0009: Use COPY instead of ADD in Dockerfile
	* Use COPY : /bin/sh -c #(nop) ADD --chown=appuser:appusermulti:29db10218faffff4a0743a284dd051fbaff46ddc205e96a76b7da6942fc3870c in /usr/share/java/cp-base-new/
	* Use COPY : /bin/sh -c #(nop) ADD --chown=appuser:appuserdir:cd0454fa5f2975d97f5409e30db2e97d97ec47aac0a1c45c6aa82a70ea296ab5 in /usr/share/doc/cp-base-new/
INFO	- CIS-DI-0005: Enable Content trust for Docker
	* export DOCKER_CONTENT_TRUST=1 before docker pull/build
INFO	- CIS-DI-0006: Add HEALTHCHECK instruction to the container image
	* not found HEALTHCHECK statement
INFO	- CIS-DI-0008: Confirm safety of setuid/setgid files
	* setuid file: urwxr-xr-x usr/sbin/pam_timestamp_check
	* setuid file: urwxr-xr-x usr/sbin/unix_chkpwd
	* setuid file: urwxr-xr-x usr/bin/ksu
	* setuid file: urwxr-xr-x usr/bin/mount
	* setuid file: urwxr-xr-x usr/bin/gpasswd
	* setgid file: grwx--x--x usr/libexec/utempter/utempter
	* setuid file: urwxr-x--- usr/libexec/dbus-1/dbus-daemon-launch-helper
	* setgid file: grwxr-xr-x usr/bin/write
	* setuid file: urwxr-xr-x usr/bin/newgrp
	* setuid file: urwxr-xr-x usr/bin/chage
	* setuid file: urwxr-xr-x usr/bin/su
	* setuid file: urwxr-xr-x usr/bin/umount
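The CIS-DI-0006 finding above could at least be addressed in downstream images until the official image ships one. A hypothetical sketch, assuming the broker listener is on localhost:9092 and the kafka-broker-api-versions CLI present in the image is an acceptable probe:

```dockerfile
FROM confluentinc/cp-kafka:6.2.0

# Hypothetical healthcheck: the broker is healthy if it answers an
# ApiVersions request on its listener (port is an assumption).
HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
  CMD kafka-broker-api-versions --bootstrap-server localhost:9092 || exit 1
```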

You're using Zulu OpenJDK 11; if possible, consider upgrading to the latest stable Jetty 11.0.5 or 10.0.5.
Security findings: https://github.com/aquasecurity/trivy

$ trivy image confluentinc/cp-kafka:6.2.0
2021-06-20T15:26:24.758+0200	INFO	Need to update DB
2021-06-20T15:26:24.759+0200	INFO	Downloading DB...
2021-06-20T15:27:03.864+0200	INFO	Detected OS: redhat
2021-06-20T15:27:03.864+0200	INFO	Detecting RHEL/CentOS vulnerabilities...
2021-06-20T15:27:03.867+0200	INFO	Number of PL dependency files: 140
2021-06-20T15:27:03.867+0200	INFO	Detecting jar vulnerabilities...

confluentinc/cp-kafka:6.2.0 (redhat 8.4)
========================================
Total: 119 (UNKNOWN: 0, LOW: 60, MEDIUM: 54, HIGH: 2, CRITICAL: 3)

+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
|        LIBRARY         | VULNERABILITY ID | SEVERITY |          INSTALLED VERSION           |  FIXED VERSION  |                  TITLE                  |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| bzip2-libs             | CVE-2019-12900   | LOW      | 1.0.6-26.el8                         |                 | bzip2: out-of-bounds write              |
|                        |                  |          |                                      |                 | in function BZ2_decompress              |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-12900   |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| coreutils-single       | CVE-2017-18018   | MEDIUM   | 8.30-8.el8                           |                 | coreutils: race condition               |
|                        |                  |          |                                      |                 | vulnerability in chown and chgrp        |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2017-18018   |
+------------------------+------------------+          +--------------------------------------+-----------------+-----------------------------------------+
| curl                   | CVE-2021-22876   |          | 7.61.1-18.el8                        |                 | curl: Leak of authentication            |
|                        |                  |          |                                      |                 | credentials in URL                      |
|                        |                  |          |                                      |                 | via automatic Referer                   |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-22876   |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-22898   | LOW      |                                      |                 | curl: TELNET stack                      |
|                        |                  |          |                                      |                 | contents disclosure                     |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-22898   |
+------------------------+------------------+          +--------------------------------------+-----------------+-----------------------------------------+
| dbus                   | CVE-2020-35512   |          | 1:1.12.8-12.el8_4.2                  |                 | dbus: users with the same numeric UID   |
|                        |                  |          |                                      |                 | could lead to use-after-free and...     |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2020-35512   |
+------------------------+                  +          +                                      +-----------------+                                         +
| dbus-common            |                  |          |                                      |                 |                                         |
|                        |                  |          |                                      |                 |                                         |
|                        |                  |          |                                      |                 |                                         |
+------------------------+                  +          +                                      +-----------------+                                         +
| dbus-daemon            |                  |          |                                      |                 |                                         |
|                        |                  |          |                                      |                 |                                         |
|                        |                  |          |                                      |                 |                                         |
+------------------------+                  +          +                                      +-----------------+                                         +
| dbus-libs              |                  |          |                                      |                 |                                         |
|                        |                  |          |                                      |                 |                                         |
|                        |                  |          |                                      |                 |                                         |
+------------------------+                  +          +                                      +-----------------+                                         +
| dbus-tools             |                  |          |                                      |                 |                                         |
|                        |                  |          |                                      |                 |                                         |
|                        |                  |          |                                      |                 |                                         |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| file-libs              | CVE-2019-18218   | MEDIUM   | 5.33-16.el8_3.1                      |                 | file: heap-based buffer overflow        |
|                        |                  |          |                                      |                 | in cdf_read_property_info in cdf.c      |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-18218   |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2019-8905    | LOW      |                                      |                 | file: stack-based buffer over-read      |
|                        |                  |          |                                      |                 | in do_core_note in readelf.c            |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-8905    |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2019-8906    |          |                                      |                 | file: out-of-bounds read in             |
|                        |                  |          |                                      |                 | do_core_note in readelf.c               |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-8906    |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| glib2                  | CVE-2021-27219   | HIGH     | 2.56.4-9.el8                         | 2.56.4-10.el8_4 | glib: integer overflow in               |
|                        |                  |          |                                      |                 | g_bytes_new function on                 |
|                        |                  |          |                                      |                 | 64-bit platforms due to an...           |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-27219   |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-27218   | MEDIUM   |                                      |                 | glib: integer overflow in               |
|                        |                  |          |                                      |                 | g_byte_array_new_take function          |
|                        |                  |          |                                      |                 | when called with a buffer of...         |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-27218   |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2018-16428   | LOW      |                                      |                 | glib2: NULL pointer dereference in      |
|                        |                  |          |                                      |                 | g_markup_parse_context_end_parse()      |
|                        |                  |          |                                      |                 | function in gmarkup.c                   |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2018-16428   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2018-16429   |          |                                      |                 | glib2: Out-of-bounds read in            |
|                        |                  |          |                                      |                 | g_markup_parse_context_parse()          |
|                        |                  |          |                                      |                 | in gmarkup.c                            |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2018-16429   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-28153   |          |                                      |                 | glib: g_file_replace() with             |
|                        |                  |          |                                      |                 | G_FILE_CREATE_REPLACE_DESTINATION       |
|                        |                  |          |                                      |                 | creates empty target                    |
|                        |                  |          |                                      |                 | for dangling symlink                    |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-28153   |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| glibc                  | CVE-2019-1010022 | CRITICAL | 2.28-151.el8                         |                 | glibc: stack guard protection bypass    |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-1010022 |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-27645   | LOW      |                                      |                 | glibc: Use-after-free in                |
|                        |                  |          |                                      |                 | addgetnetgrentX function                |
|                        |                  |          |                                      |                 | in netgroupcache.c                      |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-27645   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-33574   |          |                                      |                 | glibc: mq_notify does                   |
|                        |                  |          |                                      |                 | not handle separately                   |
|                        |                  |          |                                      |                 | allocated thread attributes             |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-33574   |
+------------------------+------------------+----------+                                      +-----------------+-----------------------------------------+
| glibc-common           | CVE-2019-1010022 | CRITICAL |                                      |                 | glibc: stack guard protection bypass    |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-1010022 |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-27645   | LOW      |                                      |                 | glibc: Use-after-free in                |
|                        |                  |          |                                      |                 | addgetnetgrentX function                |
|                        |                  |          |                                      |                 | in netgroupcache.c                      |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-27645   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-33574   |          |                                      |                 | glibc: mq_notify does                   |
|                        |                  |          |                                      |                 | not handle separately                   |
|                        |                  |          |                                      |                 | allocated thread attributes             |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-33574   |
+------------------------+------------------+----------+                                      +-----------------+-----------------------------------------+
| glibc-minimal-langpack | CVE-2019-1010022 | CRITICAL |                                      |                 | glibc: stack guard protection bypass    |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-1010022 |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-27645   | LOW      |                                      |                 | glibc: Use-after-free in                |
|                        |                  |          |                                      |                 | addgetnetgrentX function                |
|                        |                  |          |                                      |                 | in netgroupcache.c                      |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-27645   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-33574   |          |                                      |                 | glibc: mq_notify does                   |
|                        |                  |          |                                      |                 | not handle separately                   |
|                        |                  |          |                                      |                 | allocated thread attributes             |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-33574   |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| gnutls                 | CVE-2021-20231   | MEDIUM   | 3.6.14-8.el8_3                       |                 | gnutls: Use after free in               |
|                        |                  |          |                                      |                 | client key_share extension              |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-20231   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-20232   |          |                                      |                 | gnutls: Use after free                  |
|                        |                  |          |                                      |                 | in client_send_params in                |
|                        |                  |          |                                      |                 | lib/ext/pre_shared_key.c                |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-20232   |
+------------------------+------------------+          +--------------------------------------+-----------------+-----------------------------------------+
| json-c                 | CVE-2020-12762   |          | 0.13.1-0.4.el8                       |                 | json-c: integer overflow                |
|                        |                  |          |                                      |                 | and out-of-bounds write                 |
|                        |                  |          |                                      |                 | via a large JSON file                   |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2020-12762   |
+------------------------+------------------+          +--------------------------------------+-----------------+-----------------------------------------+
| libarchive             | CVE-2020-21674   |          | 3.3.3-1.el8                          |                 | libarchive: heap-based                  |
|                        |                  |          |                                      |                 | buffer overflow in                      |
|                        |                  |          |                                      |                 | archive_string_append_from_wcs          |
|                        |                  |          |                                      |                 | function in archive_string.c            |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2020-21674   |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2017-14166   | LOW      |                                      |                 | libarchive: Heap-based buffer           |
|                        |                  |          |                                      |                 | over-read in the atol8 function         |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2017-14166   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2017-14501   |          |                                      |                 | libarchive: Out-of-bounds               |
|                        |                  |          |                                      |                 | read in parse_file_info                 |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2017-14501   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2018-1000879 |          |                                      |                 | libarchive: NULL pointer dereference in |
|                        |                  |          |                                      |                 | ACL parser resulting in a denial of...  |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2018-1000879 |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2018-1000880 |          |                                      |                 | libarchive: Improper input              |
|                        |                  |          |                                      |                 | validation in WARC parser               |
|                        |                  |          |                                      |                 | resulting in a denial of...             |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2018-1000880 |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| libcurl                | CVE-2021-22876   | MEDIUM   | 7.61.1-18.el8                        |                 | curl: Leak of authentication            |
|                        |                  |          |                                      |                 | credentials in URL                      |
|                        |                  |          |                                      |                 | via automatic Referer                   |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-22876   |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-22898   | LOW      |                                      |                 | curl: TELNET stack                      |
|                        |                  |          |                                      |                 | contents disclosure                     |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-22898   |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| libdnf                 | CVE-2021-3445    | MEDIUM   | 0.55.0-7.el8                         |                 | libdnf: libdnf does its                 |
|                        |                  |          |                                      |                 | own signature verification,             |
|                        |                  |          |                                      |                 | but this can be tricked...              |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-3445    |
+------------------------+------------------+          +--------------------------------------+-----------------+-----------------------------------------+
| libgcc                 | CVE-2018-20673   |          | 8.4.1-1.el8                          |                 | libiberty: Integer overflow in          |
|                        |                  |          |                                      |                 | demangle_template() function            |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2018-20673   |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2018-20657   | LOW      |                                      |                 | libiberty: Memory leak in               |
|                        |                  |          |                                      |                 | demangle_template function              |
|                        |                  |          |                                      |                 | resulting in a denial of service...     |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2018-20657   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2019-14250   |          |                                      |                 | binutils: integer overflow in           |
|                        |                  |          |                                      |                 | simple-object-elf.c leads to            |
|                        |                  |          |                                      |                 | a heap-based buffer overflow            |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-14250   |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| libgcrypt              | CVE-2019-12904   | MEDIUM   | 1.8.5-4.el8                          |                 | Libgcrypt: physical addresses           |
|                        |                  |          |                                      |                 | being available to other processes      |
|                        |                  |          |                                      |                 | leads to a flush-and-reload...          |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-12904   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-33560   |          |                                      |                 | libgcrypt: mishandles ElGamal           |
|                        |                  |          |                                      |                 | encryption because it lacks             |
|                        |                  |          |                                      |                 | exponent blinding to address a...       |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-33560   |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| libsolv                | CVE-2021-3200    | LOW      | 0.7.16-2.el8                         |                 | libsolv: heap-based buffer overflow     |
|                        |                  |          |                                      |                 | in testcase_read() in src/testcase.c    |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-3200    |
+------------------------+------------------+          +--------------------------------------+-----------------+-----------------------------------------+
| libssh                 | CVE-2020-16135   |          | 0.9.4-2.el8                          |                 | libssh: NULL pointer                    |
|                        |                  |          |                                      |                 | dereference in sftpserver.c             |
|                        |                  |          |                                      |                 | if ssh_buffer_new returns NULL          |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2020-16135   |
+------------------------+                  +          +                                      +-----------------+                                         +
| libssh-config          |                  |          |                                      |                 |                                         |
|                        |                  |          |                                      |                 |                                         |
|                        |                  |          |                                      |                 |                                         |
|                        |                  |          |                                      |                 |                                         |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| libstdc++              | CVE-2018-20673   | MEDIUM   | 8.4.1-1.el8                          |                 | libiberty: Integer overflow in          |
|                        |                  |          |                                      |                 | demangle_template() function            |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2018-20673   |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2018-20657   | LOW      |                                      |                 | libiberty: Memory leak in               |
|                        |                  |          |                                      |                 | demangle_template function              |
|                        |                  |          |                                      |                 | resulting in a denial of service...     |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2018-20657   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2019-14250   |          |                                      |                 | binutils: integer overflow in           |
|                        |                  |          |                                      |                 | simple-object-elf.c leads to            |
|                        |                  |          |                                      |                 | a heap-based buffer overflow            |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-14250   |
+------------------------+------------------+          +--------------------------------------+-----------------+-----------------------------------------+
| libtasn1               | CVE-2018-1000654 |          | 4.13-3.el8                           |                 | libtasn1: Infinite loop in              |
|                        |                  |          |                                      |                 | _asn1_expand_object_id(ptree)           |
|                        |                  |          |                                      |                 | leads to memory exhaustion              |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2018-1000654 |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| libxml2                | CVE-2021-3516    | MEDIUM   | 2.9.7-9.el8                          |                 | libxml2: Use-after-free in              |
|                        |                  |          |                                      |                 | xmlEncodeEntitiesInternal()             |
|                        |                  |          |                                      |                 | in entities.c                           |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-3516    |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-3517    |          |                                      |                 | libxml2: Heap-based buffer overflow     |
|                        |                  |          |                                      |                 | in xmlEncodeEntitiesInternal()          |
|                        |                  |          |                                      |                 | in entities.c                           |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-3517    |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-3518    |          |                                      |                 | libxml2: Use-after-free in              |
|                        |                  |          |                                      |                 | xmlXIncludeDoProcess() in xinclude.c    |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-3518    |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-3537    |          |                                      |                 | libxml2: NULL pointer dereference       |
|                        |                  |          |                                      |                 | when post-validating mixed              |
|                        |                  |          |                                      |                 | content parsed in recovery mode...      |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-3537    |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-3541    |          |                                      |                 | libxml2: Exponential entity             |
|                        |                  |          |                                      |                 | expansion attack bypasses all           |
|                        |                  |          |                                      |                 | existing protection mechanisms          |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-3541    |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| libzstd                | CVE-2021-24032   | LOW      | 1.4.4-1.el8                          |                 | zstd: Race condition                    |
|                        |                  |          |                                      |                 | allows attacker to access               |
|                        |                  |          |                                      |                 | world-readable destination file         |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-24032   |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| lua-libs               | CVE-2020-15945   | MEDIUM   | 5.3.4-11.el8                         |                 | lua: segmentation fault                 |
|                        |                  |          |                                      |                 | in changedline in ldebug.c              |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2020-15945   |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2020-24370   | LOW      |                                      |                 | lua: segmentation fault in getlocal     |
|                        |                  |          |                                      |                 | and setlocal functions in ldebug.c      |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2020-24370   |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| lz4-libs               | CVE-2019-17543   | MEDIUM   | 1.8.3-2.el8                          |                 | lz4: heap-based buffer                  |
|                        |                  |          |                                      |                 | overflow in LZ4_write32                 |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-17543   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-3520    |          |                                      |                 | lz4: memory corruption                  |
|                        |                  |          |                                      |                 | due to an integer overflow              |
|                        |                  |          |                                      |                 | bug caused by memmove...                |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-3520    |
+------------------------+------------------+          +--------------------------------------+-----------------+-----------------------------------------+
| ncurses-base           | CVE-2019-17594   |          | 6.1-7.20180224.el8                   |                 | ncurses: heap-based buffer              |
|                        |                  |          |                                      |                 | overflow in the _nc_find_entry          |
|                        |                  |          |                                      |                 | function in tinfo/comp_hash.c           |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-17594   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2019-17595   |          |                                      |                 | ncurses: heap-based buffer              |
|                        |                  |          |                                      |                 | overflow in the fmt_entry               |
|                        |                  |          |                                      |                 | function in tinfo/comp_hash.c           |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-17595   |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2018-19211   | LOW      |                                      |                 | ncurses: Null pointer                   |
|                        |                  |          |                                      |                 | dereference at function                 |
|                        |                  |          |                                      |                 | _nc_parse_entry in parse_entry.c        |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2018-19211   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2018-19217   |          |                                      |                 | ncurses: Null pointer dereference       |
|                        |                  |          |                                      |                 | at function _nc_name_match              |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2018-19217   |
+------------------------+------------------+----------+                                      +-----------------+-----------------------------------------+
| ncurses-libs           | CVE-2019-17594   | MEDIUM   |                                      |                 | ncurses: heap-based buffer              |
|                        |                  |          |                                      |                 | overflow in the _nc_find_entry          |
|                        |                  |          |                                      |                 | function in tinfo/comp_hash.c           |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-17594   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2019-17595   |          |                                      |                 | ncurses: heap-based buffer              |
|                        |                  |          |                                      |                 | overflow in the fmt_entry               |
|                        |                  |          |                                      |                 | function in tinfo/comp_hash.c           |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-17595   |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2018-19211   | LOW      |                                      |                 | ncurses: Null pointer                   |
|                        |                  |          |                                      |                 | dereference at function                 |
|                        |                  |          |                                      |                 | _nc_parse_entry in parse_entry.c        |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2018-19211   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2018-19217   |          |                                      |                 | ncurses: Null pointer dereference       |
|                        |                  |          |                                      |                 | at function _nc_name_match              |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2018-19217   |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| nettle                 | CVE-2021-3580    | MEDIUM   | 3.4.1-4.el8_3                        |                 | nettle: Remote crash                    |
|                        |                  |          |                                      |                 | in RSA decryption via                   |
|                        |                  |          |                                      |                 | manipulated ciphertext                  |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-3580    |
+------------------------+------------------+          +--------------------------------------+-----------------+-----------------------------------------+
| openssl                | CVE-2021-23840   |          | 1:1.1.1g-15.el8_3                    |                 | openssl: integer                        |
|                        |                  |          |                                      |                 | overflow in CipherUpdate                |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-23840   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-23841   |          |                                      |                 | openssl: NULL pointer dereference       |
|                        |                  |          |                                      |                 | in X509_issuer_and_serial_hash()        |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-23841   |
+------------------------+------------------+          +                                      +-----------------+-----------------------------------------+
| openssl-libs           | CVE-2021-23840   |          |                                      |                 | openssl: integer                        |
|                        |                  |          |                                      |                 | overflow in CipherUpdate                |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-23840   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-23841   |          |                                      |                 | openssl: NULL pointer dereference       |
|                        |                  |          |                                      |                 | in X509_issuer_and_serial_hash()        |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-23841   |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| pcre                   | CVE-2019-20838   | LOW      | 8.42-4.el8                           |                 | pcre: buffer over-read in               |
|                        |                  |          |                                      |                 | JIT when UTF is disabled                |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-20838   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2020-14155   |          |                                      |                 | pcre: integer overflow in libpcre       |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2020-14155   |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| platform-python        | CVE-2021-3426    | MEDIUM   | 3.6.8-37.el8                         |                 | python: information                     |
|                        |                  |          |                                      |                 | disclosure via pydoc                    |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-3426    |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2019-9674    | LOW      |                                      |                 | python: Nested zip file (Zip bomb)      |
|                        |                  |          |                                      |                 | vulnerability in Lib/zipfile.py         |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-9674    |
+------------------------+------------------+          +--------------------------------------+-----------------+-----------------------------------------+
| platform-python-pip    | CVE-2018-20225   |          | 9.0.3-19.el8                         |                 | python-pip: when --extra-index-url      |
|                        |                  |          |                                      |                 | option is used and package              |
|                        |                  |          |                                      |                 | does not already exist...               |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2018-20225   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-3572    |          |                                      |                 | python-pip: pip incorrectly handled     |
|                        |                  |          |                                      |                 | unicode separators in git references    |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-3572    |
+------------------------+------------------+          +--------------------------------------+-----------------+-----------------------------------------+
| procps-ng              | CVE-2018-1121    |          | 3.3.15-6.el8                         |                 | procps-ng, procps: process              |
|                        |                  |          |                                      |                 | hiding through race                     |
|                        |                  |          |                                      |                 | condition enumerating /proc             |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2018-1121    |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| python3-hawkey         | CVE-2021-3445    | MEDIUM   | 0.55.0-7.el8                         |                 | libdnf: libdnf does its                 |
|                        |                  |          |                                      |                 | own signature verification,             |
|                        |                  |          |                                      |                 | but this can be tricked...              |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-3445    |
+------------------------+                  +          +                                      +-----------------+                                         +
| python3-libdnf         |                  |          |                                      |                 |                                         |
|                        |                  |          |                                      |                 |                                         |
|                        |                  |          |                                      |                 |                                         |
|                        |                  |          |                                      |                 |                                         |
+------------------------+------------------+          +--------------------------------------+-----------------+-----------------------------------------+
| python3-libs           | CVE-2021-3426    |          | 3.6.8-37.el8                         |                 | python: information                     |
|                        |                  |          |                                      |                 | disclosure via pydoc                    |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-3426    |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2019-9674    | LOW      |                                      |                 | python: Nested zip file (Zip bomb)      |
|                        |                  |          |                                      |                 | vulnerability in Lib/zipfile.py         |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-9674    |
+------------------------+------------------+          +--------------------------------------+-----------------+-----------------------------------------+
| python3-pip            | CVE-2018-20225   |          | 9.0.3-19.el8                         |                 | python-pip: when --extra-index-url      |
|                        |                  |          |                                      |                 | option is used and package              |
|                        |                  |          |                                      |                 | does not already exist...               |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2018-20225   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-3572    |          |                                      |                 | python-pip: pip incorrectly handled     |
|                        |                  |          |                                      |                 | unicode separators in git references    |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-3572    |
+------------------------+------------------+          +                                      +-----------------+-----------------------------------------+
| python3-pip-wheel      | CVE-2018-20225   |          |                                      |                 | python-pip: when --extra-index-url      |
|                        |                  |          |                                      |                 | option is used and package              |
|                        |                  |          |                                      |                 | does not already exist...               |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2018-20225   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-3572    |          |                                      |                 | python-pip: pip incorrectly handled     |
|                        |                  |          |                                      |                 | unicode separators in git references    |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-3572    |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| python3-rpm            | CVE-2021-20271   | MEDIUM   | 4.14.3-13.el8                        |                 | rpm: Signature checks bypass            |
|                        |                  |          |                                      |                 | via corrupted rpm package               |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-20271   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-3421    |          |                                      |                 | rpm: unsigned signature header          |
|                        |                  |          |                                      |                 | leads to string injection               |
|                        |                  |          |                                      |                 | into an rpm database...                 |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-3421    |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-20266   | LOW      |                                      |                 | rpm: missing length                     |
|                        |                  |          |                                      |                 | checks in hdrblobInit()                 |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-20266   |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| python3-unbound        | CVE-2019-25033   | MEDIUM   | 1.7.3-15.el8                         |                 | unbound: integer overflow               |
|                        |                  |          |                                      |                 | in the regional allocator               |
|                        |                  |          |                                      |                 | via the ALIGN_UP macro                  |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-25033   |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2019-16866   | LOW      |                                      |                 | unbound: uninitialized memory           |
|                        |                  |          |                                      |                 | accesses leads to crash via             |
|                        |                  |          |                                      |                 | a crafted NOTIFY query...               |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-16866   |
+------------------------+------------------+          +--------------------------------------+-----------------+-----------------------------------------+
| python36               | CVE-2018-20406   |          | 3.6.8-2.module+el8.1.0+3334+5cb623d7 |                 | python: Integer overflow                |
|                        |                  |          |                                      |                 | in Modules/_pickle.c allows             |
|                        |                  |          |                                      |                 | for memory exhaustion if                |
|                        |                  |          |                                      |                 | serializing gigabytes...                |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2018-20406   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2019-9674    |          |                                      |                 | python: Nested zip file (Zip bomb)      |
|                        |                  |          |                                      |                 | vulnerability in Lib/zipfile.py         |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-9674    |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| rpm                    | CVE-2021-20271   | MEDIUM   | 4.14.3-13.el8                        |                 | rpm: Signature checks bypass            |
|                        |                  |          |                                      |                 | via corrupted rpm package               |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-20271   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-3421    |          |                                      |                 | rpm: unsigned signature header          |
|                        |                  |          |                                      |                 | leads to string injection               |
|                        |                  |          |                                      |                 | into an rpm database...                 |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-3421    |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-20266   | LOW      |                                      |                 | rpm: missing length                     |
|                        |                  |          |                                      |                 | checks in hdrblobInit()                 |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-20266   |
+------------------------+------------------+----------+                                      +-----------------+-----------------------------------------+
| rpm-build-libs         | CVE-2021-20271   | MEDIUM   |                                      |                 | rpm: Signature checks bypass            |
|                        |                  |          |                                      |                 | via corrupted rpm package               |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-20271   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-3421    |          |                                      |                 | rpm: unsigned signature header          |
|                        |                  |          |                                      |                 | leads to string injection               |
|                        |                  |          |                                      |                 | into an rpm database...                 |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-3421    |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-20266   | LOW      |                                      |                 | rpm: missing length                     |
|                        |                  |          |                                      |                 | checks in hdrblobInit()                 |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-20266   |
+------------------------+------------------+----------+                                      +-----------------+-----------------------------------------+
| rpm-libs               | CVE-2021-20271   | MEDIUM   |                                      |                 | rpm: Signature checks bypass            |
|                        |                  |          |                                      |                 | via corrupted rpm package               |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-20271   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-3421    |          |                                      |                 | rpm: unsigned signature header          |
|                        |                  |          |                                      |                 | leads to string injection               |
|                        |                  |          |                                      |                 | into an rpm database...                 |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-3421    |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2021-20266   | LOW      |                                      |                 | rpm: missing length                     |
|                        |                  |          |                                      |                 | checks in hdrblobInit()                 |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-20266   |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| sqlite-libs            | CVE-2019-5827    | HIGH     | 3.26.0-13.el8                        |                 | chromium-browser:                       |
|                        |                  |          |                                      |                 | out-of-bounds access in SQLite          |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-5827    |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2019-13750   | MEDIUM   |                                      |                 | sqlite: dropping of shadow tables       |
|                        |                  |          |                                      |                 | not restricted in defensive mode        |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-13750   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2019-13751   |          |                                      |                 | sqlite: fts3: improve                   |
|                        |                  |          |                                      |                 | detection of corrupted records          |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-13751   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2019-19603   |          |                                      |                 | sqlite: mishandles certain SELECT       |
|                        |                  |          |                                      |                 | statements with a nonexistent           |
|                        |                  |          |                                      |                 | VIEW, leading to DoS...                 |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-19603   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2020-13435   |          |                                      |                 | sqlite: NULL pointer dereference        |
|                        |                  |          |                                      |                 | leads to segmentation fault in          |
|                        |                  |          |                                      |                 | sqlite3ExprCodeTarget in expr.c...      |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2020-13435   |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2019-19244   | LOW      |                                      |                 | sqlite: allows a crash                  |
|                        |                  |          |                                      |                 | if a sub-select uses both               |
|                        |                  |          |                                      |                 | DISTINCT and window...                  |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-19244   |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2019-9936    |          |                                      |                 | sqlite: heap-based buffer               |
|                        |                  |          |                                      |                 | over-read in function                   |
|                        |                  |          |                                      |                 | fts5HashEntrySort in sqlite3.c          |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-9936    |
+                        +------------------+          +                                      +-----------------+-----------------------------------------+
|                        | CVE-2019-9937    |          |                                      |                 | sqlite: null-pointer                    |
|                        |                  |          |                                      |                 | dereference in function                 |
|                        |                  |          |                                      |                 | fts5ChunkIterate in sqlite3.c           |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-9937    |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| systemd                | CVE-2018-20839   | MEDIUM   | 239-45.el8                           |                 | systemd: mishandling of the             |
|                        |                  |          |                                      |                 | current keyboard mode check             |
|                        |                  |          |                                      |                 | leading to passwords being...           |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2018-20839   |
+------------------------+                  +          +                                      +-----------------+                                         +
| systemd-libs           |                  |          |                                      |                 |                                         |
|                        |                  |          |                                      |                 |                                         |
|                        |                  |          |                                      |                 |                                         |
|                        |                  |          |                                      |                 |                                         |
+------------------------+                  +          +                                      +-----------------+                                         +
| systemd-pam            |                  |          |                                      |                 |                                         |
|                        |                  |          |                                      |                 |                                         |
|                        |                  |          |                                      |                 |                                         |
|                        |                  |          |                                      |                 |                                         |
+------------------------+------------------+          +--------------------------------------+-----------------+-----------------------------------------+
| tar                    | CVE-2021-20193   |          | 2:1.30-5.el8                         |                 | tar: Memory leak in                     |
|                        |                  |          |                                      |                 | read_header() in list.c                 |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-20193   |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2019-9923    | LOW      |                                      |                 | tar: null-pointer dereference           |
|                        |                  |          |                                      |                 | in pax_decode_header in sparse.c        |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-9923    |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| unbound-libs           | CVE-2019-25033   | MEDIUM   | 1.7.3-15.el8                         |                 | unbound: integer overflow               |
|                        |                  |          |                                      |                 | in the regional allocator               |
|                        |                  |          |                                      |                 | via the ALIGN_UP macro                  |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-25033   |
+                        +------------------+----------+                                      +-----------------+-----------------------------------------+
|                        | CVE-2019-16866   | LOW      |                                      |                 | unbound: uninitialized memory           |
|                        |                  |          |                                      |                 | accesses leads to crash via             |
|                        |                  |          |                                      |                 | a crafted NOTIFY query...               |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2019-16866   |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+
| wget                   | CVE-2021-31879   | MEDIUM   | 1.19.5-10.el8                        |                 | wget: authorization header              |
|                        |                  |          |                                      |                 | disclosure on redirect                  |
|                        |                  |          |                                      |                 | -->avd.aquasec.com/nvd/cve-2021-31879   |
+------------------------+------------------+----------+--------------------------------------+-----------------+-----------------------------------------+

usr/share/java/kafka/jetty-server-9.4.40.v20210413.jar
======================================================
Total: 1 (UNKNOWN: 0, LOW: 0, MEDIUM: 1, HIGH: 0, CRITICAL: 0)

+--------------------------------+------------------+----------+-------------------+---------------+---------------------------------------+
|            LIBRARY             | VULNERABILITY ID | SEVERITY | INSTALLED VERSION | FIXED VERSION |                 TITLE                 |
+--------------------------------+------------------+----------+-------------------+---------------+---------------------------------------+
| org.eclipse.jetty:jetty-server | CVE-2019-10247   | MEDIUM   | 9.4.40.v20210413  |               | jetty: error path                     |
|                                |                  |          |                   |               | information disclosure                |
|                                |                  |          |                   |               | -->avd.aquasec.com/nvd/cve-2019-10247 |
+--------------------------------+------------------+----------+-------------------+---------------+---------------------------------------+

If you want, you can also integrate Trivy's GitHub Action into your CI workflows:
https://github.com/aquasecurity/trivy/blob/main/docs/integrations/github-actions.md
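A minimal workflow along those lines might look like the sketch below. This is only an illustration: the image reference is a placeholder, and the action's inputs may have changed, so check the linked documentation for the current syntax.

```yaml
# Hypothetical workflow sketch — verify inputs against the trivy-action docs.
name: trivy-scan
on: push
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Scan the built image with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'placeholder/cp-kafka:latest'  # replace with your registry/tag
          severity: 'HIGH,CRITICAL'                 # fail only on the findings you care about
```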

Create a topic on container creation - KAFKA_CREATE_TOPICS env

Is there any way to create a topic when the Kafka container is created via docker-compose? I'm using a Spring Boot application, and the log shows a lot of error messages until my first request, because the topic has not yet been created.

cpo-executor | 19:40:07.978 [kafka-admin-client-thread | adminclient-1] WARN o.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
cpo-executor | 19:40:08.161 [main] ERROR o.s.kafka.core.KafkaAdmin - Could not configure topics
cpo-executor | org.springframework.kafka.KafkaException: Timed out waiting to get existing topics; nested exception is java.util.concurrent.TimeoutException
cpo-executor | at org.springframework.kafka.core.KafkaAdmin.lambda$checkPartitions$4(KafkaAdmin.java:254)
cpo-executor | at java.util.HashMap.forEach(HashMap.java:1289)
cpo-executor | at org.springframework.kafka.core.KafkaAdmin.checkPartitions(KafkaAdmin.java:233)
cpo-executor | at org.springframework.kafka.core.KafkaAdmin.addTopicsIfNeeded(KafkaAdmin.java:219)
cpo-executor | at org.springframework.kafka.core.KafkaAdmin.initialize(KafkaAdmin.java:189)
cpo-executor | at org.springframework.kafka.core.KafkaAdmin.afterSingletonsInstantiated(KafkaAdmin.java:157)
cpo-executor | at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:963)
cpo-executor | at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:923)
cpo-executor | at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:588)
cpo-executor | at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:144)
cpo-executor | at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:767)
cpo-executor | at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:759)
cpo-executor | at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:426)
cpo-executor | at org.springframework.boot.SpringApplication.run(SpringApplication.java:326)
cpo-executor | at org.springframework.boot.SpringApplication.run(SpringApplication.java:1311)
cpo-executor | at org.springframework.boot.SpringApplication.run(SpringApplication.java:1300)
cpo-executor | at com.closeupinternational.cpoexecutor.CpoExecutorApplication.main(CpoExecutorApplication.java:15)
cpo-executor | at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
cpo-executor | at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
cpo-executor | at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
cpo-executor | at java.lang.reflect.Method.invoke(Method.java:498)
cpo-executor | at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49)
cpo-executor | at org.springframework.boot.loader.Launcher.launch(Launcher.java:107)
cpo-executor | at org.springframework.boot.loader.Launcher.launch(Launcher.java:58)
cpo-executor | at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88)
cpo-executor | Caused by: java.util.concurrent.TimeoutException: null
cpo-executor | at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
cpo-executor | at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:272)
cpo-executor | at org.springframework.kafka.core.KafkaAdmin.lambda$checkPartitions$4(KafkaAdmin.java:236)
cpo-executor | ... 24 common frames omitted
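The cp-kafka image does not support a KAFKA_CREATE_TOPICS environment variable (that belongs to other community images). A common workaround is a one-shot "init" service in docker-compose that waits for the broker and then creates the topic idempotently. The following is a sketch only; the service name, listener address, and topic settings are assumptions:

```yaml
# Hypothetical docker-compose fragment (not from this repo).
services:
  kafka:
    image: confluentinc/cp-kafka:latest
    # ... broker configuration ...

  init-kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - kafka
    entrypoint: ["/bin/sh", "-c"]
    command: |
      "
      # Block until the broker answers, then create the topic if it is missing.
      kafka-topics --bootstrap-server kafka:29092 --list
      kafka-topics --bootstrap-server kafka:29092 --create --if-not-exists \
        --topic my-topic --partitions 1 --replication-factor 1
      "
```

The init container exits after creating the topic, so the Spring Boot application can start against a broker that already has it.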

SSL is not configured when internal and external listeners are defined

Looking at the code:
https://github.com/confluentinc/kafka-images/blob/master/kafka/include/etc/confluent/docker/configure

if [[ $KAFKA_ADVERTISED_LISTENERS == *"SSL://"* ]]

will always be false when you define multiple named listeners like this:

docker run --name kafka --rm -it -p 2181:2181 -p 9092:9092 -p 29092:29092 \
    -e KAFKA_LISTENERS=INTERNAL://:29092,EXTERNAL://:9092 \
    -e KAFKA_ADVERTISED_LISTENERS=INTERNAL://kafka:29092,EXTERNAL://localhost:9092 \
    -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT \
    -e KAFKA_INTER_BROKER_LISTENER_NAME=INTERNAL \
    confluentinc/cp-kafka:latest

as the documentation recommends:
https://www.confluent.io/blog/kafka-listeners-explained/
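With named listeners, the protocol lives in KAFKA_LISTENER_SECURITY_PROTOCOL_MAP, not in the advertised listener URIs, so a more robust check would inspect the map. A minimal sketch of that idea (hypothetical; the function name and example values are illustrative, not the script's actual code):

```shell
# Detect SSL from the security-protocol map rather than the advertised
# listeners, since listener names are arbitrary with named listeners.
has_ssl_listener() {
  # $1 : value of KAFKA_LISTENER_SECURITY_PROTOCOL_MAP,
  #      e.g. "INTERNAL:PLAINTEXT,EXTERNAL:SSL"
  [[ "$1" == *":SSL"* || "$1" == *":SASL_SSL"* ]]
}

if has_ssl_listener "INTERNAL:PLAINTEXT,EXTERNAL:SSL"; then
  echo "configure SSL stores"
fi
```

The same approach still works for the simple case (`SSL://...` listeners), because the map then contains `SSL:SSL` by default.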
