
fast-data-dev's Introduction

Lenses Box / fast-data-dev


Join the Slack Lenses.io Community!

Apache Kafka docker image for developers; with Lenses (lensesio/box) or Lenses.io's open source UI tools (lensesio/fast-data-dev). Have a full-fledged Kafka installation up and running in seconds, and top it off with a modern streaming platform (only for kafka-lenses-dev), intuitive UIs and extra goodies. Also includes Kafka Connect, Schema Registry, Lenses.io's Stream Reactor with 25+ connectors and more.

Get a free license for Lenses Box

Introduction

When you need:

  1. A Kafka distribution with Apache Kafka, Kafka Connect, Zookeeper, Confluent Schema Registry and REST Proxy
  2. Lenses.io Lenses or kafka-topics-ui, schema-registry-ui, kafka-connect-ui
  3. Lenses.io Stream Reactor, 25+ Kafka Connectors to simplify ETL processes
  4. Integration testing and examples embedded into the docker

just run:

docker run --rm --net=host lensesio/fast-data-dev

That's it. Visit http://localhost:3030 to get into the fast-data-dev environment.

fast-data-dev web UI screenshot

All the service ports are exposed and can be used from localhost, or within your IDE (e.g. IntelliJ). The Kafka broker is exposed by default at port 9092, zookeeper at port 2181, the schema registry at 8081 and connect at 8083. As an example, to access the JMX data of the broker, run:

jconsole localhost:9581

If you want to have the services remotely accessible, then you may need to pass in your machine's IP address or hostname that other machines can use to access it:

docker run --rm --net=host -e ADV_HOST=<IP> lensesio/fast-data-dev

Hit Control+C to stop and remove everything.


Mac and Windows users (docker-machine)

Create a VM with 4+GB RAM using Docker Machine:

docker-machine create --driver virtualbox --virtualbox-memory 4096 lensesio

Run docker-machine ls to verify that the Docker Machine is running correctly. The command's output should be similar to:

$ docker-machine ls
NAME        ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
lensesio     *        virtualbox   Running   tcp://192.168.99.100:2376           v17.03.1-ce

Configure your terminal to be able to use the new Docker Machine named lensesio:

eval $(docker-machine env lensesio)

And run the Kafka Development Environment. Define ports, advertise the hostname and use extra parameters:

docker run --rm -p 2181:2181 -p 3030:3030 -p 8081-8083:8081-8083 \
       -p 9581-9585:9581-9585 -p 9092:9092 -e ADV_HOST=192.168.99.100 \
       lensesio/fast-data-dev:latest

That's it. Visit http://192.168.99.100:3030 to get into the fast-data-dev environment.

Run on the Cloud

You may want to quickly run a Kafka instance in GCE or AWS and access it from your local computer. Fast-data-dev has you covered.

Start a VM in the respective cloud. You can use the OS of your choice, provided it has a docker package. CoreOS is a nice choice as you get docker out of the box.

Next you have to open the firewall, both for the machines that will access the VM and for the VM's own external IP. This is important: since the services advertise the external IP (ADV_HOST), the VM must be able to reach itself via that address!
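
If your VM runs on GCE, for example, a firewall rule along these lines could do (a sketch; adjust the rule name and source range to your setup):

gcloud compute firewall-rules create fast-data-dev \
       --allow tcp:2181,tcp:3030,tcp:8081-8083,tcp:9092,tcp:9581-9585 \
       --source-ranges=<YOUR_IP>/32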

Once the firewall is open try:

docker run -d --net=host -e ADV_HOST=[VM_EXTERNAL_IP] \
           -e RUNNING_SAMPLEDATA=1 lensesio/fast-data-dev

Alternatively, just expose the ports you need. E.g.:

docker run -d -p 2181:2181 -p 3030:3030 -p 8081-8083:8081-8083 \
           -p 9581-9585:9581-9585 -p 9092:9092 -e ADV_HOST=[VM_EXTERNAL_IP] \
           -e RUNNING_SAMPLEDATA=1 lensesio/fast-data-dev

Enjoy Kafka, Schema Registry, Connect, Lensesio UIs and Stream Reactor.

Customize execution

Fast-data-dev and kafka-lenses-dev support custom configuration and extra features via environment variables.

fast-data-dev / kafka-lenses-dev advanced configuration

| Optional Parameters | Description |
| --- | --- |
| CONNECT_HEAP=3G | Configure the maximum (-Xmx) heap size allocated to Kafka Connect. Useful when you want to start many connectors. |
| <SERVICE>_PORT=<PORT> | Custom port <PORT> for a service; 0 will disable it. <SERVICE> is one of ZK, BROKER, BROKER_SSL, REGISTRY, REST, CONNECT. |
| <SERVICE>_JMX_PORT=<PORT> | Custom JMX port <PORT> for a service; 0 will disable it. <SERVICE> is one of ZK, BROKER, BROKER_SSL, REGISTRY, REST, CONNECT. |
| USER=username | Run in combination with PASSWORD to specify the username to use for basic auth. |
| PASSWORD=password | Protect the fast-data-dev UI when running publicly. If USER is not set, the default username is kafka. |
| SAMPLEDATA=0 | Do not create topics with sample avro and json records (e.g. do not create the topics sea_vessel_position_reports, reddit_posts). |
| RUNNING_SAMPLEDATA=1 | Send a continuous (yet very low) flow of messages to the sample topics, so you can develop against live data. |
| RUNTESTS=0 | Disable the (coyote) integration tests from running when the container starts. |
| FORWARDLOGS=0 | Disable running the file source connector that brings broker logs into a Kafka topic. |
| RUN_AS_ROOT=1 | Run Kafka as the root user; useful e.g. to test the HDFS connector. |
| DISABLE_JMX=1 | Disable JMX, which is enabled by default on ports 9581-9585. You may also disable it individually per service. |
| ENABLE_SSL=1 | Generate a CA and key-certificate pairs, and enable an SSL port on the broker. |
| SSL_EXTRA_HOSTS=IP1,host2 | If SSL is enabled, extra hostnames and IP addresses to include in the broker certificate. |
| CONNECTORS=<CONNECTOR>[,<CON2>] | Explicitly set which connectors* will be enabled, e.g. hbase, elastic (Stream Reactor version). |
| DISABLE=<CONNECTOR>[,<CON2>] | Disable one or more connectors*, e.g. hbase, elastic (Stream Reactor version), elasticsearch (Confluent version). |
| BROWSECONFIGS=1 | Expose the services' configuration in the UI. Useful to see how Kafka is set up. |
| DEBUG=1 | Print the stdout and stderr of all processes to the container's stdout. Useful for debugging early container exits. |
| SUPERVISORWEB=1 | Enable the supervisor web interface on port 9001 (adjust via SUPERVISORWEB_PORT) in order to control services, run tail -f, etc. |

*Available connectors are: azure-documentdb, blockchain, bloomberg, cassandra, coap, druid, elastic, elastic5, ftp, hazelcast, hbase, influxdb, jms, kudu, mongodb, mqtt, pulsar, redis, rethink, voltdb, couchbase, dbvisitreplicate, debezium-mongodb, debezium-mysql, debezium-postgres, elasticsearch, hdfs, jdbc, s3, twitter.

To programmatically get a list, run:

docker run --rm -it lensesio/fast-data-dev \
       find /opt/lensesio/connectors -maxdepth 2 -type d -name "kafka-connect-*"

| Optional Parameters (unsupported) | Description |
| --- | --- |
| WEB_ONLY=1 | Run in combination with --net=host; docker will connect to the Kafka services running on the local host. Please use our UI docker images instead. |
| TOPIC_DELETE=0 | Configure whether topics can be deleted. By default topics can be deleted. Please use KAFKA_DELETE_TOPIC_ENABLE=false instead. |

Configure Kafka Components

You may configure any Kafka component (broker, schema registry, connect, rest proxy) by converting the configuration option to uppercase, replacing dots with underscores, and prepending <SERVICE>_.

As example:

  • To set the log.retention.bytes for the broker, you would set the environment variable KAFKA_LOG_RETENTION_BYTES=1073741824.
  • To set the kafkastore.topic for the schema registry, you would set SCHEMA_REGISTRY_KAFKASTORE_TOPIC=_schemas.
  • To set the plugin.path for the connect worker, you would set CONNECT_PLUGIN_PATH=/var/run/connect/connectors/stream-reactor,/var/run/connect/connectors/third-party,/connectors.
  • To set the schema.registry.url for the rest proxy, you would set KAFKA_REST_SCHEMA_REGISTRY_URL=http://localhost:8081.
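
Putting a couple of these together (the values here are the illustrative ones from the list above):

docker run --rm --net=host \
           -e KAFKA_LOG_RETENTION_BYTES=1073741824 \
           -e KAFKA_REST_SCHEMA_REGISTRY_URL=http://localhost:8081 \
           lensesio/fast-data-dev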

We also support the variables that set JVM options, such as KAFKA_OPTS, SCHEMA_REGISTRY_JMX_OPTS, etc.

Lensesio's Kafka Distribution (LKD) supports a few extra flags as well. Since in the Apache Kafka build, both the broker and the connect worker expect JVM options at the default KAFKA_OPTS, LKD supports using BROKER_OPTS, etc for the broker and CONNECT_OPTS, etc for the connect worker. Of course KAFKA_OPTS are still supported and apply to both applications (and the embedded zookeeper).
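
For example, to pass extra JVM flags to the broker only and to the connect worker only (a sketch; any valid JVM options can go here):

docker run --rm --net=host \
           -e BROKER_OPTS="-XX:+UseG1GC" \
           -e CONNECT_OPTS="-verbose:gc" \
           lensesio/fast-data-dev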

Another LKD addition is the VANILLA_CONNECT, SERDE_TOOLS and LENSESIO_COMMON flags for Kafka Connect. By default we load into the Connect classpath the Schema Registry and Serde Tools jars by Confluent (to support avro) and our own base jars (to support avro and our connectors). You can choose to run a completely vanilla Kafka Connect, the same that comes with the official distribution and without avro support, by setting VANILLA_CONNECT=1. Please note that most, if not all, of the connectors will then fail to load, so it would be wise to disable them. SERDE_TOOLS=0 disables Confluent's jars and LENSESIO_COMMON=0 disables ours. Either set of jars is enough to support avro, but disabling LENSESIO_COMMON will render Stream Reactor inoperable.

Versions

The latest tag of this docker image tracks our latest stable release. Our images include:

| Version | Kafka Distro | Apache Kafka | Connectors |
| --- | --- | --- | --- |
| lensesio/fast-data-dev:3.6.1 | LKD 3.6.1-L0 | 3.6.1 | 20+ connectors |
| lensesio/fast-data-dev:3.3.1 | LKD 3.3.1-L0 | 3.3.1 | 20+ connectors |
| lensesio/fast-data-dev:2.6.2 | LKD 2.6.2-L0 | 2.6.2 | 30+ connectors |
| lensesio/fast-data-dev:2.5.1 | LKD 2.5.1-L0 | 2.5.1 | 30+ connectors |
| lensesio/fast-data-dev:2.4.1 | LKD 2.4.1-L0 | 2.4.1 | 30+ connectors |
| lensesio/fast-data-dev:2.3.2 | LKD 2.3.2-L0 | 2.3.2 | 30+ connectors |
| lensesio/fast-data-dev:2.2.1 | LKD 2.2.1-L0 | 2.2.1 | 30+ connectors |
| lensesio/fast-data-dev:2.1.1 | LKD 2.1.1-L0 | 2.1.1 | 30+ connectors |
| lensesio/fast-data-dev:2.0.1 | LKD 2.0.1-L0 | 2.0.1 | 30+ connectors |
| landoop/fast-data-dev:1.1.1 | LKD 1.1.1-L0 | 1.1.1 | 30+ connectors |
| landoop/fast-data-dev:1.0.1 | LKD 1.0.1-L0 | 1.0.1 | 30+ connectors |
| landoop/fast-data-dev:cp3.3.0 | CP 3.3.0 OSS | 0.11.0.0 | 30+ connectors |
| landoop/fast-data-dev:cp3.2.2 | CP 3.2.2 OSS | 0.10.2.1 | 24+ connectors |
| landoop/fast-data-dev:cp3.1.2 | CP 3.1.2 OSS | 0.10.1.1 | 20+ connectors |
| landoop/fast-data-dev:cp3.0.1 | CP 3.0.1 OSS | 0.10.0.1 | 20+ connectors |

*LKD stands for Lenses.io's Kafka Distribution. We build and package Apache Kafka (with Kafka Connect and Apache Zookeeper), Confluent Schema Registry and REST Proxy, and a collection of third-party Kafka Connectors, as well as our own Stream Reactor collection.

Please note the BSL license of the tools. To use them on a PROD cluster with more than 3 Kafka nodes, you should contact us.

Building it

Fast-data-dev and Lenses Box require a recent version of docker that supports multi-stage builds. Optionally, you should also enable the buildx plugin for multi-arch builds, even if you just use the default builder.

To build it just run:

docker build -t lensesio-local/fast-data-dev .

Periodically pull from docker hub to refresh your cache.

If your docker version does not support multi-arch builds, or you don't have the buildx plugin installed, use the build args demonstrated below to emulate multi-arch support:

docker build --build-arg TARGETOS=linux --build-arg TARGETARCH=amd64 -t lensesio-local/fast-data-dev .
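
If you do have buildx set up, a multi-arch build could look like the sketch below; note that producing a multi-platform image usually requires pushing it to a registry (--push) rather than loading it into the local daemon:

docker buildx build --platform linux/amd64,linux/arm64 \
       -t lensesio-local/fast-data-dev .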

Advanced Features and Settings

Custom Ports

To use custom ports for the various services, you can take advantage of the ZK_PORT, BROKER_PORT, REGISTRY_PORT, REST_PORT, CONNECT_PORT and WEB_PORT environment variables. One catch is that you can't swap ports; e.g. you cannot assign 8082 (the default REST Proxy port) to the broker.

docker run --rm -it \
           -p 3181:3181 -p 3040:3040 -p 7081:7081 \
           -p 7082:7082 -p 7083:7083 -p 7092:7092 \
           -e ZK_PORT=3181 -e WEB_PORT=3040 -e REGISTRY_PORT=7081 \
           -e REST_PORT=7082 -e CONNECT_PORT=7083 -e BROKER_PORT=7092 \
           -e ADV_HOST=127.0.0.1 \
           lensesio/fast-data-dev

A port of 0 will disable the service.
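
For example, to run without the REST Proxy and its JMX endpoint:

docker run --rm -it --net=host \
           -e REST_PORT=0 -e REST_JMX_PORT=0 \
           lensesio/fast-data-dev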

Execute kafka command line tools

Do you need to execute Kafka related console tools? Whilst your Kafka container is running, try something like:

docker run --rm -it --net=host lensesio/fast-data-dev kafka-topics --zookeeper localhost:2181 --list

Or enter the container to use any tool you like:

docker run --rm -it --net=host lensesio/fast-data-dev bash
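
For example, you could produce a few messages and read them back (the topic name is illustrative; these are the standard Kafka console tools shipped with the image):

docker run --rm -it --net=host lensesio/fast-data-dev \
       kafka-console-producer --broker-list localhost:9092 --topic quick-test

docker run --rm -it --net=host lensesio/fast-data-dev \
       kafka-console-consumer --bootstrap-server localhost:9092 \
       --topic quick-test --from-beginning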

View logs

You can view the logs from the web interface. If you prefer the command line, every application stores its logs under /var/log inside the container. If you have your container's ID, or name, you could do something like:

docker exec -it <ID> cat /var/log/broker.log
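
Or follow a log while the service runs; e.g. for the Connect worker (the exact log file name may vary between image versions):

docker exec -it <ID> tail -f /var/log/connect-distributed.log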

Enable SSL on Broker

Do you want to test your application over an authenticated TLS connection to the broker? We've got you covered. Enable TLS via -e ENABLE_SSL=1:

docker run --rm --net=host \
           -e ENABLE_SSL=1 \
           lensesio/fast-data-dev

When fast-data-dev spawns, it creates a self-signed CA. From that it creates a truststore and two signed key-certificate pairs, one for the broker and one for your client. You can access the truststore and the client's keystore from our Web UI, under /certs (e.g. http://localhost:3030/certs). The password for both the keystores and the TLS key is fastdata. The SSL port of the broker is 9093, configurable via the BROKER_SSL_PORT variable.

Here is a simple example of how the SSL functionality can be used. Let's spawn a fast-data-dev to act as the server:

docker run --rm --net=host -e ENABLE_SSL=1 -e RUNTESTS=0 lensesio/fast-data-dev

On a new console, run another instance of fast-data-dev only to get access to Kafka command line utilities and use TLS to connect to the broker of the former container:

docker run --rm -it --net=host --entrypoint bash lensesio/fast-data-dev
root@fast-data-dev / $ wget localhost:3030/certs/truststore.jks
root@fast-data-dev / $ wget localhost:3030/certs/client.jks
root@fast-data-dev / $ kafka-producer-perf-test --topic tls_test \
  --throughput 100000 --record-size 1000 --num-records 2000 \
  --producer-props bootstrap.servers="localhost:9093" security.protocol=SSL \
  ssl.keystore.location=client.jks ssl.keystore.password=fastdata \
  ssl.key.password=fastdata ssl.truststore.location=truststore.jks \
  ssl.truststore.password=fastdata

Since the plaintext port is also available, you can test both and find out which is faster and by how much. ;)
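
If you prefer the regular console tools over the perf test, you can place the TLS settings in a properties file and pass it via --consumer.config; a minimal sketch, assuming the certificates were fetched as above:

cat > client-ssl.properties <<EOF
security.protocol=SSL
ssl.truststore.location=truststore.jks
ssl.truststore.password=fastdata
ssl.keystore.location=client.jks
ssl.keystore.password=fastdata
ssl.key.password=fastdata
EOF

kafka-console-consumer --bootstrap-server localhost:9093 --topic tls_test \
  --from-beginning --consumer.config client-ssl.properties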

Advanced Connector settings

Explicitly Enable Connectors

The number of connectors present significantly affects Kafka Connect's startup time, as well as its memory usage. You can enable connectors explicitly using the CONNECTORS environment variable:

docker run --rm -it --net=host \
           -e CONNECTORS=jdbc,elastic,hbase \
           lensesio/fast-data-dev

Please note that if you don't enable jdbc, some tests will fail. This doesn't affect fast-data-dev's operation.

Explicitly Disable Connectors

Following the same logic as in the paragraph above, you can instead choose to explicitly disable certain connectors using the DISABLE environment variable. It takes a comma separated list of connector names you want to disable:

docker run --rm -it --net=host \
           -e DISABLE=elastic,hbase \
           lensesio/fast-data-dev

If you disable the jdbc connector, some tests will fail to run.

Enable additional connectors

If you have a custom connector you would like to use, you can mount it under the /connectors directory. The plugin.path variable for Kafka Connect is set up to include /connectors/, so Connect will pick up any single-jar connectors placed directly inside this directory and any multi-jar connectors placed in its subdirectories.

docker run --rm -it --net=host \
           -v /path/to/my/connector/connector.jar:/connectors/connector.jar \
           -v /path/to/my/multijar-connector-directory:/connectors/multijar-connector-directory \
           lensesio/fast-data-dev
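
Once the container is up, you can check that Connect picked your connector up by querying the Connect worker's REST API on its default port:

curl localhost:8083/connector-plugins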

FAQ

  • Lensesio's Fast Data Web UI tools and integration tests require some time until they are fully working; the tests and the Kafka Connect UI especially may need a few minutes.

    That is because the services (Kafka, Schema Registry, Kafka Connect, REST Proxy) have to start and initialize before the UIs can read data.

  • What resources does this container need?

    An idle, fresh container needs about 3GiB of RAM. As at least 5 JVM applications will be working in it, your mileage will vary. In our experience Kafka Connect usually requires a lot of memory; its heap size is set by default to 640MiB, but you might need more than that.

  • Fast-data-dev does not start properly, broker fails with:

    [2016-08-23 15:54:36,772] FATAL [Kafka Server 0], Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer) java.net.UnknownHostException: [HOSTNAME]: [HOSTNAME]: unknown error

    JVM based apps tend to be a bit sensitive to hostname issues. Either run the image without --net=host and expose all ports (2181, 3030, 8081, 8082, 8083, 9092) to the same ports on the host, or, better yet, make sure your hostname resolves to the localhost address (127.0.0.1). Usually, to achieve this you need to add your hostname (case sensitive) to /etc/hosts as the first name after 127.0.0.1, e.g.:

    127.0.0.1 MyHost localhost
    

Detailed configuration options

Web Only Mode

Note: Web only mode will be deprecated in the future.

This is a special mode, available only on Linux hosts, where only Lensesio's Web UIs are started and the Kafka services are expected to be running on the local machine. It must be run with the --net=host flag, hence the Linux-only requirement:

docker run --rm -it --net=host \
           -e WEB_ONLY=true \
           lensesio/fast-data-dev

This is useful if you already have a Kafka cluster and want just the additional Lensesio Fast Data web UI. Please note that we provide separate, lightweight docker images for each UI component, and we strongly encourage you to use those instead of fast-data-dev.

Connect Heap Size

You can configure Connect's heap size via the environment variable CONNECT_HEAP. The default is 640M:

docker run -e CONNECT_HEAP=3G -d lensesio/fast-data-dev

Basic Auth (password)

We have included a web server to serve the Lensesio UIs and to proxy the schema registry and Kafka REST Proxy services, so you can share your docker over the web. If you want some basic protection, pass the PASSWORD variable and the web server will be protected by the user kafka with your password. If you want to set the username too, set the USER variable.

 docker run --rm -it -p 3030:3030 \
            -e PASSWORD=password \
            lensesio/fast-data-dev
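
Command line access through the web server then needs the credentials too; e.g. to list the schema registry subjects through the proxy (the /api/schema-registry prefix is the proxy path used by the web server):

curl -u kafka:password http://localhost:3030/api/schema-registry/subjects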

Disable tests

By default this docker runs a set of coyote tests to ensure that your container and development environment are all set up. You can disable the coyote tests with the flag:

-e RUNTESTS=0

Run Kafka as root

In recent versions of fast-data-dev we switched to running Kafka as user nobody instead of root, since running as root is bad practice. The old behaviour may still be desirable though; for example, in our HDFS connector tests the Connect worker needs to run as the root user in order to be able to write to HDFS. To switch to the old behaviour, use:

-e RUN_AS_ROOT=1

JMX Metrics

JMX metrics are enabled by default. If you want to disable them for some reason (e.g. you need the ports for other purposes), use the DISABLE_JMX environment variable:

docker run --rm -it --net=host \
           -e DISABLE_JMX=1 \
           lensesio/fast-data-dev

JMX ports are hardcoded to 9581 for the broker, 9582 for the schema registry, 9583 for the REST proxy and 9584 for connect distributed. Zookeeper's JMX is exposed at 9585.
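
The jconsole example from the introduction works the same for the rest of the services; e.g. for the Connect worker:

jconsole localhost:9584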

fast-data-dev's People

Contributors

ackintosh, andmarios, antwnis, bwalsh, chdask, deanlinaras, dougdonohoe, faberchri, georgeyord, gitter-badger, landon9720, longliveenduro, nmaves, noderat, pliew, simplesteph, spirosoik, tapppi


fast-data-dev's Issues

Any options to reset REGISTRY, REST to other IP ?

Reproduce the problem:

  1. Start docker run -e RUNTESTS=0 -e ADV_HOST=10.110.136.25 -e DEBUG=1 -e SAMPLEDATA=0 -p 2181:2181 -p 3030:3030 -p 8081-8083:8081-8083 -p 9581-9585:9581-9585 -p 9092:9092 --rm landoop/fast-data-dev on one machine with IP 10.110.136.25.

Then produce messages with:

kafka-avro-console-producer --broker-list localhost:9092 --topic wordnames \
  --property value.schema='{"type":"record","name":"fxrate","fields":[{"name":"symbol","type":"string"}]}'
{ "symbol": "Hello" }
{ "symbol": "World" }

  2. Start another fast-data-dev container like in step 1, but on another machine with a different IP.
    Then consume the messages with
    kafka-avro-console-consumer --topic wordnames --bootstrap-server 10.110.136.25:9092 --from-beginning

Got this error message:

ERROR Unknown error when running consumer: (kafka.tools.ConsoleConsumer$:105)
org.apache.kafka.common.errors.SerializationException: Error retrieving Avro schema for id 29
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Schema not found; error code: 40403
  at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:182)
  at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:203)
  at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:379)
  at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:372)
  at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaByIdFromRegistry(CachedSchemaRegistryClient.java:65)
  at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getBySubjectAndId(CachedSchemaRegistryClient.java:131)
  at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:122)
  at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:93)
  at io.confluent.kafka.formatter.AvroMessageFormatter.writeTo(AvroMessageFormatter.java:122)
  at io.confluent.kafka.formatter.AvroMessageFormatter.writeTo(AvroMessageFormatter.java:114)
  at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:140)
  at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:78)
  at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:53)
  at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)

Add bash-completion

/etc/bash_completion.d

Autocomplete for kafka-topics and kafka-console-consumer. It also completes properties

For when we are interactive within the docker

docker run --rm -it --entrypoint bash landoop/fast-data-dev

Connectivity Error for Connect UI

Hi,

I am trying to use fast-data-dev on Ubuntu 17.10 with sudo docker run --rm -it --net=host landoop/fast-data-dev:cp3.3 (I also tried the latest tag).

And the Connect UI returns the following error:

Kafka Connect : /api/kafka-connect
Kafka Connect Version : 0.11.0.0-cp1
Kafka Connect UI Version : 0.9.2
CONNECTIVITY ERROR

Topics UI and other components work well. What can be the problem?

Trying to determine Kafka version and resolve a NoBrokersAvailableError

Thanks for this project. I happened upon it as I was looking for GUIs to help with trying to get a better understanding of the innards of Kafka. Up to now, I've been using the wurstmeister/kafka containerised solution and connecting pykafka to this (from versions 0.8 through 0.10) works fine.

However, if I try to connect pykafka to the exposed 9092 port of this set-up, I'm getting this error ... and trying to understand how this differs from the wurstmeister/kafka implementation. I realise there are more moving parts in this set-up, but at its core it is still exposing kafka ... which I presumably should be able to connect to directly.

I haven't been able to establish from any of the Dockerfiles here what version of Kafka is being installed.

for res in getaddrinfo(host, port, 0, SOCK_STREAM):
socket.gaierror: [Errno -2] Name or service not known

Thanks for any insights you might have. Failing that ... do you think it's possible to connect up the kafka-topics-ui to the wurstmeister/kafka set-up ... it seems I need to run a kafka rest proxy in the mix, in order for the ui part to work?

Regards,
Colum

Create more than one kafka broker

Hi,
First, congrats on the huge and great job you have done with Landoop / fast-data-dev. I was introduced to this beautiful project during an online Kafka course.

Second, I would like to know if there is an easy way to create more than one broker.

Thank you

connect-configs topics is reported as monitor-msg topic


As you can see there's a topic called monitor-msg, but in reality it's definitely the connect-configs topic. It's pretty confusing that it has been renamed in the UI for some reason. Run the kafka-topics --list command to verify.

Producer from another docker container unable to send/write messages to kafka topic

Hi,

Things work perfectly when I use my producer app with localhost:9092 as the broker against your container. However, when I try to run the same producer in another docker container on my machine, it's unable to talk to your container and I get the error below. Hoping you'd be able to suggest something I could try. Thanks!

"org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms"

502 Bad gateway when on kafka topics ui

I run the command docker run --rm -it -p 2181:2181 -p 3030:3030 -p 8081:8081 -p 8082:8082 -p 8083:8083 -p 9092:9092 landoop/fast-data-dev on Red Hat 7.2.
Everything starts, but when I go to http://IP:3030 I get many HTTP 502 (Bad Gateway) errors in the Chrome console.


Do you know what the problem is? Can you help me resolve it, please?

Gaetan

How to persist data

Hey there,

we're currently using your project for development. Is there an easy way (since we're new to Kafka as well as docker) to persist our topics as well as the connectors?

Connect UI doesn't accept empty strings for properties

I'm using the ElasticSearch connector.
It seems it does not accept empty strings for these settings, even though they are valid Java properties:

name=sink-elastic-twitter-distributed
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=2
topics=demo-3-twitter
connection.url=http://elasticsearch:9200
type.name=kafka-connect
key.ignore=true
topic.key.ignore=
topic.index.map=
topic.schema.ignore=


Finally, if we don't specify them, the UI says they're required. But according to the docs at https://docs.confluent.io/3.3.0/connect/connect-elasticsearch/docs/configuration_options.html they're supposed to be optional. For example, the quickstart config here: https://github.com/confluentinc/kafka-connect-elasticsearch/blob/master/config/quickstart-elasticsearch.properties does not specify them.

Not sure if the issue lies with the connector itself, the framework, or the landoop image

Can't Visit http://localhost:3030

Hi,

I am new to docker and kafka. When I try to run landoop/fast-data-dev, I can't visit localhost:3030 in my browser. Even after I changed ADV_HOST to 127.0.0.1, I still can't open 127.0.0.1:3030 in my browser. I have no idea how to solve this problem.

 ~ docker run --rm --net=host landoop/fast-data-dev
Starting services.
This is landoop’s fast-data-dev. Kafka 0.11.0.1, Confluent OSS 3.3.1.
You may visit http://localhost:3030 in about a minute.
2017-12-25 03:07:36,336 CRIT Supervisor running as root (no user in config file)
2017-12-25 03:07:36,336 INFO Included extra file "/etc/supervisord.d/01-fast-data.conf" during parsing
2017-12-25 03:07:36,336 INFO Included extra file "/etc/supervisord.d/99-supervisord-sample-data.conf" during parsing
2017-12-25 03:07:36,338 INFO supervisord started with pid 7
2017-12-25 03:07:37,340 INFO spawned: 'sample-data' with pid 100
2017-12-25 03:07:37,343 INFO spawned: 'zookeeper' with pid 101
2017-12-25 03:07:37,346 INFO spawned: 'caddy' with pid 102
2017-12-25 03:07:37,348 INFO spawned: 'broker' with pid 103
2017-12-25 03:07:37,350 INFO spawned: 'smoke-tests' with pid 104
2017-12-25 03:07:37,352 INFO spawned: 'connect-distributed' with pid 105
2017-12-25 03:07:37,354 INFO spawned: 'logs-to-kafka' with pid 106
2017-12-25 03:07:37,356 INFO spawned: 'schema-registry' with pid 113
2017-12-25 03:07:37,358 INFO spawned: 'rest-proxy' with pid 115
2017-12-25 03:07:39,031 INFO success: sample-data entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-12-25 03:07:39,031 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-12-25 03:07:39,031 INFO success: caddy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-12-25 03:07:39,031 INFO success: broker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-12-25 03:07:39,032 INFO success: smoke-tests entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-12-25 03:07:39,032 INFO success: connect-distributed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-12-25 03:07:39,032 INFO success: logs-to-kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-12-25 03:07:39,032 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-12-25 03:07:39,032 INFO success: rest-proxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-12-25 03:12:07,668 INFO exited: logs-to-kafka (exit status 0; expected)
2017-12-25 03:12:13,253 INFO exited: connect-distributed (exit status 1; not expected)
2017-12-25 03:12:13,264 INFO spawned: 'connect-distributed' with pid 1268
2017-12-25 03:12:14,311 INFO success: connect-distributed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-12-25 03:12:53,999 INFO exited: schema-registry (exit status 1; not expected)
2017-12-25 03:12:54,001 INFO spawned: 'schema-registry' with pid 1416
2017-12-25 03:12:55,090 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-12-25 03:14:13,635 INFO exited: smoke-tests (exit status 24; not expected)

A KCQL question: elastic sink connector

First, thanks for your work! Then, correct me if I'm wrong, but fast-data-dev permits just one elastic sink connector, and, correct me if I'm wrong, the latter has just one config and one field in it to define the data transfer (connect.elastic.sink.kcql), and, correct me if I'm wrong, it is possible to define just one mapping of a single kafka topic to a single elasticsearch index with a single KCQL expression.
Now is this a management server "demo version" limitation, or an infrastructure limitation, or am I a totally confused noob?

Connect HealthCheck failure 85.7%

I've had lots of feedback saying that the connect healthcheck is failing. My guess is that connect takes WAY more time to load right now due to the plugin and classpath changes. Or it's something else, but I'm not so sure.


Permission denied /data folder

Hi

One of my students using the new version has the following problem:

Hi,

I have Windows 10 Pro (with Hyper-V enabled) and Docker for Windows 18.03 (the latest one).

When I tried to run the command from your video, I got the following message:

COMMAND>>> docker run --rm -it -p 2181:2181 -p 3030:3030 -p 8081:8081 -p 8082:8082 -p 8083:8083 -p 9092:9092 -e ADV_HOST=127.0.0.1 landoop/fast-data-dev

MESSAGE>>> cannot download image with operating system "linux" when requesting "windows"

I fixed this by adding '--platform linux' to the command and now it looks like this:

docker run --rm -it --platform linux -p 2181:2181 -p 3030:3030 -p 8081:8081 -p 8082:8082 -p 8083:8083 -p 9092:9092 -e ADV_HOST=127.0.0.1 landoop/fast-data-dev

The image has been downloaded successfully, but now I get another error:

chmod: changing permissions of '/data/zookeeper': Permission denied
chmod: changing permissions of '/data/kafka': Permission denied



Container doesn't start and I cannot proceed.

Can you help me? How can I solve this issue?

Thank you in advance!

The question is here: https://www.udemy.com/apache-kafka/learn/v4/questions/4235368
if you need to interact with the student.

I'm not sure what could be causing this, but I thought I'd let you know. Hope you can reproduce and fix it.

On cp3.2.0 tag, connect-ui does not seem to work well

Just launched the cp3.2.0 tag to check it out, and I'm getting some odd UI/UX issues:

For example, when clicking on the twitter source icon:


This seems to be the case for all icons on the kafka-connect-ui. Unsure if this is fast-data-dev specific or kafka-connect-ui specific.

unexpected schema not found error

Hi,
I'm still trying to use this docker image, with a trivial goal of connecting Kafka to ElasticSearch.
I'm using http://docs.datamountaineer.com/en/latest/elastic.html, for which I can't seem to find a (github) repo.
The most recent issue which I'm stuck with is very much like confluentinc/kafka-connect-elasticsearch#59

  • I established the image deals with AVRO objects only, unfortunately
  • So I am producing AVRO through ruby-kafka + avro_turf
  • Before that (when I tried to use jruby-kafka with avro_turf or whatever), it was spitting errors like Failed to deserialize data to Avro without any particular reason
  • But now, there's an addition:
    Caused by: org.apache.kafka.common.errors.SerializationException: Error retrieving Avro schema for id 1
  • I was suspicious of race conditions, so I'm initialising the connector just after making sure the schema is registered.

I'm not sure I'm up to debugging the connector, which also kinda defies the purpose of your work...
Any tips or references will be appreciated.
Thanks!

Webpage is not responsive

As I was exploring fast-data-dev, going back and forth, I minimized the page. Here it is. (Oops, sorry, I forgot to attach the screenshot at first.)

Caddyfile proxy isn't updated by ADV_HOST

Because caddy doesn't resolve localhost properly, all landoop applications fail to reach the kafka, ..., REST endpoints.
As soon as localhost is replaced by an IP address, everything works fine.

The Caddyfile uses localhost as target for all proxy statements.
e.g.

proxy /api/schema-registry localhost:8081 {
    without /api/schema-registry
}

I propose using the ADV_HOST parameter to replace localhost in the Caddyfile.

Custom connector deployment

Hello.

I tried mounting my custom connector in my docker-compose file, and so far I haven't found a way for it to be seen in the container.

Is there a way to deploy my custom connector directly in the container? That is, copy the jars and configuration into /opt/confluent at the correct paths (jars deployed at /opt/confluent/share/java/my-custom-connector, config file in /opt/confluent/etc/my-custom-connector), then restart the Connect service (find it with ps aux | grep connect, kill it with sudo kill -9 <pid>) and start it again with /opt/confluent/bin/connect-distributed /opt/confluent/etc/kafka/connect-distributed.properties &.
For some reason the classpath is not seeing my jars. Still investigating this. On a plain Confluent cluster that we are using, this way works.

Kind regards,
Tudor

zookeeper enters FATAL state after container starts

When I start my docker instance, only port 3030 works; zookeeper, schema registry, etc. fail. This could be because zookeeper enters a FATAL state. How do I get it working?

[ec2-user@ip-172-29-3-67 ~]$ sudo docker run --rm --name pmm-confluent-kafka --net=host -e ADV_HOST=172.29.63.67 -e RUNNING_SAMPLEDATA=0 -e SAMPLEDATA=0 landoop/fast-data-dev

Setting advertised host to 172.29.63.67.
Starting services.
This is Landoop’s fast-data-dev. Kafka 1.0.1-L0 (Landoop's Kafka Distribution).
You may visit http://172.29.63.67:3030 in about a minute.
2018-05-07 14:39:30,435 CRIT Supervisor running as root (no user in config file)
2018-05-07 14:39:30,435 INFO Included extra file "/etc/supervisord.d/01-zookeeper.conf" during parsing
2018-05-07 14:39:30,435 INFO Included extra file "/etc/supervisord.d/02-broker.conf" during parsing
2018-05-07 14:39:30,435 INFO Included extra file "/etc/supervisord.d/03-schema-registry.conf" during parsing
2018-05-07 14:39:30,435 INFO Included extra file "/etc/supervisord.d/04-rest-proxy.conf" during parsing
2018-05-07 14:39:30,435 INFO Included extra file "/etc/supervisord.d/05-connect-distributed.conf" during parsing
2018-05-07 14:39:30,435 INFO Included extra file "/etc/supervisord.d/06-caddy.conf" during parsing
2018-05-07 14:39:30,435 INFO Included extra file "/etc/supervisord.d/07-smoke-tests.conf" during parsing
2018-05-07 14:39:30,435 INFO Included extra file "/etc/supervisord.d/08-logs-to-kafka.conf" during parsing
2018-05-07 14:39:30,439 INFO supervisord started with pid 6
2018-05-07 14:39:31,442 INFO spawned: 'zookeeper' with pid 161
2018-05-07 14:39:31,446 INFO spawned: 'caddy' with pid 162
2018-05-07 14:39:31,448 INFO spawned: 'broker' with pid 163
2018-05-07 14:39:31,452 INFO spawned: 'smoke-tests' with pid 164
2018-05-07 14:39:31,455 INFO spawned: 'connect-distributed' with pid 165
2018-05-07 14:39:31,458 INFO spawned: 'logs-to-kafka' with pid 167
2018-05-07 14:39:31,466 INFO spawned: 'schema-registry' with pid 169
2018-05-07 14:39:31,472 INFO spawned: 'rest-proxy' with pid 172
2018-05-07 14:39:32,423 INFO exited: zookeeper (exit status 1; not expected)
2018-05-07 14:39:33,425 INFO spawned: 'zookeeper' with pid 439
2018-05-07 14:39:33,426 INFO success: caddy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-05-07 14:39:33,426 INFO success: broker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-05-07 14:39:33,426 INFO success: smoke-tests entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-05-07 14:39:33,426 INFO success: connect-distributed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-05-07 14:39:33,426 INFO success: logs-to-kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-05-07 14:39:33,426 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-05-07 14:39:33,426 INFO success: rest-proxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-05-07 14:39:33,829 INFO exited: zookeeper (exit status 1; not expected)
2018-05-07 14:39:36,518 INFO spawned: 'zookeeper' with pid 728
2018-05-07 14:39:36,911 INFO exited: zookeeper (exit status 1; not expected)
2018-05-07 14:39:40,532 INFO spawned: 'zookeeper' with pid 1014
2018-05-07 14:39:40,935 INFO exited: zookeeper (exit status 1; not expected)
2018-05-07 14:39:41,520 INFO gave up: zookeeper entered FATAL state, too many start retries too quickly

The logs keep going on and on about spawning and exiting schema-registry, broker, rest-proxy, etc.

When I visit my-ip:3030, the coyote health checks section keeps spinning.

I got the following error whilst installing/running docker image

Can anyone please tell me if I am doing something wrong?
I ran the following command:
docker run --rm --net=host landoop/fast-data-dev

but in the browser I'm unable to check whether it has loaded.

Unable to find image 'landoop/fast-data-dev:latest' locally
latest: Pulling from landoop/fast-data-dev
88286f41530e: Already exists 
514528832091: Already exists 
f918be08a5ce: Already exists 
cdcb41cba641: Already exists 
9f9f8475da0c: Already exists 
d8faf144fbfd: Already exists 
e19b21134a45: Already exists 
24954ced8f7e: Already exists 
ce737813930c: Already exists 
24eb647aed4b: Already exists 
714caa451b50: Already exists 
5e1ac5b755d0: Already exists 
bddb07c49216: Already exists 
3f66a0bb86be: Already exists 
cf853ebbf7d3: Already exists 
9f76eb6ad422: Already exists 
603e41388b52: Already exists 
962e62f638f0: Already exists 
24681b9722cb: Already exists 
cd5f64e957af: Already exists 
52c463264806: Already exists 
9086638c1315: Already exists 
f0bb522926d3: Already exists 
b38ed1de7ee0: Already exists 
26f1626ba214: Already exists 
1520997ded4d: Already exists 
cdd3e0123205: Already exists 
edf47eb4cf1b: Already exists 
df4aeee9d305: Already exists 
045c0e68c38c: Already exists 
142180ddcf13: Already exists 
19d16d17a78c: Already exists 
Digest: sha256:992ef5f5a9e2b11d6f5098d07da9fbc82019018d1f11d1de0c0b4c71023b9413
Status: Downloaded newer image for landoop/fast-data-dev:latest
Starting services.
This is landoop’s fast-data-dev. Kafka 0.10.2.1-cp2, Confluent OSS 3.2.2.
You may visit http://localhost:3030 in about a minute.
2017-07-05 14:43:43,257 CRIT Supervisor running as root (no user in config file)
2017-07-05 14:43:43,257 WARN No file matches via include "/etc/supervisord.d/*.conf"
2017-07-05 14:43:43,258 INFO supervisord started with pid 7
2017-07-05 14:43:44,262 INFO spawned: 'sample-data' with pid 94
2017-07-05 14:43:44,264 INFO spawned: 'zookeeper' with pid 95
2017-07-05 14:43:44,267 INFO spawned: 'caddy' with pid 96
2017-07-05 14:43:44,269 INFO spawned: 'broker' with pid 98
2017-07-05 14:43:44,272 INFO spawned: 'smoke-tests' with pid 100
2017-07-05 14:43:44,274 INFO spawned: 'connect-distributed' with pid 101
2017-07-05 14:43:44,276 INFO spawned: 'logs-to-kafka' with pid 102
2017-07-05 14:43:44,279 INFO spawned: 'schema-registry' with pid 104
2017-07-05 14:43:44,282 INFO spawned: 'rest-proxy' with pid 108
2017-07-05 14:43:45,300 INFO success: sample-data entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-07-05 14:43:45,300 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-07-05 14:43:45,301 INFO success: caddy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-07-05 14:43:45,301 INFO success: broker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-07-05 14:43:45,301 INFO success: smoke-tests entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-07-05 14:43:45,301 INFO success: connect-distributed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-07-05 14:43:45,301 INFO success: logs-to-kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-07-05 14:43:45,302 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-07-05 14:43:45,302 INFO success: rest-proxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-07-05 14:43:46,898 INFO exited: rest-proxy (exit status 1; not expected)
2017-07-05 14:43:46,914 INFO spawned: 'rest-proxy' with pid 283
2017-07-05 14:43:47,348 INFO exited: schema-registry (exit status 1; not expected)
2017-07-05 14:43:47,410 INFO spawned: 'schema-registry' with pid 330
2017-07-05 14:43:47,923 INFO success: rest-proxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-07-05 14:43:48,507 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-07-05 14:44:33,075 INFO exited: sample-data (exit status 0; expected)

I am using a Mac, and when I try to launch the page in Safari I get: can't open "localhost:3030".

broker, zookeeper, schema-registry, rest-proxy, connect-distributed fail periodicaly

Every time I start the docker container with
docker run --rm --net=host landoop/fast-data-dev
some components crash periodically.
Any ideas?

sudo docker run --rm --net=host landoop/fast-data-dev
This is landoop’s fast-data-dev. Kafka 0.11.0.1, Confluent OSS 3.3.1.
You may visit http://localhost:3030 in about a minute.
2018-01-09 16:21:14,301 CRIT Supervisor running as root (no user in config file)
2018-01-09 16:21:14,301 INFO Included extra file "/etc/supervisord.d/01-fast-data.conf" during parsing
2018-01-09 16:21:14,301 INFO Included extra file "/etc/supervisord.d/99-supervisord-sample-data.conf" during parsing
2018-01-09 16:21:14,303 INFO supervisord started with pid 7
2018-01-09 16:21:15,306 INFO spawned: 'sample-data' with pid 93
2018-01-09 16:21:15,310 INFO spawned: 'zookeeper' with pid 94
2018-01-09 16:21:15,315 INFO spawned: 'caddy' with pid 95
2018-01-09 16:21:15,317 INFO spawned: 'broker' with pid 96
2018-01-09 16:21:15,319 INFO spawned: 'smoke-tests' with pid 98
2018-01-09 16:21:15,322 INFO spawned: 'connect-distributed' with pid 99
2018-01-09 16:21:15,325 INFO spawned: 'logs-to-kafka' with pid 101
2018-01-09 16:21:15,327 INFO spawned: 'schema-registry' with pid 106
2018-01-09 16:21:15,328 INFO spawned: 'rest-proxy' with pid 108
2018-01-09 16:21:16,430 INFO success: sample-data entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-01-09 16:21:16,430 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-01-09 16:21:16,430 INFO success: caddy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-01-09 16:21:16,430 INFO success: broker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-01-09 16:21:16,431 INFO success: smoke-tests entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-01-09 16:21:16,431 INFO success: connect-distributed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-01-09 16:21:16,431 INFO success: logs-to-kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-01-09 16:21:16,431 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-01-09 16:21:16,431 INFO success: rest-proxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-01-09 16:21:21,549 INFO exited: zookeeper (exit status 1; not expected)
2018-01-09 16:21:22,553 INFO spawned: 'zookeeper' with pid 172
2018-01-09 16:21:23,556 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-01-09 16:21:26,554 INFO exited: broker (exit status 1; not expected)
2018-01-09 16:21:27,557 INFO spawned: 'broker' with pid 223
2018-01-09 16:21:28,833 INFO success: broker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-01-09 16:21:28,833 INFO exited: zookeeper (exit status 1; not expected)
2018-01-09 16:21:29,837 INFO spawned: 'zookeeper' with pid 225
2018-01-09 16:21:30,642 INFO exited: schema-registry (exit status 1; not expected)
2018-01-09 16:21:31,643 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-01-09 16:21:31,645 INFO spawned: 'schema-registry' with pid 305
2018-01-09 16:21:32,647 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-01-09 16:21:35,426 INFO exited: zookeeper (exit status 1; not expected)
2018-01-09 16:21:35,429 INFO spawned: 'zookeeper' with pid 334
2018-01-09 16:21:36,168 INFO exited: rest-proxy (exit status 1; not expected)
2018-01-09 16:21:36,245 INFO spawned: 'rest-proxy' with pid 362
2018-01-09 16:21:36,258 INFO exited: connect-distributed (exit status 1; not expected)
2018-01-09 16:21:37,260 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-01-09 16:21:37,263 INFO spawned: 'connect-distributed' with pid 364
2018-01-09 16:21:37,264 INFO success: rest-proxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-01-09 16:21:38,795 INFO success: connect-distributed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-01-09 16:21:38,795 INFO exited: broker (exit status 1; not expected)
2018-01-09 16:21:39,798 INFO spawned: 'broker' with pid 366
2018-01-09 16:21:41,654 INFO success: broker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-01-09 16:21:41,654 INFO exited: zookeeper (exit status 1; not expected)
2018-01-09 16:21:42,658 INFO spawned: 'zookeeper' with pid 391
2018-01-09 16:21:43,661 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
^C2018-01-09 16:21:45,055 WARN received SIGINT indicating exit request
2018-01-09 16:21:45,055 INFO waiting for sample-data, zookeeper, caddy, broker, smoke-tests, connect-distributed, logs-to-kafka, schema-registry, rest-proxy to die
2018-01-09 16:21:45,424 INFO stopped: rest-proxy (terminated by SIGTERM)
2018-01-09 16:21:45,746 INFO stopped: schema-registry (exit status 143)
2018-01-09 16:21:45,746 INFO stopped: logs-to-kafka (terminated by SIGTERM)
2018-01-09 16:21:45,747 INFO stopped: connect-distributed (terminated by SIGTERM)
2018-01-09 16:21:46,749 INFO stopped: smoke-tests (terminated by SIGTERM)
2018-01-09 16:21:47,071 INFO stopped: broker (exit status 143)
2018-01-09 16:21:47,072 INFO stopped: caddy (exit status 0)
2018-01-09 16:21:47,394 INFO stopped: zookeeper (exit status 143)
2018-01-09 16:21:47,394 INFO stopped: sample-data (terminated by SIGTERM)

Can't bring up web site: `angular.js:14110 ReferenceError: runningServices is not defined`

docker command

docker run --rm --net=host \
  -e ADV_HOST=10.96.11.187 \
  -e PASSWORD=XXX  \
  -e USER=XXXX  \
  -e WEB_PORT=8000 \
  --name fast-data \
  -d landoop/fast-data-dev

developer tools


caddy

root@fast-data-dev log $ tail -f caddy.log
10.96.11.151 - [06/Nov/2017:15:47:38 +0000] "GET /img/ok.png HTTP/1.0" 200 2763
10.96.11.151 - [06/Nov/2017:15:47:38 +0000] "GET /schema-registry-ui/bower_components/angular-material/angular-material.min.js HTTP/1.0" 200 387136
10.96.11.151 - [06/Nov/2017:15:47:38 +0000] "GET /schema-registry-ui/src/assets/icons/favicon.png HTTP/1.0" 200 1198
10.96.11.151 - [06/Nov/2017:15:47:45 +0000] "GET /schema-registry-ui/bower_components/angular-aria/angular-aria.min.js.map HTTP/1.0" 200 8585
10.96.11.151 - [06/Nov/2017:15:47:45 +0000] "GET /schema-registry-ui/bower_components/angular-animate/angular-animate.min.js.map HTTP/1.0" 200 70948
10.96.11.151 - [06/Nov/2017:15:47:45 +0000] "GET /schema-registry-ui/bower_components/angular/angular.min.js.map HTTP/1.0" 200 437827
10.96.11.151 - [06/Nov/2017:15:47:54 +0000] "GET / HTTP/1.0" 304 0
10.96.11.151 - [06/Nov/2017:15:48:03 +0000] "GET / HTTP/1.0" 304 0
10.96.11.151 - [06/Nov/2017:15:51:14 +0000] "GET / HTTP/1.0" 304 0
10.96.11.187 - [06/Nov/2017:15:53:25 +0000] "GET / HTTP/1.1" 200 27720
10.96.11.151 - [06/Nov/2017:16:01:14 +0000] "GET / HTTP/1.0" 304 0

Kafka topic UI: System vs non-system topics

Not sure if there's a more technical meaning, but I see:

  • non-system: _schemas, connect-status-01
  • system: connect-configs-01, connect-offsets-01 (and __consumer_offsets, which is clearly system)

I think all the connect topics should stay together, and possibly _schemas could go into system?

Landoop docker image for a dev environment

According to the docs, the docker image is supposed to be run with 6GB of memory. I have a dev machine with only 8GB of memory. Can we create a landoop docker image with just around 1.5GB? Or do we have a separate mini version of the landoop image for dev environments?

Not showing title description in UI

Hi Team,

I have created a topic and am trying to validate it in the Topics UI. I can see the topic in the list, but when I click on it a progress bar runs forever and the topic description is never shown. Please find the attached screenshot for your reference.

Regards,
Puneet Gaur

Problem when running WEB_ONLY

I wanted to run the Landoop front end inspecting the streaming samples from Confluent.

Now, I can run the command line tool (after editing the port):

docker run --rm -it --net=host landoop/fast-data-dev kafka-topics --zookeeper localhost:32181 --list

Running the full stack also works, when the Confluent docker is not running.

But when I try to run "web only", this fails:

docker run --rm -it --net=host -e WEB_ONLY="true" -e ZK_PORT=32181 landoop/fast-data-dev

I get this message:
Web only mode. Kafka services will be disabled.

But then this error:
cp: target '/etc/supervisord.d/08-logs-to-kafka.conf' is not a directory

Then it seems the container tries to start the services, failing at this and retrying continuously. The web server is started and seems to work, connecting to my Kafka service and getting something.

Expected behavior: Should just start the web front-end and connect to the Zookeeper instance. Not launch the other services.

Getting the below error when starting the docker

2017-05-02 20:47:28,826 INFO spawned: 'schema-registry' with pid 15690
2017-05-02 20:47:29,734 INFO exited: rest-proxy (exit status 1; not expected)
2017-05-02 20:47:29,849 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-05-02 20:47:29,850 INFO spawned: 'rest-proxy' with pid 15713

Suggestion: Option to select connectors to load

Is it possible to have an environment variable specifying the connectors we're interested in loading? By default, all would be loaded.

The goal is to have a lighter development environment and load exactly the connectors we'd like to try.

I want to run this docker on aws

I want to use this docker on EC2:

docker run --rm -it \
       -p 3181:3181 -p 3040:3040 -p 7081:7081 \
       -p 7082:7082 -p 7083:7083 -p 7092:7092 \
       -e ZK_PORT=3181 -e WEB_PORT=3040 -e REGISTRY_PORT=8081 \
       -e REST_PORT=7082 -e CONNECT_PORT=7083 -e BROKER_PORT=7092 \
       -e ADV_HOST=127.0.0.1 \
       landoop/fast-data-dev

I am binding the EC2 IP address to ADV_HOST, but when I try using a Kafka producer (from a different machine) I cannot push messages to it. Am I missing something here?

connect-distributed connection refused

Hello, I've noticed there are two other issues related to the connect-ui not starting up properly and I seem to be having a similar issue.

Operating system: MacOS Sierra

I run the container with:

docker run --rm -p 2181:2181 -p 3030:3030 -p 8081-8083:8081-8083 -p 9581-9585:9581-9585 -p 9092:9092 landoop/fast-data-dev:latest

The stdout shows:

You may visit http://localhost:3030 in about a minute.
2017-11-13 12:11:47,793 CRIT Supervisor running as root (no user in config file)
2017-11-13 12:11:47,793 WARN Included extra file "/etc/supervisord.d/01-fast-data.conf" during parsing
2017-11-13 12:11:47,793 WARN Included extra file "/etc/supervisord.d/99-supervisord-sample-data.conf" during parsing
2017-11-13 12:11:47,798 INFO supervisord started with pid 6
2017-11-13 12:11:48,803 INFO spawned: 'sample-data' with pid 89
2017-11-13 12:11:48,810 INFO spawned: 'zookeeper' with pid 90
2017-11-13 12:11:48,821 INFO spawned: 'caddy' with pid 92
2017-11-13 12:11:48,824 INFO spawned: 'broker' with pid 94
2017-11-13 12:11:48,826 INFO spawned: 'smoke-tests' with pid 96
2017-11-13 12:11:48,829 INFO spawned: 'connect-distributed' with pid 97
2017-11-13 12:11:48,835 INFO spawned: 'logs-to-kafka' with pid 101
2017-11-13 12:11:48,838 INFO spawned: 'schema-registry' with pid 106
2017-11-13 12:11:48,847 INFO spawned: 'rest-proxy' with pid 108
2017-11-13 12:11:50,651 INFO success: sample-data entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-11-13 12:11:50,652 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-11-13 12:11:50,653 INFO success: caddy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-11-13 12:11:50,653 INFO success: broker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-11-13 12:11:50,653 INFO success: smoke-tests entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-11-13 12:11:50,653 INFO success: connect-distributed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-11-13 12:11:50,654 INFO success: logs-to-kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-11-13 12:11:50,654 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-11-13 12:11:50,654 INFO success: rest-proxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-11-13 12:13:01,975 INFO exited: connect-distributed (terminated by SIGKILL; not expected)
2017-11-13 12:13:03,007 INFO spawned: 'connect-distributed' with pid 729
2017-11-13 12:13:04,010 INFO success: connect-distributed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-11-13 12:14:17,423 INFO exited: connect-distributed (terminated by SIGKILL; not expected)
2017-11-13 12:14:18,451 INFO spawned: 'connect-distributed' with pid 962
2017-11-13 12:14:19,454 INFO success: connect-distributed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-11-13 12:14:42,246 INFO exited: sample-data (exit status 0; expected)
2017-11-13 12:14:48,923 INFO exited: logs-to-kafka (exit status 0; expected)
2017-11-13 12:16:14,920 INFO exited: connect-distributed (terminated by SIGKILL; not expected)
2017-11-13 12:16:15,988 INFO spawned: 'connect-distributed' with pid 1258
2017-11-13 12:16:17,636 INFO success: connect-distributed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-11-13 12:17:05,213 INFO exited: connect-distributed (terminated by SIGKILL; not expected)
2017-11-13 12:17:06,246 INFO spawned: 'connect-distributed' with pid 1505
2017-11-13 12:17:07,252 INFO success: connect-distributed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-11-13 12:17:47,409 INFO exited: smoke-tests (exit status 5; not expected)
2017-11-13 12:19:17,724 INFO exited: connect-distributed (terminated by SIGKILL; not expected)
2017-11-13 12:19:18,312 INFO spawned: 'connect-distributed' with pid 1635
2017-11-13 12:19:19,320 INFO success: connect-distributed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-11-13 12:21:08,165 INFO exited: connect-distributed (terminated by SIGKILL; not expected)
2017-11-13 12:21:08,417 INFO spawned: 'connect-distributed' with pid 1664
2017-11-13 12:21:09,449 INFO success: connect-distributed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-11-13 12:22:58,234 INFO exited: connect-distributed (terminated by SIGKILL; not expected)
2017-11-13 12:22:58,390 INFO spawned: 'connect-distributed' with pid 1693
2017-11-13 12:22:59,443 INFO success: connect-distributed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

The caddy log shows:

Activating privacy features... done.
http://0.0.0.0:3030
172.17.0.1 - [13/Nov/2017:12:11:56 +0000] "GET /coyote-tests/results HTTP/1.1" 200 33
13/Nov/2017:12:11:56 +0000 [ERROR 502 /subjects/] dial tcp 0.0.0.0:8081: getsockopt: connection refused
172.17.0.1 - [13/Nov/2017:12:11:56 +0000] "GET /subjects/ HTTP/1.1" 502 16
13/Nov/2017:12:11:56 +0000 [ERROR 502 /topics/] dial tcp 0.0.0.0:8082: getsockopt: connection refused
172.17.0.1 - [13/Nov/2017:12:11:56 +0000] "GET /topics/ HTTP/1.1" 502 16
13/Nov/2017:12:11:56 +0000 [ERROR 502 /brokers] dial tcp 0.0.0.0:8082: getsockopt: connection refused
172.17.0.1 - [13/Nov/2017:12:11:56 +0000] "GET /brokers HTTP/1.1" 502 16
13/Nov/2017:12:11:56 +0000 [ERROR 502 /connectors] dial tcp 0.0.0.0:8083: getsockopt: connection refused
172.17.0.1 - [13/Nov/2017:12:11:56 +0000] "GET /connectors HTTP/1.1" 502 16
172.17.0.1 - [13/Nov/2017:12:12:06 +0000] "GET /coyote-tests/results HTTP/1.1" 200 33
13/Nov/2017:12:12:06 +0000 [ERROR 502 /topics/] dial tcp 0.0.0.0:8082: getsockopt: connection refused
172.17.0.1 - [13/Nov/2017:12:12:06 +0000] "GET /topics/ HTTP/1.1" 502 16
13/Nov/2017:12:12:06 +0000 [ERROR 502 /brokers] dial tcp 0.0.0.0:8082: getsockopt: connection refused
172.17.0.1 - [13/Nov/2017:12:12:06 +0000] "GET /brokers HTTP/1.1" 502 16
13/Nov/2017:12:12:06 +0000 [ERROR 502 /connectors] dial tcp 0.0.0.0:8083: getsockopt: connection refused
172.17.0.1 - [13/Nov/2017:12:12:06 +0000] "GET /connectors HTTP/1.1" 502 16
172.17.0.1 - [13/Nov/2017:12:12:07 +0000] "GET /subjects/ HTTP/1.1" 200 2
13/Nov/2017:12:12:16 +0000 [ERROR 502 /connectors] dial tcp 0.0.0.0:8083: getsockopt: connection refused
172.17.0.1 - [13/Nov/2017:12:12:16 +0000] "GET /connectors HTTP/1.1" 502 16
172.17.0.1 - [13/Nov/2017:12:12:16 +0000] "GET /coyote-tests/results HTTP/1.1" 200 33
172.17.0.1 - [13/Nov/2017:12:12:16 +0000] "GET /subjects/ HTTP/1.1" 200 2
172.17.0.1 - [13/Nov/2017:12:12:17 +0000] "GET /topics/ HTTP/1.1" 200 12
172.17.0.1 - [13/Nov/2017:12:12:17 +0000] "GET /brokers HTTP/1.1" 200 15
13/Nov/2017:12:12:26 +0000 [ERROR 502 /connectors] dial tcp 0.0.0.0:8083: getsockopt: connection refused
172.17.0.1 - [13/Nov/2017:12:12:26 +0000] "GET /connectors HTTP/1.1" 502 16
172.17.0.1 - [13/Nov/2017:12:12:26 +0000] "GET /coyote-tests/results HTTP/1.1" 200 33
172.17.0.1 - [13/Nov/2017:12:12:26 +0000] "GET /topics/ HTTP/1.1" 200 12
172.17.0.1 - [13/Nov/2017:12:12:26 +0000] "GET /subjects/ HTTP/1.1" 200 2
172.17.0.1 - [13/Nov/2017:12:12:26 +0000] "GET /brokers HTTP/1.1" 200 15
13/Nov/2017:12:12:36 +0000 [ERROR 502 /connectors] dial tcp 0.0.0.0:8083: getsockopt: connection refused
172.17.0.1 - [13/Nov/2017:12:12:36 +0000] "GET /connectors HTTP/1.1" 502 16
172.17.0.1 - [13/Nov/2017:12:12:36 +0000] "GET /topics/ HTTP/1.1" 200 42
172.17.0.1 - [13/Nov/2017:12:12:36 +0000] "GET /coyote-tests/results HTTP/1.1" 200 33
172.17.0.1 - [13/Nov/2017:12:12:36 +0000] "GET /subjects/ HTTP/1.1" 200 2
172.17.0.1 - [13/Nov/2017:12:12:36 +0000] "GET /brokers HTTP/1.1" 200 15
172.17.0.1 - [13/Nov/2017:12:12:46 +0000] "GET /coyote-tests/results HTTP/1.1" 200 33
172.17.0.1 - [13/Nov/2017:12:12:46 +0000] "GET /topics/ HTTP/1.1" 200 103
13/Nov/2017:12:12:46 +0000 [ERROR 502 /connectors] dial tcp 0.0.0.0:8083: getsockopt: connection refused
172.17.0.1 - [13/Nov/2017:12:12:46 +0000] "GET /connectors HTTP/1.1" 502 16
172.17.0.1 - [13/Nov/2017:12:12:46 +0000] "GET /subjects/ HTTP/1.1" 200 2
172.17.0.1 - [13/Nov/2017:12:12:46 +0000] "GET /brokers HTTP/1.1" 200 15
13/Nov/2017:12:12:56 +0000 [ERROR 502 /connectors] dial tcp 0.0.0.0:8083: getsockopt: connection refused
172.17.0.1 - [13/Nov/2017:12:12:56 +0000] "GET /connectors HTTP/1.1" 502 16
172.17.0.1 - [13/Nov/2017:12:12:56 +0000] "GET /coyote-tests/results HTTP/1.1" 200 33
172.17.0.1 - [13/Nov/2017:12:12:56 +0000] "GET /topics/ HTTP/1.1" 200 176
172.17.0.1 - [13/Nov/2017:12:12:56 +0000] "GET /subjects/ HTTP/1.1" 200 2
172.17.0.1 - [13/Nov/2017:12:12:56 +0000] "GET /brokers HTTP/1.1" 200 15
172.17.0.1 - [13/Nov/2017:12:12:59 +0000] "GET / HTTP/1.1" 304 0
172.17.0.1 - [13/Nov/2017:12:12:59 +0000] "GET /env.js HTTP/1.1" 200 1881
172.17.0.1 - [13/Nov/2017:12:12:59 +0000] "GET /coyote-tests/results HTTP/1.1" 200 33
13/Nov/2017:12:12:59 +0000 [ERROR 502 /connectors] dial tcp 0.0.0.0:8083: getsockopt: connection refused

The web interface shows:

[two screenshots of the web interface omitted]

The connect-distributed log shows:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/confluent/share/java/kafka/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/confluent/share/java/kafka-connect-hdfs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/confluent/share/java/kafka-connect-twitter/kafka-connect-twitter-0.1-master-af63e4c-cp3.3.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2017-11-13 12:02:06,983] INFO Loading plugin from: /opt/connectors/kafka-connect-yahoo (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-13 12:02:15,248] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-yahoo/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-13 12:02:15,249] INFO Added plugin 'com.datamountaineer.streamreactor.connect.yahoo.source.YahooSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-13 12:02:15,251] INFO Added plugin 'io.confluent.connect.avro.AvroConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-13 12:02:15,252] INFO Added plugin 'com.datamountaineer.streamreactor.connect.converters.source.JsonResilientConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-13 12:02:15,252] INFO Added plugin 'org.apache.kafka.connect.json.JsonConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-13 12:02:15,252] INFO Added plugin 'org.apache.kafka.connect.storage.StringConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-13 12:02:15,253] INFO Added plugin 'com.landoop.connect.sql.Transformation' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-13 12:02:15,253] INFO Loading plugin from: /opt/connectors/kafka-connect-coap (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-13 12:02:19,223] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-coap/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-13 12:02:19,223] INFO Added plugin 'com.datamountaineer.streamreactor.connect.coap.sink.CoapSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-13 12:02:19,224] INFO Added plugin 'com.datamountaineer.streamreactor.connect.coap.source.CoapSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-13 12:02:19,224] INFO Loading plugin from: /opt/connectors/kafka-connect-influxdb (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-13 12:02:23,793] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-influxdb/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-13 12:02:23,793] INFO Added plugin 'com.datamountaineer.streamreactor.connect.influx.InfluxSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-13 12:02:23,794] INFO Loading plugin from: /opt/connectors/kafka-connect-ftp (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-13 12:02:27,736] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-ftp/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-13 12:02:27,736] INFO Added plugin 'com.datamountaineer.streamreactor.connect.ftp.source.FtpSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-13 12:02:27,736] INFO Loading plugin from: /opt/connectors/kafka-connect-bloomberg (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-13 12:02:30,917] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-bloomberg/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-13 12:02:30,917] INFO Added plugin 'com.datamountaineer.streamreactor.connect.bloomberg.BloombergSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-13 12:02:30,918] INFO Loading plugin from: /opt/connectors/kafka-connect-cassandra (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-13 12:02:35,268] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-cassandra/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-13 12:02:35,268] INFO Added plugin 'com.datamountaineer.streamreactor.connect.cassandra.source.CassandraSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-13 12:02:35,268] INFO Added plugin 'com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-13 12:02:35,268] INFO Loading plugin from: /opt/connectors/kafka-connect-hazelcast (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-13 12:02:40,416] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-hazelcast/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-13 12:02:40,416] INFO Added plugin 'com.datamountaineer.streamreactor.connect.hazelcast.sink.HazelCastSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-13 12:02:40,416] INFO Loading plugin from: /opt/connectors/kafka-connect-azure-documentdb (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-13 12:02:45,742] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-azure-documentdb/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-13 12:02:45,742] INFO Added plugin 'com.datamountaineer.streamreactor.connect.azure.documentdb.sink.DocumentDbSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-13 12:02:45,742] INFO Loading plugin from: /opt/connectors/kafka-connect-rethink (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-13 12:02:50,441] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-rethink/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-13 12:02:50,441] INFO Added plugin 'com.datamountaineer.streamreactor.connect.rethink.source.ReThinkSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-13 12:02:50,442] INFO Added plugin 'com.datamountaineer.streamreactor.connect.rethink.sink.ReThinkSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-13 12:02:50,443] INFO Loading plugin from: /opt/connectors/kafka-connect-mongodb (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/confluent/share/java/kafka/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/confluent/share/java/kafka-connect-hdfs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/confluent/share/java/kafka-connect-twitter/kafka-connect-twitter-0.1-master-af63e4c-cp3.3.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2017-11-13 12:03:15,923] INFO Loading plugin from: /opt/connectors/kafka-connect-yahoo (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)

It seems connect-distributed keeps getting killed with SIGKILL and restarted by supervisord, so the UI can never reach it and the connection errors persist.
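
On Docker for Mac, repeated SIGKILLs like this usually mean the Docker VM is hitting its memory limit and the kernel is killing the largest JVM. A quick check (the 4 GB figure is a rule of thumb, not a documented requirement):

# Print the memory available to the Docker VM, in bytes;
# the full Kafka + Connect + Schema Registry stack tends to need 4GB or more
docker info --format '{{.MemTotal}}'

If this reports about 2 GB (the Docker for Mac default at the time), raising the limit in Docker's preferences usually stops the restart loop.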

Web Only Mode on Mac

I'm trying to run fast-data-dev on a Mac:

docker run --rm -it --net=host -e WEB_ONLY=1 -e DEBUG=1 landoop/fast-data-dev

The docs say: "This is a special mode only for Linux hosts." Is it possible to run the fast-data-dev application in this mode on a Mac? I'm running a default local installation of the Confluent platform (zookeeper/kafka/schema-registry/kafka-rest), and the main reason I ask is that the application couldn't connect to the locally running components.
My configuration: Mac, Docker (version 17.03.1-ce, build c6d412e, not docker-machine), Java 8.
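
For context: on macOS, --net=host attaches the container to the network of the Linux VM that Docker runs in, not to the Mac itself, so localhost inside the container never reaches services bound on the Mac. A throwaway container makes this visible (a sketch):

# On macOS this prints the Docker VM's network interfaces, not the Mac's,
# which is why WEB_ONLY mode cannot see a Confluent stack running on the host
docker run --rm --net=host alpine ip addr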

connect-distributed fails if CONNECT_PORT env variable set

If I comment out CONNECT_PORT below, connect-distributed comes up fine.
(This is not a blocking issue for me; just FYI.)

  # CONNECT_PORT=8082
  kafka:
    image: landoop/fast-data-dev
    network_mode: host
    volumes:
      # https://github.com/Landoop/fast-data-dev/issues/37
      - "./volumes/kafka:/tmp"
    environment:
      ADV_HOST: ${ADV_HOST}
      ENABLE_SSL: ${ENABLE_SSL}
      WEB_PORT: ${WEB_PORT}
      CONNECT_PORT: ${CONNECT_PORT}
      REGISTRY_PORT: ${REGISTRY_PORT}
      USER: ${KAFKA_UI_USER}
      PASSWORD: ${KAFKA_UI_PASSWORD}
      BROKER_SSL_PORT: ${BROKER_SSL_PORT}
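
A likely culprit: the DistributedConfig dump below shows rest.port = 8082 for the Connect worker, and 8082 is the REST proxy's default port, so with CONNECT_PORT=8082 the two services race for the same port. A hedged fix for the .env file (the value is illustrative):

# Give Connect a port nothing else in the stack uses; 8082 belongs to the REST proxy
CONNECT_PORT=8083

The connect-distributed log from the failing run (with CONNECT_PORT=8082 set) shows: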

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/confluent/share/java/kafka/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/confluent/share/java/kafka-connect-hdfs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/confluent/share/java/kafka-connect-twitter/kafka-connect-twitter-0.1-master-af63e4c-cp3.3.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2017-11-07 06:00:39,982] INFO Loading plugin from: /opt/connectors/kafka-connect-cassandra (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-07 06:00:49,501] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-cassandra/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-07 06:00:49,514] INFO Added plugin 'com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:00:49,515] INFO Added plugin 'com.datamountaineer.streamreactor.connect.cassandra.source.CassandraSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:00:49,515] INFO Added plugin 'org.apache.kafka.connect.storage.StringConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:00:49,516] INFO Added plugin 'io.confluent.connect.avro.AvroConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:00:49,516] INFO Added plugin 'org.apache.kafka.connect.json.JsonConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:00:49,516] INFO Added plugin 'com.datamountaineer.streamreactor.connect.converters.source.JsonResilientConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:00:49,516] INFO Added plugin 'com.landoop.connect.sql.Transformation' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:00:49,516] INFO Loading plugin from: /opt/connectors/kafka-connect-jms (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-07 06:00:54,149] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-jms/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-07 06:00:54,149] INFO Added plugin 'com.datamountaineer.streamreactor.connect.jms.sink.JMSSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:00:54,150] INFO Added plugin 'com.datamountaineer.streamreactor.connect.jms.source.JMSSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:00:54,150] INFO Added plugin 'com.datamountaineer.streamreactor.connect.converters.ByteArrayConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:00:54,151] INFO Loading plugin from: /opt/connectors/kafka-connect-hazelcast (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-07 06:00:58,537] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-hazelcast/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-07 06:00:58,540] INFO Added plugin 'com.datamountaineer.streamreactor.connect.hazelcast.sink.HazelCastSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:00:58,540] INFO Loading plugin from: /opt/connectors/kafka-connect-druid (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-07 06:01:04,950] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-druid/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-07 06:01:04,950] INFO Added plugin 'com.datamountaineer.streamreactor.connect.druid.DruidSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:01:04,950] INFO Loading plugin from: /opt/connectors/kafka-connect-blockchain (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-07 06:01:09,582] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-blockchain/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-07 06:01:09,582] INFO Added plugin 'com.datamountaineer.streamreactor.connect.blockchain.source.BlockchainSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:01:09,583] INFO Loading plugin from: /opt/connectors/kafka-connect-rethink (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-07 06:01:14,371] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-rethink/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-07 06:01:14,371] INFO Added plugin 'com.datamountaineer.streamreactor.connect.rethink.source.ReThinkSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:01:14,375] INFO Added plugin 'com.datamountaineer.streamreactor.connect.rethink.sink.ReThinkSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:01:14,375] INFO Loading plugin from: /opt/connectors/kafka-connect-azure-documentdb (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-07 06:01:18,616] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-azure-documentdb/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-07 06:01:18,616] INFO Added plugin 'com.datamountaineer.streamreactor.connect.azure.documentdb.sink.DocumentDbSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:01:18,616] INFO Loading plugin from: /opt/connectors/kafka-connect-redis (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-07 06:01:22,627] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-redis/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-07 06:01:22,627] INFO Added plugin 'com.datamountaineer.streamreactor.connect.redis.sink.RedisSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:01:22,627] INFO Loading plugin from: /opt/connectors/kafka-connect-kudu (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-07 06:01:27,411] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-kudu/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-07 06:01:27,411] INFO Added plugin 'com.datamountaineer.streamreactor.connect.kudu.sink.KuduSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:01:27,411] INFO Loading plugin from: /opt/connectors/kafka-connect-hbase (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-07 06:01:46,891] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-hbase/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-07 06:01:46,891] INFO Added plugin 'com.datamountaineer.streamreactor.connect.hbase.HbaseSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:01:46,891] INFO Loading plugin from: /opt/connectors/kafka-connect-ftp (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-07 06:01:51,127] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-ftp/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-07 06:01:51,127] INFO Added plugin 'com.datamountaineer.streamreactor.connect.ftp.source.FtpSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:01:51,127] INFO Loading plugin from: /opt/connectors/kafka-connect-coap (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-07 06:01:55,337] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-coap/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-07 06:01:55,337] INFO Added plugin 'com.datamountaineer.streamreactor.connect.coap.sink.CoapSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:01:55,337] INFO Added plugin 'com.datamountaineer.streamreactor.connect.coap.source.CoapSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:01:55,337] INFO Loading plugin from: /opt/connectors/kafka-connect-voltdb (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-07 06:01:59,195] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-voltdb/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-07 06:01:59,195] INFO Added plugin 'com.datamountaineer.streamreactor.connect.voltdb.VoltSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:01:59,196] INFO Loading plugin from: /opt/connectors/kafka-connect-influxdb (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-07 06:02:03,382] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-influxdb/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-07 06:02:03,383] INFO Added plugin 'com.datamountaineer.streamreactor.connect.influx.InfluxSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:03,384] INFO Loading plugin from: /opt/connectors/kafka-connect-bloomberg (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-07 06:02:07,547] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-bloomberg/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-07 06:02:07,548] INFO Added plugin 'com.datamountaineer.streamreactor.connect.bloomberg.BloombergSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:07,548] INFO Loading plugin from: /opt/connectors/kafka-connect-elastic5 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-07 06:02:14,889] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-elastic5/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-07 06:02:14,890] INFO Added plugin 'com.datamountaineer.streamreactor.connect.elastic5.ElasticSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:14,890] INFO Loading plugin from: /opt/connectors/kafka-connect-elastic (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-07 06:02:20,179] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-elastic/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-07 06:02:20,179] INFO Added plugin 'com.datamountaineer.streamreactor.connect.elastic.ElasticSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:20,179] INFO Loading plugin from: /opt/connectors/kafka-connect-mongodb (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-07 06:02:23,997] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-mongodb/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-07 06:02:23,997] INFO Added plugin 'com.datamountaineer.streamreactor.connect.mongodb.sink.MongoSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:23,997] INFO Loading plugin from: /opt/connectors/kafka-connect-mqtt (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-07 06:02:28,698] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-mqtt/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-07 06:02:28,698] INFO Added plugin 'com.datamountaineer.streamreactor.connect.mqtt.source.MqttSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:28,698] INFO Added plugin 'com.datamountaineer.streamreactor.connect.mqtt.sink.MqttSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:28,699] INFO Loading plugin from: /opt/connectors/kafka-connect-yahoo (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:176)
[2017-11-07 06:02:32,788] INFO Registered loader: PluginClassLoader{pluginLocation=file:/opt/connectors/kafka-connect-yahoo/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-07 06:02:32,789] INFO Added plugin 'com.datamountaineer.streamreactor.connect.yahoo.source.YahooSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,139] INFO Registered loader: sun.misc.Launcher$AppClassLoader@764c12b6 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:199)
[2017-11-07 06:02:47,140] INFO Added plugin 'io.confluent.connect.elasticsearch.ElasticsearchSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,140] INFO Added plugin 'org.apache.kafka.connect.tools.MockConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,140] INFO Added plugin 'io.confluent.connect.s3.S3SinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,140] INFO Added plugin 'io.confluent.connect.hdfs.HdfsSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,140] INFO Added plugin 'com.eneco.trading.kafka.connect.twitter.TwitterSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,140] INFO Added plugin 'org.apache.kafka.connect.tools.SchemaSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,140] INFO Added plugin 'io.confluent.connect.jdbc.JdbcSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,141] INFO Added plugin 'org.apache.kafka.connect.tools.VerifiableSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,141] INFO Added plugin 'org.apache.kafka.connect.tools.MockSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,141] INFO Added plugin 'org.apache.kafka.connect.tools.MockSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,141] INFO Added plugin 'com.eneco.trading.kafka.connect.twitter.TwitterSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,141] INFO Added plugin 'org.apache.kafka.connect.tools.VerifiableSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,141] INFO Added plugin 'io.confluent.connect.storage.tools.SchemaSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,142] INFO Added plugin 'org.apache.kafka.connect.file.FileStreamSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,142] INFO Added plugin 'io.confluent.connect.hdfs.tools.SchemaSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,143] INFO Added plugin 'org.apache.kafka.connect.file.FileStreamSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,144] INFO Added plugin 'io.confluent.connect.jdbc.JdbcSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,144] INFO Added plugin 'org.apache.kafka.connect.converters.ByteArrayConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,145] INFO Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,146] INFO Added plugin 'org.apache.kafka.connect.transforms.Flatten$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,146] INFO Added plugin 'org.apache.kafka.connect.transforms.Flatten$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,147] INFO Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,147] INFO Added plugin 'org.apache.kafka.connect.transforms.HoistField$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,148] INFO Added plugin 'org.apache.kafka.connect.transforms.HoistField$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,148] INFO Added plugin 'org.apache.kafka.connect.transforms.Cast$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,149] INFO Added plugin 'org.apache.kafka.connect.transforms.InsertField$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,149] INFO Added plugin 'org.apache.kafka.connect.transforms.MaskField$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,150] INFO Added plugin 'org.apache.kafka.connect.transforms.ValueToKey' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,150] INFO Added plugin 'org.apache.kafka.connect.transforms.Cast$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,151] INFO Added plugin 'org.apache.kafka.connect.transforms.TimestampRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,151] INFO Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,152] INFO Added plugin 'org.apache.kafka.connect.transforms.ReplaceField$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,152] INFO Added plugin 'org.apache.kafka.connect.transforms.RegexRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,153] INFO Added plugin 'org.apache.kafka.connect.transforms.SetSchemaMetadata$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,153] INFO Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,154] INFO Added plugin 'org.apache.kafka.connect.transforms.TimestampConverter$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,154] INFO Added plugin 'org.apache.kafka.connect.transforms.ExtractField$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,155] INFO Added plugin 'org.apache.kafka.connect.transforms.MaskField$Key' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,155] INFO Added plugin 'org.apache.kafka.connect.transforms.InsertField$Value' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-11-07 06:02:47,158] INFO Added aliases 'DocumentDbSinkConnector' and 'DocumentDbSink' to plugin 'com.datamountaineer.streamreactor.connect.azure.documentdb.sink.DocumentDbSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,159] INFO Added aliases 'BlockchainSourceConnector' and 'BlockchainSource' to plugin 'com.datamountaineer.streamreactor.connect.blockchain.source.BlockchainSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,160] INFO Added aliases 'BloombergSourceConnector' and 'BloombergSource' to plugin 'com.datamountaineer.streamreactor.connect.bloomberg.BloombergSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,160] INFO Added aliases 'CassandraSinkConnector' and 'CassandraSink' to plugin 'com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,161] INFO Added aliases 'CassandraSourceConnector' and 'CassandraSource' to plugin 'com.datamountaineer.streamreactor.connect.cassandra.source.CassandraSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,162] INFO Added aliases 'CoapSinkConnector' and 'CoapSink' to plugin 'com.datamountaineer.streamreactor.connect.coap.sink.CoapSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,163] INFO Added aliases 'CoapSourceConnector' and 'CoapSource' to plugin 'com.datamountaineer.streamreactor.connect.coap.source.CoapSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,163] INFO Added aliases 'DruidSinkConnector' and 'DruidSink' to plugin 'com.datamountaineer.streamreactor.connect.druid.DruidSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,164] INFO Added aliases 'FtpSourceConnector' and 'FtpSource' to plugin 'com.datamountaineer.streamreactor.connect.ftp.source.FtpSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,165] INFO Added aliases 'HazelCastSinkConnector' and 'HazelCastSink' to plugin 'com.datamountaineer.streamreactor.connect.hazelcast.sink.HazelCastSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,165] INFO Added aliases 'HbaseSinkConnector' and 'HbaseSink' to plugin 'com.datamountaineer.streamreactor.connect.hbase.HbaseSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,166] INFO Added aliases 'InfluxSinkConnector' and 'InfluxSink' to plugin 'com.datamountaineer.streamreactor.connect.influx.InfluxSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,167] INFO Added aliases 'JMSSinkConnector' and 'JMSSink' to plugin 'com.datamountaineer.streamreactor.connect.jms.sink.JMSSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,167] INFO Added aliases 'JMSSourceConnector' and 'JMSSource' to plugin 'com.datamountaineer.streamreactor.connect.jms.source.JMSSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,168] INFO Added aliases 'KuduSinkConnector' and 'KuduSink' to plugin 'com.datamountaineer.streamreactor.connect.kudu.sink.KuduSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,169] INFO Added aliases 'MongoSinkConnector' and 'MongoSink' to plugin 'com.datamountaineer.streamreactor.connect.mongodb.sink.MongoSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,169] INFO Added aliases 'MqttSinkConnector' and 'MqttSink' to plugin 'com.datamountaineer.streamreactor.connect.mqtt.sink.MqttSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,170] INFO Added aliases 'MqttSourceConnector' and 'MqttSource' to plugin 'com.datamountaineer.streamreactor.connect.mqtt.source.MqttSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,171] INFO Added aliases 'RedisSinkConnector' and 'RedisSink' to plugin 'com.datamountaineer.streamreactor.connect.redis.sink.RedisSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,171] INFO Added aliases 'ReThinkSinkConnector' and 'ReThinkSink' to plugin 'com.datamountaineer.streamreactor.connect.rethink.sink.ReThinkSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,172] INFO Added aliases 'ReThinkSourceConnector' and 'ReThinkSource' to plugin 'com.datamountaineer.streamreactor.connect.rethink.source.ReThinkSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,173] INFO Added aliases 'VoltSinkConnector' and 'VoltSink' to plugin 'com.datamountaineer.streamreactor.connect.voltdb.VoltSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,173] INFO Added aliases 'YahooSourceConnector' and 'YahooSource' to plugin 'com.datamountaineer.streamreactor.connect.yahoo.source.YahooSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,174] INFO Added aliases 'TwitterSinkConnector' and 'TwitterSink' to plugin 'com.eneco.trading.kafka.connect.twitter.TwitterSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,174] INFO Added aliases 'TwitterSourceConnector' and 'TwitterSource' to plugin 'com.eneco.trading.kafka.connect.twitter.TwitterSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,175] INFO Added aliases 'ElasticsearchSinkConnector' and 'ElasticsearchSink' to plugin 'io.confluent.connect.elasticsearch.ElasticsearchSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,176] INFO Added aliases 'HdfsSinkConnector' and 'HdfsSink' to plugin 'io.confluent.connect.hdfs.HdfsSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,176] INFO Added aliases 'JdbcSinkConnector' and 'JdbcSink' to plugin 'io.confluent.connect.jdbc.JdbcSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,177] INFO Added aliases 'JdbcSourceConnector' and 'JdbcSource' to plugin 'io.confluent.connect.jdbc.JdbcSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,178] INFO Added aliases 'S3SinkConnector' and 'S3Sink' to plugin 'io.confluent.connect.s3.S3SinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,178] INFO Added aliases 'FileStreamSinkConnector' and 'FileStreamSink' to plugin 'org.apache.kafka.connect.file.FileStreamSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,179] INFO Added aliases 'FileStreamSourceConnector' and 'FileStreamSource' to plugin 'org.apache.kafka.connect.file.FileStreamSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,179] INFO Added aliases 'MockConnector' and 'Mock' to plugin 'org.apache.kafka.connect.tools.MockConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,180] INFO Added aliases 'MockSinkConnector' and 'MockSink' to plugin 'org.apache.kafka.connect.tools.MockSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,181] INFO Added aliases 'MockSourceConnector' and 'MockSource' to plugin 'org.apache.kafka.connect.tools.MockSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,181] INFO Added aliases 'VerifiableSinkConnector' and 'VerifiableSink' to plugin 'org.apache.kafka.connect.tools.VerifiableSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,182] INFO Added aliases 'VerifiableSourceConnector' and 'VerifiableSource' to plugin 'org.apache.kafka.connect.tools.VerifiableSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,182] INFO Added aliases 'JsonResilientConverter' and 'JsonResilient' to plugin 'com.datamountaineer.streamreactor.connect.converters.source.JsonResilientConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,183] INFO Added aliases 'AvroConverter' and 'Avro' to plugin 'io.confluent.connect.avro.AvroConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,183] INFO Added aliases 'JsonConverter' and 'Json' to plugin 'org.apache.kafka.connect.json.JsonConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,184] INFO Added aliases 'StringConverter' and 'String' to plugin 'org.apache.kafka.connect.storage.StringConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:293)
[2017-11-07 06:02:47,185] INFO Added alias 'Transformation' to plugin 'com.landoop.connect.sql.Transformation' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:290)
[2017-11-07 06:02:47,186] INFO Added alias 'RegexRouter' to plugin 'org.apache.kafka.connect.transforms.RegexRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:290)
[2017-11-07 06:02:47,187] INFO Added alias 'TimestampRouter' to plugin 'org.apache.kafka.connect.transforms.TimestampRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:290)
[2017-11-07 06:02:47,188] INFO Added alias 'ValueToKey' to plugin 'org.apache.kafka.connect.transforms.ValueToKey' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:290)
[2017-11-07 06:02:47,215] INFO DistributedConfig values: 
	access.control.allow.methods = GET,POST,PUT,DELETE,OPTIONS
	access.control.allow.origin = *
	bootstrap.servers = [localhost:9092]
	client.id = 
	config.storage.replication.factor = 1
	config.storage.topic = connect-configs
	connections.max.idle.ms = 540000
	group.id = connect-cluster
	heartbeat.interval.ms = 3000
	internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
	internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
	key.converter = class io.confluent.connect.avro.AvroConverter
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.sample.window.ms = 30000
	offset.flush.interval.ms = 60000
	offset.flush.timeout.ms = 5000
	offset.storage.partitions = 25
	offset.storage.replication.factor = 1
	offset.storage.topic = connect-offsets
	plugin.path = [/opt/connectors, /extra-connect-jars, /connectors]
	rebalance.timeout.ms = 60000
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 40000
	rest.advertised.host.name = 10.96.11.187
	rest.advertised.port = null
	rest.host.name = null
	rest.port = 8082
	retry.backoff.ms = 100
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	session.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	status.storage.partitions = 5
	status.storage.replication.factor = 1
	status.storage.topic = connect-statuses
	task.shutdown.graceful.timeout.ms = 5000
	value.converter = class io.confluent.connect.avro.AvroConverter
	worker.sync.timeout.ms = 3000
	worker.unsync.backoff.ms = 300000
 (org.apache.kafka.connect.runtime.distributed.DistributedConfig:223)
[2017-11-07 06:02:47,395] INFO Logging initialized @128737ms (org.eclipse.jetty.util.log:186)
[2017-11-07 06:02:47,562] INFO AvroConverterConfig values: 
	schema.registry.url = [http://localhost:8900]
	max.schemas.per.subject = 1000
 (io.confluent.connect.avro.AvroConverterConfig:170)
[2017-11-07 06:02:47,780] INFO AvroDataConfig values: 
	schemas.cache.config = 1000
	enhanced.avro.schema.support = false
	connect.meta.data = true
 (io.confluent.connect.avro.AvroDataConfig:170)
[2017-11-07 06:02:47,782] INFO AvroConverterConfig values: 
	schema.registry.url = [http://localhost:8900]
	max.schemas.per.subject = 1000
 (io.confluent.connect.avro.AvroConverterConfig:170)
[2017-11-07 06:02:47,783] INFO AvroDataConfig values: 
	schemas.cache.config = 1000
	enhanced.avro.schema.support = false
	connect.meta.data = true
 (io.confluent.connect.avro.AvroDataConfig:170)
[2017-11-07 06:02:47,837] INFO Kafka version : 0.11.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser:83)
[2017-11-07 06:02:47,838] INFO Kafka commitId : 5cadaa94d0a69e0d (org.apache.kafka.common.utils.AppInfoParser:84)
[2017-11-07 06:02:47,842] INFO Kafka Connect starting (org.apache.kafka.connect.runtime.Connect:49)
[2017-11-07 06:02:47,843] INFO Starting REST server (org.apache.kafka.connect.runtime.rest.RestServer:98)
[2017-11-07 06:02:47,843] INFO Herder starting (org.apache.kafka.connect.runtime.distributed.DistributedHerder:192)
[2017-11-07 06:02:47,843] INFO Worker starting (org.apache.kafka.connect.runtime.Worker:144)
[2017-11-07 06:02:47,843] INFO Starting KafkaOffsetBackingStore (org.apache.kafka.connect.storage.KafkaOffsetBackingStore:108)
[2017-11-07 06:02:47,843] INFO Starting KafkaBasedLog with topic connect-offsets (org.apache.kafka.connect.util.KafkaBasedLog:124)
[2017-11-07 06:02:47,847] INFO AdminClientConfig values: 
	bootstrap.servers = [localhost:9092]
	client.id = 
	connections.max.idle.ms = 300000
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 120000
	retries = 5
	retry.backoff.ms = 100
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
 (org.apache.kafka.clients.admin.AdminClientConfig:223)
[2017-11-07 06:02:47,868] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:47,868] WARN The configuration 'rest.advertised.host.name' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:47,868] WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:47,868] WARN The configuration 'group.id' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:47,868] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:47,868] WARN The configuration 'internal.key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:47,868] WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:47,868] WARN The configuration 'rest.port' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:47,869] WARN The configuration 'internal.key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:47,869] WARN The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:47,869] WARN The configuration 'access.control.allow.methods' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:47,869] WARN The configuration 'access.control.allow.origin' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:47,869] WARN The configuration 'internal.value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:47,869] WARN The configuration 'status.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:47,869] WARN The configuration 'internal.value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:47,869] WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:47,869] WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:47,869] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:47,870] WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:47,870] WARN The configuration 'key.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:47,951] INFO jetty-9.2.15.v20160210 (org.eclipse.jetty.server.Server:327)
[2017-11-07 06:02:48,191] INFO ProducerConfig values: 
	acks = all
	batch.size = 16384
	bootstrap.servers = [localhost:9092]
	buffer.memory = 33554432
	client.id = 
	compression.type = none
	connections.max.idle.ms = 540000
	enable.idempotence = false
	interceptor.classes = null
	key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 1
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 2147483647
	retry.backoff.ms = 100
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
 (org.apache.kafka.clients.producer.ProducerConfig:223)
[2017-11-07 06:02:48,219] WARN The configuration 'group.id' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,219] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,219] WARN The configuration 'internal.key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,219] WARN The configuration 'status.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,219] WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,219] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,219] WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,220] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,220] WARN The configuration 'rest.advertised.host.name' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,220] WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,220] WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,220] WARN The configuration 'rest.port' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,220] WARN The configuration 'internal.key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,220] WARN The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,221] WARN The configuration 'access.control.allow.methods' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,221] WARN The configuration 'access.control.allow.origin' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,221] WARN The configuration 'internal.value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,221] WARN The configuration 'internal.value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,221] WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,221] WARN The configuration 'key.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,221] INFO Kafka version : 0.11.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser:83)
[2017-11-07 06:02:48,222] INFO Kafka commitId : 5cadaa94d0a69e0d (org.apache.kafka.common.utils.AppInfoParser:84)
[2017-11-07 06:02:48,244] INFO ConsumerConfig values: 
	auto.commit.interval.ms = 5000
	auto.offset.reset = earliest
	bootstrap.servers = [localhost:9092]
	check.crcs = true
	client.id = 
	connections.max.idle.ms = 540000
	enable.auto.commit = false
	exclude.internal.topics = true
	fetch.max.bytes = 52428800
	fetch.max.wait.ms = 500
	fetch.min.bytes = 1
	group.id = connect-cluster
	heartbeat.interval.ms = 3000
	interceptor.classes = null
	internal.leave.group.on.close = true
	isolation.level = read_uncommitted
	key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
	max.partition.fetch.bytes = 1048576
	max.poll.interval.ms = 300000
	max.poll.records = 500
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 305000
	retry.backoff.ms = 100
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	session.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
 (org.apache.kafka.clients.consumer.ConsumerConfig:223)
[2017-11-07 06:02:48,284] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,284] WARN The configuration 'internal.key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,285] WARN The configuration 'status.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,285] WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,285] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,285] WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,285] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,285] WARN The configuration 'rest.advertised.host.name' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,285] WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,285] WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,285] WARN The configuration 'rest.port' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,285] WARN The configuration 'internal.key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,286] WARN The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,286] WARN The configuration 'access.control.allow.methods' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,286] WARN The configuration 'access.control.allow.origin' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,286] WARN The configuration 'internal.value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,286] WARN The configuration 'internal.value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,286] WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,286] WARN The configuration 'key.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,286] INFO Kafka version : 0.11.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser:83)
[2017-11-07 06:02:48,286] INFO Kafka commitId : 5cadaa94d0a69e0d (org.apache.kafka.common.utils.AppInfoParser:84)
[2017-11-07 06:02:48,365] INFO Discovered coordinator 10.96.11.187:9092 (id: 2147483647 rack: null) for group connect-cluster. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:597)
[2017-11-07 06:02:48,426] INFO Finished reading KafkaBasedLog for topic connect-offsets (org.apache.kafka.connect.util.KafkaBasedLog:153)
[2017-11-07 06:02:48,427] INFO Started KafkaBasedLog for topic connect-offsets (org.apache.kafka.connect.util.KafkaBasedLog:155)
[2017-11-07 06:02:48,428] INFO Finished reading offsets topic and starting KafkaOffsetBackingStore (org.apache.kafka.connect.storage.KafkaOffsetBackingStore:110)
[2017-11-07 06:02:48,430] INFO Worker started (org.apache.kafka.connect.runtime.Worker:149)
[2017-11-07 06:02:48,430] INFO Starting KafkaBasedLog with topic connect-statuses (org.apache.kafka.connect.util.KafkaBasedLog:124)
[2017-11-07 06:02:48,430] INFO AdminClientConfig values: 
	bootstrap.servers = [localhost:9092]
	client.id = 
	connections.max.idle.ms = 300000
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 120000
	retries = 5
	retry.backoff.ms = 100
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
 (org.apache.kafka.clients.admin.AdminClientConfig:223)
[2017-11-07 06:02:48,432] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,432] WARN The configuration 'rest.advertised.host.name' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,433] WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,433] WARN The configuration 'group.id' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,433] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,433] WARN The configuration 'internal.key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,434] WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,434] WARN The configuration 'rest.port' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,434] WARN The configuration 'internal.key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,435] WARN The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,435] WARN The configuration 'access.control.allow.methods' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,435] WARN The configuration 'access.control.allow.origin' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,435] WARN The configuration 'internal.value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,436] WARN The configuration 'status.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,436] WARN The configuration 'internal.value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,436] WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,437] WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,437] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,437] WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,438] WARN The configuration 'key.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,558] INFO ProducerConfig values: 
	acks = all
	batch.size = 16384
	bootstrap.servers = [localhost:9092]
	buffer.memory = 33554432
	client.id = 
	compression.type = none
	connections.max.idle.ms = 540000
	enable.idempotence = false
	interceptor.classes = null
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 1
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 0
	retry.backoff.ms = 100
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
 (org.apache.kafka.clients.producer.ProducerConfig:223)
[2017-11-07 06:02:48,574] WARN The configuration 'group.id' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,575] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,575] WARN The configuration 'internal.key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,575] WARN The configuration 'status.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,575] WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,575] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,575] WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,575] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,575] WARN The configuration 'rest.advertised.host.name' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,575] WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,576] WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,576] WARN The configuration 'rest.port' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,576] WARN The configuration 'internal.key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,576] WARN The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,576] WARN The configuration 'access.control.allow.methods' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,576] WARN The configuration 'access.control.allow.origin' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,576] WARN The configuration 'internal.value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,576] WARN The configuration 'internal.value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,576] WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,577] WARN The configuration 'key.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,577] INFO Kafka version : 0.11.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser:83)
[2017-11-07 06:02:48,577] INFO Kafka commitId : 5cadaa94d0a69e0d (org.apache.kafka.common.utils.AppInfoParser:84)
[2017-11-07 06:02:48,578] INFO ConsumerConfig values: 
	auto.commit.interval.ms = 5000
	auto.offset.reset = earliest
	bootstrap.servers = [localhost:9092]
	check.crcs = true
	client.id = 
	connections.max.idle.ms = 540000
	enable.auto.commit = false
	exclude.internal.topics = true
	fetch.max.bytes = 52428800
	fetch.max.wait.ms = 500
	fetch.min.bytes = 1
	group.id = connect-cluster
	heartbeat.interval.ms = 3000
	interceptor.classes = null
	internal.leave.group.on.close = true
	isolation.level = read_uncommitted
	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
	max.partition.fetch.bytes = 1048576
	max.poll.interval.ms = 300000
	max.poll.records = 500
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 305000
	retry.backoff.ms = 100
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	session.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
 (org.apache.kafka.clients.consumer.ConsumerConfig:223)
[2017-11-07 06:02:48,580] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,581] WARN The configuration 'internal.key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,581] WARN The configuration 'status.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,581] WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,581] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,581] WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,581] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,581] WARN The configuration 'rest.advertised.host.name' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,581] WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,581] WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,582] WARN The configuration 'rest.port' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,582] WARN The configuration 'internal.key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,582] WARN The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,582] WARN The configuration 'access.control.allow.methods' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,582] WARN The configuration 'access.control.allow.origin' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,582] WARN The configuration 'internal.value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,582] WARN The configuration 'internal.value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,582] WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,582] WARN The configuration 'key.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,583] INFO Kafka version : 0.11.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser:83)
[2017-11-07 06:02:48,583] INFO Kafka commitId : 5cadaa94d0a69e0d (org.apache.kafka.common.utils.AppInfoParser:84)
Nov 07, 2017 6:02:48 AM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected: WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method listConnectorPlugins in org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource contains empty path annotation.
WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.

[2017-11-07 06:02:48,604] INFO Started o.e.j.s.ServletContextHandler@5068a1f5{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:744)
[2017-11-07 06:02:48,608] WARN FAILED ServerConnector@28b2da8a{HTTP/1.1}{0.0.0.0:8082}: java.net.BindException: Address in use (org.eclipse.jetty.util.component.AbstractLifeCycle:212)
java.net.BindException: Address in use
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:433)
	at sun.nio.ch.Net.bind(Net.java:425)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
	at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
	at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
	at org.eclipse.jetty.server.Server.doStart(Server.java:366)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
	at org.apache.kafka.connect.runtime.rest.RestServer.start(RestServer.java:145)
	at org.apache.kafka.connect.runtime.Connect.start(Connect.java:53)
	at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:85)
[2017-11-07 06:02:48,609] WARN FAILED org.eclipse.jetty.server.Server@51c7bd2e: java.net.BindException: Address in use (org.eclipse.jetty.util.component.AbstractLifeCycle:212)
java.net.BindException: Address in use
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:433)
	at sun.nio.ch.Net.bind(Net.java:425)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
	at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
	at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
	at org.eclipse.jetty.server.Server.doStart(Server.java:366)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
	at org.apache.kafka.connect.runtime.rest.RestServer.start(RestServer.java:145)
	at org.apache.kafka.connect.runtime.Connect.start(Connect.java:53)
	at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:85)
[2017-11-07 06:02:48,610] ERROR Failed to start Connect (org.apache.kafka.connect.cli.ConnectDistributed:87)
org.apache.kafka.connect.errors.ConnectException: Unable to start REST server
	at org.apache.kafka.connect.runtime.rest.RestServer.start(RestServer.java:147)
	at org.apache.kafka.connect.runtime.Connect.start(Connect.java:53)
	at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:85)
Caused by: java.net.BindException: Address in use
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:433)
	at sun.nio.ch.Net.bind(Net.java:425)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:321)
	at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
	at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
	at org.eclipse.jetty.server.Server.doStart(Server.java:366)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
	at org.apache.kafka.connect.runtime.rest.RestServer.start(RestServer.java:145)
	... 2 more
[2017-11-07 06:02:48,610] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:65)
[2017-11-07 06:02:48,610] INFO Stopping REST server (org.apache.kafka.connect.runtime.rest.RestServer:154)
[2017-11-07 06:02:48,611] INFO Stopped ServerConnector@28b2da8a{HTTP/1.1}{0.0.0.0:8082} (org.eclipse.jetty.server.ServerConnector:306)
[2017-11-07 06:02:48,616] INFO Stopped o.e.j.s.ServletContextHandler@5068a1f5{/,null,UNAVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:865)
[2017-11-07 06:02:48,618] INFO REST server stopped (org.apache.kafka.connect.runtime.rest.RestServer:165)
[2017-11-07 06:02:48,618] INFO Herder stopping (org.apache.kafka.connect.runtime.distributed.DistributedHerder:377)
[2017-11-07 06:02:48,624] INFO Discovered coordinator 10.96.11.187:9092 (id: 2147483647 rack: null) for group connect-cluster. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:597)
[2017-11-07 06:02:48,666] INFO Finished reading KafkaBasedLog for topic connect-statuses (org.apache.kafka.connect.util.KafkaBasedLog:153)
[2017-11-07 06:02:48,667] INFO Started KafkaBasedLog for topic connect-statuses (org.apache.kafka.connect.util.KafkaBasedLog:155)
[2017-11-07 06:02:48,668] INFO Starting KafkaConfigBackingStore (org.apache.kafka.connect.storage.KafkaConfigBackingStore:244)
[2017-11-07 06:02:48,668] INFO Starting KafkaBasedLog with topic connect-configs (org.apache.kafka.connect.util.KafkaBasedLog:124)
[2017-11-07 06:02:48,669] INFO AdminClientConfig values: 
	bootstrap.servers = [localhost:9092]
	client.id = 
	connections.max.idle.ms = 300000
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 120000
	retries = 5
	retry.backoff.ms = 100
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
 (org.apache.kafka.clients.admin.AdminClientConfig:223)
[2017-11-07 06:02:48,670] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,670] WARN The configuration 'rest.advertised.host.name' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,670] WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,670] WARN The configuration 'group.id' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,671] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,671] WARN The configuration 'internal.key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,671] WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,671] WARN The configuration 'rest.port' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,671] WARN The configuration 'internal.key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,671] WARN The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,671] WARN The configuration 'access.control.allow.methods' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,671] WARN The configuration 'access.control.allow.origin' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,671] WARN The configuration 'internal.value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,672] WARN The configuration 'status.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,672] WARN The configuration 'internal.value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,672] WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,672] WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,672] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,672] WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,672] WARN The configuration 'key.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:231)
[2017-11-07 06:02:48,784] INFO ProducerConfig values: 
	acks = all
	batch.size = 16384
	bootstrap.servers = [localhost:9092]
	buffer.memory = 33554432
	client.id = 
	compression.type = none
	connections.max.idle.ms = 540000
	enable.idempotence = false
	interceptor.classes = null
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 1
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 2147483647
	retry.backoff.ms = 100
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
 (org.apache.kafka.clients.producer.ProducerConfig:223)
[2017-11-07 06:02:48,787] WARN The configuration 'group.id' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,787] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,787] WARN The configuration 'internal.key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,787] WARN The configuration 'status.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,787] WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,787] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,787] WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,787] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,788] WARN The configuration 'rest.advertised.host.name' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,788] WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,788] WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,788] WARN The configuration 'rest.port' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,788] WARN The configuration 'internal.key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,788] WARN The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,788] WARN The configuration 'access.control.allow.methods' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,788] WARN The configuration 'access.control.allow.origin' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,788] WARN The configuration 'internal.value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,789] WARN The configuration 'internal.value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,789] WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,789] WARN The configuration 'key.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig:231)
[2017-11-07 06:02:48,789] INFO Kafka version : 0.11.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser:83)
[2017-11-07 06:02:48,789] INFO Kafka commitId : 5cadaa94d0a69e0d (org.apache.kafka.common.utils.AppInfoParser:84)
[2017-11-07 06:02:48,789] INFO ConsumerConfig values: 
	auto.commit.interval.ms = 5000
	auto.offset.reset = earliest
	bootstrap.servers = [localhost:9092]
	check.crcs = true
	client.id = 
	connections.max.idle.ms = 540000
	enable.auto.commit = false
	exclude.internal.topics = true
	fetch.max.bytes = 52428800
	fetch.max.wait.ms = 500
	fetch.min.bytes = 1
	group.id = connect-cluster
	heartbeat.interval.ms = 3000
	interceptor.classes = null
	internal.leave.group.on.close = true
	isolation.level = read_uncommitted
	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
	max.partition.fetch.bytes = 1048576
	max.poll.interval.ms = 300000
	max.poll.records = 500
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 305000
	retry.backoff.ms = 100
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	session.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
 (org.apache.kafka.clients.consumer.ConsumerConfig:223)
[2017-11-07 06:02:48,792] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,792] WARN The configuration 'internal.key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,792] WARN The configuration 'status.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,792] WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,792] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,793] WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,793] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,793] WARN The configuration 'rest.advertised.host.name' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,793] WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,793] WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,793] WARN The configuration 'rest.port' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,793] WARN The configuration 'internal.key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,793] WARN The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,793] WARN The configuration 'access.control.allow.methods' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,793] WARN The configuration 'access.control.allow.origin' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,794] WARN The configuration 'internal.value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,794] WARN The configuration 'internal.value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,794] WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,794] WARN The configuration 'key.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:231)
[2017-11-07 06:02:48,794] INFO Kafka version : 0.11.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser:83)
[2017-11-07 06:02:48,794] INFO Kafka commitId : 5cadaa94d0a69e0d (org.apache.kafka.common.utils.AppInfoParser:84)
[2017-11-07 06:02:48,813] INFO Discovered coordinator 10.96.11.187:9092 (id: 2147483647 rack: null) for group connect-cluster. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:597)
[2017-11-07 06:02:48,829] INFO Finished reading KafkaBasedLog for topic connect-configs (org.apache.kafka.connect.util.KafkaBasedLog:153)
[2017-11-07 06:02:48,830] INFO Started KafkaBasedLog for topic connect-configs (org.apache.kafka.connect.util.KafkaBasedLog:155)
[2017-11-07 06:02:48,831] INFO Started KafkaConfigBackingStore (org.apache.kafka.connect.storage.KafkaConfigBackingStore:249)
[2017-11-07 06:02:48,831] INFO Herder started (org.apache.kafka.connect.runtime.distributed.DistributedHerder:196)
[2017-11-07 06:02:48,831] INFO Stopping connectors and tasks that are still assigned to this worker. (org.apache.kafka.connect.runtime.distributed.DistributedHerder:351)
[2017-11-07 06:02:48,832] INFO Stopping KafkaBasedLog for topic connect-statuses (org.apache.kafka.connect.util.KafkaBasedLog:159)
[2017-11-07 06:02:48,833] INFO Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer:972)
[2017-11-07 06:02:48,836] INFO Stopped KafkaBasedLog for topic connect-statuses (org.apache.kafka.connect.util.KafkaBasedLog:185)
[2017-11-07 06:02:48,836] INFO Closing KafkaConfigBackingStore (org.apache.kafka.connect.storage.KafkaConfigBackingStore:254)
[2017-11-07 06:02:48,837] INFO Stopping KafkaBasedLog for topic connect-configs (org.apache.kafka.connect.util.KafkaBasedLog:159)
[2017-11-07 06:02:48,837] INFO Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer:972)
[2017-11-07 06:02:48,840] INFO Stopped KafkaBasedLog for topic connect-configs (org.apache.kafka.connect.util.KafkaBasedLog:185)
[2017-11-07 06:02:48,841] INFO Closed KafkaConfigBackingStore (org.apache.kafka.connect.storage.KafkaConfigBackingStore:256)
[2017-11-07 06:02:48,841] INFO Worker stopping (org.apache.kafka.connect.runtime.Worker:156)
[2017-11-07 06:02:48,841] INFO Stopping KafkaOffsetBackingStore (org.apache.kafka.connect.storage.KafkaOffsetBackingStore:115)
[2017-11-07 06:02:48,842] INFO Stopping KafkaBasedLog for topic connect-offsets (org.apache.kafka.connect.util.KafkaBasedLog:159)
[2017-11-07 06:02:48,842] INFO Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer:972)
[2017-11-07 06:02:48,845] INFO Stopped KafkaBasedLog for topic connect-offsets (org.apache.kafka.connect.util.KafkaBasedLog:185)
[2017-11-07 06:02:48,845] INFO Stopped KafkaOffsetBackingStore (org.apache.kafka.connect.storage.KafkaOffsetBackingStore:117)
[2017-11-07 06:02:48,845] INFO Worker stopped (org.apache.kafka.connect.runtime.Worker:176)
[2017-11-07 06:02:48,846] INFO Herder stopped (org.apache.kafka.connect.runtime.distributed.DistributedHerder:204)
[2017-11-07 06:02:48,846] INFO Herder stopped (org.apache.kafka.connect.runtime.distributed.DistributedHerder:397)
[2017-11-07 06:02:48,847] INFO Kafka Connect stopped (org.apache.kafka.connect.runtime.Connect:70)
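The root cause in the log above is a port conflict: Connect's REST server was told to bind 0.0.0.0:8082, but that port was already taken (8082 is also the REST Proxy's default), so the bind fails with Address in use and Connect shuts itself down. A quick way to see which process holds the port (a hedged sketch; whether ss or netstat is available inside the container may vary):

ss -tlnp | grep 8082        # or: netstat -tlnp | grep 8082

Then either free the port or move Connect's REST listener, e.g. set rest.port=8083 (the port this image normally uses for Connect) in the worker properties.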

Kafka Connect HDFS/Hive

Hi, great Docker image btw!

I successfully made it work with HDFS/Hive and wanted to share the experience. I think most of the issues actually relate to Confluent rather than Landoop, but I like this packaging better and I hope HDFS will be supported out of the box soon :)

First, my config:

  • HDFS/Hive in HA
  • format.class=io.confluent.connect.hdfs.parquet.ParquetFormat

Here are the issues I had to solve:

  1. classpath order -- stream-reactor ships Java library versions that are incompatible with Confluent Connect 3.0.1. Depending on which gets loaded first, connect-hdfs may or may not work (and, in my case, turning on HDFS breaks stream-reactor, e.g. Cassandra stops working).
    Temp solution:
    ln -s /opt/confluent-3.0.1/share/java/kafka-connect-hdfs /opt/confluent-3.0.1/share/java/kafka-connect-0hdfs
    (so that kafka-connect-hdfs comes first on the classpath)
  2. Parquet compresses with Snappy by default, and the Snappy lib requires libstdc++ (installed via apk)
  3. for HA to work, /etc/hadoop/conf needs to be available to the container. Moreover, Connect v3.0.1 has an issue with Parquet and HA:
    confluentinc/kafka-connect-hdfs@176e7d1
  4. finally, there are several improvements to Connect post v3.0.1 that I think are pretty fundamental, for instance this one:
    confluentinc/kafka-connect-hdfs@38aaded

I'm maintaining a branch with the minimal changes to v3.0.1 that make kafka-connect-hdfs work in my setup, here:
https://github.com/shopkick/kafka-connect-hdfs/tree/sk_v3.0.1
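In case it helps, here is a rough sketch of applying the workarounds above (hedged: the container paths come from this issue and the Confluent 3.0.1 layout, and the image is Alpine-based as implied by apk; adjust for other versions):

# on the host: make the Hadoop HA client configs available to the container (issue 3)
docker run --rm --net=host \
       -v /etc/hadoop/conf:/etc/hadoop/conf \
       landoop/fast-data-dev

# inside the container, e.g. via docker exec:
apk add --no-cache libstdc++   # Snappy's native library needs libstdc++ (issue 2)
ln -s /opt/confluent-3.0.1/share/java/kafka-connect-hdfs \
      /opt/confluent-3.0.1/share/java/kafka-connect-0hdfs   # connect-hdfs first on the classpath (issue 1)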

How to pass custom config options to Kafka

For example, I would like my topics to be created automatically, so I need to set auto.create.topics.enable=true. Based on the documentation, there doesn't seem to be a way to pass this option in as an environment variable, so I tried the following:

-v ./docker/landoop/opt/confluent/etc/kafka:/opt/confluent/etc/kafka

Then I add auto.create.topics.enable=true to the server.properties file.

However, the above causes some weird behaviour on the Kafka broker; here are the logs:

landoop_1       | 2017-09-28 11:53:03,886 INFO exited: broker (exit status 1; not expected)
landoop_1       | 2017-09-28 11:53:04,749 INFO spawned: 'broker' with pid 281
landoop_1       | 2017-09-28 11:53:05,851 INFO success: broker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
landoop_1       | 2017-09-28 11:53:11,867 INFO exited: rest-proxy (exit status 1; not expected)
landoop_1       | 2017-09-28 11:53:12,604 INFO spawned: 'rest-proxy' with pid 314
landoop_1       | 2017-09-28 11:53:13,024 INFO exited: schema-registry (exit status 1; not expected)
landoop_1       | 2017-09-28 11:53:13,735 INFO spawned: 'schema-registry' with pid 341
landoop_1       | 2017-09-28 11:53:13,739 INFO success: rest-proxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

and this cycle repeats as supervisord keeps trying to restart the services.

Plus, when I go to http://localhost:3030/kafka-topics-ui/, I see the following:

KAFKA REST
/api/kafka-rest-proxy 
CONNECTIVITY ERROR

Thank you.
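A possible alternative to mounting server.properties, assuming your image version supports translating KAFKA_-prefixed environment variables into broker settings (later fast-data-dev releases document this; the variable name below is just the option name upper-cased with dots turned into underscores):

docker run --rm --net=host \
       -e KAFKA_AUTO_CREATE_TOPICS_ENABLE=true \
       lensesio/fast-data-dev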

Unable to get the application running

Hey all, I'm working on getting Kafka / Confluent running with fast-data-dev, and am ... well, unsuccessful. I'm getting the following when trying to run the application on AWS EC2:

Authenticating with public key "HR-DevOps-HRTech"

ubuntu@ip-172-16-75-233:~$ screen -S kafka -- sudo docker run -it -v /srv/kafka:/tmp --rm --net=host -e ADV_HOST=172.16.75.233 landoop/fast-data-dev
256a5053f0c5: Pull complete
df59bd94bdd4: Pull complete
6aac547c4d90: Pull complete
1818c7d8b194: Pull complete
8297d3821b7b: Pull complete
27f5c49688ee: Pull complete
3a13574f4ddd: Pull complete
13d13524515b: Pull complete
dd4dddc9dc97: Pull complete
633bee4809e9: Pull complete
a6842c7eb2cf: Pull complete
22ae19fc8a44: Pull complete
c58290624b9c: Pull complete
09afeafbcca3: Pull complete
30033ace6359: Pull complete
14951d7d8ded: Pull complete
6797a96a7092: Pull complete
5e958485d959: Pull complete
6e77d9fc1e81: Pull complete
99046468067f: Pull complete
def900f2c3f5: Pull complete
e3298ece27a7: Pull complete
46b35ea8b707: Pull complete
498af3d85513: Pull complete
c29a64ba4098: Pull complete
d5267660efab: Pull complete
068d760d616b: Pull complete
12c2d93a4a0c: Pull complete
Digest: sha256:041358122a185144261c95cb8abfb38d47461d585e1672ed58d3ce9800634629
Status: Downloaded newer image for landoop/fast-data-dev:latest
Setting advertised host to 172.16.75.233.
Operating system RAM available is 3563 MiB, which is less than the lowest
recommended of 5120 MiB. Your system performance may be seriously impacted.
Starting services.
This is landoop’s fast-data-dev. Kafka 0.11.0.1, Confluent OSS 3.3.1.
You may visit http://172.16.75.233:3030 in about a minute.
2018-02-28 14:42:48,528 CRIT Supervisor running as root (no user in config file)
2018-02-28 14:42:48,528 INFO Included extra file "/etc/supervisord.d/01-fast-data.conf" during parsing
2018-02-28 14:42:48,528 INFO Included extra file "/etc/supervisord.d/99-supervisord-sample-data.conf" during parsing
2018-02-28 14:42:48,530 INFO supervisord started with pid 6
2018-02-28 14:42:49,532 INFO spawned: 'sample-data' with pid 89
2018-02-28 14:42:49,534 INFO spawned: 'zookeeper' with pid 90
2018-02-28 14:42:49,535 INFO spawned: 'caddy' with pid 91
2018-02-28 14:42:49,537 INFO spawned: 'broker' with pid 92
2018-02-28 14:42:49,538 INFO spawned: 'smoke-tests' with pid 94
2018-02-28 14:42:49,540 INFO spawned: 'connect-distributed' with pid 95
2018-02-28 14:42:49,541 INFO spawned: 'logs-to-kafka' with pid 96
2018-02-28 14:42:49,543 INFO spawned: 'schema-registry' with pid 97

ubuntu@ip-172-16-75-233:~$ sudo apt-get update
sudo: unable to resolve host ip-172-16-75-233
Fetched 24.9 MB in 7s (3,200 kB/s)
Reading package lists... Done
ubuntu@ip-172-16-75-233:~$ sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
3 upgraded, 0 newly installed, 0 to remove and 48 not upgraded.
ubuntu@ip-172-16-75-233:~$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
OK
ubuntu@ip-172-16-75-233:~$ sudo apt-key fingerprint 0EBFCD88
pub 4096R/0EBFCD88 2017-02-22
Key fingerprint = 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88
ubuntu@ip-172-16-75-233:~$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
ubuntu@ip-172-16-75-233:~$ sudo apt-get update
ubuntu@ip-172-16-75-233:~$ sudo apt-get install docker-ce
Setting up docker-ce (17.12.1ce-0ubuntu) ...
ubuntu@ip-172-16-75-233:~$ sudo groupadd docker
groupadd: group 'docker' already exists
ubuntu@ip-172-16-75-233:~$ sudo usermod -aG docker $USER
ubuntu@ip-172-16-75-233:~$ sudo nano /etc/hosts
ubuntu@ip-172-16-75-233:~$ screen -S kafka -- sudo docker run -it -v /srv/kafka:/tmp --rm --net=host -e ADV_HOST=172.16.75.233 landoop/fast-data-dev
[detached from 13762.kafka]
ubuntu@ip-172-16-75-233:~$ screen -r
2018-02-28 18:11:12,481 INFO success: broker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-02-28 18:11:15,291 INFO exited: schema-registry (exit status 1; not expected)
2018-02-28 18:11:15,298 INFO spawned: 'schema-registry' with pid 22032
2018-02-28 18:11:16,303 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-02-28 18:11:16,673 INFO exited: broker (exit status 1; not expected)
2018-02-28 18:11:16,757 INFO spawned: 'broker' with pid 22056
2018-02-28 18:11:17,762 INFO success: broker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-02-28 18:11:20,143 INFO exited: rest-proxy (exit status 1; not expected)
2018-02-28 18:11:20,228 INFO spawned: 'rest-proxy' with pid 22076
2018-02-28 18:11:21,233 INFO success: rest-proxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-02-28 18:11:25,822 INFO exited: connect-distributed (exit status 1; not expected)
2018-02-28 18:11:25,912 INFO spawned: 'connect-distributed' with pid 22146
(the exited / spawned / success cycle keeps repeating for broker, schema-registry, rest-proxy and connect-distributed)

I am SO new at this, and a DBA, not an engineer, so I'm very confused about how any of this works.
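
An editorial note, hedged: the container's own output above flags RAM below the recommended 5120 MiB, and -v /srv/kafka:/tmp shadows the container's /tmp, which the bundled services use for scratch space and (with stock Confluent defaults) data directories; if that host directory isn't writable by the service user, a restart loop like the one above is exactly what supervisord would show. A minimal thing to try first, dropping the mount:

# Sketch: same run without the /tmp bind mount (and ideally on a >=5120 MiB instance)
sudo docker run -it --rm --net=host -e ADV_HOST=172.16.75.233 \
     landoop/fast-data-dev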

Kafka Connect - HDFS Sink connector doesn't work

My task worked with the official Confluent Kafka Connect image.

Task Def:
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
flush.size=3
topics=event-sent-email
tasks.max=1
timezone=UTC
partitioner.class=io.confluent.connect.hdfs.partitioner.HourlyPartitioner
format.class=io.confluent.connect.hdfs.parquet.ParquetFormat
hdfs.url=hdfs://hadoop:9000/events
name=event-sent-email-to-hdfs
locale=en
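
For reference, the same task definition posted to the Connect worker's REST interface (host and port assumed to be this image's defaults):

curl -X POST http://localhost:8083/connectors \
     -H "Content-Type: application/json" \
     -d '{
           "name": "event-sent-email-to-hdfs",
           "config": {
             "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
             "topics": "event-sent-email",
             "tasks.max": "1",
             "flush.size": "3",
             "timezone": "UTC",
             "locale": "en",
             "partitioner.class": "io.confluent.connect.hdfs.partitioner.HourlyPartitioner",
             "format.class": "io.confluent.connect.hdfs.parquet.ParquetFormat",
             "hdfs.url": "hdfs://hadoop:9000/events"
           }
         }'

On fast-data-dev, however, the task fails to start: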

[pool-1-thread-4] INFO io.confluent.connect.hdfs.HdfsSinkTask - Couldn't start HdfsSinkConnector:
org.apache.kafka.connect.errors.ConnectException: java.lang.reflect.InvocationTargetException
at io.confluent.connect.hdfs.storage.StorageFactory.createStorage(StorageFactory.java:31)
at io.confluent.connect.hdfs.DataWriter.<init>(DataWriter.java:168)
at io.confluent.connect.hdfs.HdfsSinkTask.start(HdfsSinkTask.java:76)
at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:221)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:140)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at io.confluent.connect.hdfs.storage.StorageFactory.createStorage(StorageFactory.java:29)
... 11 more
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.ipc.RPC.getProtocolProxy(Ljava/lang/Class;JLjava/net/InetSocketAddress;Lorg/apache/hadoop/security/UserGroupInformation;Lorg/apache/hadoop/conf/Configura
at org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:418)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:314)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:668)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:604)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:148)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:2613)
at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:415)
at io.confluent.connect.hdfs.storage.HdfsStorage.<init>(HdfsStorage.java:39)
... 16 more
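
A hedged reading of the failure: a NoSuchMethodError on org.apache.hadoop.ipc.RPC.getProtocolProxy almost always means two incompatible Hadoop client versions met on the worker's classpath, for example the HDFS connector's bundled Hadoop jars clashing with Hadoop jars shipped by another connector in the image. Since Kafka 0.11, Connect can isolate each connector's jars via plugin.path; a sketch, with /opt/connectors/hdfs standing in for a directory that holds only the HDFS connector and its dependencies:

# appended to the Connect worker properties (the path is hypothetical)
plugin.path=/opt/connectors/hdfs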

Exposing via minikube

Hello, I've been trying to set up the image inside a local minikube deployment, but I can't manage to access Kafka from the host. It's not fully clear to me whether I should use ADV_HOST, and which value I should publish there. I tried setting it to the node IP (the VirtualBox machine), but in that case the broker connection fails even internally.

Configuration for reference:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: kafka
  labels:
    chart: kafka
    system: kafka
    type: messaging
    component: kafka 
spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: kafka
      spec:
        hostname: kafka-ms
        containers:
        - name: kafka-container 
          image: landoop/fast-data-dev
          ports:
          - containerPort: 9092
          - containerPort: 3030
          - containerPort: 2181
          env:
          - name: RUNTESTS
            value: "0"
          - name: SAMPLEDATA
            value: "0"
#          - name: ADV_HOST
#            valueFrom:
#              fieldRef:
#                fieldPath: status.hostIP
---
apiVersion: v1
kind: Service
metadata:
  name: kafka
  labels:
    chart: kafka
    system: calypso
    type: messaging
    component: kafka
spec:
  type: NodePort
  ports:
  - port: 3030
    name: "web"
    nodePort: 30001
  - port: 9092
    name: "messaging"
    nodePort: 30002
  - port: 2181
    name: "zoo"
    nodePort: 30003
  selector:
      app: kafka

Any inputs are appreciated.

Thanks
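
A hedged note on the question above: with a NodePort service, external clients reach the broker at <minikube-ip>:30002, while the broker advertises whatever ADV_HOST resolves to on port 9092, so the address clients are told to use and the address they can actually reach never match. For purely local testing, forwarding the ports to localhost sidesteps the mismatch entirely (label taken from the manifest above):

# forward broker and UI ports to localhost; ADV_HOST=127.0.0.1 then matches what host clients dial
kubectl port-forward $(kubectl get pod -l app=kafka -o name) 9092 3030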

Make logs view tail

Great stuff exposing the logs through the browser; I think that's an amazing feature and it will definitely help.
One last suggestion: a view that streams / tails the logs in the browser, so we don't have to keep refreshing http://localhost:3030/logs/broker.log

http://127.0.0.1:3030 not showing the UI on Win 10 (Site can't be reached)

I am new to Docker and am trying to run fast-data-dev, but I'm not able to reach the UI.
Here is the command and the logs:

docker run --rm -it \
       -p 2181:2181 -p 3030:3030 -p 8081:8081 \
       -p 8082:8082 -p 8083:8083 -p 9092:9092 \
       -e ADV_HOST=127.0.0.1 \
       landoop/fast-data-dev

Setting advertised host to 127.0.0.1.
Starting services.
This is landoop’s fast-data-dev. Kafka 0.11.0.1, Confluent OSS 3.3.1.
You may visit http://127.0.0.1:3030 in about a minute.
2018-03-03 20:19:21,818 CRIT Supervisor running as root (no user in config file)
2018-03-03 20:19:21,819 INFO Included extra file "/etc/supervisord.d/01-fast-data.conf" during parsing
2018-03-03 20:19:21,819 INFO Included extra file "/etc/supervisord.d/99-supervisord-sample-data.conf" during parsing
2018-03-03 20:19:21,822 INFO supervisord started with pid 5
2018-03-03 20:19:22,827 INFO spawned: 'sample-data' with pid 88
2018-03-03 20:19:22,836 INFO spawned: 'zookeeper' with pid 89
2018-03-03 20:19:22,843 INFO spawned: 'caddy' with pid 90
2018-03-03 20:19:22,858 INFO spawned: 'broker' with pid 91
2018-03-03 20:19:22,892 INFO spawned: 'smoke-tests' with pid 92
2018-03-03 20:19:22,924 INFO spawned: 'connect-distributed' with pid 93
2018-03-03 20:19:22,941 INFO spawned: 'logs-to-kafka' with pid 95
2018-03-03 20:19:22,960 INFO spawned: 'schema-registry' with pid 96
2018-03-03 20:19:22,975 INFO spawned: 'rest-proxy' with pid 101
2018-03-03 20:19:24,783 INFO success: sample-data entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-03-03 20:19:24,784 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-03-03 20:19:24,784 INFO success: caddy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-03-03 20:19:24,785 INFO success: broker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-03-03 20:19:24,786 INFO success: smoke-tests entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-03-03 20:19:24,786 INFO success: connect-distributed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-03-03 20:19:24,788 INFO success: logs-to-kafka entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-03-03 20:19:24,789 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-03-03 20:19:24,790 INFO success: rest-proxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2018-03-03 20:22:23,123 INFO exited: logs-to-kafka (exit status 0; expected)
2018-03-03 20:22:54,999 INFO exited: sample-data (exit status 0; expected)
2018-03-03 20:25:42,894 INFO exited: smoke-tests (exit status 0; expected)
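
A hedged note: if this is Docker Toolbox (VirtualBox) rather than Docker for Windows, the containers run inside a Linux VM, so the UI is served on the VM's address and 127.0.0.1 on the Windows host can't reach it. A sketch, assuming the machine is named default:

docker-machine ip default        # prints the VM address, e.g. 192.168.99.100
docker run --rm -it \
       -p 2181:2181 -p 3030:3030 -p 8081:8081 \
       -p 8082:8082 -p 8083:8083 -p 9092:9092 \
       -e ADV_HOST=$(docker-machine ip default) \
       landoop/fast-data-dev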

Enabling JMX ports causes test failures.

Mac OS X Sierra
Docker CE
Version 17.09.1-ce-mac42 (21090)
Channel: stable
3176a6af01

Container: 2 CPUs, 6G RAM

Running a container with the JMX ports enabled results in only 9.5% of the Coyote tests passing, and there is no JMX connectivity either.

When I run this command:

docker run --rm  \
           -p 2181:2181 -p 3030:3030 -p 8081:8081 \
           -p 8082:8082 -p 8083:8083 -p 9092:9092 \
           -e ADV_HOST=127.0.0.1 \
           landoop/fast-data-dev

Everything starts and the Coyote tests are 100%.

When I run this command to enable JMX access:

docker run --rm -p 2181:2181 -p 3030:3030 -p 8081-8083:8081-8083 \
           -p 9581-9585:9581-9585 -p 9092:9092 -e ADV_HOST=192.168.99.100 \
           landoop/fast-data-dev:latest

Coyote reports only 9.5% of tests passing. Only 4 out of the 13 Connect tests pass; all other tests fail: Brokers, REST, Schema Registry.

I will be happy to send any logs required.
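
Worth noting, hedged: the two commands differ in more than the JMX ports; the failing one also switches ADV_HOST from 127.0.0.1 to 192.168.99.100. If that address is not reachable from where the clients and tests run, broad Broker / REST / Schema Registry failures would follow regardless of JMX. A sketch that isolates the JMX change alone:

docker run --rm -p 2181:2181 -p 3030:3030 -p 8081-8083:8081-8083 \
           -p 9581-9585:9581-9585 -p 9092:9092 -e ADV_HOST=127.0.0.1 \
           landoop/fast-data-dev:latest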
