
confluent-cli's Introduction

Danger: Unmaintained Code Ahead

This repository is no longer maintained by Confluent. It has been superseded by the proprietary confluent-cli, which provides significantly more functionality, including ACLs, RBAC, and Audit Log management for Confluent Platform.

Confluent Platform CLI

A CLI to start and manage Confluent Platform from the command line.

Installation

  • Download and install Confluent Platform

  • Check out confluent-cli by running:

    $ git clone git@github.com:confluentinc/confluent-cli.git
  • Set the CONFLUENT_HOME environment variable to point to the location of Confluent Platform. For instance:

    $ export CONFLUENT_HOME=/usr/local/confluent-3.3.0
  • Install confluent-cli:

    $ cd confluent-cli; make install
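Before running make install, a quick sanity check that CONFLUENT_HOME points at a real Confluent Platform installation can save some confusion. A minimal sketch (the helper name is illustrative, not part of the CLI):

```shell
# Hypothetical helper: verify CONFLUENT_HOME is set and contains a bin/
# directory before attempting the install.
check_confluent_home() {
  local home="$1"
  if [ -z "$home" ] || [ ! -d "$home/bin" ]; then
    echo "CONFLUENT_HOME is unset or not a Confluent Platform install" >&2
    return 1
  fi
  echo "CONFLUENT_HOME looks good: $home"
}
```

For example, `check_confluent_home "$CONFLUENT_HOME"` would fail fast if the variable was exported with a typo.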

Usage

To get a list of available commands, run:

$ export PATH=${CONFLUENT_HOME}/bin:${PATH}
$ confluent help

Examples:

  • Start all the services!
$ confluent start
  • Retrieve their status:
$ confluent status
  • Open the log file of a service:
$ confluent log connect
  • Access runtime stats of a service:
$ confluent top kafka
  • Discover the available Connect plugins:
$ confluent list plugins
  • or list the predefined connector names:
$ confluent list connectors
  • Load a couple of connectors:
$ confluent load file-source
$ confluent load file-sink
  • Get a list of the currently loaded connectors:
$ confluent status connectors
  • Check the status of a loaded connector:
$ confluent status file-source
  • Read the configuration of a connector:
$ confluent config file-source
  • Reconfigure a connector:
$ confluent config file-source -d ./updated-file-source-config.json
  • or reconfigure using a properties file:
$ confluent config file-source -d ./updated-file-source-config.properties
  • Figure out where the data and the logs of the current confluent run are stored:
$ confluent current
  • Unload a specific connector:
$ confluent unload file-sink
  • Stop the services:
$ confluent stop
  • Start on a clean slate next time (deletes data and logs of a confluent run):
$ confluent destroy

Set CONFLUENT_CURRENT if you want to use a top-level directory for confluent runs other than your platform's tmp directory.

$ cd $CONFLUENT_HOME
$ mkdir -p var
$ export CONFLUENT_CURRENT=${CONFLUENT_HOME}/var
$ confluent current

confluent-cli's People

Contributors

andrewegel, c0urante, codyaray, confluentjenkins, cyrusv, dabh, dnozay, ewencp, hjafarpour, kkonstantine, mageshn, maxzheng, norwood, rhauch, rmoff, stanislavkozlovski, wicknicks, xiangxin72, xli1996, ybyzek


confluent-cli's Issues

Confluent cli only showing open-source components on enterprise install

Hello,

I’ve just installed the enterprise platform trial on a RHEL 7.2 OS using:
sudo yum install confluent-platform-2.11

The installation went OK, but when I execute confluent start only the open-source components come up.
confluent list shows:
Available services:
zookeeper
kafka
schema-registry
kafka-rest
connect
ksql-server

confluent version shows:
Confluent Open Source:

It looks like the is_enterprise method in confluent.sh is returning false.
The necessary JAR files are available in the folder /usr/share/java/confluent-control-center.

It looks like these variables are not being set correctly in the yum installation:

confluent_bin="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
confluent_home="$( dirname "${confluent_bin}" )"

Update:
I've added an echo $enterprise_prefix on line 261 to see what's happening.
After this, running confluent list shows:

Available services:
zookeeper
kafka
schema-registry
kafka-rest
connect
ksql-server
//share/java/confluent-control-center/control-center-
control-center

and I can also start control-center.

Confluent start kafka also triggers zookeeper

Hi all,
My confluent installation:

  • confluent open source (installation through deb package)
  • version 4.1
  • scala: 2.11
  • OS: ubuntu 17.10

I have a question with regards to the command confluent start kafka

My situation is as follows: I have 3 ZooKeeper nodes running on remote servers,
and I have specified these hosts in the server.properties file:
zookeeper.connect=x:2181,x:2181,x:2181/kafka_10_test

Yet when I issue the command confluent start kafka, it also triggers the start of ZooKeeper.
I think this is faulty behaviour: if you have an external ZooKeeper, it should not try to start ZooKeeper locally.
Am I doing something wrong on the command line, or is this incorrect behaviour?

Thanks for your insight

Kafka-rest not picking topics from kafka

I have a "test" topic in Kafka, and it works, because I can list it using the Kafka commands.

I start the kafka-rest proxy by using confluent start kafka-rest

The server works, because I can run commands against it. Using curl, I try to perform a command on the resource topics/ or topics/test, but I always get "404: Not Found" for every resource. It seems that the REST proxy is empty, while I should have at least the 'test' topic I mentioned earlier.

Any suggestion?

local: -n: invalid option on MacOS bash

[bin] $ confluent start zookeeper
/Users/hans/confluent/confluent-3.2.1/bin/confluent: line 628: local: -n: invalid option
local: usage: local name[=value] ...
/Users/hans/confluent/confluent-3.2.1/bin/confluent: line 577: local: -n: invalid option
local: usage: local name[=value] ...
Unknown service: zookeeper

Need better error messages for invalid connector config

Valid JSON but not in the expected format (i):

> cat ../connector_config/jdbc_sink_poem.json
{
  "topics": "dummy_topic",
  "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
  "name": "poem_sqlite",
  "connection.url": "jdbc:sqlite:/Users/Robin/cp/confluent-3.3.0-SNAPSHOT-20170629/testdb",
  "auto.create": true,
  "auto.evolve": true
}

> ./bin/confluent load jdbc_sink_poem -d ../connector_config/jdbc_sink_poem.json
parse error: Invalid numeric literal at line 2, column 0

Malformed JSON (ii):

> cat ../connector_config/jdbc_sink_poem.json
{ name:"jdbc_sink_poem",config: {
  "topics": "dummy_topic",
  "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
  "name": "poem_sqlite",
  "connection.url": "jdbc:sqlite:/Users/Robin/cp/confluent-3.3.0-SNAPSHOT-20170629/testdb",
  "auto.create": true,
  "auto.evolve": true}
}

> ./bin/confluent load jdbc_sink_poem -d ../connector_config/jdbc_sink_poem.json
Invalid argument '../connector_config/jdbc_sink_poem.json given to 'config'.

In both of these instances, it should be clear what the error is (the JSON is invalid / does not match the expected format). For the first example, since the name of the connector is passed in as a command-line argument, can we add extra smarts to the confluent CLI to actually massage the JSON into the correct syntax, and simply warn the user that the format is not correct?
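As a sketch of the kind of pre-flight check being requested here (the wrapper function is hypothetical, not part of the CLI), the file could be validated before being handed to confluent load, so the user sees the real problem rather than a jq parse error:

```shell
# Hypothetical pre-flight check: fail with a clear message on invalid JSON,
# and warn when the file lacks the top-level "config" wrapper (the first
# case above), instead of letting jq print a cryptic parse error.
validate_connector_json() {
  local file="$1"
  if ! python3 -m json.tool "$file" >/dev/null 2>&1; then
    echo "error: $file is not valid JSON" >&2
    return 1
  fi
  if ! grep -q '"config"' "$file"; then
    echo "warning: $file has no top-level \"config\" object; the CLI could wrap it" >&2
  fi
  echo "ok: $file"
}
```

Something like `validate_connector_json ../connector_config/jdbc_sink_poem.json && confluent load jdbc_sink_poem -d ...` would have flagged both failure modes shown above.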

Fix parsing of error message when html is returned instead of json.

When an error code and message are returned by Jetty in HTML format, rather than by the Kafka Connect framework in JSON format, jq is unable to parse the returned message and instead prints a very unintuitive message such as:

parse error: Invalid numeric literal at line 2, column 0

for an error message such as:

<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
<title>Error 500 </title>
</head>
<body>
<h2>HTTP ERROR: 500</h2>
<p>Problem accessing /connectors. Reason:
<pre>    Request failed.</pre></p>
<hr /><i><small>Powered by Jetty://</small></i>
</body>
</html>

Confluent CLI should be able to distinguish between the two error messages and print the root cause in all cases.
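A minimal sketch of how the two response types could be told apart before piping the body to jq (the helper name is an illustration, not CLI code):

```shell
# Classify a response body by its first non-whitespace character: Jetty error
# pages start with an HTML tag, Connect responses with a JSON object/array.
classify_response() {
  local first
  first=$(printf '%s' "$1" | tr -d '[:space:]' | cut -c1)
  case "$first" in
    '<')     echo "html" ;;
    '{'|'[') echo "json" ;;
    *)       echo "unknown" ;;
  esac
}
```

In the "html" case the CLI could extract and print the HTTP error line instead of invoking jq at all.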

Confluent CLI log misses error & stack trace

Connect fails to start, but with confluent log connect the stack trace and error are missing, making it appear that it's just hung.

With Confluent CLI:

$ confluent start connect
$ confluent log connect
[...]
[2018-01-03 12:07:10,336] INFO Loading plugin from: /Users/Robin/git/kafka-connect-twitter/target/classes (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:179)
(END)

Run directly

rmoff@proxmox01 ~/confluent-4.0.0> ./bin/connect-distributed ./etc/schema-registry/connect-avro-distributed.properties
[...]
[2018-01-03 11:40:33,856] INFO Loading plugin from: /home/rmoff/kafka-connect-twitter/target/classes (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:179)
Exception in thread "main" java.lang.NoClassDefFoundError: com/github/jcustenborder/kafka/connect/utils/VersionUtil
        at com.github.jcustenborder.kafka.connect.twitter.TwitterSourceConnector.version(TwitterSourceConnector.java:41)
        at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.getPluginDesc(DelegatingClassLoader.java:279)
        at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:260)
        at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:201)
        at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:193)
        at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:153)
        at org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:47)
        at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:70)
Caused by: java.lang.ClassNotFoundException: com.github.jcustenborder.kafka.connect.utils.VersionUtil
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:62)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 8 more

java.lang.IllegalArgumentException: /tmp/confluent.SVNfiLFU/zookeeper/data/myid file is missing

Hi,
I am running Kafka via Confluent Platform on 3 nodes, but when I run confluent start I get this error:

[2018-04-09 10:54:25,995] INFO Reading configuration from: /tmp/confluent.SVNfiLFU/zookeeper/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2018-04-09 10:54:26,011] INFO Resolved hostname: 0.0.0.0 to address: /0.0.0.0 (org.apache.zookeeper.server.quorum.QuorumPeer)
[2018-04-09 10:54:26,011] INFO Resolved hostname: 192.168.0.36 to address: /192.168.0.36 (org.apache.zookeeper.server.quorum.QuorumPeer)
[2018-04-09 10:54:26,011] INFO Resolved hostname: 192.168.0.22 to address: /192.168.0.22 (org.apache.zookeeper.server.quorum.QuorumPeer)
[2018-04-09 10:54:26,011] INFO Defaulting to majority quorums (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2018-04-09 10:54:26,012] ERROR Invalid config, exiting abnormally (org.apache.zookeeper.server.quorum.QuorumPeerMain)
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing /tmp/confluent.SVNfiLFU/zookeeper/zookeeper.properties
        at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:154)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:101)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)
Caused by: java.lang.IllegalArgumentException: /tmp/confluent.SVNfiLFU/zookeeper/data/myid file is missing
        at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:406)
        at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:150)
        ... 2 more

this is zookeeper.properties :

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
# 
#    http://www.apache.org/licenses/LICENSE-2.0
# 
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# the directory where the snapshot is stored.
dataDir=/var/lib/zookeeper/
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config

#############################################################################################
#maxClientCnxns=0
initLimit=5
syncLimit=2
tickTime=2000

server.1=0.0.0.0:2888:3888
server.2=192.168.0.22:2888:3888
server.3=192.168.0.36:2888:3888

Also, I created a myid file containing an integer id in the /var/lib/zookeeper/ directory.
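Note that the CLI generates its own zookeeper.properties with a dataDir under CONFLUENT_CURRENT (here /tmp/confluent.SVNfiLFU), so a myid file in /var/lib/zookeeper is never consulted. A sketch of a workaround, assuming the generated dataDir path from the error above (the helper is hypothetical, and the /tmp path changes after every confluent destroy):

```shell
# Hypothetical workaround: place myid where the generated config's dataDir
# actually points, e.g. write_myid /tmp/confluent.SVNfiLFU/zookeeper/data 1
write_myid() {
  local data_dir="$1" id="$2"
  mkdir -p "$data_dir"
  printf '%s\n' "$id" > "$data_dir/myid"
}
```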

confluent start not starting control-center

I have downloaded Confluent 5.0.0 open source and am following the quickstart guide to explore Confluent. When I run bin/confluent start, it gives the following output:

Starting zookeeper
zookeeper is [UP]
Starting kafka
kafka is [UP]
Starting schema-registry
schema-registry is [UP]
Starting kafka-rest
kafka-rest is [UP]
Starting connect
connect is [UP]
Starting ksql-server
ksql-server is [UP]

Whereas in the documentation, it also starts control-center.

I even tried to start control-center manually (bin/confluent control-center) but got the error Unknown command 'control-center'.

I think that, due to this, whenever I try to add sample data for the Kafka topic I get java.lang.ClassNotFoundException: io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor.

list connectors behaviour

I'm expecting list connectors to give me the same information as the /connectors REST call, but it's different.

Is this expected behaviour? If so (a) can the help text be made more specific as the terminology is ambiguous/confusing and (b) is there a way with confluent CLI to see the connectors currently defined?

thanks!

Robin@asgard02:confluent-3.3.0-SNAPSHOT-20170629> curl "http://localhost:8083/connectors"
["file_poem_tail"]

Robin@asgard02:confluent-3.3.0-SNAPSHOT-20170629> ./bin/confluent list connectors
Bundled Predefined Connectors (edit configuration under etc/):
  elasticsearch-sink
  file-source
  file-sink
  jdbc-source
  jdbc-sink
  hdfs-sink
  s3-sink

When selectively stopping Connect, CLI also stops ksql-server and Confluent Control Center

If I selectively shut down Connect, the CLI also stops ksql-server and Confluent Control Center.

Robin@asgard02 ~> confluent status
control-center is [UP]
ksql-server is [UP]
connect is [UP]
kafka-rest is [UP]
schema-registry is [UP]
kafka is [UP]
zookeeper is [UP]
Robin@asgard02 ~> confluent stop connect
Using CONFLUENT_CURRENT: /var/folders/q9/2tg_lt9j6nx29rvr5r5jn_bw0000gp/T/confluent.6yVWsyCf
Stopping control-center
control-center is [DOWN]
Stopping ksql-server
ksql-server is [DOWN]
Stopping connect
connect is [DOWN]
Robin@asgard02 ~> confluent status
control-center is [DOWN]
ksql-server is [DOWN]
connect is [DOWN]
kafka-rest is [UP]
schema-registry is [UP]
kafka is [UP]
zookeeper is [UP]

(If I selectively start up Connect, the CLI does not change the state of ksql-server or Confluent Control Center.)

confluent start -> Kafka failed to start

I'm following the quick start guide and am seeing the following message after executing confluent start schema-registry.

ubuntu@ip-172-31-36-54:~$ confluent start schema-registry
    Starting zookeeper
    zookeeper is [UP]
    Starting kafka
    -Kafka failed to start
    kafka is [DOWN]
    Cannot start Schema Registry, Kafka Server is not running. Check your deployment

This is on a clean Ubuntu machine, the following commands were executed:

    1  sudo apt-get update
    2  sudo apt-get install default-jre
    3  wget -qO - http://packages.confluent.io/deb/3.1/archive.key | sudo apt-key add -
    4  sudo add-apt-repository "deb [arch=amd64] http://packages.confluent.io/deb/3.1 stable main"
    5  sudo apt-get update && sudo apt-get install confluent-platform-2.11
    6  sudo apt-get update && sudo apt-get install confluent-platform-oss-2.11
    7  confluent start schema-registry

It would be nice to display a bit more information about what is preventing Kafka from starting.

Error while starting Kafka (Cannot start Schema Registry, Kafka Server is not running. Check your deployment)

so, I followed this guide to get the Kafka running

https://docs.confluent.io/current/quickstart/ce-quickstart.html#ce-quickstart

and it worked perfectly fine for some time.
I recently started getting this error while starting the services:

This CLI is intended for development only, not for production
https://docs.confluent.io/current/cli/index.html

Using CONFLUENT_CURRENT: /tmp/confluent.llaA9yuK
Starting zookeeper
zookeeper is [UP]
Starting kafka
-Kafka failed to start
kafka is [DOWN]
Cannot start Schema Registry, Kafka Server is not running. Check your deployment

I looked into some previously raised issues, but they were not very helpful. Does anyone have any idea how to debug this, or would it be better to post the complete log here?

I am running Confluent Platform 5.1.0 on

NAME="Red Hat Enterprise Linux Server"
VERSION="7.4 (Maipo)"
ID="rhel"
ID_LIKE="fedora"

machine.

Confluent CLI should check `JAVA_HOME`

Symptoms:

> java -version 2>&1 | head -1
java version "11.0.2" 2018-10-16 LTS

> $JAVA_HOME/bin/java -version 2>&1 | head -1
java version "1.8.0_201"

> export JAVA_HOME

> bin/confluent start
This CLI is intended for development only, not for production
https://docs.confluent.io/current/cli/index.html

Current Java version '11' is unsupported at this time. Confluent CLI will exit.

WARNING: Java version 1.8 is recommended. 
See https://docs.confluent.io/current/installation/versions-interoperability.html

Source of problem:

bin/confluent appears to have a routine called validate_java_version, which just runs java without checking JAVA_HOME.

Suggested fix:

Confluent CLI should check JAVA_HOME to enable confluent-cli to use a different JDK.

Temporary workaround:

Since Confluent CLI is a bash script, edit $CONFLUENT_HOME/bin/confluent directly.
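A minimal sketch of what the suggested fix could look like inside the script (the function name is illustrative, not the CLI's actual code):

```shell
# Prefer $JAVA_HOME/bin/java when it exists and is executable; otherwise
# fall back to whatever `java` is first on the PATH.
resolve_java() {
  if [ -n "${JAVA_HOME:-}" ] && [ -x "$JAVA_HOME/bin/java" ]; then
    echo "$JAVA_HOME/bin/java"
  else
    echo "java"
  fi
}
```

validate_java_version could then run "$(resolve_java) -version" instead of the bare java, so the JDK selected via JAVA_HOME wins.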

top is broken in Linux

There's a stray comma that prevents top from running for all the services on certain Linux distros.
A check for an empty PID also needs to be added.

Config directory 'etc' not set correctly when it is not located under the same directory as bin

When Confluent Platform is installed in standard system locations, such as /usr/bin/ and /etc, the config directory is not located under the same directory as bin.

Instead of performing a complete search across the filesystem, a reasonable fix would be to assume etc is located one level up, since this covers the vast majority of directory layouts and their conventions.

Error message on start can be more useful...

I accidentally left my old ZK running and tried to use the CLI to start the stack:

[centos@ip-172-31-78-114 ~]$ confluent start
Starting zookeeper
|Zookeeper failed to start
zookeeper is [DOWN]
Cannot start Kafka, Zookeeper is not running. Check your deployment

It would be nice if it used "ps" and told me that it is already running.

confluent restart

Hello,

Please add confluent restart.
There is already support for confluent stop and confluent start, but confluent restart would be nice.

Confluent CLI doesn't use the standard Confluent config, with no warning

If you install CP from DEB/RPM, you have /etc/kafka/server.properties, which says that the data dir is in /var/lib/kafka. So if you do “confluent start” to start Kafka, you may expect your data to be there.
It isn’t.
The CLI has a magic properties file in /tmp/confluent.WGTZu15Y/kafka/kafka.properties, which uses /tmp/confluent.WGTZu15Y/kafka/data for the data.

This was a pretty big surprise to me, so we should be more explicit when we do that.

Accept remote server for Connect Server interaction

Inspired by confluentinc/cp-docker-images#460

Be able to load a properties file to a remote server rather than only localhost.

Get Config

Hit GET /connectors/:connector/config

confluent config ${connector} --host http://connect.server:8083

Load Connector

Hit POST /connectors

confluent load ${connector} --host http://connect.server:8083  

Plugins

Hit GET /connector-plugins

confluent list plugins --host http://connect.server:8083

Add instructions to show how to change CONFLUENT_CURRENT to the help menu

Currently, confluent help current prints a description of the command, its output, and an example, but it does not mention how to override the default location.

A mention that this is achieved by setting the CONFLUENT_CURRENT environment variable should probably be added here as well.

confluent status does not report the status of Zookeeper started by zookeeper-server-start

I start Zookeeper with bin/zookeeper-server-start -daemon etc/kafka/zookeeper.properties.

$ echo srvr | nc localhost 2181
Zookeeper version: 3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT
Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x0
Mode: standalone
Node count: 4

But confluent status shows:

confluent status
This CLI is intended for development only, not for production
https://docs.confluent.io/current/cli/index.html

ksql-server is [DOWN]
connect is [DOWN]
kafka-rest is [DOWN]
schema-registry is [DOWN]
kafka is [DOWN]
zookeeper is [DOWN]

I am using confluent-community-5.1.0-2.11 with JDK 1.8 on macOS 10.13.6.

CLI doesn't seem to be releasing everything

Confluent CP 4.0 installed on CentOS 7.2
After installation, started:
confluent start schema-registry

Played with producers and consumers

confluent stop
confluent destroy

Tried restarting

confluent schema-registry

Got: Error starting zookeeper

confluent log zookeeper indicates that Java couldn't bind to port 2181 because it's already in use.

Had to kill the 2 java processes holding on to 2181 and 9092.

Confluent CLI says stack is down, even if it's not

Robin@asgard02 ~> confluent status
connect is [DOWN]
kafka-rest is [DOWN]
schema-registry is [DOWN]
kafka is [DOWN]
zookeeper is [DOWN]

But it's clearly running:

Robin@asgard02 ~> ps -ef|grep confluent
  502  3300     1   0 Wed02pm ??         3:13.19 /usr/bin/java -Xmx512M -Xms512M -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/Users/Robin/cp/confluent-3.3.0/bin/../logs -Dlog4j.configuration=file:/Users/Robin/cp/confluent-3.3.0/bin/../etc/kafka/log4j.properties -cp :/Users/Robin/cp/confluent-3.3.0/bin/../share/java/kafka/*:/Users/Robin/cp/confluent-3.3.0/bin/../share/java/confluent-support-metrics/*:/usr/share/java/confluent-support-metrics/* org.apache.zookeeper.server.quorum.QuorumPeerMain /var/folders/q9/2tg_lt9j6nx29rvr5r5jn_bw0000gp/T/confluent.yAzjsc10/zookeeper/zookeeper.properties
  502  3463     1   0 Wed02pm ??         4:33.09 /usr/bin/java -Xmx512M -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dschema-registry.log.dir=/Users/Robin/cp/confluent-3.3.0/bin/../logs -Dlog4j.configuration=file:/Users/Robin/cp/confluent-3.3.0/bin/../etc/schema-registry/log4j.properties -cp :/Users/Robin/cp/confluent-3.3.0/bin/../package-schema-registry/target/kafka-schema-registry-package-*-development/share/java/schema-registry/*:/Users/Robin/cp/confluent-3.3.0/bin/../share/java/confluent-common/*:/Users/Robin/cp/confluent-3.3.0/bin/../share/java/rest-utils/*:/Users/Robin/cp/confluent-3.3.0/bin/../share/java/schema-registry/* io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain /var/folders/q9/2tg_lt9j6nx29rvr5r5jn_bw0000gp/T/confluent.yAzjsc10/schema-registry/schema-registry.properties
  502  3700     1   0 Wed02pm ??         2:39.24 /usr/bin/java -Xmx256M -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dlog4j.configuration=file:/Users/Robin/cp/confluent-3.3.0/bin/../etc/kafka-rest/log4j.properties -cp :/Users/Robin/cp/confluent-3.3.0/bin/../target/kafka-rest-*-development/share/java/kafka-rest/*:/Users/Robin/cp/confluent-3.3.0/bin/../share/java/confluent-common/*:/Users/Robin/cp/confluent-3.3.0/bin/../share/java/rest-utils/*:/Users/Robin/cp/confluent-3.3.0/bin/../share/java/kafka-rest/* io.confluent.kafkarest.KafkaRestMain /var/folders/q9/2tg_lt9j6nx29rvr5r5jn_bw0000gp/T/confluent.yAzjsc10/kafka-rest/kafka-rest.properties
  502  5926     1   0 Wed03pm ??       135:50.08 /usr/bin/java -Xmx256M -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/Users/Robin/cp/confluent-3.3.0/bin/../logs -Dlog4j.configuration=file:/Users/Robin/cp/confluent-3.3.0/bin/../etc/kafka/connect-log4j.properties -cp /Users/Robin/cp/confluent-3.3.0/share/java/kafka/*:/Users/Robin/cp/confluent-3.3.0/share/java/confluent-common/*:/Users/Robin/cp/confluent-3.3.0/share/java/kafka-serde-tools/*:/Users/Robin/cp/confluent-3.3.0/share/java/monitoring-interceptors/*:/Users/Robin/cp/confluent-3.3.0/share/java/kafka-connect-elasticsearch/*:/Users/Robin/cp/confluent-3.3.0/share/java/kafka-connect-hdfs/*:/Users/Robin/cp/confluent-3.3.0/share/java/kafka-connect-irc/*:/Users/Robin/cp/confluent-3.3.0/share/java/kafka-connect-jdbc/*:/Users/Robin/cp/confluent-3.3.0/share/java/kafka-connect-replicator/*:/Users/Robin/cp/confluent-3.3.0/share/java/kafka-connect-s3/*:/Users/Robin/cp/confluent-3.3.0/share/java/kafka-connect-storage-common/*:/Users/Robin/cp/confluent-3.3.0/share/java/kafka-connect-twitter/*:/Users/Robin/cp/confluent-3.3.0/bin/../share/java/kafka/*:/Users/Robin/cp/confluent-3.3.0/bin/../share/java/confluent-support-metrics/*:/usr/share/java/confluent-support-metrics/* org.apache.kafka.connect.cli.ConnectDistributed /var/folders/q9/2tg_lt9j6nx29rvr5r5jn_bw0000gp/T/confluent.yAzjsc10/connect/connect.properties
  502 52893     1   0 Fri05pm ??        57:38.69 /usr/bin/java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/Users/Robin/cp/confluent-3.3.0/bin/../logs -Dlog4j.configuration=file:/Users/Robin/cp/confluent-3.3.0/bin/../etc/kafka/log4j.properties -cp :/Users/Robin/cp/confluent-3.3.0/bin/../share/java/kafka/*:/Users/Robin/cp/confluent-3.3.0/bin/../share/java/confluent-support-metrics/*:/usr/share/java/confluent-support-metrics/* io.confluent.support.metrics.SupportedKafka /var/folders/q9/2tg_lt9j6nx29rvr5r5jn_bw0000gp/T/confluent.yAzjsc10/kafka/kafka.properties
  502 63772 63522   0 10:58pm ttys000    0:00.00 grep --color=auto confluent

This was after numerous days of suspending/unsuspending my laptop, having previously started the stack.

This issue causes two problems:

  1. Can't use the CLI to shutdown the running components
  2. Can't use the CLI to start up the stack, because it's running, and you get port clashes:
Robin@asgard02 ~> ps -ef|grep confluent
  502  3300     1   0 Wed02pm ??         3:13.45 /usr/bin/java -Xmx512M -Xms512M -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/Users/Robin/cp/confluent-3.3.0/bin/../logs -Dlog4j.configuration=file:/Users/Robin/cp/confluent-3.3.0/bin/../etc/kafka/log4j.properties -cp :/Users/Robin/cp/confluent-3.3.0/bin/../share/java/kafka/*:/Users/Robin/cp/confluent-3.3.0/bin/../share/java/confluent-support-metrics/*:/usr/share/java/confluent-support-metrics/* org.apache.zookeeper.server.quorum.QuorumPeerMain /var/folders/q9/2tg_lt9j6nx29rvr5r5jn_bw0000gp/T/confluent.yAzjsc10/zookeeper/zookeeper.properties
  502 64006 63522   0 11:02pm ttys000    0:00.00 grep --color=auto confluent
Robin@asgard02 ~> confluent status
connect is [DOWN]
kafka-rest is [DOWN]
schema-registry is [DOWN]
kafka is [DOWN]
zookeeper is [DOWN]
Robin@asgard02 ~> confluent start
Starting zookeeper
Zookeeper failed to start
zookeeper is [DOWN]
Cannot start Kafka, Zookeeper is not running. Check your deployment

confluent log zookeeper shows:

[2017-10-03 23:02:27,339] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2017-10-03 23:02:27,340] ERROR Unexpected exception, exiting abnormally (org.apache.zookeeper.server.ZooKeeperServerMain)
java.net.BindException: Address already in use

I don't quite know how my setup got into the state it did, but the CLI needs to improve how it detects whether processes are running.
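A sketch of a stronger liveness check than whatever state the CLI currently tracks: probe whether the service's port is already bound before trying to start it (assumes bash's /dev/tcp; 2181 is the ZooKeeper port from the log above):

```shell
# Returns success if something is already listening on the given local port,
# failure if the connection is refused. A pre-start check could then warn
# "zookeeper appears to already be running" instead of failing mid-bind.
port_in_use() {
  local port="$1"
  (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null
}
```

For example, `port_in_use 2181 && echo "port 2181 already bound"` would have surfaced the stale ZooKeeper before the Address already in use exception.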

No way to override properties

Is there a way to override Kafka's server.properties. I'm on AWS EC2 and I need to add an advertised listener so I may access it from outside the VPC. I like the CLI for development; easy to install, easy to start, just needs an additional config feature or two.
When I changed out the properties that's in the /tmp/, and then restart, the properties gets overwritten, but from where? Not a bash expert, but as a workaround, maybe we can go to the source properties file and change it there?

Streamline Confluent CLI progress messages

As more components are added, the startup/shutdown sequence becomes less clear to follow.

Robin@asgard02:~> confluent start
Using CONFLUENT_CURRENT: /var/folders/q9/2tg_lt9j6nx29rvr5r5jn_bw0000gp/T/confluent.6yVWsyCf
Starting zookeeper
zookeeper is [UP]
Starting kafka
kafka is [UP]
Starting schema-registry
schema-registry is [UP]
Starting kafka-rest
kafka-rest is [UP]
Starting connect
connect is [UP]
Starting ksql-server
ksql-server is [UP]
Starting control-center
control-center is [UP]

I would suggest consolidating each component to a single line, overwritten in place as each one comes up:

Starting zookeeper …
Starting zookeeper …     [DONE]
Starting kafka …
Starting kafka …         [DONE]

etc.

This makes it (a) neater and (b) easier to see at a glance what has/hasn't started.
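As a sketch of the suggestion (assuming a plain POSIX-shell implementation, which is what this CLI is written in), the status line can be overwritten in place with a carriage return:

```shell
# Print "Starting <svc> ..." without a newline, then overwrite the same
# line with the result once the service is up.
start_service() {
  printf 'Starting %-16s ...' "$1"
  # (the real CLI would launch the service and wait for it here)
  printf '\rStarting %-16s ... [DONE]\n' "$1"
}

for svc in zookeeper kafka schema-registry; do
  start_service "$svc"
done
```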

Confluent CLI doesn't generate ZooKeeper config for more than one instance.

If you install CP from DEB/RPM, /etc/kafka/zookeeper.properties documents that the server.x entries hold the server information for each node. So if you run “confluent start” to start ZooKeeper, you might expect the current node to join the cluster defined by the server.x entries. But actually it doesn't.

A generated file, /tmp/confluent.zJ3sbPTi/zookeeper/zookeeper.properties, appeared in my AWS run; that makes sense given the separate report that the Confluent CLI doesn't use the standard Confluent config and gives no warning about it. My log also shows that the file specifying the ZooKeeper myid is required; here is the complete stack trace:

[2017-09-25 20:45:54,120] INFO Defaulting to majority quorums (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2017-09-25 20:45:54,121] ERROR Invalid config, exiting abnormally (org.apache.zookeeper.server.quorum.QuorumPeerMain)
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing /tmp/confluent.zJ3sbPTi/zookeeper/zookeeper.properties
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:154)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:101)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)
Caused by: java.lang.IllegalArgumentException: /tmp/confluent.zJ3sbPTi/zookeeper/data/myid file is missing
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:406)
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:150)
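A workaround sketch until the CLI generates this itself: create the myid file that the stack trace complains about by hand before starting. The data directory below is taken from this run's error message; the value 1 stands for this node's id in the server.x list.

```shell
# ZooKeeper needs dataDir/myid whenever server.x entries are configured.
# Path taken from the error message above; the id is this node's number
# in the server.x list.
DATA_DIR=/tmp/confluent.zJ3sbPTi/zookeeper/data
mkdir -p "$DATA_DIR"
echo 1 > "$DATA_DIR/myid"
```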

start command - screenful of "lsof: no pwd entry for UID "

rmoff@proxmox01:~/confluent-3.3.0-SNAPSHOT-20170725$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
NAME="Debian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=debian
HOME_URL="http://www.debian.org/"
SUPPORT_URL="http://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
rmoff@proxmox01:~/confluent-3.3.0-SNAPSHOT-20170725$ uname -a
Linux proxmox01 4.4.6-1-pve #1 SMP Thu Apr 21 11:25:40 CEST 2016 x86_64 GNU/Linux

Using confluent-cli from the 3.3.0 snapshot (downloaded 25 Jul 2017), the start command emits repeating screenfuls of the following:

rmoff@proxmox01:~/confluent-3.3.0-SNAPSHOT-20170725$ ./bin/confluent start
Starting zookeeper
lsof: no pwd entry for UID 499
lsof: no pwd entry for UID 499
lsof: no pwd entry for UID 499
lsof: no pwd entry for UID 499
lsof: no pwd entry for UID 489
lsof: no pwd entry for UID 489
lsof: no pwd entry for UID 489
lsof: no pwd entry for UID 489
lsof: no pwd entry for UID 489
lsof: no pwd entry for UID 489
lsof: no pwd entry for UID 489
lsof: no pwd entry for UID 489
lsof: no pwd entry for UID 489
lsof: no pwd entry for UID 489
lsof: no pwd entry for UID 489
lsof: no pwd entry for UID 489
lsof: no pwd entry for UID 489
lsof: no pwd entry for UID 489
lsof: no pwd entry for UID 489
lsof: no pwd entry for UID 489
lsof: no pwd entry for UID 497
lsof: no pwd entry for UID 497
lsof: no pwd entry for UID 497
lsof: no pwd entry for UID 497
lsof: no pwd entry for UID 494
lsof: no pwd entry for UID 494
[...]

It does actually succeed, but since startup takes a minute or so, the impression given to the user is that the program is stuck in a loop.

Confluent CLI returns no output for Connect REST 500 server error

No output from this command:

Robin@asgard02 ~/c/confluent-4.1.0> confluent status connectors
Robin@asgard02 ~/c/confluent-4.1.0>

But the Connect log shows that it errored:

[2018-05-01 15:35:52,088] INFO 0:0:0:0:0:0:0:1 - - [01/May/2018:14:34:22 +0000] "GET /connectors HTTP/1.1" 500 48  90008 (org.apache.kafka.connect.runtime.rest.RestServer:60)

A straight curl call shows the error:

Robin@asgard02 ~/c/confluent-4.1.0> curl -s "http://localhost:8083/connectors"
{"error_code":500,"message":"Request timed out"}⏎

==> confluent status connectors should echo any errors to the console; otherwise it appears to the end user that there are simply no connectors defined.
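A sketch of the suggested behaviour (hypothetical helper, not part of the CLI; assumes the error-payload shape Connect's REST API returns, as shown in the curl output above): pass the response through, but surface error payloads on stderr instead of printing nothing.

```shell
# Print the connector list, or report an error payload on stderr.
print_connectors() {
  response="$1"   # in the CLI this would come from: curl -s http://localhost:8083/connectors
  if printf '%s' "$response" | grep -q '"error_code"'; then
    echo "Error from Connect REST API: $response" >&2
    return 1
  fi
  printf '%s\n' "$response"
}

print_connectors '["file-source","file-sink"]'
print_connectors '{"error_code":500,"message":"Request timed out"}' || echo "(error reported on stderr)"
```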

How do I configure JMX_PORT for the broker?

When I set JMX_PORT as an environment variable, startup fails because ZooKeeper picks up the same setting.

$ export JMX_PORT=9990
$ confluent start kafka
This CLI is intended for development only, not for production
https://docs.confluent.io/current/cli/index.html

Using CONFLUENT_CURRENT: /var/folders/gj/gkgkr_p141s0r7w1wbs2zzlr0000gn/T/confluent.Msl0A76A
Starting zookeeper
zookeeper is [UP]
Starting kafka
-Kafka failed to start
kafka is [DOWN]

Digging through kafka.stdout I see Exception thrown by the agent : java.rmi.server.ExportException: Port already in use: 9990. I can work around it by setting the full KAFKA_JMX_OPTS:

export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false  -Dcom.sun.management.jmxremote.ssl=false  -Dcom.sun.management.jmxremote.port=9990"

Is there an easier way to set JMX_PORT scoped to the Kafka broker only?

Schema Registry failed to start with Confluent version 5.0.0

I'm trying to run the Confluent OSS platform locally, following the instructions described here: https://docs.confluent.io/current/quickstart/cos-quickstart.html#cos-quickstart

However, on executing bin/confluent start, Schema Registry fails to start. Here's the output of the command:

current dir: /var/folders/c8/q06dsl7s3jz0lqc6xvcpnqnm0000gp/T/
This CLI is intended for development only, not for production
https://docs.confluent.io/current/cli/index.html

Using CONFLUENT_CURRENT:
zookeeper is already running. Try restarting if needed
Starting kafka
kafka is [UP]
Starting schema-registry
\Schema Registry failed to start
schema-registry is [DOWN]
Cannot start Kafka Rest, Kafka Server is not running. Check your deployment

The Schema Registry logs show:

io.confluent.common.config.ConfigException: No supported Kafka endpoints are configured. Either kafkastore.bootstrap.servers must have at least one endpoint matching kafkastore.security.protocol or broker endpoints loaded from ZooKeeper must have at least one endpoint matching kafkastore.security.protocol.
	at io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig.endpointsToBootstrapServers(SchemaRegistryConfig.java:630)
	at io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig.bootstrapBrokers(SchemaRegistryConfig.java:570)
	at io.confluent.kafka.schemaregistry.storage.KafkaStore.<init>(KafkaStore.java:101)
	at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.<init>(KafkaSchemaRegistry.java:139)
	at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:59)
	at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:41)
	at io.confluent.rest.Application.createServer(Application.java:169)
	at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:43)

Looking at the SchemaRegistryConfig values displayed in the logs, the issue appears to be the kafkastore.bootstrap.servers = [] setting.
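A possible fix sketch (an assumption on my part: that the broker is listening on the default plaintext port) is to set the bootstrap servers explicitly in etc/schema-registry/schema-registry.properties:

```properties
# Point the schema registry's Kafka store at the local broker directly,
# rather than relying on endpoint discovery via ZooKeeper.
kafkastore.bootstrap.servers=PLAINTEXT://localhost:9092
```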

Cosmetic improvements to help

The help text takes up a bit too much vertical space; the layout below might be an improvement. Also, responding to --help on the command line would be a nice tweak.


Usage: confluent <command> [<subcommand>] [<parameters>]

These are the available commands:

    list        List available services.
    start       Start all services or a specific service along with its dependencies
    stop        Stop all services or a specific service along with the services depending on it.
    status      Get the status of all services or the status of a specific service along with its dependencies.
    current     Get the path of the data and logs of the services managed by the current confluent run.
    destroy     Delete the data and logs of the current confluent run.
    log         Read or tail the log of a service.
    top         Track resource usage of a service.
    load        Load a connector.
    unload      Unload a connector.
    config      Configure a connector.

'confluent help' lists available commands. See 'confluent help <command>' to read about a
specific command.

Add restart option

It would be really useful to add a 'restart' option, instead of needing to run stop and then start manually.
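Until then, a minimal wrapper sketch in the shell gives the same effect (the function name is mine, not part of the CLI):

```shell
# Interim workaround: chain stop and start for a given service (or, with
# no arguments, for all services).
confluent_restart() {
  confluent stop "$@" && confluent start "$@"
}
```

Usage: confluent_restart kafka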

Connector creation silently fails

One connector (file_poem_tail) has already been created. Trying now to create a second one (jdbc_sink_poem): the command emits no error message, but no connector is created.

confluent-3.3.0-SNAPSHOT-20170629> curl "http://localhost:8083/connectors"
["file_poem_tail"]

confluent-3.3.0-SNAPSHOT-20170629>
confluent-3.3.0-SNAPSHOT-20170629> ./bin/confluent load jdbc_sink_poem -d ../connector_config/jdbc_sink_poem.json
confluent-3.3.0-SNAPSHOT-20170629> curl "http://localhost:8083/connectors"
["file_poem_tail"]

confluent-3.3.0-SNAPSHOT-20170629> cat ../connector_config/jdbc_sink_poem.json
{ "name":"jdbc_sink_poem","config": {
  "topics": "dummy_topic",
  "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
  "name": "poem_sqlite",
  "connection.url": "jdbc:sqlite:/Users/Robin/cp/confluent-3.3.0-SNAPSHOT-20170629/testdb",
  "auto.create": true,
  "auto.evolve": true}
}

The Connect log shows only INFO entries:

[2017-07-03 11:06:52,617] INFO SinkConnectorConfig values:
        connector.class = io.confluent.connect.jdbc.JdbcSinkConnector
        key.converter = null
        name = poem_sqlite
        tasks.max = 1
        topics = [dummy_topic]
        transforms = null
        value.converter = null
 (org.apache.kafka.connect.runtime.SinkConnectorConfig:223)
[2017-07-03 11:06:52,617] INFO EnrichedConnectorConfig values:
        connector.class = io.confluent.connect.jdbc.JdbcSinkConnector
        key.converter = null
        name = poem_sqlite
        tasks.max = 1
        topics = [dummy_topic]
        transforms = null
        value.converter = null
 (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:223)
[2017-07-03 11:07:22,774] INFO 0:0:0:0:0:0:0:1 - - [03/Jul/2017:10:06:52 +0000] "POST /connectors HTTP/1.1" 201 285  30158 (org.apache.kafka.connect.runtime.rest.RestServer:60) 
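A diagnostic sketch for cases like this (hypothetical wrapper, not part of the CLI; assumes Connect's default REST port 8083): have the load step print the HTTP status of the POST, so a silently swallowed response becomes visible.

```shell
# Print the HTTP status code of the connector POST instead of discarding
# the response. "000" means curl could not reach the endpoint at all.
load_connector() {
  code=$(curl -s -o /dev/null -w '%{http_code}' \
    -X POST -H "Content-Type: application/json" \
    --data "@$1" "http://localhost:8083/connectors" 2>/dev/null) || true
  echo "POST /connectors returned HTTP ${code:-000}"
}
```

Usage: load_connector ../connector_config/jdbc_sink_poem.json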
