
lenses-docker's Issues

Topic data grid does not follow window width changes

When viewing topic data in grid view, the grid does not resize horizontally to follow window size changes. For example, if the window is made larger, the right side of the grid still ends where the window's original edge was. If the grid is scrolled horizontally, only the grid headers expand to fill the extra space; the grid body still ends at the window's original width.

Here are some example screenshots.

Original window

[screenshot: 2018-05-04 at 4.49.39 PM]

Window resized larger horizontally

[screenshot: 2018-05-04 at 4.49.45 PM]

Scrolling the grid to the right

[screenshot: 2018-05-04 at 4.49.54 PM]

Aiven docker image: Failed to open the storage [lensesdb]

I am following your guide on getting Lenses running locally with my Aiven Kafka cluster, from this page:

https://docs.lenses.io/install_setup/deployment-options/aiven-deployment.html#cloud-aiven-docker-compose-example

However, when I run docker-compose up, it gets most of the way through the process but then fails with the following error:

2019-10-11 16:24:57,093 INFO  [i.l.r.H2ConnectionBuilder$:27] Setting the local storage to [/data/storage]
lenses_1  | 2019-10-11 16:24:57,094 INFO  [i.l.r.H2ConnectionBuilder$:27] Setting the local storage to [/data/storage]
lenses_1  | 2019-10-11 16:24:57,094 INFO  [i.l.r.H2ConnectionBuilder$:43] Opening the storage to lensesdb...
lenses_1  | 2019-10-11 16:24:57,617 ERROR [i.l.r.H2ConnectionBuilder$:48] Failed to open the storage [lensesdb].
lenses_1  | org.h2.jdbc.JdbcSQLNonTransientException: IO Exception: "java.net.UnknownHostException: linuxkit-025000000001: linuxkit-025000000001: Name or service not known" [90028-199]
lenses_1  | 	at org.h2.message.DbException.getJdbcSQLException(DbException.java:502)
lenses_1  | 	at org.h2.message.DbException.getJdbcSQLException(DbException.java:427)
lenses_1  | 	at org.h2.message.DbException.get(DbException.java:194)
lenses_1  | 	at org.h2.message.DbException.convert(DbException.java:339)
lenses_1  | 	at org.h2.util.NetUtils.getLocalAddress(NetUtils.java:254)
lenses_1  | 	at org.h2.server.TcpServer.getURL(TcpServer.java:199)
lenses_1  | 	at org.h2.tools.Server.start(Server.java:512)
lenses_1  | 	at org.h2.engine.Database.startServer(Database.java:958)
lenses_1  | 	at org.h2.engine.Database.open(Database.java:736)
lenses_1  | 	at org.h2.engine.Database.openDatabase(Database.java:319)
lenses_1  | 	at org.h2.engine.Database.<init>(Database.java:313)
lenses_1  | 	at org.h2.engine.Engine.openSession(Engine.java:69)
lenses_1  | 	at org.h2.engine.Engine.openSession(Engine.java:201)
lenses_1  | 	at org.h2.engine.Engine.createSessionAndValidate(Engine.java:178)
lenses_1  | 	at org.h2.engine.Engine.createSession(Engine.java:161)
lenses_1  | 	at org.h2.engine.Engine.createSession(Engine.java:31)
lenses_1  | 	at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:336)
lenses_1  | 	at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:169)
lenses_1  | 	at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:148)
lenses_1  | 	at org.h2.Driver.connect(Driver.java:69)
lenses_1  | 	at java.sql.DriverManager.getConnection(DriverManager.java:664)
lenses_1  | 	at java.sql.DriverManager.getConnection(DriverManager.java:270)
lenses_1  | 	at io.lenses.runtime.H2ConnectionBuilder$.apply(H2ConnectionBuilder.scala:45)
lenses_1  | 	at com.landoop.kafka.lenses.Main$.delayedEndpoint$com$landoop$kafka$lenses$Main$1(Main.scala:194)
lenses_1  | 	at com.landoop.kafka.lenses.Main$delayedInit$body.apply(Main.scala:147)
lenses_1  | 	at scala.Function0.apply$mcV$sp(Function0.scala:39)
lenses_1  | 	at scala.Function0.apply$mcV$sp$(Function0.scala:39)
lenses_1  | 	at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:17)
lenses_1  | 	at scala.App.$anonfun$main$1$adapted(App.scala:80)
lenses_1  | 	at scala.collection.immutable.List.foreach(List.scala:392)
lenses_1  | 	at scala.App.main(App.scala:80)
lenses_1  | 	at scala.App.main$(App.scala:78)
lenses_1  | 	at com.landoop.kafka.lenses.Main$.main(Main.scala:147)
lenses_1  | 	at com.landoop.kafka.lenses.Main.main(Main.scala)
lenses_1  | Caused by: java.net.UnknownHostException: linuxkit-025000000001: linuxkit-025000000001: Name or service not known
lenses_1  | 	at java.net.InetAddress.getLocalHost(InetAddress.java:1505)
lenses_1  | 	at org.h2.util.NetUtils.getLocalAddress(NetUtils.java:252)
lenses_1  | 	... 29 common frames omitted
lenses_1  | Caused by: java.net.UnknownHostException: linuxkit-025000000001: Name or service not known
lenses_1  | 	at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
lenses_1  | 	at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
lenses_1  | 	at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
lenses_1  | 	at java.net.InetAddress.getLocalHost(InetAddress.java:1500)
lenses_1  | 	... 30 common frames omitted
lenses_1  | 2019-10-11 16:24:57,619 INFO  [i.l.r.H2ConnectionBuilder$:49] Lenses shut down.

Any ideas?

Thank you.
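
For anyone else hitting this: the trace shows H2 failing to resolve the container's own hostname (linuxkit-025000000001) from inside the container. A workaround that may help, assuming that really is the root cause, is to pin the Lenses container's hostname and map it to loopback in the compose file. The name lenses below is an arbitrary choice:

lenses:
  hostname: lenses          # arbitrary fixed hostname (assumption), resolvable via the mapping below
  extra_hosts:
    - "lenses:127.0.0.1"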

ERROR! Lenses port (LENSES_PORT=9991) is in use by some other program.

I'm trying to run Lenses locally on my Mac. However, every time I try to start the container I get the following error:

"Initializing environment —docker setup script.
LENSES_SECURITY_PASSWORD is not set. You may be using the default password which is dangerous.
runtime: failed to create new OS thread (have 2 already; errno=22)
fatal error: newosproc

runtime stack:
runtime.throw(0x512269, 0x9)
/usr/lib/go/src/runtime/panic.go:566 +0x95
runtime.newosproc(0xc420028000, 0xc420037fc0)
/usr/lib/go/src/runtime/os_linux.go:160 +0x194
runtime.newm(0x5203a0, 0x0)
/usr/lib/go/src/runtime/proc.go:1572 +0x132
runtime.main.func1()
/usr/lib/go/src/runtime/proc.go:126 +0x36
runtime.systemstack(0x593600)
/usr/lib/go/src/runtime/asm_amd64.s:298 +0x79
runtime.mstart()
/usr/lib/go/src/runtime/proc.go:1079

goroutine 1 [running]:
runtime.systemstack_switch()
/usr/lib/go/src/runtime/asm_amd64.s:252 fp=0xc420022768 sp=0xc420022760
runtime.main()
/usr/lib/go/src/runtime/proc.go:127 +0x6c fp=0xc4200227c0 sp=0xc420022768
runtime.goexit()
/usr/lib/go/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc4200227c8 sp=0xc4200227c0
ERROR! Lenses port (LENSES_PORT=9991) is in use by some other program.
Lenses will probably fail to start.
lenses.port=9991
[Variable needed quotes] lenses.sql.state.dir=/data/kafka-streams-state
Running as root. Will change data ownership to nobody:nogroup (65534:65534)
and drop priviliges.
Setup script finished.
Docker environment initialized. Starting Lenses.

Initializing (pre-run) Lenses
Installation directory autodetected: /opt/lenses
Current directory: /data
Logback configuration file autodetected: logback.xml
/usr/bin/find: '/opt/lenses/plugins': No such file or directory
These directories will be monitored for new jar files:

  • /opt/lenses/plugins
  • /data/plugins
  • /opt/lenses/serde
Starting application
Error: LinkageError occurred while loading main class io.lenses.backend.Main
java.lang.ClassFormatError: Incompatible magic value 1447843890 in class file io/lenses/backend/Main
Initializing environment —docker setup script.
LENSES_SECURITY_PASSWORD is not set. You may be using the default password which is dangerous.
    runtime: failed to create new OS thread (have 2 already; errno=22)
    fatal error: newosproc

runtime stack:
runtime.throw(0x512269, 0x9)
/usr/lib/go/src/runtime/panic.go:566 +0x95
runtime.newosproc(0xc420028000, 0xc420037fc0)
/usr/lib/go/src/runtime/os_linux.go:160 +0x194
runtime.newm(0x5203a0, 0x0)
/usr/lib/go/src/runtime/proc.go:1572 +0x132
runtime.main.func1()
/usr/lib/go/src/runtime/proc.go:126 +0x36
runtime.systemstack(0x593600)
/usr/lib/go/src/runtime/asm_amd64.s:298 +0x79
runtime.mstart()
/usr/lib/go/src/runtime/proc.go:1079

goroutine 1 [running]:
runtime.systemstack_switch()
/usr/lib/go/src/runtime/asm_amd64.s:252 fp=0xc420022768 sp=0xc420022760
runtime.main()
/usr/lib/go/src/runtime/proc.go:127 +0x6c fp=0xc4200227c0 sp=0xc420022768
runtime.goexit()
/usr/lib/go/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc4200227c8 sp=0xc4200227c0
ERROR! Lenses port (LENSES_PORT=9991) is in use by some other program.
Lenses will probably fail to start.
lenses.port=9991
[Variable needed quotes] lenses.sql.state.dir=/data/kafka-streams-state
Running as root. Will change data ownership to nobody:nogroup (65534:65534)
and drop priviliges.
Setup script finished.
Docker environment initialized. Starting Lenses.

Initializing (pre-run) Lenses
Installation directory autodetected: /opt/lenses
Current directory: /data
Logback configuration file autodetected: logback.xml
/usr/bin/find: '/opt/lenses/plugins': No such file or directory
These directories will be monitored for new jar files:

  • /opt/lenses/plugins
  • /data/plugins
  • /opt/lenses/serde
Starting application
Error: LinkageError occurred while loading main class io.lenses.backend.Main
java.lang.ClassFormatError: Incompatible magic value 1447843890 in class file io/lenses/backend/Main"

I've tried a few different ports and they all have the same issue. I'm using Docker Compose to create my container with the following:
lenses:
  image: lensesio/lenses
  platform: linux/amd64
  networks:
    - app-tier
  environment:
    - LENSES_PORT=9991
  ports:
    - 9991:9991

The Mac I'm running on locally is a newer model with an M1 chip.
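
One quick sanity check, in case the port really is taken on the host rather than the warning being a side effect of running an amd64 image under emulation (the Go runtime crash just before it is consistent with emulation trouble): list whatever is listening on 9991 on the Mac:

lsof -nP -iTCP:9991 -sTCP:LISTEN

If nothing is listed, the port check inside the container is failing for some other reason.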

Failed to create symbolic link '/var/www/config/run' on container restart

I reported an issue in the #lenses Slack channel about this and @andmarios committed a fix. However, I may have prematurely declared it fixed. I'm still running into the problem, but it seems to happen randomly.

Sometimes, my Lenses Box container will fail to start again after being stopped, with an error that /var/www/config/run already exists. The only thing I can do then is remove the container and start over.

▸ ~ docker container start -a lenses-dev
Setting advertised host to 127.0.0.1.
Broker config found at '/var/run/broker/server.properties'. We won't process variables.
Connect worker config found at '/var/run/connect/connect-avro-distributed.properties'. We won't process variables.
Schema registry config found at '/var/run/schema-registry/schema-registry.properties'. We won't process variables.
REST Proxy config found at '/var/run/rest-proxy/kafka-rest.properties'. We won't process variables.
Zookeeper config found at '/var/run/zookeeper/zookeeper.properties'. We won't process variables.
Lenses conf config found at '/var/run/lenses/lenses.conf'. We won't process variables.
Lenses security conf config found at '/var/run/lenses/security.conf'. We won't process variables.
ln: failed to create symbolic link '/var/www/config/run': File exists
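
For reference, a failure like this usually means the setup script creates the link with a plain ln -s, which errors out when the path is left over from the previous run. A sketch of the idempotent form, assuming /var/run is the intended link target:

# -f replaces an existing destination, -n treats an existing symlink-to-directory
# as the thing to replace; the /var/run target is an assumption
ln -sfn /var/run /var/www/config/run

Written this way, the command is safe to repeat on every container start.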

ssl.endpoint.identification.algorithm not working

Hi Lenses team. I have a Kafka cluster with SASL_SSL SCRAM-SHA-512 authentication. I have installed several other Kafka UIs and they work with ssl.endpoint.identification.algorithm=" ". But when I try to use Lenses as the Kafka UI and add the same additional config, ssl.endpoint.identification.algorithm=" ", the connection test fails with a TLS handshake error.

org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: javax.net.ssl.SSLHandshakeException: Unknown identification algorithm: " "
        at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:350)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:293)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:288)
        at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.checkServerCerts(CertificateMessage.java:1356)
        at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.onConsumeCertificate(CertificateMessage.java:1231)
        at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.consume(CertificateMessage.java:1174)
        at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)
        at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444)
        at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1074)
        at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1061)
        at java.base/java.security.AccessController.doPrivileged(Native Method)
        at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:1008)
        at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:430)
        at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:514)
        at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:368)
        at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:291)
        at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:178)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:543)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:561)
        at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1333)
        at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1264)
        at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.security.cert.CertificateException: Unknown identification algorithm: " "
        at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:462)
        at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:415)
        at java.base/sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:283)
        at java.base/sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:141)
        at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.checkServerCerts(CertificateMessage.java:1334)
        ... 19 common frames omitted


I really need some help solving this issue. Also, I can't recreate the certificate, because there are developers using this Kafka cluster.
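
Judging from the exception text (Unknown identification algorithm: " "), the value handed to the Kafka client was a literal quoted space rather than an empty string. The Kafka client only disables hostname verification when the property is truly empty, e.g. in properties form:

# empty value (nothing after '='), not a quoted space
ssl.endpoint.identification.algorithm=

Whatever syntax the Lenses additional-config field expects, it needs to produce an empty string, not " ".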

Lenses behind reverse-proxy

Hey, I'm currently having trouble accessing the Lenses UI through our reverse proxy (Traefik), since the dashboard landing page is '/' with a redirect. We're using PathPrefix rules to route between different services behind a single 443 proxy, and Lenses is one of them. According to Traefik's devs (https://community.containo.us/t/why-doesnt-traefik-readd-pathprefix-prefixes-to-backend-responses/1240), there is no way to re-add this prefix to Lenses' responses, so the only possible solution is to configure a matching path prefix in both Traefik and Lenses.

I'm having trouble finding this configuration property in the Lenses docs. Is it possible to change the Lenses UI URL prefix?
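
For anyone in the same situation, the Traefik side of such a setup might look like the following (Traefik v2 label syntax; the /lenses prefix is an arbitrary example, and the open question in this issue is whether Lenses can be told to serve under the same prefix):

labels:
  # /lenses is an example prefix, not anything Lenses-specific
  - "traefik.http.routers.lenses.rule=PathPrefix(`/lenses`)"
  - "traefik.http.middlewares.lenses-strip.stripprefix.prefixes=/lenses"
  - "traefik.http.routers.lenses.middlewares=lenses-strip"

Stripping the prefix alone doesn't help, because the app still generates absolute links to /, which is exactly the redirect problem described above.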

Error on starting container: [dumb-init] -e: No such file or directory

Hi All,

I'm attempting to follow the install guide here: https://docs.lenses.io/5.0/installation/docker/ and I've managed to pull the most recent version of the Lenses image. When I try to start a container using the command from the install guide (with the tag changed from 5.0 to latest):
docker run --name lenses lensesio/lenses:latest -e LENSES_PORT=9991 -e LENSES_SECURITY_USER=admin -e LENSES_SECURITY_PASSWORD=sha256:8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918

I get the following error:
[dumb-init] -e: No such file or directory

I've tried pulling and starting version 5.0 and got the same issue. I've also tried starting the container with sudo.

Apologies if I'm missing any other information; this is my first time reporting an issue on GitHub.
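
In docker run, everything placed after the image name is passed as arguments to the container's entrypoint instead of being parsed as Docker options, which would explain dumb-init receiving a bare -e here. A likely fix is simply to move the -e flags before the image name:

docker run --name lenses \
  -e LENSES_PORT=9991 \
  -e LENSES_SECURITY_USER=admin \
  -e LENSES_SECURITY_PASSWORD=sha256:8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 \
  lensesio/lenses:latest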

Cannot create connectors for connect cluster named “connect”

Lenses doesn't seem to work with Connect clusters named “connect”. For example, given the following value for the LENSES_CONNECT_CLUSTERS environment variable:

LENSES_CONNECT_CLUSTERS: |
  [
    {
      name: "connect",
      urls: [
        {
          url: "http://kafka-connect:8083"
        }
      ],
      statuses: "connect-status",
      configs: "connect-config",
      offsets: "connect-offset"
    }
  ]

Attempting to create a new connector results in the New Connector page listing a cluster named connect-new, which doesn't exist. It is not possible to select the connect cluster.

[screenshot: 2018-05-09 at 3.05.55 PM]

[screenshot: 2018-05-09 at 3.07.51 PM]

Changing the cluster name to something else fixes it; I tried dev and something, and both work.

Can't connect to default Influxdb v1.6.2, connection refused

Hi, I'm unable to connect to InfluxDB using the built-in topics and the built-in connector. Here is the exception I get. I'm using Docker version 18.06.1-ce, InfluxDB version 1.6.2, and Lenses 2.1.5.

Connector config:
connector.class=com.datamountaineer.streamreactor.connect.influx.InfluxSinkConnector
connect.influx.url=http://127.0.0.1:8086
connect.influx.db=sea
topics=sea_vessel_position_reports
connect.influx.kcql=INSERT INTO sea SELECT * FROM sea_vessel_position_reports
name=influx4
connect.influx.username=root

Exception:
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:559)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:315)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:218)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:186)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: org.influxdb.InfluxDBIOException: java.net.ConnectException: Failed to connect to /127.0.0.1:8086
at com.datamountaineer.streamreactor.connect.errors.ThrowErrorPolicy.handle(ErrorPolicy.scala:58)
at com.datamountaineer.streamreactor.connect.errors.ErrorHandler$class.handleError(ErrorHandler.scala:83)
at com.datamountaineer.streamreactor.connect.errors.ErrorHandler$class.handleTry(ErrorHandler.scala:64)
at com.datamountaineer.streamreactor.connect.influx.writers.InfluxDbWriter.handleTry(InfluxDbWriter.scala:29)
at com.datamountaineer.streamreactor.connect.influx.writers.InfluxDbWriter.write(InfluxDbWriter.scala:47)
at com.datamountaineer.streamreactor.connect.influx.InfluxSinkTask$$anonfun$put$2.apply(InfluxSinkTask.scala:75)
at com.datamountaineer.streamreactor.connect.influx.InfluxSinkTask$$anonfun$put$2.apply(InfluxSinkTask.scala:75)
at scala.Option.foreach(Option.scala:257)
at com.datamountaineer.streamreactor.connect.influx.InfluxSinkTask.put(InfluxSinkTask.scala:75)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:537)
... 10 more
Caused by: org.influxdb.InfluxDBIOException: java.net.ConnectException: Failed to connect to /127.0.0.1:8086
at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:587)
at org.influxdb.impl.InfluxDBImpl.write(InfluxDBImpl.java:355)
at com.datamountaineer.streamreactor.connect.influx.writers.InfluxDbWriter$$anonfun$1.apply$mcV$sp(InfluxDbWriter.scala:45)
at com.datamountaineer.streamreactor.connect.influx.writers.InfluxDbWriter$$anonfun$1.apply(InfluxDbWriter.scala:45)
at com.datamountaineer.streamreactor.connect.influx.writers.InfluxDbWriter$$anonfun$1.apply(InfluxDbWriter.scala:45)
at scala.util.Try$.apply(Try.scala:192)
at com.datamountaineer.streamreactor.connect.influx.writers.InfluxDbWriter.write(InfluxDbWriter.scala:45)
... 15 more
Caused by: java.net.ConnectException: Failed to connect to /127.0.0.1:8086
at okhttp3.internal.connection.RealConnection.connectSocket(RealConnection.java:240)
at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:158)
at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:256)
at okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:134)
at okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:113)
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:125)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at org.influxdb.impl.GzipRequestInterceptor.intercept(GzipRequestInterceptor.java:42)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.logging.HttpLoggingInterceptor.intercept(HttpLoggingInterceptor.java:143)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:200)
at okhttp3.RealCall.execute(RealCall.java:77)
at retrofit2.OkHttpCall.execute(OkHttpCall.java:180)
at org.influxdb.impl.InfluxDBImpl.execute(InfluxDBImpl.java:579)
... 21 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at okhttp3.internal.platform.Platform.connectSocket(Platform.java:125)
at okhttp3.internal.connection.RealConnection.connectSocket(RealConnection.java:238)
... 46 more
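
A detail worth flagging in the config above: connect.influx.url points at http://127.0.0.1:8086, and inside the Connect worker's container 127.0.0.1 is the worker itself, not the InfluxDB container, hence the connection refused. Assuming both containers share a Docker network and the InfluxDB service is named influxdb (substitute the real name), the URL should use the service name instead:

# 'influxdb' = the InfluxDB container's service name on the shared network (assumed)
connect.influx.url=http://influxdb:8086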

kafka connect not showing on topology

I'm creating a topology for my Kafka streams; the flow is something like this:
external app -> topics -> Kafka Connect sink.
I managed to configure the external app via the REST API, and it shows on the topology.
Kafka Connect, even though it shows in the "Lenses connectors" menu, is not reflected on the topology.

lenses:
  image: lensesio/lenses:4.1
  depends_on:
    - broker
    - schema-registry
    - kafka-connect
  environment:
    LENSES_PORT: 9991
    LENSES_KAFKA_BROKERS: "PLAINTEXT://broker:29092"
    LENSES_ZOOKEEPER_HOSTS: |
      [
        {url:"zookeeper:2181", jmx:"zookeeper:9585"},
      ]
    LENSES_SCHEMA_REGISTRY_URLS: |
      [
        {url:"http://schema-registry:8081", jmx:"schema-registry:9582"}
      ]
    LENSES_KAFKA_CONNECT_CLUSTERS: |
      [
        {
          name:"kafka-connect",
          urls: [
            {url:"http://kafka-connect:8083", jmx:"kafka-connect:9589"}
          ],
          statuses:"kafka-connect",
          configs:"kafka-connect",
          offsets:"kafka-connect"
        }
      ]
    LENSES_SECURITY_USER: admin
    LENSES_SECURITY_PASSWORD: sha256:8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
  ports:
    - 9991:9991
    - 9102:9102
  volumes:
    - ./license.json:/license.json

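One thing worth double-checking in a setup like this: the statuses/configs/offsets fields are meant to name the Connect worker's actual internal topics (its status.storage.topic, config.storage.topic and offset.storage.topic settings), which Lenses reads to learn about connectors. With a stock distributed worker those are typically:

# typical distributed-worker defaults; must match the worker's *.storage.topic settings
statuses: "connect-status"
configs: "connect-configs"
offsets: "connect-offsets"

The values above ("kafka-connect" for all three) only work if the worker really was configured with those topic names.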

kafka connect JMX connection issue

I have 2 Kafka Connect clusters:

  1. https://hub.docker.com/r/confluentinc/cp-kafka-connect
  2. https://hub.docker.com/r/debezium/connect

Our Kafka and Kafka Connect run in Kubernetes, in the same cluster.

In debezium.yaml and kafka-connect.yaml:

- name: JMX_PORT
  value: "5555"

In lenses.yaml:

- name: LENSES_KAFKA_CONNECT_CLUSTERS
  value: '[{name:"debezium",urls: [{url:"http://debezium:8083", metrics:{url:"debezium:5555", type:"JMX"}}],statuses:"my_source_connect_statuses",configs:"my_connect_configs",offsets:"my_connect_offsets"},{name:"connect",urls: [{url:"http://kafka-connect:8083", metrics:{url:"kafka-connect:5555", type:"JMX"}}],statuses:"connect_statuses",configs:"connect_configs",offsets:"connect_offsets"}]'

After a few minutes (around 3-5 mins), the following warnings appear:

2020-03-15 09:19:54,274 WARN [i.l.s.m.k.j.ConnectMetricsJMXReader:18] Can not retrieve the Connect Metrics on [kafka-connect:5555].kafka.connect:type=connect-coordinator-metrics,client-id="0.0.0.0:8083"
2020-03-15 09:19:54,295 WARN [i.l.s.m.k.j.ConnectMetricsJMXReader:18] Can not retrieve the Connect Metrics on [debezium:5555].kafka.connect:type=connect-coordinator-metrics,client-id="0.0.0.0:8083"
2020-03-15 09:20:24,312 WARN [i.l.s.m.k.j.ConnectMetricsJMXReader:18] Can not retrieve the Connect Metrics on [debezium:5555].kafka.connect:type=connect-coordinator-metrics,client-id="0.0.0.0:8083"
2020-03-15 09:20:24,312 WARN [i.l.s.m.k.j.ConnectMetricsJMXReader:18] Can not retrieve the Connect Metrics on [kafka-connect:5555].kafka.connect:type=connect-coordinator-metrics,client-id="0.0.0.0:8083"
2020-03-15 09:20:54,313 WARN [i.l.s.m.k.j.ConnectMetricsJMXReader:18] Can not retrieve the Connect Metrics on [kafka-connect:5555].kafka.connect:type=connect-coordinator-metrics,client-id="0.0.0.0:8083"
2020-03-15 09:20:54,324 WARN [i.l.s.m.k.j.ConnectMetricsJMXReader:18] Can not retrieve the Connect Metrics on [debezium:5555].kafka.connect:type=connect-coordinator-metrics,client-id="0.0.0.0:8083"
2020-03-15 09:21:24,350 WARN [i.l.s.m.k.j.ConnectMetricsJMXReader:18] Can not retrieve the Connect Metrics on [kafka-connect:5555].kafka.connect:type=connect-coordinator-metrics,client-id="0.0.0.0:8083"
2020-03-15 09:21:24,363 WARN [i.l.s.m.k.j.ConnectMetricsJMXReader:18] Can not retrieve the Connect Metrics on [debezium:5555].kafka.connect:type=connect-coordinator-metrics,client-id="0.0.0.0:8083"
2020-03-15 09:21:54,364 WARN [i.l.s.m.k.j.ConnectMetricsJMXReader:18] Can not retrieve the Connect Metrics on [kafka-connect:5555].kafka.connect:type=connect-coordinator-metrics,client-id="0.0.0.0:8083"
2020-03-15 09:21:54,372 WARN [i.l.s.m.k.j.ConnectMetricsJMXReader:18] Can not retrieve the Connect Metrics on [debezium:5555].kafka.connect:type=connect-coordinator-metrics,client-id="0.0.0.0:8083"

The Lenses dashboard cannot retrieve the metrics, and I'm not sure what is wrong with my configuration. Any help would be appreciated, thank you so much!
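
A common culprit for JMX inside Kubernetes is that setting JMX_PORT alone leaves the RMI port random and the advertised hostname wrong, so connections that worked initially start failing. A sketch of a more complete setup, assuming the image honours KAFKA_JMX_OPTS (the Confluent cp-* images do; Debezium's image uses its own variables):

# 'kafka-connect' below = the Service name through which Lenses reaches the worker (assumption)
- name: KAFKA_JMX_OPTS
  value: >-
    -Dcom.sun.management.jmxremote
    -Dcom.sun.management.jmxremote.authenticate=false
    -Dcom.sun.management.jmxremote.ssl=false
    -Dcom.sun.management.jmxremote.port=5555
    -Dcom.sun.management.jmxremote.rmi.port=5555
    -Djava.rmi.server.hostname=kafka-connect

java.rmi.server.hostname should be the DNS name the Lenses pod uses to reach the worker, and pinning the RMI port to the same value keeps everything on a single port.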

Connector config editor gives false impression that lines can be continued with \

The config editor for connectors gives the false impression that long lines can be wrapped with a \ character. When a \ is used at the end of a line, the next line is coloured green as if it continued the value from the preceding key=value line. However, using Show JSON clearly shows this isn't the case, and such a configuration cannot be saved.

Editor without \ on line 4 (note that line 5 is not coloured as a continuation of line 4):

[screenshot: 2018-05-04 at 4.57.42 PM]

Editor with \ on line 4 (note that line 5 is now green):

[screenshot: 2018-05-04 at 4.57.48 PM]

Show JSON (the continued line is treated as a new property):

[screenshot: 2018-05-04 at 4.57.57 PM]

Expected Behaviour:

The editor should not apply syntax highlighting that suggests \ can be used to continue a line.

Alternatively, Lenses should support wrapping long lines with \.

Actual Behaviour:

Using \ to wrap a long line is not valid.
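
To illustrate with a made-up key (connect.some.long.option is hypothetical; any key=value pair behaves the same way), the editor highlights

connect.some.long.option=first part of the value \
continued on the next line

as a single value, but Show JSON reveals two entries, with "continued on the next line" parsed as a standalone property.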

License as env variable not getting picked up

Hi,

I'm trying a deployment of the container in Kubernetes and, as suggested in the readme, I'm passing in the environment variable LICENSE with the JSON contents of license.json. The container fails to start correctly, with the following error:

2018-06-20 11:13:58,506 ERROR [c.l.k.l.Main$:155] Can not read the license file:/data/license.json

I've tested with versions 2.0 and 2.0.11; both fail to start. Is there something else that needs to be done for it to pick up the license via the env variable?

For context: I'm using only environment variables to pass in all settings - I'm not mounting a lenses.conf or security.conf file.
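
For reference, the pattern being attempted here, expressed in Kubernetes terms, would be something like the following, with the raw JSON held in a Secret (lenses-license and license.json are hypothetical names):

env:
  - name: LICENSE
    valueFrom:
      secretKeyRef:
        name: lenses-license   # hypothetical Secret name
        key: license.json      # hypothetical key holding the raw JSON

The error message suggests the entrypoint fell back to looking for /data/license.json, i.e. the LICENSE variable was never consumed.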

Minikube lenses example bundled in the repo doesn't work

I'm trying to run Lenses in Minikube. After following the instructions at https://github.com/lensesio/lenses-docker/tree/master/_examples I got no errors and both pods are running.

The problem is that no HTTP listener is running on the lenses pod (I get a 404 response in the browser).

kubectl logs -f pods/lenses produces:

2020-01-06 13:54:01,060 INFO [k.u.Log4jControllerRegistration$:31] Registered kafka:type=kafka.Log4jController MBean
2020-01-06 13:54:01,089 INFO [k.z.ZooKeeperClient:66] [ZooKeeperClient] Initializing a new session to kafka:2181/.
2020-01-06 13:54:01,118 INFO [k.z.ZooKeeperClient:66] [ZooKeeperClient] Waiting until connected.
2020-01-06 13:54:01,139 INFO [k.z.ZooKeeperClient:66] [ZooKeeperClient] Connected.
2020-01-06 13:54:01,168 INFO [i.l.r.H2ConnectionBuilder$:27] Setting the local storage to [/data/storage]
2020-01-06 13:54:01,168 INFO [i.l.r.H2ConnectionBuilder$:27] Setting the local storage to [/data/storage]
2020-01-06 13:54:01,169 INFO [i.l.r.H2ConnectionBuilder$:43] Opening the storage to lensesdb...
2020-01-06 13:54:02,052 INFO [i.l.r.H2ConnectionBuilder$:27] Setting the local storage to [/data/storage]
2020-01-06 13:54:02,053 INFO [i.l.r.H2ConnectionBuilder$:43] Opening the storage to lensesdb...
2020-01-06 13:54:02,081 INFO [i.l.r.H2ConnectionBuilder$:27] Setting the local storage to [/data/storage]
2020-01-06 13:54:02,082 INFO [i.l.r.H2ConnectionBuilder$:43] Opening the storage to lensesdb...
2020-01-06 13:54:04,616 INFO [c.z.h.HikariDataSource:110] HikariPool-1 - Starting...
2020-01-06 13:54:04,633 INFO [c.z.h.HikariDataSource:123] HikariPool-1 - Start completed.
2020-01-06 13:54:05,043 INFO [i.l.c.k.ConnectClustersTopicValidation$:25] Validating Connect cluster topics...
2020-01-06 13:56:05,448 ERROR [c.l.k.l.Main$:159] Uncaught exception on Thread[main,5,main]
java.util.concurrent.TimeoutException: null
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:274)
at io.lenses.core.kafka.ConnectClustersTopicValidation$.apply(ConnectClustersTopicValidation.scala:33)
at com.landoop.kafka.lenses.Main$.$anonfun$new$2(Main.scala:225)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at io.lenses.domain.logging.MetricsSupport.timed(MetricsSupport.scala:18)
at io.lenses.domain.logging.MetricsSupport.timed$(MetricsSupport.scala:16)
at com.landoop.kafka.lenses.Main$.timed(Main.scala:148)
at com.landoop.kafka.lenses.Main$.delayedEndpoint$com$landoop$kafka$lenses$Main$1(Main.scala:224)
at com.landoop.kafka.lenses.Main$delayedInit$body.apply(Main.scala:148)
at scala.Function0.apply$mcV$sp(Function0.scala:39)
at scala.Function0.apply$mcV$sp$(Function0.scala:39)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:17)
at scala.App.$anonfun$main$1$adapted(App.scala:80)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.App.main(App.scala:80)
at scala.App.main$(App.scala:78)
at com.landoop.kafka.lenses.Main$.main(Main.scala:148)
at com.landoop.kafka.lenses.Main.main(Main.scala)

So it seems that Lenses cannot connect properly to the Kafka pod (even though it connects to ZooKeeper successfully).

When I check kubectl logs -f pods/kafkagroup, I see that the pod's services seem to start properly, but then they restart every few minutes:

2020-01-06 13:53:09,856 INFO success: running-sample-data-telecom-italia entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-01-06 13:53:09,856 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-01-06 13:53:09,856 INFO success: smoke-tests entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-01-06 13:53:09,856 INFO success: zookeeper entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-01-06 13:55:17,425 INFO exited: schema-registry (exit status 1; not expected)
2020-01-06 13:55:17,735 INFO spawned: 'schema-registry' with pid 2521
2020-01-06 13:55:19,161 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-01-06 13:56:22,718 INFO exited: running-sample-data-ais (exit status 1; not expected)
2020-01-06 13:56:22,960 INFO exited: running-sample-data-taxis (exit status 1; not expected)
2020-01-06 13:56:28,951 INFO exited: running-sample-data-telecom-italia (exit status 1; not expected)
2020-01-06 13:57:22,719 INFO exited: schema-registry (exit status 1; not expected)
2020-01-06 13:57:22,723 INFO spawned: 'schema-registry' with pid 5805
2020-01-06 13:57:23,761 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-01-06 13:58:10,014 INFO exited: logs-to-kafka (exit status 0; expected)
2020-01-06 13:59:27,411 INFO exited: schema-registry (exit status 1; not expected)
2020-01-06 13:59:27,479 INFO spawned: 'schema-registry' with pid 6729
2020-01-06 13:59:28,513 INFO success: schema-registry entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-01-06 13:59:34,563 INFO exited: connect-distributed (exit status 2; not expected)
2020-01-06 13:59:34,569 INFO spawned: 'connect-distributed' with pid 6761
2020-01-06 13:59:35,593 INFO success: connect-distributed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

Am I doing something wrong?
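
The TimeoutException comes from a Kafka admin-client call, which points at the broker rather than ZooKeeper. A quick way to narrow it down is to test broker reachability from inside the Lenses pod, assuming the broker service is named kafka, listens on 9092, and nc is present in the image:

# 'kafka' service and port 9092 are assumptions; adjust to the example's actual names
kubectl exec -it lenses -- nc -vz kafka 9092

Given that the kafka pod's own services keep restarting every few minutes, though, stabilising that pod looks like the first step.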

[Redis-Sink] No writers set for connect.redis.kcql

Hello, I am trying to set up a Redis sink connector from the Lenses dashboard.
In particular, I am using the following versions:

  • Lenses 3.2.1
  • Kafka 2.2.2-L0

The configuration is the following:

connector.class=com.datamountaineer.streamreactor.connect.redis.sink.RedisSinkConnector
connect.redis.port=6370
connect.redis.kcql=INSERT INTO table1 SELECT * FROM topic1_test
topics= topic1_test
tasks.max=1
connect.redis.host=172.17.0.4
name=redis-sink

The related task shows the following error:

java.lang.IllegalArgumentException: requirement failed: No writers set for connect.redis.kcql!
	at scala.Predef$.require(Predef.scala:224)
	at com.datamountaineer.streamreactor.connect.redis.sink.RedisSinkTask.start(RedisSinkTask.scala:97)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:301)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:189)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Any suggestions? Thanks.
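
For what it's worth, the Redis sink chooses a writer based on the shape of the KCQL: plain cache mode needs a PK clause, and the other modes need an explicit STOREAS, so an INSERT ... SELECT * with neither matches no writer and triggers exactly this error. A hedged variant, assuming cache mode is wanted and the topic has a usable key field (field1 is hypothetical):

# 'field1' is a hypothetical message field to use as the Redis key
connect.redis.kcql=INSERT INTO table1 SELECT * FROM topic1_test PK field1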

For large truststore files, the container fails to start

We use env vars to pass in the contents of various files. Env vars have a limit of 128 KiB, so if someone provides a large truststore file, like Java's default cacerts, the container fails to start with an "argument list too long" error.
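
A quick way to check whether a given file would trip this limit before handing it to the container (the path is a stand-in for whatever file would be passed through the env var):

wc -c < /path/to/truststore.jks   # must stay under 131072 bytes (128 KiB)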
