
zookeeper-docker's Introduction

Docker image packaging for Apache Zookeeper

This is the Git repo of the Docker "Official Image" for zookeeper. See the Docker Hub page for the full readme on how to use this Docker image and for information regarding contributing and issues.

The full image description on Docker Hub is generated/maintained over in the docker-library/docs repository, specifically in the zookeeper directory.

See a change merged here that doesn't show up on Docker Hub yet?

For more information about the full official images change lifecycle, see the "An image's source changed in Git, now what?" FAQ entry.

For outstanding zookeeper image PRs, check PRs with the "library/zookeeper" label on the official-images repository. For the current "source of truth" for zookeeper, see the library/zookeeper file in the official-images repository.

zookeeper-docker's People

Contributors

31z4, amuraru, bablzz, bianjp, decaz, dustinschultz, eikemeier, eyalzek, furikake, hlwanghl, hronom, jaceq, janhoy, jdekoning, joveyu, rd-michel, timwolla, vistrcm


zookeeper-docker's Issues

Strange port issue when using 3.5 with docker-compose

Not sure if it's just my problem or not. I am using the provided docker-compose.yml example from the documentation. When I started it locally, all 3 containers came up, but when I went into the containers themselves and ran ./zkServer.sh status, all of them gave me the following output:

ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port not found in static config file. Looking in dynamic config file.
grep: : No such file or directory
Client port not found. Terminating.

zookeeper cannot start

But when I switch the zookeeper version to 3.4, everything works perfectly: zookeeper starts correctly and I can connect to each of the nodes using zkCli.sh.

Expected behavior

Using the provided docker-compose.yml example with zookeeper:3.5 locally, zookeeper should start correctly with no port issue.

Actual behavior

Cannot find port

Steps to reproduce the behavior

  1. Use the following compose file:
version: '3.1'

services:
  zoo1:
    image: zookeeper:3.5
    restart: always
    hostname: zoo1
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888

  zoo2:
    image: zookeeper:3.5
    restart: always
    hostname: zoo2
    ports:
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=0.0.0.0:2888:3888 server.3=zoo3:2888:3888

  zoo3:
    image: zookeeper:3.5
    restart: always
    hostname: zoo3
    ports:
      - 2183:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=0.0.0.0:2888:3888

and run docker-compose up -d

  2. Go into any of the containers, cd to the bin folder, and run ./zkServer.sh status

System configuration

MacBook Pro
Intel Core i7
Memory: 16 GB
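A likely cause (not confirmed in this thread) is that the 3.5 image takes the client port from the server specification itself, so each ZOO_SERVERS entry needs a ";2181" suffix. A minimal sketch of the zoo1 service with that syntax, the other services changed accordingly:

  zoo1:
    image: zookeeper:3.5
    restart: always
    hostname: zoo1
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181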

Docker container time synchronization problem?

I have specified the Shanghai timezone in the Dockerfile, but after the image build completes, the container started from it has not changed to Shanghai time.

I tested the java:9 image, and there the Shanghai timezone could be set successfully via the Dockerfile.

AppledeMacBook-Pro:tm-zookeeper apple$ ll
total 24
drwxr-xr-x   5 apple  staff   160 May 19 23:00 ./
drwxr-xr-x  10 apple  staff   320 May 19 22:44 ../
-rw-r--r--   1 apple  staff   313 May 19 22:59 Dockerfile
-rw-r--r--   1 apple  staff  1301 May 19 23:00 README.MD
-rw-r--r--   1 apple  staff   872 May 13 12:48 sources.list
AppledeMacBook-Pro:tm-zookeeper apple$ cat Dockerfile 
FROM zookeeper:3.3

MAINTAINER [email protected]

COPY sources.list /etc/apt/

#RUN echo "Asia/shanghai" > /etc/timezone;

# Set the timezone
#RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
#  && echo 'Asia/Shanghai' >/etc/timezone


RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai  /etc/localtime
AppledeMacBook-Pro:tm-zookeeper apple$ docker build -t tm-zookeeper:3.3 .
Sending build context to Docker daemon  5.632kB
Step 1/4 : FROM zookeeper:3.3
 ---> 89ed8efbcf1a
Step 2/4 : MAINTAINER [email protected]
 ---> Running in 9de20e310382
Removing intermediate container 9de20e310382
 ---> 79d124371e6e
Step 3/4 : COPY sources.list /etc/apt/
 ---> c785c5c7e4d6
Step 4/4 : RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai  /etc/localtime
 ---> Running in aee20982293b
Removing intermediate container aee20982293b
 ---> 5114fb0923c4
Successfully built 5114fb0923c4
Successfully tagged tm-zookeeper:3.3
AppledeMacBook-Pro:tm-zookeeper apple$ docker run \
> --name tm-zookeeper3 \
> -p 2181:2181 \
> -d \
> tm-zookeeper:3.3
3db0b49e3b96d0f6e52a44ec93a601c8f5dcd146edf883771a2ef257193245d2
AppledeMacBook-Pro:tm-zookeeper apple$ docker exec -it tm-zookeeper3 bash
bash-4.4# date
Sat May 19 15:10:18 GMT 2018
bash-4.4# exit
exit
AppledeMacBook-Pro:tm-zookeeper apple$ date
Sat May 19 23:10:21 CST 2018
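A plausible fix (an assumption: the base image appears to be Alpine, where /usr/share/zoneinfo only exists once tzdata is installed, so the ln -sf above just creates a dangling symlink) is to install tzdata and copy the zone file:

FROM zookeeper:3.3

# assumption: the base image is Alpine, so install tzdata before using the zoneinfo file
RUN apk add --no-cache tzdata \
    && cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
    && echo "Asia/Shanghai" > /etc/timezone

ENV TZ=Asia/Shanghai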


clientPort 2181 can't telnet

Before you file an issue here, please keep in mind that your issue may be not related to the image itself. Please make sure that it is, otherwise report the issue upstream.

Expected behavior

docker run --name my_zookeeper -d -p 2181:2181 -p 2888:2888 -p 3888:3888 --env ZOO_MY_ID=1 --env ZOO_SERVERS="server.1=0.0.0.0:2888:3888 server.2=192.168.154.33:5889:5890 server.3=192.168.213.91:2888:3888" zookeeper:latest

and client port 2181 can't be reached via telnet.

Actual behavior

I found that /conf/zoo.cfg does not contain a clientPort entry.

Steps to reproduce the behavior

Tell us how to reproduce the issue.

System configuration

dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=5
syncLimit=2
autopurge.snapRetainCount=3
autopurge.purgeInterval=0
maxClientCnxns=60
standaloneEnabled=true
admin.enableServer=true
server.1=0.0.0.0:2888:3888
server.2=192.168.154.33:5889:5890
server.3=192.168.213.91:2888:3888
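Assuming this is the 3.5 image, where zoo.cfg no longer gets a clientPort line and the client port instead comes from the ";port" suffix of each server entry, a sketch of a corrected run command:

docker run --name my_zookeeper -d -p 2181:2181 -p 2888:2888 -p 3888:3888 \
  --env ZOO_MY_ID=1 \
  --env ZOO_SERVERS="server.1=0.0.0.0:2888:3888;2181 server.2=192.168.154.33:5889:5890;2181 server.3=192.168.213.91:2888:3888;2181" \
  zookeeper:latest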

Request: Add ability to configure `4lw.commands.whitelist` cluster option for zookeeper 3.5

Expected behavior

Users can change the 4lw.commands.whitelist cluster parameter easily and consistently with the other parameters.

The reason is simple: in ZooKeeper 3.5.5 the 4lw.commands.whitelist cluster parameter was introduced. Its default value is srvr (source).

Some applications, like Solr, make extensive use of ZooKeeper's Four Letter Words, and Solr requires more than just srvr to be allowed.

Solr requirements: https://github.com/apache/lucene-solr/blob/master/solr/CHANGES.txt#L206

Actual behavior

Users cannot change the 4lw.commands.whitelist cluster parameter via environment variables, only via a mounted config file.
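As a workaround until such an environment variable exists, the whitelist can be put into a mounted zoo.cfg (which then has to carry the full configuration, since the entrypoint only generates one when none is present). A sketch, with the command list being an example only:

# extra line added to an otherwise complete zoo.cfg
4lw.commands.whitelist=srvr,ruok,mntr,conf

docker run -d -p 2181:2181 -v $(pwd)/zoo.cfg:/conf/zoo.cfg zookeeper:3.5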

dataDir and dataLogDir Swapped

Is anyone else seeing that the dataDir and dataLogDir directories are swapped?

4-letter-word conf output:

$ echo conf | nc localhost 2181
clientPort=2181
dataDir=/datalog/version-2
dataLogDir=/data/version-2
...

Configuration file (in docker container):

# cat /conf/zoo.cfg 
clientPort=2181
dataDir=/data
dataLogDir=/datalog

And indeed, the transaction logs are going to the dataDir directory:

# ls /data/version-2/
log.100000001
# ls /datalog/version-2/
acceptedEpoch       currentEpoch        snapshot.100000000

Image Details:

$ docker images
REPOSITORY                TAG                 IMAGE ID            CREATED             SIZE
zookeeper                 3.4.11              56d414270ae3        3 weeks ago         146MB
$ docker inspect  -f "{{.Name}} {{.Config.Cmd}}" zookeeper
/zookeeper [zkServer.sh start-foreground]

I assume this is not expected.

Zookeeper has problems at a period of time under swarm mode

Before you file an issue here, please keep in mind that your issue may be not related to the image itself. Please make sure that it is, otherwise report the issue upstream.

Exception

Exception causing close of session 0x0: ZooKeeperServer not running

swarm service

version: '3.7'
services:
   zookeeper-101:
      image: zookeeper:3.5.5
      restart: always
      environment:
         ZOO_MY_ID: 101
         ZOO_TICK_TIME: 10000
         ZOO_INIT_LIMIT: 6
         ZOO_SYNC_LIMIT: 6
         ZOO_MAX_CLIENT_CNXNS: 100
         ZOO_STANDALONE_ENABLED: 'false'
         ZOO_ADMINSERVER_ENABLED: 'true'
         ZOO_AUTOPURGE_PURGEINTERVAL: 0
         ZOO_AUTOPURGE_SNAPRETAINCOUNT: 3
         ZOO_4LW_COMMANDS_WHITELIST: srvr
         ZOO_SERVERS: server.101=0.0.0.0:2888:3888;2181 server.112=zookeeper-112:2888:3888;2181 server.108=zookeeper-108:2888:3888;2181 server.118=zookeeper-118:2888:3888;2181
      volumes:
      - /opt/deploy/data/zookeeper/data:/data
      - /opt/deploy/data/zookeeper/datalog:/datalog
      - /opt/deploy/data/zookeeper/logs:/logs
      - /etc/localtime:/etc/localtime
      ports:
      - 2181:2181
      - 28081:8080
      deploy:
         replicas: 1
         placement:
            constraints:
            - node.hostname==dbs-server101
   zookeeper-112:
      image: zookeeper:3.5.5
      restart: always
      environment:
         ZOO_MY_ID: 112
         ZOO_TICK_TIME: 10000
         ZOO_INIT_LIMIT: 6
         ZOO_SYNC_LIMIT: 6
         ZOO_MAX_CLIENT_CNXNS: 100
         ZOO_STANDALONE_ENABLED: 'false'
         ZOO_ADMINSERVER_ENABLED: 'true'
         ZOO_AUTOPURGE_PURGEINTERVAL: 0
         ZOO_AUTOPURGE_SNAPRETAINCOUNT: 3
         ZOO_4LW_COMMANDS_WHITELIST: srvr
         ZOO_SERVERS: server.101=zookeeper-101:2888:3888;2181 server.112=0.0.0.0:2888:3888;2181 server.108=zookeeper-108:2888:3888;2181 server.118=zookeeper-118:2888:3888;2181
      volumes:
      - /opt/deploy/data/zookeeper/data:/data
      - /opt/deploy/data/zookeeper/datalog:/datalog
      - /opt/deploy/data/zookeeper/logs:/logs
      - /etc/localtime:/etc/localtime
      ports:
      - 2182:2181
      - 28082:8080
      deploy:
         replicas: 1
         placement:
            constraints:
            - node.hostname==dbs-server112
   zookeeper-108:
      image: zookeeper:3.5.5
      restart: always
      environment:
         ZOO_MY_ID: 108
         ZOO_TICK_TIME: 10000
         ZOO_INIT_LIMIT: 6
         ZOO_SYNC_LIMIT: 6
         ZOO_MAX_CLIENT_CNXNS: 100
         ZOO_STANDALONE_ENABLED: 'false'
         ZOO_ADMINSERVER_ENABLED: 'true'
         ZOO_AUTOPURGE_PURGEINTERVAL: 0
         ZOO_AUTOPURGE_SNAPRETAINCOUNT: 3
         ZOO_4LW_COMMANDS_WHITELIST: srvr
         ZOO_SERVERS: server.101=zookeeper-101:2888:3888;2181 server.112=zookeeper-112:2888:3888;2181 server.108=0.0.0.0:2888:3888;2181 server.118=zookeeper-118:2888:3888;2181
      volumes:
      - /opt/deploy/data/zookeeper/data:/data
      - /opt/deploy/data/zookeeper/datalog:/datalog
      - /opt/deploy/data/zookeeper/logs:/logs
      - /etc/localtime:/etc/localtime
      ports:
      - 2183:2181
      - 28083:8080
      deploy:
         replicas: 1
         placement:
            constraints:
            - node.hostname==dbs-server108
   zookeeper-118:
      image: zookeeper:3.5.5
      restart: always
      environment:
         ZOO_MY_ID: 118
         ZOO_TICK_TIME: 10000
         ZOO_INIT_LIMIT: 6
         ZOO_SYNC_LIMIT: 6
         ZOO_MAX_CLIENT_CNXNS: 100
         ZOO_STANDALONE_ENABLED: 'false'
         ZOO_ADMINSERVER_ENABLED: 'true'
         ZOO_AUTOPURGE_PURGEINTERVAL: 0
         ZOO_AUTOPURGE_SNAPRETAINCOUNT: 3
         ZOO_4LW_COMMANDS_WHITELIST: srvr
         ZOO_SERVERS: server.101=zookeeper-101:2888:3888;2181 server.112=zookeeper-112:2888:3888;2181 server.108=zookeeper-108:2888:3888;2181 server.118=0.0.0.0:2888:3888;2181
      volumes:
      - /opt/deploy/data/zookeeper/data:/data
      - /opt/deploy/data/zookeeper/datalog:/datalog
      - /opt/deploy/data/zookeeper/logs:/logs
      - /etc/localtime:/etc/localtime
      ports:
      - 2184:2181
      - 28084:8080
      deploy:
         replicas: 1
         placement:
            constraints:
            - node.hostname==dbs-server118

Environment variable "reconfigEnabled" not supported

Before you file an issue here, please keep in mind that your issue may be not related to the image itself. Please make sure that it is, otherwise report the issue upstream.

Expected behavior

Tell us what should happen.

Actual behavior

Tell us what happens instead.

Steps to reproduce the behavior

Tell us how to reproduce the issue.

System configuration

Please include as much relevant information as possible.

Security vulnerabilities with version 3.5.5

Expected behavior

Vulnerability scans of container image should not report critical/high severity security vulnerabilities.

Actual behavior

  1. Image scans using Blackduck reported several critical and high severity security vulnerabilities for version 3.5.5 of the image.

    Please let me know how to share the report with you. I can generate a csv file, and send it to an email if that'd work. Alternatively, I can share the report here.

  2. The scan report (https://hub.docker.com/_/zookeeper/scans/library/zookeeper/3.5.5) available in Docker hub for the image, also shows several critical/high severity vulnerabilities. (Note: the user must be logged in to Docker Hub to be able to see the report).

Steps to reproduce the behavior

Not applicable.

System configuration

Not applicable.

ppc64le dockerhub images

Hi, I'm looking to enable docker containers for packages like zookeeper for multiple architectures on Docker Hub, starting off with a focus on ppc64le. I wanted to check on the work that would be involved to do that.

I have been able to successfully test ppc64le changes to the Dockerfiles in this repo locally. The changes for enabling an additional arch would be minimal; however, the base image might be a problem, as mentioned in issue #7, but that can eventually be resolved.

However, I want to understand whether this community is open to supporting ppc64le images officially. The official images have this method for enabling multi-arch: https://github.com/docker-library/official-images#multiple-architectures; as mentioned there, they recommend having a single Dockerfile with switches for arch-specific changes.
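For illustration, a hypothetical sketch of such a switch inside a single Dockerfile, assuming the Alpine base image used here; the ppc64le-specific steps are placeholders:

RUN set -eux; \
    arch="$(apk --print-arch)"; \
    case "$arch" in \
        ppc64le) echo "ppc64le-specific steps go here" ;; \
        x86_64) echo "default amd64 path" ;; \
        *) echo "unsupported architecture: $arch"; exit 1 ;; \
    esac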

The steps would involve getting the single dockerfile here to include ppc64le and then subsequently raising a PR under docker-library/official-images (would mostly need changes to the metadata file https://github.com/docker-library/official-images/blob/master/library/zookeeper)

I'm willing to work on these steps; I want to know if the community is okay with it, or if there are any other thoughts.

Thanks!

Without `clientPort` config ZooKeeper won't actually listen on the port

Before you file an issue here, please keep in mind that your issue may be not related to the image itself. Please make sure that it is, otherwise report the issue upstream.

Expected behavior

A simple config like this should just work:

version: '3'
services:
  zookeeper:
    image: zookeeper:3.5
    ports:
      - 2181:2181
    environment:
      - ZOO_SERVERS=server.1=0.0.0.0:2888:3888

Actual behavior

However it doesn't: the server did start, but it wasn't listening on port 2181. The fix is to modify the entrypoint to include this config by default.
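As a workaround that doesn't touch the entrypoint, a sketch using the 3.5 server syntax with the client port appended (the same syntax used elsewhere in these issues):

version: '3'
services:
  zookeeper:
    image: zookeeper:3.5
    ports:
      - 2181:2181
    environment:
      - "ZOO_SERVERS=server.1=0.0.0.0:2888:3888;2181"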

Dockerfile modification

Hi there.
I need to add this line before the ENTRYPOINT section when building with docker-compose:

RUN chmod +x /docker-entrypoint.sh && chown $ZOO_USER /docker-entrypoint.sh

Without this I got an error:

Recreating zookeeper ... error

ERROR: for zookeeper  Cannot start service zookeeper: OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"/docker-entrypoint.sh\": permission denied": unknown

ERROR: for zookeeper  Cannot start service zookeeper: OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"/docker-entrypoint.sh\": permission denied": unknown
ERROR: Encountered errors while bringing up the project.
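For context, a sketch of where that line would sit in a custom Dockerfile that ships its own entrypoint (the COPY source path is an assumption):

FROM zookeeper:3.5
COPY docker-entrypoint.sh /docker-entrypoint.sh
# make the script executable and owned by the zookeeper user before it is used as the entrypoint
RUN chmod +x /docker-entrypoint.sh && chown $ZOO_USER /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["zkServer.sh", "start-foreground"]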

Fails to Connect to Port 3888

I suspect this is a simple misconfiguration on my part, but for some reason zookeeper 3.5 isn't listening on 3888 and won't come up as an ensemble.

The only configuration change I've made to upgrade from 3.4 to 3.5 is editing the ZOO_SERVERS environment variable.

Check Port Connection:

$ telnet 172.27.1.35 3888
Trying 172.27.1.35...
telnet: Unable to connect to remote host: Connection refused

Logs (this is the same for each host/zk instance):

$ docker logs zookeeper
...
2018-02-27 17:52:10,613 [myid:5] - WARN  [WorkerSender[myid=5]:QuorumCnxManager@457] - Cannot open channel to 4 at election address /172.27.1.35:3888
java.net.ConnectException: Connection refused (Connection refused)
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:443)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:486)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:421)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:486)
	at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:465)
	at java.lang.Thread.run(Thread.java:748)
...

Docker Configuration:

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                                                            NAMES
b57bc2ce1be9        zookeeper:3.5       "/docker-entrypoin..."   6 minutes ago       Up 6 minutes        0.0.0.0:2181->2181/tcp, 0.0.0.0:2888->2888/tcp, 0.0.0.0:3888->3888/tcp, 0.0.0.0:8080->8080/tcp   zookeeper
$ docker inspect zookeeper
[
    {
        "Id": "b57bc2ce1be97b1da2d41fce1a22e27d815ee0e6162e371da8411ca6d14fe016",
        "Created": "2018-02-27T17:48:24.112542148Z",
        "Path": "/docker-entrypoint.sh",
        "Args": [
            "zkServer.sh",
            "start-foreground"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 2976,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2018-02-27T17:48:24.442237938Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:21141b14d33ee38abb97b37645e185e8922839b1f7d2d538ac07f75e47f79269",
        "ResolvConfPath": "/var/lib/docker/containers/b57bc2ce1be97b1da2d41fce1a22e27d815ee0e6162e371da8411ca6d14fe016/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/b57bc2ce1be97b1da2d41fce1a22e27d815ee0e6162e371da8411ca6d14fe016/hostname",
        "HostsPath": "/var/lib/docker/containers/b57bc2ce1be97b1da2d41fce1a22e27d815ee0e6162e371da8411ca6d14fe016/hosts",
        "LogPath": "/var/lib/docker/containers/b57bc2ce1be97b1da2d41fce1a22e27d815ee0e6162e371da8411ca6d14fe016/b57bc2ce1be97b1da2d41fce1a22e27d815ee0e6162e371da8411ca6d14fe016-json.log",
        "Name": "/zookeeper",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "docker-default",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "default",
            "PortBindings": {
                "2181/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "2181"
                    }
                ],
                "2888/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "2888"
                    }
                ],
                "3888/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "3888"
                    }
                ],
                "8080/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "8080"
                    }
                ]
            },
            "RestartPolicy": {
                "Name": "always",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Dns": null,
            "DnsOptions": null,
            "DnsSearch": null,
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "shareable",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "Tmpfs": {
                "/data": "rw,size=1G"
            },
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": null,
            "DeviceCgroupRules": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/18858c553feee2d0340cf102411206db072a11cedb33e7f089545c18397fd6e6-init/diff:/var/lib/docker/overlay2/759cb40e5abea1383926f56fca033df56f258b565ba5d2d308e88974ac134bc9/diff:/var/lib/docker/overlay2/87d1ce537310cec8c9519c1b891e06a71cf4c85194a4dacc1bfaeb07d3c11d66/diff:/var/lib/docker/overlay2/13254cf8a41fbb5c96e97b33e134f436bee3124e1ecf48ae4d1f44935d07074e/diff:/var/lib/docker/overlay2/e089d7d80aba98cb204320ad0dd43def9452b1c05d5134faffbf9aa27730ef5e/diff:/var/lib/docker/overlay2/b5a31c87f007cbfc63fa4e810bc2a925a6cbf2f9952a4769439cdec4e3d17824/diff:/var/lib/docker/overlay2/b50150ec0f99701848dc38a3e7b72ff1ec89603b64c4dd1cf9857a6546a19875/diff:/var/lib/docker/overlay2/962303bcabf8087dd8485b0c7cfca5682bef37e6b7ce30d552672d8171c3b215/diff",
                "MergedDir": "/var/lib/docker/overlay2/18858c553feee2d0340cf102411206db072a11cedb33e7f089545c18397fd6e6/merged",
                "UpperDir": "/var/lib/docker/overlay2/18858c553feee2d0340cf102411206db072a11cedb33e7f089545c18397fd6e6/diff",
                "WorkDir": "/var/lib/docker/overlay2/18858c553feee2d0340cf102411206db072a11cedb33e7f089545c18397fd6e6/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            {
                "Type": "volume",
                "Name": "654db3b70d80482433a019e56dcee9e50ab45407b895469bf97cf5ef33206ddc",
                "Source": "/var/lib/docker/volumes/654db3b70d80482433a019e56dcee9e50ab45407b895469bf97cf5ef33206ddc/_data",
                "Destination": "/data",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            },
            {
                "Type": "volume",
                "Name": "679dc5839d62c11866f17e50f383aa691f4d1f585b31429bcd3e207af78aee3c",
                "Source": "/var/lib/docker/volumes/679dc5839d62c11866f17e50f383aa691f4d1f585b31429bcd3e207af78aee3c/_data",
                "Destination": "/datalog",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "zk5",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "2181/tcp": {},
                "2888/tcp": {},
                "3888/tcp": {},
                "8080/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "ZOO_DATA_DIR=/data",
                "ZOO_MY_ID=5",
                "ZOO_SERVERS=server.1=172.27.1.140:2888:3888;2181 server.2=172.27.1.152:2888:3888;2181 server.3=172.27.1.217:2888:3888;2181 server.4=172.27.1.35:2888:3888;2181 server.5=0.0.0:2888:3888;2181",
                "ZOO_DATA_LOG_DIR=/datalog",
                "ZOO_CONF_DIR=/conf",
                "ZOO_TICK_TIME=2000",
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-1.8-openjdk/jre/bin:/usr/lib/jvm/java-1.8-openjdk/bin:/zookeeper-3.5.3-beta/bin",
                "LANG=C.UTF-8",
                "JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk/jre",
                "JAVA_VERSION=8u151",
                "JAVA_ALPINE_VERSION=8.151.12-r0",
                "ZOO_USER=zookeeper",
                "ZOO_PORT=2181",
                "ZOO_INIT_LIMIT=5",
                "ZOO_SYNC_LIMIT=2",
                "ZOO_MAX_CLIENT_CNXNS=60",
                "ZOO_STANDALONE_ENABLED=false",
                "ZOOCFGDIR=/conf"
            ],
            "Cmd": [
                "zkServer.sh",
                "start-foreground"
            ],
            "ArgsEscaped": true,
            "Image": "zookeeper:3.5",
            "Volumes": {
                "/data": {},
                "/datalog": {}
            },
            "WorkingDir": "/zookeeper-3.5.3-beta",
            "Entrypoint": [
                "/docker-entrypoint.sh"
            ],
            "OnBuild": null,
            "Labels": {}
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "ee46414fa83fe7299001cd953ca45d02b92f54579d3cf28d2009faae4dfd7b82",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "2181/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "2181"
                    }
                ],
                "2888/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "2888"
                    }
                ],
                "3888/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "3888"
                    }
                ],
                "8080/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "8080"
                    }
                ]
            },
            "SandboxKey": "/var/run/docker/netns/ee46414fa83f",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "4bb987d2a7459c37b192dba464cb8dacbbab1baffb3f1a710903e72a9149364e",
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.2",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:02",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "d2906d74e9ae1490b4166d1f04c8f897b3f622aaf9a9c52910f6b01a9a0b35c6",
                    "EndpointID": "4bb987d2a7459c37b192dba464cb8dacbbab1baffb3f1a710903e72a9149364e",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:02",
                    "DriverOpts": null
                }
            }
        }
    }
]

Permission issue with mounted volume for /data and /datalog

I'm trying to use filesystem persistence in Docker using the mounting option when running the zookeeper image:

sudo docker run --name zookeeper --restart always -d -v /data/castle/zookeeper/data:/data -v /data/castle/zookeeper/datalog:/datalog zookeeper

Here is the error raised by the container:

/docker-entrypoint.sh: line 29: /data/myid: Permission denied
/docker-entrypoint.sh: line 29: /data/myid: Permission denied
/docker-entrypoint.sh: line 29: /data/myid: Permission denied
/docker-entrypoint.sh: line 29: /data/myid: Permission denied
/docker-entrypoint.sh: line 29: /data/myid: Permission denied
/docker-entrypoint.sh: line 29: /data/myid: Permission denied
/docker-entrypoint.sh: line 29: /data/myid: Permission denied

As everything is run by Docker, I configured everything to be owned by root and only readable/writable by root:

/data/castle/zookeeper# ls -l
drwx--x--x 2 root root 4096 Sep 30 17:06 data
drwx--x--x 2 root root 4096 Sep 30 17:07 datalog

For example, it works very well with the official Redis image. The Redis folder :

drwx--x--x 2 systemd-timesync root 4096 Sep 30 16:55 redis/

Do you have any idea how to get that running without compromising the security of the data folder?
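One approach (a sketch, not a fix confirmed in this thread) is to give the in-container zookeeper user write access to the host directories instead of restricting them to root; the numeric uid/gid below is only an example and should be checked first:

# find out which uid/gid the image's zookeeper user actually has
docker run --rm zookeeper id zookeeper

# then hand ownership of the mounted directories to that uid/gid (1000:1000 is an assumption)
sudo chown -R 1000:1000 /data/castle/zookeeper/data /data/castle/zookeeper/datalog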

logs will be missing when mounting logs and conf at the same time

logs will be missing when mounting logs and conf at the same time.
zookeeper 3.5.5

Expected behavior

the log4j file should exist.

Actual behavior

the log4j file is missing.

Steps to reproduce the behavior

Run docker-compose up with the following configuration file.

System configuration

docker-compose.yml

version: '3.1'

services:
  zoo1:
    image: zookeeper:3.5.5
    restart: always
    hostname: zoo1
    container_name: zookeeper_1
    ports:
      - 2181:2181
    volumes:
      - /app/zookeeper/zoo1/data:/data
      - /app/zookeeper/zoo1/datalog:/datalog
      - /app/zookeeper/zoo1/logs:/logs
      - /app/zookeeper/zoo1/conf:/conf
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
      ZOO_LOG4J_PROP: "INFO,ROLLINGFILE"
      ZOO_ADMINSERVER_ENABLED: "false"

  zoo2:
    image: zookeeper:3.5.5
    restart: always
    hostname: zoo2
    container_name: zookeeper_2
    ports:
      - 2182:2181
    volumes:
      - /app/zookeeper/zoo2/data:/data
      - /app/zookeeper/zoo2/datalog:/datalog
      - /app/zookeeper/zoo2/logs:/logs
      - /app/zookeeper/zoo2/conf:/conf
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
      ZOO_LOG4J_PROP: "INFO,ROLLINGFILE"
      ZOO_ADMINSERVER_ENABLED: "false"

  zoo3:
    image: zookeeper:3.5.5
    restart: always
    hostname: zoo3
    container_name: zookeeper_3
    ports:
      - 2183:2181
    volumes:
      - /app/zookeeper/zoo3/data:/data
      - /app/zookeeper/zoo3/datalog:/datalog
      - /app/zookeeper/zoo3/logs:/logs
      - /app/zookeeper/zoo3/conf:/conf
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
      ZOO_LOG4J_PROP: "INFO,ROLLINGFILE"
      ZOO_ADMINSERVER_ENABLED: "false"
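A likely explanation (an assumption, not confirmed here) is that mounting an empty host directory over /conf hides the image's log4j.properties, so ZOO_LOG4J_PROP has nothing to act on. A sketch of seeding the host conf directory from the image before starting the stack, assuming the host directories already exist:

docker run --rm zookeeper:3.5.5 tar -cf - -C /conf . | tar -xf - -C /app/zookeeper/zoo1/conf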

When I use the stack.yml in swarm I get an error: only the containers running on the same node can communicate

In swarm, three ZooKeeper containers were created, but only the ones running on the same node can communicate, so the zk cluster can't run properly, even though the networks of the three nodes can reach each other. The error is:

2019-04-05 14:26:02,042 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:QuorumPeer$QuorumServer@184] - Resolved hostname: zoo3 to address: zoo3/10.0.1.4
2019-04-05 14:26:02,042 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:FastLeaderElection@847] - Notification time out: 60000

Error response from daemon: could not get container for dockercompose_zookeeper_1

+ docker run -a STDOUT -a STDERR --link dockercompose_zookeeper_1:zookeeper ches/kafka:0.10.0.0 kafka-topics.sh --create --topic fuse_failure --replication-factor 1 --partitions 1 --zookeeper zookeeper:2181

docker: Error response from daemon: Could not get container for dockercompose_zookeeper_1.

See 'docker run --help'.
It gives me an error as above. What does this error mean and how could I fix it?
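The message usually means the container name given to --link cannot be found on the network that docker run is using (an assumption; the thread does not confirm the cause). A sketch that joins the compose project's network instead, where the network and container names are guesses derived from the project name:

docker run -a STDOUT -a STDERR --network dockercompose_default \
  ches/kafka:0.10.0.0 kafka-topics.sh --create --topic fuse_failure \
  --replication-factor 1 --partitions 1 --zookeeper dockercompose_zookeeper_1:2181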

AutoPurge Not Working

I'm using the image zookeeper:3.4.11 to run a container, and I have configured autopurge. But it's not working: the snapshot and log files are not being deleted. How can I fix this?

autopurge.purgeInterval=24
autopurge.snapRetainCount=10
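For reference, later image tags expose these settings as environment variables (whether the 3.4.11 entrypoint honours them is an assumption worth verifying; on older tags the same effect needs a mounted zoo.cfg containing the two lines above):

docker run -d \
  -e ZOO_AUTOPURGE_PURGEINTERVAL=24 \
  -e ZOO_AUTOPURGE_SNAPRETAINCOUNT=10 \
  zookeeper:3.4.14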

Mounting an external configuration file is invalid

I want to mount an external configuration file, but it doesn't take effect.

$ docker -v
Docker version 18.06.1-ce, build e68fc7a
$ docker run --name some-zookeeper --restart always -d -v $(pwd)/zoo.cfg:/conf/zoo.cfg zookeeper
flythread@pzz:~/apps/docker/zookeeper/zk$ tree
.
├── data
└── zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/flythread/apps/docker/zookeeper/zk/data
clientPort=2181

STATUS
Up Less than a second
Restarting (1) 36 seconds ago

flythread@pzz:~/apps/docker/zookeeper/zk$ sudo docker run --name some-zookeeper --restart always -d -v $(pwd)/zoo.cfg:/conf/zoo.cfg zookeeper
1c4734ee648ba106499f65d8d7cc2b46b843e20b6136cb228b853ec49804fffc
flythread@pzz:~/apps/docker/zookeeper/zk$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                          PORTS                          NAMES
1c4734ee648b        zookeeper           "/docker-entrypoint.…"   5 seconds ago       Up Less than a second           2181/tcp, 2888/tcp, 3888/tcp   some-zookeeper
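One likely cause (an assumption; the restart loop itself doesn't say) is that the mounted zoo.cfg points dataDir at a host path that does not exist inside the container. A sketch of a config that uses the image's own volume paths instead, with the host data directory attached via an extra -v $(pwd)/data:/data mount:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
dataLogDir=/datalog
clientPort=2181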

3.5 syntax for specifying ZOO_SERVERS doesn't work

Hi,

I'm seeing an error when using the 3.5 syntax for specifying servers, e.g.:

ZOO_SERVERS: server.1=s1.com:2888:3888;2181 server.2=s2.com:2888:3888;2181

The error is:

/docker-entrypoint.sh: line 25: [: too many arguments

This is because the if condition on line 25 uses [ rather than [[, and no quotation marks:

 25     if [[ -z $ZOO_SERVERS ]]; then
 26       ZOO_SERVERS="server.1=localhost:2888:3888;$ZOO_PORT"
 27     fi

When $ZOO_SERVERS is expanded, the semicolons are as well. Happily, the error isn't fatal and ZooKeeper will start regardless, as the if condition simply fails. It would still be good to fix so the logs don't show the error.
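For reference, a sketch of the quoted test that avoids word-splitting on the semicolons (a suggestion, not necessarily the fix the maintainers applied):

if [ -z "$ZOO_SERVERS" ]; then
    ZOO_SERVERS="server.1=localhost:2888:3888;$ZOO_PORT"
fi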

According

According to the comments on #66, I tried to adapt the https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/ recipes published at https://github.com/kow3ns/kubernetes-zookeeper/ to your image docker.io/zookeeper:latest.

Steps to reproduce the behavior

I use the following manifest:

# Setup Service to provide access to Zookeeper for clients
apiVersion: v1
kind: Service
metadata:
  # DNS would be like zookeeper.zoons
  name: zookeeper
  labels:
    app: zookeeper
spec:
  ports:
    - port: 2181
      name: client
  selector:
    app: zookeeper
    what: node
---
# Setup Headless Service for StatefulSet
apiVersion: v1
kind: Service
metadata:
  # DNS would be like zookeeper-0.zookeepers.etc
  name: zookeepers
  labels:
    app: zookeeper
spec:
  ports:
    - port: 2888
      name: server
    - port: 3888
      name: leader-election
  clusterIP: None
  selector:
    app: zookeeper
    what: node
---
# Setup max number of unavailable pods in StatefulSet
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zookeeper-pod-disruption-budget
spec:
  selector:
    matchLabels:
      app: zookeeper
  maxUnavailable: 1
---
# Setup Zookeeper StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  # nodes would be named as zookeeper-0, zookeeper-1, zookeeper-2
  name: zookeeper
spec:
  selector:
    matchLabels:
      app: zookeeper
  serviceName: zookeepers
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zookeeper
        what: node
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - zookeeper
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: kubernetes-zookeeper
          imagePullPolicy: Always
          image: "docker.io/zookeeper:latest"
          resources:
            requests:
              memory: "1Gi"
              cpu: "0.5"
          ports:
            - containerPort: 2181
              name: client
            - containerPort: 2888
              name: server
            - containerPort: 3888
              name: leader-election
          # for Clickhouse recomended ZooKeeper settings
          # look https://clickhouse.yandex/docs/en/operations/tips/#zookeeper
          command:
            - bash
            - -x
            - -c
            - |
              {
                echo 'clientPort=2181'
                echo 'tickTime=2000'
                echo 'initLimit=30000'
                echo 'syncLimit=10'
                echo 'maxClientCnxns=2000'
                echo 'maxSessionTimeout=60000000'
                echo 'dataDir=/data'
                echo 'dataLogDir=/datalog'
                echo 'autopurge.snapRetainCount=10'
                echo 'autopurge.purgeInterval=1'
                echo 'preAllocSize=131072'
                echo 'snapCount=3000000'
                echo 'leaderServes=yes'
                echo 'standaloneEnabled=true'
              } > /conf/zoo.cfg &&
              {
                echo "zookeeper.root.logger=CONSOLE"
                echo "zookeeper.console.threshold=INFO"
                echo "log4j.rootLogger=\${zookeeper.root.logger}"
                echo "log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender"
                echo "log4j.appender.CONSOLE.Threshold=\${zookeeper.console.threshold}"
                echo "log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout"
                echo "log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n"
              } > /conf/log4j.properties &&
              echo 'JVMFLAGS="-Xms128M -Xmx1G -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled"' > /conf/java.env &&
              chown -Rv zookeeper "$ZOO_DATA_DIR" "$ZOO_DATA_LOG_DIR" "$ZOO_LOG_DIR" "$ZOO_CONF_DIR" &&
              zkServer.sh start-foreground
          readinessProbe:
            exec:
              command:
                - bash
                - -c
                - "OK=$(echo ruok | nc 127.0.0.1 2181); if [[ \"$OK\" == \"imok\" ]]; then exit 0; else exit 1; fi"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command:
                - bash
                - -c
                - "OK=$(echo ruok | nc 127.0.0.1 2181); if [[ \"$OK\" == \"imok\" ]]; then exit 0; else exit 1; fi"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          volumeMounts:
            - name: datadir-volume
              mountPath: /var/lib/zookeeper
      # Run as a non-privileged user
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      volumes:
        - name: datadir-volume
          emptyDir:
            medium: "" #accepted values:  empty str (means node's default medium) or Memory
            sizeLimit: 1Gi

and the following commands:

kubectl create namespace zoo1ns
kubectl apply -f <file_above>.yaml

Expected behavior

The zookeeper-0 pod is created and starts successfully.

Actual behavior

The zookeeper-0 pod is created, but the liveness/readiness probes work incorrectly. I tried to reproduce the liveness probe inside the kubernetes-zookeeper container shell:

zookeeper@zookeeper-0:/apache-zookeeper-3.5.6-bin$ echo ruok | nc 127.0.0.1 2181
ruok is not executed because it is not in the whitelist.
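Assuming the 3.5 default whitelist only allows srvr, one sketch of a fix is to add the whitelist to the config generated by the command block above (the exact command set is a guess), or alternatively to base the probes on srvr instead of ruok:

echo '4lw.commands.whitelist=srvr,mntr,ruok' >> /conf/zoo.cfg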


System configuration

kubernetes 1.17
zookeeper 3.5.6

ZOO_STANDALONE_ENABLED default is contradictory

The Docker Hub doc says:

ZOO_STANDALONE_ENABLED
Defaults to false. Zookeeper's standaloneEnabled

Prior to 3.5.0, one could run ZooKeeper in Standalone mode or in a Distributed mode. These are separate implementation stacks, and switching between them during run time is not possible. By default (for backward compatibility) standaloneEnabled is set to true. The consequence of using this default is that if started with a single server the ensemble will not be allowed to grow, and if started with more than one server it will not be allowed to shrink to contain fewer than two participants.

At the beginning, it's stated that the default value is false. However, in the detailed description, the default is claimed to be true. Unless the default value behaves like Schrödinger's cat, only one of the two statements can be true.

zookeeper log directory incorrectly specified

Hello,

Noticed a minor issue when using this docker container. The environment variable for the log directory is incorrect. It's set as ZOO_DATA_LOG_DIR where it should instead be ZOO_LOG_DIR as per here.

If this isn't set correctly, you have issues reading the logs!
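For illustration, a minimal run command using the variable name from the ZooKeeper docs (the host path is only an example):

docker run -d -e ZOO_LOG_DIR=/logs -v /var/log/zookeeper:/logs zookeeper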

"Using Docker Swarm when running Zookeeper"?

I found this sentence in the Readme to be really promising:

Consider using Docker Swarm when running Zookeeper in replicated mode.

Is it actually possible to run Zookeeper with Docker Swarm on multiple servers? In "real" swarm mode, where I can easily just scale it up from 3 to 5 servers? How would I do that? Don't I need to set all the other Zookeeper server (IP) addresses via an environment variable?

So far I found only Elastic to be easily docker-swarm-scalable, thanks to DZone. Would love to see this possible with Zookeeper, Apache Kafka, Apache NiFi and Hadoop/HDFS.

Image 3.4.14 or 3.5.5 not starting on Kubernetes

Hi,

I am not sure if this is linked to the use of openjdk:8-jre-slim as the base image for zookeeper, but since the acceptance of #63 and #55 there is an issue with the default user used to start zookeeper.

I have a Zookeeper cluster in version 3.4.13 running on a kubernetes cluster. I tried to upgrade it to 3.4.14 by "simply" changing the image tag. Unfortunately, the pod crashes immediately with the following logs:

ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
2019-06-03 12:45:25,165 [myid:] - INFO  [main:QuorumPeerConfig@136] - Reading configuration from: /conf/zoo.cfg
2019-06-03 12:45:25,171 [myid:] - INFO  [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2019-06-03 12:45:25,171 [myid:] - INFO  [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0
2019-06-03 12:45:25,171 [myid:] - INFO  [main:DatadirCleanupManager@101] - Purge task is not scheduled.
2019-06-03 12:45:25,172 [myid:] - WARN  [main:QuorumPeerMain@116] - Either no config or no quorum defined in config, running  in standalone mode
2019-06-03 12:45:25,184 [myid:] - INFO  [main:QuorumPeerConfig@136] - Reading configuration from: /conf/zoo.cfg
2019-06-03 12:45:25,185 [myid:] - INFO  [main:ZooKeeperServerMain@98] - Starting server
2019-06-03 12:45:25,190 [myid:] - INFO  [main:Environment@100] - Server environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT
2019-06-03 12:45:25,191 [myid:] - INFO  [main:Environment@100] - Server environment:host.name=zookeeper-0.zookeeper-headless.default.svc.cluster.local
2019-06-03 12:45:25,191 [myid:] - INFO  [main:Environment@100] - Server environment:java.version=1.8.0_212
2019-06-03 12:45:25,191 [myid:] - INFO  [main:Environment@100] - Server environment:java.vendor=Oracle Corporation
2019-06-03 12:45:25,191 [myid:] - INFO  [main:Environment@100] - Server environment:java.home=/usr/local/openjdk-8
2019-06-03 12:45:25,191 [myid:] - INFO  [main:Environment@100] - Server environment:java.class.path=/zookeeper-3.4.14/bin/../zookeeper-server/target/classes:/zookeeper-3.4.14/bin/../build/classes:/zookeeper-3.4.14/bin/../zookeeper-server/target/lib/*.jar:/zookeeper-3.4.14/bin/../build/lib/*.jar:/zookeeper-3.4.14/bin/../lib/slf4j-log4j12-1.7.25.jar:/zookeeper-3.4.14/bin/../lib/slf4j-api-1.7.25.jar:/zookeeper-3.4.14/bin/../lib/netty-3.10.6.Final.jar:/zookeeper-3.4.14/bin/../lib/log4j-1.2.17.jar:/zookeeper-3.4.14/bin/../lib/jline-0.9.94.jar:/zookeeper-3.4.14/bin/../lib/audience-annotations-0.5.0.jar:/zookeeper-3.4.14/bin/../zookeeper-3.4.14.jar:/zookeeper-3.4.14/bin/../zookeeper-server/src/main/resources/lib/*.jar:/conf:
2019-06-03 12:45:25,192 [myid:] - INFO  [main:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2019-06-03 12:45:25,192 [myid:] - INFO  [main:Environment@100] - Server environment:java.io.tmpdir=/tmp
2019-06-03 12:45:25,192 [myid:] - INFO  [main:Environment@100] - Server environment:java.compiler=<NA>
2019-06-03 12:45:25,193 [myid:] - INFO  [main:Environment@100] - Server environment:os.name=Linux
2019-06-03 12:45:25,193 [myid:] - INFO  [main:Environment@100] - Server environment:os.arch=amd64
2019-06-03 12:45:25,193 [myid:] - INFO  [main:Environment@100] - Server environment:os.version=4.15.0
2019-06-03 12:45:25,194 [myid:] - INFO  [main:Environment@100] - Server environment:user.name=?
2019-06-03 12:45:25,194 [myid:] - INFO  [main:Environment@100] - Server environment:user.home=?
2019-06-03 12:45:25,194 [myid:] - INFO  [main:Environment@100] - Server environment:user.dir=/zookeeper-3.4.14
2019-06-03 12:45:25,200 [myid:] - ERROR [main:ZooKeeperServerMain@66] - Unexpected exception, exiting abnormally
java.io.IOException: Unable to create data directory /datalog/version-2
	at org.apache.zookeeper.server.persistence.FileTxnSnapLog.<init>(FileTxnSnapLog.java:87)
	at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:112)
	at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:89)
	at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:55)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:119)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:81)

Notice the Server environment:user.name=? and Server environment:user.home=? in the logs? I changed the entrypoint command to an eternal sleep so the pod would keep running while I tried to understand what was happening, and I found this:

I have no name!@zookeeper-0:/zookeeper-3.4.14$ whoami
whoami: cannot find name for user ID 1000
I have no name!@zookeeper-0:/zookeeper-3.4.14$ id zookeeper
uid=999(zookeeper) gid=999(zookeeper) groups=999(zookeeper)

Not sure why, but it seems that the container is not started with the right user. It is using the user with ID 1000 (which does not exist) instead of the zookeeper one (with ID 999). I noticed the same behaviour with the 3.5.5 image.

Again, I am not sure this is linked to the build of the new image but the image with tag 3.4.13 is running correctly.

Do you have an idea about the cause of this issue?
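Given the id output above (uid/gid 999 for the zookeeper user), a sketch of aligning the pod's security context with the image, assuming the pod spec currently pins runAsUser to 1000:

securityContext:
  runAsUser: 999
  fsGroup: 999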

Can't run cluster, connection refused

I am trying to set up a cluster with 3 servers with https://hub.docker.com/_/zookeeper/

ssh zoo1

docker run \
  --name zk1 \
  --restart always \
  -p 2181:2181 \
  -p 2888:2888 \
  -p 3888:3888 \
  -e ZOO_MY_ID=1 \
  -e ZOO_SERVERS="server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888" \
  -d zookeeper

ssh zoo2

docker run \
  --name zk2 \
  --restart always \
  -p 2181:2181 \
  -p 2888:2888 \
  -p 3888:3888 \
  -e ZOO_MY_ID=2 \
  -e ZOO_SERVERS="server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888" \
  -d zookeeper

ssh zoo3

docker run \
  --name zk3 \
  --restart always \
  -p 2181:2181 \
  -p 2888:2888 \
  -p 3888:3888 \
  -e ZOO_MY_ID=3 \
  -e ZOO_SERVERS="server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888" \
  -d zookeeper

But somehow the cluster is not established, connection is refused.

2017-05-23 19:28:53,830 [myid:1] - WARN  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@588] - Cannot open channel to 2 at election address zoo2/1.2.3.4:3888
java.net.ConnectException: Connection refused (Connection refused)
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:562)
	at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:614)
	at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:843)
	at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:913)
2017-05-23 19:28:53,832 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer$QuorumServer@167] - Resolved hostname: zoo2 to address: zoo2/1.2.3.4

Why is this not working? Has zookeeper been tested with multiple servers?
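One common adjustment when each server runs on its own host (a sketch, not confirmed as the fix in this thread) is to bind the server's own entry to 0.0.0.0 so it listens on all interfaces, for example on zoo1:

docker run \
  --name zk1 \
  --restart always \
  -p 2181:2181 \
  -p 2888:2888 \
  -p 3888:3888 \
  -e ZOO_MY_ID=1 \
  -e ZOO_SERVERS="server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888" \
  -d zookeeper

with the corresponding entry set to 0.0.0.0 on zoo2 and zoo3 as well, and with zoo1/zoo2/zoo3 resolving to the correct host IPs from inside each container.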

maxClientCnxns support in Docker image

Hey there, looking at docker-entrypoint.sh (which is the entrypoint of your zookeeper:3.4 image)

I see that it only supports some of the configurations:

# Generate the config only if it doesn't exist
if [ ! -f "$ZOO_CONF_DIR/zoo.cfg" ]; then
    CONFIG="$ZOO_CONF_DIR/zoo.cfg"

    echo "clientPort=$ZOO_PORT" >> "$CONFIG"
    echo "dataDir=$ZOO_DATA_DIR" >> "$CONFIG"
    echo "dataLogDir=$ZOO_DATA_LOG_DIR" >> "$CONFIG"

    echo "tickTime=$ZOO_TICK_TIME" >> "$CONFIG"
    echo "initLimit=$ZOO_INIT_LIMIT" >> "$CONFIG"
    echo "syncLimit=$ZOO_SYNC_LIMIT" >> "$CONFIG"

    for server in $ZOO_SERVERS; do
        echo "$server" >> "$CONFIG"
    done
fi

but not all of them. Is there any way to make this script a bit smarter in order to support maxClientCnxns, for example?

confluentinc/kafka has a good example of it.
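For illustration, a sketch of one way the generated block could be extended, using a variable name that follows the convention of the other settings (a suggestion, not the image's behaviour at the time):

    echo "maxClientCnxns=${ZOO_MAX_CLIENT_CNXNS:-60}" >> "$CONFIG"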

31z4/zookeeper:latest is not 3.5 but 3.4

Before you file an issue here, please keep in mind that your issue may be not related to the image itself. Please make sure that it is, otherwise report the issue upstream.

Expected behavior

As the main README.md states in the via-docker-stack-deploy-or-docker section:

This will start Zookeeper 3.5 in replicated mode. Please note...

I expected the given docker-compose.yml to work with an automatically pulled latest zookeeper, but it turned out that those 2181 ports were missing.

I think the README.md itself is contradictory. One needs to either remove the ;2181 part, as in :2888:3888;2181, or remove the "This will start Zookeeper 3.5 in replicated mode. Please note..." sentence quoted above.

Actual behavior


docker pull 31z4/zookeeper:latest should fetch a version of 3.5

Steps to reproduce the behavior

As you can see below, the history of the zk image reveals its version as 3.4.

servers jinchoi$ docker images | grep 31z4
31z4/zookeeper                               latest                       916e61594d17        13 months ago       148MB
servers jinchoi$ docker history 31z4/zookeeper | grep 3
916e61594d17        13 months ago       /bin/sh -c #(nop)  CMD ["zkServer.sh" "start…   0B                  
<missing>           13 months ago       /bin/sh -c #(nop)  ENTRYPOINT ["/docker-entr…   0B                  
<missing>           13 months ago       /bin/sh -c #(nop) COPY file:edfd7f9189668bf2…   1.13kB              
<missing>           13 months ago       /bin/sh -c #(nop)  ENV PATH=/usr/local/sbin:…   0B                  
<missing>           13 months ago       /bin/sh -c #(nop)  EXPOSE 2181 2888 3888        0B                  
<missing>           13 months ago       /bin/sh -c #(nop)  VOLUME [/data /datalog /l…   0B                  
<missing>           13 months ago       /bin/sh -c #(nop) WORKDIR /zookeeper-3.4.13     0B                  
<missing>           13 months ago       |2 DISTRO_NAME=zookeeper-3.4.13 GPG_KEY=C61B…   60.7MB              
<missing>           13 months ago       /bin/sh -c #(nop)  ARG DISTRO_NAME=zookeeper…   0B                  
<missing>           13 months ago       /bin/sh -c #(nop)  ARG GPG_KEY=C61B346552DC5…   0B                  
<missing>           13 months ago       /bin/sh -c set -ex;     adduser -D "$ZOO_USE…   4.82kB              
<missing>           13 months ago       /bin/sh -c #(nop)  ENV ZOO_USER=zookeeper ZO…   0B                  
<missing>           13 months ago       /bin/sh -c apk add --no-cache     bash     s…   3.9MB               
<missing>           13 months ago       /bin/sh -c set -x  && apk add --no-cache   o…   78.6MB              
<missing>           13 months ago       /bin/sh -c #(nop)  ENV JAVA_ALPINE_VERSION=8…   0B                  
<missing>           13 months ago       /bin/sh -c #(nop)  ENV JAVA_VERSION=8u171       0B                  
<missing>           13 months ago       /bin/sh -c #(nop)  ENV PATH=/usr/local/sbin:…   0B                  
<missing>           13 months ago       /bin/sh -c #(nop)  ENV JAVA_HOME=/usr/lib/jv…   0B                  
<missing>           13 months ago       /bin/sh -c {   echo '#!/bin/sh';   echo 'set…   87B                 
<missing>           13 months ago       /bin/sh -c #(nop)  ENV LANG=C.UTF-8             0B                  
<missing>           13 months ago       /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B                  
<missing>           13 months ago       /bin/sh -c #(nop) ADD file:25c10b1d1b41d46a1…   4.41MB  

Support enabling JMX port.

In bin/zkServer.sh there is this bit of code:


if [ "x$JMXDISABLE" = "x" ] || [ "$JMXDISABLE" = 'false' ]
then
  echo "ZooKeeper JMX enabled by default" >&2
  if [ "x$JMXPORT" = "x" ]
  then
    # for some reason these two options are necessary on jdk6 on Ubuntu
    #   accord to the docs they are not necessary, but otw jconsole cannot
    #   do a local attach
    ZOOMAIN="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=$JMXLOCALONLY org.apache.zookeeper.server.quorum.QuorumPeerMain"
  else
    if [ "x$JMXAUTH" = "x" ]
    then
      JMXAUTH=false
    fi
    if [ "x$JMXSSL" = "x" ]
    then
      JMXSSL=false
    fi
    if [ "x$JMXLOG4J" = "x" ]
    then
      JMXLOG4J=true
    fi
    echo "ZooKeeper remote JMX Port set to $JMXPORT" >&2
    echo "ZooKeeper remote JMX authenticate set to $JMXAUTH" >&2
    echo "ZooKeeper remote JMX ssl set to $JMXSSL" >&2
    echo "ZooKeeper remote JMX log4j set to $JMXLOG4J" >&2
    ZOOMAIN="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=$JMXPORT -Dcom.sun.management.jmxremote.authenticate=$JMXAUTH -Dcom.sun.management.jmxremote.ssl=$JMXSSL -Dzookeeper.jmx.log4j.disable=$JMXLOG4J org.apache.zookeeper.server.quorum.QuorumPeerMain"
  fi
else
    echo "JMX disabled by user request" >&2
    ZOOMAIN="org.apache.zookeeper.server.quorum.QuorumPeerMain"
fi

And it will always end up in the first branch, the one carrying the comment "# for some reason these two options are necessary on jdk6 on Ubuntu".

With the default setup, this will never assign a port to JMX.

Which is related to this ticket: apache/zookeeper#180

Where it is explained that a snapshot can only be forced via JMX.

And then recently this discussion on the zookeeper mailing list: https://mail-archives.apache.org/mod_mbox/zookeeper-user/201907.mbox/%3CCAO05p7CHSbPxjRoPpQ1-E1eLVmS_6jpDRRRqWusJs1P%3DS4%2BUhA%40mail.gmail.com%3E

Resulting in this conclusion: https://mail-archives.apache.org/mod_mbox/zookeeper-user/201907.mbox/%[email protected]%3e

Where it is explained that you can't upgrade from 3.4 to 3.5 without a snapshot, which is why I need JMX.

So that's the nature of my problem and it seems I can only come here to get it fixed or explained.

EDIT: Link to mailing archive fixed.
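
For reference, a minimal sketch of forcing the JMX port on with this image, assuming the entrypoint passes environment variables through to zkServer.sh (JMXPORT is the variable the script above reads; the port number is only an example):

# make zkServer.sh take the JMXPORT branch and listen for JMX on 9010
docker run -d --name zk \
    -e JMXPORT=9010 \
    -p 9010:9010 -p 2181:2181 \
    zookeeper:3.4.13

Connecting from outside the container may additionally need RMI hostname settings, but this at least gets past the default branch and opens a port that tools like jconsole or jmxterm can attach to.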

Key not found in store.

Expected behavior

The ZooKeeper container starts and runs.

Actual behavior

The ZooKeeper container fails to start; the Docker daemon returns an error.

Steps to reproduce the behavior

$ docker run --name myZookeeper --restart always -d 31z4/zookeeper
86ce613f9a90a3334a01875b4e57e724ac30d23f26277c3982d9034e8e1f0943
docker: Error response from daemon: failed to update store for object type *libnetwork.endpointCnt: Key not found in store.

System configuration

centos7
docker Version: 17.03.1-ce
zookeeper:latest

ZOO_MY_ID not used by zookeeper

Documentation states:

ZOO_MY_ID
The id must be unique within the ensemble and should have a value between 1 and 255. Do note that this variable will not have any effect if you start the container with a /data directory that already contains the myid file.

However, starting zookeeper with that ENV (-e ZOO_MY_ID=1):

2017-05-18 17:00:26,613 [myid:] - ERROR [main:QuorumPeerMain@85] - Invalid config, exiting abnormally
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing /conf/zoo.cfg
	at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:154)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:101)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)
Caused by: java.lang.IllegalArgumentException: /var/lib/zookeeper/myid file is missing
	at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:406)
	at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:150)
	... 2 more

And for verification:

bash-4.3# echo $ZOO_MY_ID
1
bash-4.3#

Either Zookeeper is not using the ZOO_MY_ID env variable, or the documentation is missing a key piece of information for starting zookeeper.
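
One way to narrow this down (a sketch, assuming the stock entrypoint, which only writes ZOO_MY_ID into $ZOO_DATA_DIR/myid, /data by default) is to compare where zoo.cfg's dataDir points with where the id file was actually written:

# start a throwaway container and compare dataDir with the location of myid
docker run -d --name zk-test -e ZOO_MY_ID=1 zookeeper
docker exec zk-test sh -c 'grep dataDir /conf/zoo.cfg; cat /data/myid'

If dataDir points at /var/lib/zookeeper (as in the stack trace above) while myid sits in /data, the two simply need to agree.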

zookeeper maxClientCnxns parameter

We need to be able to set this parameter via a Docker environment variable, i.e. have it written into zoo.cfg.

In the Dockerfile:

ZOO_MAX_CLIENT_CNXNS=60

And after that line we should include:

echo "maxClientCnxns=$ZOO_MAX_CLIENT_CNXNS" >> "$CONFIG"
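
Until something like that is merged, one possible workaround (a sketch; it assumes the image reads its configuration from /conf/zoo.cfg and skips generating a config when one already exists) is to bind-mount a complete zoo.cfg of your own with maxClientCnxns set:

# override the generated config with a local one that sets maxClientCnxns
docker run -d --name zk \
    -v "$PWD/zoo.cfg:/conf/zoo.cfg:ro" \
    zookeeper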

docker run exec format error

docker run --rm \
    --name=zookser001 \
    --net=host \
    -e ZOO_USER=whoami \
    -e ZOO_MY_ID=1 \
    -e ZOO_DATA_DIR=/opt/zookeeper/data \
    -e ZOO_DATA_LOG_DIR=/opt/zookeeper/datalog \
    -e ZOO_LOG_DIR=/opt/zookeeper/logs \
    -e ZOO_SERVERS="server.1=zook001:2888:3888 server.2=zook002:2888:3888 server.3=zook003:2888:3888" \
    zook:3.5.4-beta
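
For what it's worth, "exec format error" usually means the image was built for a different CPU architecture than the host, or the entrypoint script has a broken shebang. A quick check, using the image name from the command above:

# compare the platform the image was built for with the host architecture
docker image inspect --format '{{.Os}}/{{.Architecture}}' zook:3.5.4-beta
uname -m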

Healthcheck

Expected behavior

If the zookeeper inside the container is no longer reachable, or any error occurs that keeps it from working, the container should report a failing healthcheck so it can be restarted by the docker engine.

Actual behavior

No matter what happens to the zookeeper inside, the container keeps running and appears healthy. This leads to broken connections and an unreachable zookeeper, and it is not even noticed by operations.

Steps to reproduce the behavior

Start an instance of the zookeeper image. Break the zookeeper inside (We're currently investigating what causes the crashes and connection aborts, so we don't know exactly what breaks the connections).

But we get lots of this stuff:

Refusing session request for client /172.31.0.1:46016 as it has seen zxid 0x4000000000 our last zxid is 0x3e00000cc9 client must try another server,

System configuration

Docker swarm, three instances (however, due to the missing healthcheck inside the image, we think this is irrelevant to the issue).
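
As a stopgap until the image ships its own HEALTHCHECK, something along these lines can be added at run time (a sketch, assuming nc is available in the image and the client port is 2181; plain Docker only marks the container unhealthy, while Swarm will actually replace the task):

# mark the container unhealthy when ZooKeeper stops answering "ruok"
docker run -d --name zk \
    --health-cmd='echo ruok | nc -w 2 localhost 2181 | grep imok || exit 1' \
    --health-interval=30s --health-timeout=5s --health-retries=3 \
    zookeeper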

gpg: keyserver receive failed

Expected behavior

Docker Build Success.
Should get the GPG key from Key Server

Actual behavior

  • gpg --keyserver ha.pool.sks-keyservers.net --recv-key C61B346552DC5E0CB53AA84F59147497767E7473
    gpg: directory '/root/.gnupg' created
    gpg: keybox '/root/.gnupg/pubring.kbx' created
    gpg: keyserver receive failed: No keyserver available
  • gpg --keyserver pgp.mit.edu --recv-keys C61B346552DC5E0CB53AA84F59147497767E7473
    gpg: keyserver receive failed: No keyserver available
  • gpg --keyserver keyserver.pgp.com --recv-keys C61B346552DC5E0CB53AA84F59147497767E7473
    gpg: keyserver receive failed: No keyserver available

Steps to reproduce the behavior

Clone this repository (master branch)
cd 3.4.13
docker build -t zk .

System configuration

Ubuntu 18.04.1 LTS

I want to build the official zookeeper Docker image.
I am facing an error while receiving the GPG key. I checked the key server, and C61B346552DC5E0CB53AA84F59147497767E7473 is not available there.
I am not behind any proxy.
Should I change the GPG key in my Dockerfile?

Can anyone help?
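
One thing worth trying before changing the key (a workaround sketch; whether a given keyserver is reachable from your network is not guaranteed) is a different keyserver over the firewall-friendly port 80:

# fetch the same key from another keyserver over port 80
gpg --keyserver hkp://keyserver.ubuntu.com:80 \
    --recv-keys C61B346552DC5E0CB53AA84F59147497767E7473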

Change in base image removed `nc` command commonly used in kubernetes liveness probes

The change in base image from openjdk:8-jre-alpine to openjdk:8-jre-slim 4 days ago in 01d134a resulted in the nc command no longer being in the image.

This breaks a reasonable way to write liveness/readiness probes when this image is used to deploy zookeeper on Kubernetes, where nc is used to send the ruok word to the zookeeper server. For example, echo ruok | nc -w 1 localhost 2181 | grep imok as a readiness probe.

Would you consider adding nc back into the image by adding it to the apt-get stanza in your Dockerfile?
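
For anyone patching the image locally in the meantime, a sketch of the requested change (the Debian package name netcat-openbsd is an assumption for the slim base; the official Dockerfile may phrase its apt-get stanza differently):

# Dockerfile fragment: add the nc binary back on top of openjdk:8-jre-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends netcat-openbsd \
    && rm -rf /var/lib/apt/lists/*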

This ZooKeeper instance is not currently serving requests

Expected behavior

I ran docker-compose up -d with almost the same docker-compose.yml as yours:

version: '2'
services:
  zoo1:
    image: library/zookeeper:3.5.6
    restart: always
    hostname: zoo1
    container_name: zoo1
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
  zoo2:
    image: library/zookeeper:3.5.6
    restart: always
    hostname: zoo2
    container_name: zoo2
    ports:
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zoo3:2888:3888;2181
  zoo3:
    image: library/zookeeper:3.5.6
    restart: always
    hostname: zoo3
    container_name: zoo3
    ports:
      - 2183:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181

But most of the time, one of the 3 zk nodes doesn't respond to the srvr command via netcat.

Actual behavior

$ docker ps 
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                                                  NAMES
8517b00c795c        zookeeper:3.5.6       "/docker-entrypoint.…"   7 seconds ago       Up 3 seconds        2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, 8080/tcp   zoo1
e8449449b29e        zookeeper:3.5.6       "/docker-entrypoint.…"   7 seconds ago       Up 4 seconds        2888/tcp, 3888/tcp, 8080/tcp, 0.0.0.0:2182->2181/tcp   zoo2
ffe1763ed36a        zookeeper:3.5.6       "/docker-entrypoint.…"   7 seconds ago       Up 4 seconds        2888/tcp, 3888/tcp, 8080/tcp, 0.0.0.0:2183->2181/tcp   zoo3
$ echo srvr  | nc localhost 2181
This ZooKeeper instance is not currently serving requests

$ echo srvr  | nc localhost 2182
Zookeeper version: 3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 10/08/2019 20:18 GMT
Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x0
Mode: follower
Node count: 5

$ echo srvr  | nc localhost 2183
Zookeeper version: 3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 10/08/2019 20:18 GMT
Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x100000000
Mode: leader
Node count: 5
Proposal sizes last/min/max: -1/-1/-1

Steps to reproduce the behavior

I just ran docker-compose up -d with the docker-compose.yml on the main README.md

I researched the error message a bit and I found this.

Do you think the docker-compose.yml in your README.md needs to be fixed by adding some ordering constraints between zk ids 1, 2 and 3?

System configuration

Whether I run it on my macOS or on Ubuntu 16, I get the same error. :(
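
In the meantime, a small sketch that waits for every node to settle before using the ensemble (ports as in the compose file above; assumes nc is available on the host):

# poll each mapped client port until srvr reports a Mode (leader or follower)
for port in 2181 2182 2183; do
    until echo srvr | nc -w 2 localhost "$port" | grep -q 'Mode:'; do
        echo "waiting for zookeeper on port $port..."
        sleep 2
    done
done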

zkCleanup doesn't work due to bad permissions and environment variables not being available to the zookeeper user

I recently had this conversation on the zookeeper mailing list:

https://mail-archives.apache.org/mod_mbox/zookeeper-user/201907.mbox/browser

Concerning the .log files not being cleaned up, despite setting purgeInterval.

From debugging this, I learned that the docker image sets a bunch of environment variables which are not available to the zookeeper user. This can be tested by going into the container, switching to the zookeeper user (su - zookeeper) and trying to echo those environment variables.

For instance, the zookeeper configuration dir is not set, and this messes with zkEnv.sh, which tries to find it in /zookeeper-3.4.13/bin/../conf/

While the docker image actually places the configuration in /conf.

The main issue, though, is that the cleanup of the log files simply does not work. Even if you set the environment variables manually and execute the generated java command, the old logs are not cleaned up and no error is printed.

Looking for any kind of advice on this.
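
In case it helps others debugging this, here is a sketch of running the purge by hand with the directories this image actually uses passed in explicitly. The paths are the image defaults, the working directory /zookeeper-3.4.13 comes from the image history above, the argument order follows the usage string of the stock 3.4 zkCleanup.sh (dataLogDir [snapDir] -n count), and the container name is a placeholder:

# one-off purge keeping the 3 most recent snapshots and transaction logs
docker exec -e ZOOCFGDIR=/conf -e ZOO_LOG_DIR=/logs <container-name> \
    bin/zkCleanup.sh /datalog /data -n 3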

Wrong docker-compose example

The docker-compose example in the README does not work as-is; there should be a working one available. I propose adding working docker-compose examples for zookeeper 3.4 and 3.5.

The difference between the two is in the ZOO_SERVERS parameter.

Zookeeper 3.4

version: '3.7'

services:
  zookeeper1:
    image: zookeeper:3.4.14
    restart: always
    hostname: zookeeper1
    ports:
      - 2181:2181
    networks: 
      zk-network:
        ipv4_address: 10.1.1.41
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888

  zookeeper2:
    image: zookeeper:3.4.14
    restart: always
    hostname: zookeeper2
    ports:
      - 2182:2181
    networks: 
      zk-network:
        ipv4_address: 10.1.1.42
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zookeeper1:2888:3888 server.2=0.0.0.0:2888:3888 server.3=zookeeper3:2888:3888

  zookeeper3:
    image: zookeeper:3.4.14
    restart: always
    hostname: zookeeper3
    ports:
      - 2183:2181
    networks: 
      zk-network:
        ipv4_address: 10.1.1.43
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=0.0.0.0:2888:3888

networks:
  zk-network:
    driver: bridge
    ipam:
     driver: default
     config:
       - subnet: 10.1.1.0/24

Zookeeper 3.5

version: '3.7'

services:
  zookeeper1:
    image: zookeeper:3.5.5
    restart: always
    hostname: zookeeper1
    ports:
      - 2181:2181
    networks: 
      zk-network:
        ipv4_address: 10.1.1.41
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zookeeper2:2888:3888;2181 server.3=zookeeper3:2888:3888;2181

  zookeeper2:
    image: zookeeper:3.5.5
    restart: always
    hostname: zookeeper2
    ports:
      - 2182:2181
    networks: 
      zk-network:
        ipv4_address: 10.1.1.42
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zookeeper1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zookeeper3:2888:3888;2181

  zookeeper3:
    image: zookeeper:3.5.5
    restart: always
    hostname: zookeeper3
    ports:
      - 2183:2181
    networks: 
      zk-network:
        ipv4_address: 10.1.1.43
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zookeeper1:2888:3888;2181 server.2=zookeeper2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181

networks:
  zk-network:
    driver: bridge
    ipam:
     driver: default
     config:
       - subnet: 10.1.1.0/24

Trying to install offline

Hi team. I'm fairly new to Docker and I'm trying to set up a zookeeper cluster in an offline environment.

My problem is that I can't pull via docker, so I downloaded the zookeeper-3.4.12.tar.gz directly from the repository.

And when I try to load the image, as I did for the UCP module, onto my docker I get this json error :
docker load < /root/zookeeper-3.4.12.tar.gz
open /var/lib/docker/tmp/docker-import-242583754/zookeeper-3.4.12/json: no such file or directory

I'm guessing I'm doing something wrong, because in your Dockerfile there is a wget for an .asc file as well (at the minimum, this is wrong).

Is it possible to get a little bit of guidance?

Regards,
Newt
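
For what it's worth, docker load expects an archive produced by docker save (an image tarball with layer metadata), not the ZooKeeper release tarball. A sketch of the usual offline workflow, assuming some machine with access to Docker Hub is available:

# on a machine that can reach Docker Hub
docker pull zookeeper:3.4.12
docker save zookeeper:3.4.12 -o zookeeper-image.tar

# copy zookeeper-image.tar to the offline host, then
docker load -i zookeeper-image.tar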
