
redis-cluster's Introduction

Hello There

redis-cluster's People

Contributors

alexcolorado, sanderploegsma

redis-cluster's Issues

Waiting for the cluster to join (k8s 1.10, flannel v0.10.0)

Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.30.6.8:6379 to 172.30.6.7:6379
Adding replica 172.30.8.8:6379 to 172.30.8.10:6379
Adding replica 172.30.5.13:6379 to 172.30.5.14:6379
M: bbbb13673757cca93b8887a4a6372a96c1155345 172.30.6.7:6379
slots:[0-5460] (5461 slots) master
M: 4de050ea712c7664def4f8c3f5d5ed1fc9a57278 172.30.8.10:6379
slots:[5461-10922] (5462 slots) master
M: c5386ea9cf29b3786b98c2b90d3cceae3873efee 172.30.5.14:6379
slots:[10923-16383] (5461 slots) master
S: 6dac6a67c3d39a9d62038d9c2b83f925b9de9a76 172.30.6.8:6379
replicates bbbb13673757cca93b8887a4a6372a96c1155345
S: 649e718cdb5344b956e7a10f0809acfc65634dd8 172.30.8.8:6379
replicates 4de050ea712c7664def4f8c3f5d5ed1fc9a57278
S: 3acc404a36b7f5dcb7184da9144f42a895351845 172.30.5.13:6379
replicates c5386ea9cf29b3786b98c2b90d3cceae3873efee
Can I set the above configuration? (type 'yes' to accept): yes
Nodes configuration updated
Assign a different config epoch to each node
Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
........................ (the dots continue indefinitely; the join never completes)

The nodes never join the cluster. Thanks.

${POD_IP} in /data/nodes.conf

Hi there,

I installed redis-cluster in GKE. I'm doing failover tests, and I restarted all but one of the redis-cluster pods (deleting them one by one).

Before the restarts, the original cluster info looked like this:
f20056c91f0f3b33c506928ecaa1f1dddc6d320a 10.20.5.15:6379@16379 slave 8976e3638ae7c64e3815b4d716a70802b6333f70 0 1537506969822 6 connected
8976e3638ae7c64e3815b4d716a70802b6333f70 10.20.6.88:6379@16379 master - 0 1537506968000 3 connected 10923-16383
c20cac02214e6ea1d7bca9cadaa81ef0ea6d6316 10.20.6.87:6379@16379 master - 0 1537506968818 1 connected 0-5460
1ed73f245e5b0e76b230333af052beaaa8c2e118 10.20.3.10:6379@16379 myself,slave c20cac02214e6ea1d7bca9cadaa81ef0ea6d6316 0 1537506966000 4 connected
dc0673e5fdf55c8016fdb4e74e4714a8c011c019 10.20.6.89:6379@16379 slave 1b40f01a19c940df1779f159c5a40f14fc88e2ed 0 1537506969000 5 connected
1b40f01a19c940df1779f159c5a40f14fc88e2ed 10.20.5.14:6379@16379 master - 0 1537506967000 2 connected 5461-10922

After restarting the pods, each redis-cluster node reports the literal string ${POD_IP} as its own address in the cluster nodes output.
Everything seems to be working fine otherwise.
I just wanted to bring this to your attention in case there is a bug in the latest patch.
I will probably delete and recreate the cluster from scratch, but I will leave it as is for a few days in case you need me to check anything.

$ kubectl exec redis-cluster-0 -- redis-cli cluster nodes
1ed73f245e5b0e76b230333af052beaaa8c2e118 10.20.3.10:6379@16379 slave c20cac02214e6ea1d7bca9cadaa81ef0ea6d6316 0 1537510245134 10 connected
8976e3638ae7c64e3815b4d716a70802b6333f70 10.20.5.17:6379@16379 master - 0 1537510244131 3 connected 10923-16383
c20cac02214e6ea1d7bca9cadaa81ef0ea6d6316 ${POD_IP}:6379@16379 myself,master - 0 1537510246000 10 connected 0-5460
dc0673e5fdf55c8016fdb4e74e4714a8c011c019 10.20.6.93:6379@16379 slave 1b40f01a19c940df1779f159c5a40f14fc88e2ed 0 1537510245000 5 connected
1b40f01a19c940df1779f159c5a40f14fc88e2ed 10.20.6.92:6379@16379 master - 0 1537510247139 2 connected 5461-10922
f20056c91f0f3b33c506928ecaa1f1dddc6d320a 10.20.5.18:6379@16379 slave 8976e3638ae7c64e3815b4d716a70802b6333f70 0 1537510246137 6 connected

$ kubectl exec redis-cluster-1 -- redis-cli cluster nodes
dc0673e5fdf55c8016fdb4e74e4714a8c011c019 10.20.6.93:6379@16379 slave 1b40f01a19c940df1779f159c5a40f14fc88e2ed 0 1537510258000 5 connected
8976e3638ae7c64e3815b4d716a70802b6333f70 10.20.5.17:6379@16379 master - 0 1537510256000 3 connected 10923-16383
f20056c91f0f3b33c506928ecaa1f1dddc6d320a 10.20.5.18:6379@16379 slave 8976e3638ae7c64e3815b4d716a70802b6333f70 0 1537510258569 6 connected
1ed73f245e5b0e76b230333af052beaaa8c2e118 10.20.3.10:6379@16379 slave c20cac02214e6ea1d7bca9cadaa81ef0ea6d6316 0 1537510259571 10 connected
1b40f01a19c940df1779f159c5a40f14fc88e2ed ${POD_IP}:6379@16379 myself,master - 0 1537510256000 2 connected 5461-10922
c20cac02214e6ea1d7bca9cadaa81ef0ea6d6316 10.20.6.96:6379@16379 master - 0 1537510257566 10 connected 0-5460

$ kubectl exec redis-cluster-2 -- redis-cli cluster nodes
dc0673e5fdf55c8016fdb4e74e4714a8c011c019 10.20.6.93:6379@16379 slave 1b40f01a19c940df1779f159c5a40f14fc88e2ed 0 1537510270489 5 connected
c20cac02214e6ea1d7bca9cadaa81ef0ea6d6316 10.20.6.96:6379@16379 master - 0 1537510269488 10 connected 0-5460
8976e3638ae7c64e3815b4d716a70802b6333f70 ${POD_IP}:6379@16379 myself,master - 0 1537510268000 3 connected 10923-16383
1b40f01a19c940df1779f159c5a40f14fc88e2ed 10.20.6.92:6379@16379 master - 0 1537510268000 2 connected 5461-10922
f20056c91f0f3b33c506928ecaa1f1dddc6d320a 10.20.5.18:6379@16379 slave 8976e3638ae7c64e3815b4d716a70802b6333f70 0 1537510267484 6 connected
1ed73f245e5b0e76b230333af052beaaa8c2e118 10.20.3.10:6379@16379 slave c20cac02214e6ea1d7bca9cadaa81ef0ea6d6316 0 1537510269000 10 connected

etc.

Thanks!
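
For reference, POD_IP is normally injected into the container via the downward API; a minimal sketch of that env entry follows (an assumption about this chart's manifest, not a quote from it). If the substitution of this variable into the Redis config fails, the literal string ends up in nodes.conf exactly as shown above:

    env:
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP   # the pod's IP, filled in by the kubelet at container start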

statefulset failover IP change

Hi there,

First, thanks very much for contributing your Redis cluster solution; it is super useful.

I just have one concern: a StatefulSet only guarantees a stable DNS name for each node, but the pod IP can still change. Will this break CLUSTER MEET in cluster mode?

Thanks,
Jiabin

If you use the Flannel network in k8s, you can't succeed

        Performing hash slots allocation on 6 nodes...
        Master[0] -> Slots 0 - 5460
        Master[1] -> Slots 5461 - 10922
        Master[2] -> Slots 10923 - 16383
        Adding replica 172.30.6.8:6379 to 172.30.6.7:6379
        Adding replica 172.30.8.8:6379 to 172.30.8.10:6379
        Adding replica 172.30.5.13:6379 to 172.30.5.14:6379
        M: bbbb13673757cca93b8887a4a6372a96c1155345 172.30.6.7:6379
        slots:[0-5460] (5461 slots) master
        M: 4de050ea712c7664def4f8c3f5d5ed1fc9a57278 172.30.8.10:6379
        slots:[5461-10922] (5462 slots) master
        M: c5386ea9cf29b3786b98c2b90d3cceae3873efee 172.30.5.14:6379
        slots:[10923-16383] (5461 slots) master
        S: 6dac6a67c3d39a9d62038d9c2b83f925b9de9a76 172.30.6.8:6379
        replicates bbbb13673757cca93b8887a4a6372a96c1155345
        S: 649e718cdb5344b956e7a10f0809acfc65634dd8 172.30.8.8:6379
        replicates 4de050ea712c7664def4f8c3f5d5ed1fc9a57278
        S: 3acc404a36b7f5dcb7184da9144f42a895351845 172.30.5.13:6379
        replicates c5386ea9cf29b3786b98c2b90d3cceae3873efee
        Can I set the above configuration? (type 'yes' to accept): yes
        Nodes configuration updated
        Assign a different config epoch to each node
        Sending CLUSTER MEET messages to join the cluster
        Waiting for the cluster to join
        ........................ (the dots continue indefinitely; the join never completes)

The nodes never join the cluster. Thanks.
Please don't delete my issues. If you use the Flannel network in k8s, cluster creation does not succeed!
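
A common first check in this situation (a suggestion, not something from the issue): CLUSTER MEET gossips over the cluster bus port, which is the client port plus 10000 (16379 here), and some overlay-network setups block or misroute it. From inside one Redis pod, using IPs from the log above:

    redis-cli -h 172.30.8.10 -p 6379 ping   # client port should answer PONG
    nc -zv 172.30.8.10 16379                # bus port: plain TCP check (assumes nc is in the image)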

Docker build fails (redis requires Ruby version >= 2.2.2)

Looks like the public Redis image is based on Debian Jessie, which only has Ruby 2.1 available.

ERROR:  Error installing redis:
	redis requires Ruby version >= 2.2.2.
The command '/bin/sh -c apt-get -y update   && apt-get -y upgrade   && apt-get -y --no-install-recommends install ruby   && gem install redis   && apt-get -y autoremove   && apt-get -y clean' returned a non-zero code: 1

I'm hacking away at it to get ruby >= 2.2.2 installed but figured I'd drop this here for posterity (and in case you know of a solution quicker than I am able to find one).
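
One commonly used workaround (a sketch, not verified against this repo's Dockerfile): the redis gem 4.x releases are what require Ruby >= 2.2.2, so pinning the last 3.x release keeps Ruby 2.1 workable:

    # Dockerfile excerpt (hypothetical): pin the last redis gem line that supports Ruby 2.1
    RUN apt-get -y update \
      && apt-get -y --no-install-recommends install ruby \
      && gem install redis -v 3.3.5 \
      && apt-get -y autoremove \
      && apt-get -y clean

Alternatively, Redis 5 moved redis-trib's functionality into redis-cli --cluster, which removes the Ruby dependency entirely.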

Auto-start of

This is a nice clean example of redis cluster running in k8s. The one challenge is the cluster initialization and adding/removing nodes.

Is there any clean, self-managing (i.e. autonomous) way to do it? Since you are using a StatefulSet, you know the names of the (initial) pods will be redis-cluster-0, redis-cluster-1, etc. You could probably even use two StatefulSets if you wanted guarantees about which pods are masters vs. slaves.

Is there no way to have redis-cluster-0 automatically come up and initialize a cluster with cluster-1 and cluster-2, or for that matter, just itself, and have cluster-1 and cluster-2 self-register to cluster-0? Same for expansion.

In a kube environment, having to do kubectl exec ... is not optimal (or recommended).

I am kind of surprised that redis doesn't have any option in the config file for saying, "here are your initial cluster peers".

You kind of wish an InitContainer could do this, but init containers complete before the pod's main containers run, so that will not help. Perhaps some extension of the image, so it spins up a child process? Or a sidecar container (although initialization is a one-off and shouldn't be restarted, whereas restart policies are pod-level, not container-level)? Or a post-start lifecycle hook? A one-shot Job is sketched below.
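
For what it's worth, one pattern that avoids running kubectl exec from a workstation is a one-shot Job that resolves the pods and runs the create command, similar to the redis-cluster-init job that appears in a later issue's logs. A hedged sketch (the service and pod names here are assumptions, not this repo's actual manifests):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: redis-cluster-init
    spec:
      template:
        spec:
          restartPolicy: OnFailure    # a Job is one-shot; failed runs are retried, not restarted forever
          containers:
          - name: init
            image: redis:5            # redis-cli --cluster (no Ruby) needs Redis 5+
            command: ["sh", "-c"]
            args:
            - |
              # Resolve each pod through the headless service and build the address list;
              # names are hypothetical.
              NODES=""
              for i in 0 1 2 3 4 5; do
                NODES="$NODES $(getent hosts redis-cluster-$i.redis-cluster | awk '{print $1}'):6379"
              done
              yes yes | redis-cli --cluster create --cluster-replicas 1 $NODES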

clusterIP is None

This is more of a question than an issue, actually: what is the reason for setting clusterIP to None? Wouldn't this make the cluster inaccessible to its clients? Nice work, by the way.
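
For context, clusterIP: None is what makes the Service headless; a minimal sketch of such a Service follows (assumed shape, the repo's actual manifest may differ). DNS for a headless Service returns the individual pod IPs and gives each StatefulSet pod a stable per-pod DNS name, so cluster-aware clients connect to the pods directly instead of through one load-balanced virtual IP:

    apiVersion: v1
    kind: Service
    metadata:
      name: redis-cluster
    spec:
      clusterIP: None          # headless: DNS resolves to the pod IPs, no virtual IP
      selector:
        app: redis-cluster
      ports:
      - name: client
        port: 6379
      - name: gossip
        port: 16379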

Redis 5.0.4 problem with GOSSIP

Hi, when I use Redis 5.0.4 to create a cluster in k8s, I hit a gossip error.
When I run the redis-cli --cluster create *** command, the log shows it connecting to a node at 172.20.17.0:16379.
But 172.20.17.0 is a network address, not the node IP 172.20.17.34. As a workaround, using a nodeSelector to bind all my Redis instances to a single k8s node works.
How can I solve this issue? Thanks.
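
One knob worth knowing about here (a suggestion, assuming Redis >= 4.0; not a confirmed fix for this report) is the cluster announce settings, which tell a node exactly which address to advertise in gossip instead of letting it infer one from the interface:

    # startup override; POD_IP would come from the downward API (assumption)
    exec redis-server /conf/redis.conf \
      --cluster-announce-ip "$POD_IP" \
      --cluster-announce-port 6379 \
      --cluster-announce-bus-port 16379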

Auto-Scaling

How does Redis cluster handle auto-scaling in and out depending on load?

In the case of a Kubernetes cluster, there is the horizontal pod autoscaler. Does it work with Redis cluster? (See the sketch below.)
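
For reference, an HPA can target a StatefulSet, since StatefulSets expose the scale subresource; a minimal sketch (names assumed). The caveat is that scaling the StatefulSet only adds empty Redis pods: they still need redis-cli --cluster add-node and a reshard before they own any hash slots, so an HPA alone does not give a working scale-out:

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: redis-cluster
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: StatefulSet
        name: redis-cluster
      minReplicas: 6
      maxReplicas: 12
      targetCPUUtilizationPercentage: 80   # scale on CPU; new pods still need add-node + reshard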

Waiting for the cluster to join takes forever....

$ k logs -f redis-cluster-init-27fkk
+ yes yes
++ kubectl get pods -l app=redis-cluster -o 'jsonpath={range.items[*]}{.status.podIP}:6379 '
+ redis-cli --cluster create --cluster-replicas 1 10.1.1.97:6379 10.1.1.98:6379 10.1.1.99:6379 10.1.1.100:6379 10.1.1.101:6379 10.1.1.102:6379
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.1.1.101:6379 to 10.1.1.97:6379
Adding replica 10.1.1.102:6379 to 10.1.1.98:6379
Adding replica 10.1.1.100:6379 to 10.1.1.99:6379
M: 221c3e7f661f31dc842b07377250ac75fa996ab3 10.1.1.97:6379
   slots:[0-5460] (5461 slots) master
M: acc3fe09d0217e72be41e2fd935779295986c10e 10.1.1.98:6379
   slots:[5461-10922] (5462 slots) master
M: a3f890d05a2ccb8648c46d8cabfe0752272a74ff 10.1.1.99:6379
   slots:[10923-16383] (5461 slots) master
S: ef86021ee4fc53f5d643512604b41604f5065d1c 10.1.1.100:6379
   replicates a3f890d05a2ccb8648c46d8cabfe0752272a74ff
S: f7f8362b66a8ed1508c961bbd2a26044475f4877 10.1.1.101:6379
   replicates 221c3e7f661f31dc842b07377250ac75fa996ab3
S: d765c528867f297cda8e943bce5dae07954286a2 10.1.1.102:6379
   replicates acc3fe09d0217e72be41e2fd935779295986c10e
Can I set the above configuration? (type 'yes' to accept): >>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
........................ (the dots continue indefinitely; the join never completes)

It should be $(HOSTNAME) but not $(hostname)

It should be $(HOSTNAME), not $(hostname), in both the readinessProbe and livenessProbe parts of redis-cluster.yml. I think $(hostname) is resolved as an empty string, which makes the redis-cli command end up blocking in interactive mode.
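
For context, a sketch of such a probe (assumed; the actual redis-cluster.yml may differ). Which spelling is right depends on how the command is invoked: Kubernetes substitutes $(VAR) from the container's environment only when it appears directly in the command array, while $(hostname) inside a sh -c string is ordinary shell command substitution and runs the hostname binary:

    readinessProbe:
      exec:
        # inside `sh -c`, $(hostname) is shell command substitution, not a k8s env var reference
        command: ["sh", "-c", "redis-cli -h $(hostname) ping"]
      initialDelaySeconds: 15
      timeoutSeconds: 5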

Storage class missing

First of all - THANKS! You saved me a bunch of time!

I had an issue with deploying the cluster initially and thought I'd share it. I don't have a complete understanding of deploying Kubernetes outside of EKS and local setups, so this could just be something that's assumed:

I deployed the cluster to a namespace (redis-example) on my local machine

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-28T20:03:09Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

$ kubectl get pods -n redis-example
NAME              READY     STATUS    RESTARTS   AGE
redis-cluster-0   0/1       Pending   0          38m

$ kubectl get pvc -n redis-example
NAME                   STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-redis-cluster-0   Pending                                                      38m

$ kubectl describe pvc/data-redis-cluster-0 -n redis-example
Name:          data-redis-cluster-0
Namespace:     redis-example
StorageClass:
Status:        Pending
Volume:
Labels:        app=redis-cluster
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
  Type    Reason         Age                 From                         Message
  ----    ------         ----                ----                         -------
  Normal  FailedBinding  4m (x143 over 39m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

I saw the same thing when deploying this in EKS. I ended up setting the storage class in the yaml:

  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        name: redis-cluster
      namespace: redis-example
    spec:
      storageClassName: default
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi

Now it deploys fine in EKS. Local still failed until I defined a StorageClass there (named default). For newbies it would be useful to add this to the README. Thanks again.
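
For anyone hitting the same thing, a StorageClass named default might look like the following (a sketch: the provisioner shown assumes EKS with the in-tree EBS driver; a local cluster needs whatever hostPath-style provisioner it ships with):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: default
    provisioner: kubernetes.io/aws-ebs   # in-tree EBS provisioner (EKS assumption)
    parameters:
      type: gp2                          # general-purpose SSD volumes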

Issue with Initializing the cluster.

$ kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas \

Unrecognized option or bad number of args for: '--cluster-replicas'
command terminated with exit code 1
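
For comparison, --cluster-replicas takes a number and must be followed by the node addresses; a complete invocation looks like this (the IPs are illustrative, borrowed from another issue's log):

    kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 \
      10.1.1.97:6379 10.1.1.98:6379 10.1.1.99:6379 \
      10.1.1.100:6379 10.1.1.101:6379 10.1.1.102:6379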
