la3rence / websocket-cluster

Scalable spring-cloud project for a WebSocket cluster using a consistent-hashing algorithm. (inactive)

Home Page: https://lawrenceli.me/blog/websocket-cluster

Makefile 0.34% Java 99.33% Dockerfile 0.33%
websocket websocket-cluster consistent-hashing springcloud hashring spring-cloud nacos docker rabbitmq redis gateway-microservice spring-cloud-gateway distributed-systems

websocket-cluster's Introduction


WebSocket Cluster with Spring Cloud in Practice

This project is a hands-on implementation of a WebSocket cluster, built on Spring Cloud.

English Doc

How It Works

We use a consistent-hashing algorithm to build a hash ring. The gateway listens for WebSocket service instances coming online and going offline, and updates the ring dynamically as instances change. When a new instance comes online, its virtual nodes are added to the ring and only the clients whose keys now map to the new instance are reconnected, which keeps the migration cost minimal; exactly how minimal depends on the number of virtual nodes and the fairness of the hash function. Handling an instance going offline is simpler: just disconnect all clients on that instance, and they will reconnect on their own.

The hash ring is also at the core of load balancing. When the gateway forwards a request, it passes through our custom load-balancer filter, which routes to a real node based on whichever business field is being hashed.
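The ring described above can be sketched as follows. This is a minimal illustration, not the project's actual ConsistentHashRouter: a TreeMap keyed by hash holds the virtual nodes, and a key is routed to the first virtual node clockwise from its own hash, wrapping to the start of the ring. The node names and the "#VN" virtual-node suffix are hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.SortedMap;
import java.util.TreeMap;

// Minimal consistent-hash ring sketch: each real node contributes
// `virtualNodes` points on the ring; a key is served by the first
// virtual node at or after its own hash (wrapping around).
class HashRing {
    private final TreeMap<Long, String> ring = new TreeMap<>();
    private final int virtualNodes;

    HashRing(int virtualNodes) {
        this.virtualNodes = virtualNodes;
    }

    public void addNode(String node) {
        for (int i = 0; i < virtualNodes; i++) {
            ring.put(hash(node + "#VN" + i), node);
        }
    }

    public void removeNode(String node) {
        for (int i = 0; i < virtualNodes; i++) {
            ring.remove(hash(node + "#VN" + i));
        }
    }

    // Route a key (e.g. a userId) to a real node; returns null on an empty ring.
    public String route(String key) {
        if (ring.isEmpty()) {
            return null;
        }
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        Long slot = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
        return ring.get(slot);
    }

    private static long hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            // fold the first 8 digest bytes into a long
            for (int i = 0; i < 8; i++) {
                h = (h << 8) | (d[i] & 0xFF);
            }
            return h;
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Because each real node contributes many virtual nodes, adding or removing a node only remaps the keys that fall in the ring segments adjacent to the affected virtual nodes; every other client keeps its current instance.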

Tech Stack

  • Docker (with API access enabled)
  • Redis
  • RabbitMQ
  • Nacos

Local Development

Create a dedicated network for docker-compose.yml:

docker network create compose-network

Build locally, then deploy with a simple docker compose setup:

mvn clean
mvn install -pl gateway -am -amd
mvn install -pl websocket -am -amd
docker build -t websocket:1.0.0 websocket/.
docker build -t gateway:1.0.0 gateway/.
docker-compose up -d
docker ps

You can create new WebSocket instances with docker-compose scale websocket-server=3. I also wrote a frontend for this project to visualize the cluster.

Don't forget to enable Docker's API access; verify it with docker -H tcp://0.0.0.0:2375 ps. To enable it:

Enabling Docker API access on Linux

In the docker.service file, append -H tcp://0.0.0.0:2375 to the line beginning with ExecStart.

# cat /usr/lib/systemd/system/docker.service
ExecStart=...... -H tcp://0.0.0.0:2375
# after saving, restart the Docker daemon
systemctl daemon-reload
systemctl restart docker

Accessing the Docker API on macOS

The best practice is to use alpine/socat to expose the TCP socket. See the socat usage for details.

docker run -itd --name socat \
    -p 0.0.0.0:6666:2375 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    alpine/socat \
    tcp-listen:2375,fork,reuseaddr unix-connect:/var/run/docker.sock

Note that Docker's macOS client provides the hostname docker.for.mac.host.internal, which lets containers reach the host network. I use this address in application.yml as the endpoint for Redis, RabbitMQ, and Nacos, since I run all of them in containers. If you deploy to a server or develop on your own machine, change the address accordingly. I also wrote a Makefile to help me compile and restart the services faster during development, since I haven't set up a CI pipeline for this project; use it as you see fit.

In the code, dependency injection uses constructor injection wherever possible, and detailed logs are printed for analysis.

Frontend

See this React project. Demo screenshot:

Demo

Deployment

Edit docker-compose.yml and add the following host entry. It lets you swap the service endpoints without changing any code or application.yaml; the IP after the hostname should preferably be the server's private (intranet) address. The only change that is strictly required is the Nacos namespace.

 extra_hosts:
   - "docker.for.mac.host.internal:192.168.0.1"

Once all the required infrastructure is ready, start the services directly. Please read the Makefile carefully before using it, e.g.:

make up

Note that make down removes all dangling (none-tagged) images and wipes the contents of Redis.

Contributing

If this project helps you, a star is appreciated. File an Issue for questions; to contribute, fork the project and submit a Pull Request.

websocket-cluster's People

Contributors

dependabot[bot], k8s-ci-bot, la3rence, renovate-bot


websocket-cluster's Issues

`LoadBalancerClientFilter` was deprecated

The gateway of this project uses WebFlux for reactive programming. The traditional LoadBalancerClientFilter class was deprecated in favor of the new ReactiveLoadBalancerClientFilter. I'll try to replace the current filter implementation with it.

For server-initiated pushes, how do you determine which server holds a user's long-lived connection?

Hello, I read the code. It looks like requests are routed to a particular server by overriding the load balancer (please correct me if I'm wrong). In our case, though, certain operations make the server push messages to the client proactively: the backend calls the send-message API through Feign, bypassing the gateway. How do we locate the server holding that user's long-lived connection so the push reaches the user?

NPE when clients connect before the WebSocket servers come online

Problem

Start a few clients before any WebSocket instance is up; the clients keep retrying the connection periodically. Then start a WebSocket service, and the following exception is thrown.

Symptom

org.springframework.data.redis.listener.adapter.RedisListenerExecutionFailedException: Listener method 'handleMessage' threw exception; nested exception is java.lang.NullPointerException
        at org.springframework.data.redis.listener.adapter.MessageListenerAdapter.invokeListenerMethod(MessageListenerAdapter.java:377) ~[spring-data-redis-2.3.7.RELEASE.jar!/:2.3.7.RELEASE]
        at org.springframework.data.redis.listener.adapter.MessageListenerAdapter.onMessage(MessageListenerAdapter.java:308) ~[spring-data-redis-2.3.7.RELEASE.jar!/:2.3.7.RELEASE]
        at org.springframework.data.redis.listener.RedisMessageListenerContainer.executeListener(RedisMessageListenerContainer.java:250) [spring-data-redis-2.3.7.RELEASE.jar!/:2.3.7.RELEASE]
        at org.springframework.data.redis.listener.RedisMessageListenerContainer.processMessage(RedisMessageListenerContainer.java:240) [spring-data-redis-2.3.7.RELEASE.jar!/:2.3.7.RELEASE]
        at org.springframework.data.redis.listener.RedisMessageListenerContainer.lambda$dispatchMessage$0(RedisMessageListenerContainer.java:987) [spring-data-redis-2.3.7.RELEASE.jar!/:2.3.7.RELEASE]
        at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_212]
Caused by: java.lang.NullPointerException: null
        at java.util.concurrent.ConcurrentHashMap.putVal(ConcurrentHashMap.java:1011) ~[na:1.8.0_212]
        at java.util.concurrent.ConcurrentHashMap.put(ConcurrentHashMap.java:1006) ~[na:1.8.0_212]
        at me.lawrenceli.gateway.server.RedisSubscriber.handleMessage(RedisSubscriber.java:79) ~[classes!/:na]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_212]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_212]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_212]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_212]
        at org.springframework.data.redis.listener.adapter.MessageListenerAdapter$MethodInvoker.invoke(MessageListenerAdapter.java:142) ~[spring-data-redis-2.3.7.RELEASE.jar!/:2.3.7.RELEASE]
        at org.springframework.data.redis.listener.adapter.MessageListenerAdapter.invokeListenerMethod(MessageListenerAdapter.java:371) ~[spring-data-redis-2.3.7.RELEASE.jar!/:2.3.7.RELEASE]
        ... 5 common frames omitted

Root Cause

In this hash-routing method, if the hash ring is empty, a null node (i.e., the real node that would back a physical service) is always returned, no matter what value is being routed:
https://github.com/Lonor/websocket-cluster/blob/1f167c4306587b6a9696be1ae656b7358bfa0e3e/common/src/main/java/me/lawrenceli/hashring/ConsistentHashRouter.java#L54-L57

There are two call sites:

  1. The gateway LB filter
  2. The message consumer for instance-online events, which compares the real nodes assigned to the same userId before and after the node comes online, i.e.:

https://github.com/Lonor/websocket-cluster/blob/1f167c4306587b6a9696be1ae656b7358bfa0e3e/gateway/src/main/java/me/lawrenceli/gateway/server/RedisSubscriber.java#L73-L80

At that point, oldServiceNode in the loop is always null.
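One way to guard this path, shown here as a hedged sketch rather than the repository's actual fix: treat the router's result as possibly absent, so an empty ring simply records no mapping instead of calling ConcurrentHashMap.put with a null value (which is what throws the NPE above). The routeNode method below is a stand-in for ConsistentHashRouter.routeNode.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical guard for the handleMessage path: ConcurrentHashMap rejects
// null values, so a null route result must be filtered out before put().
class SafeRouteExample {
    private final Map<String, String> userToNode = new ConcurrentHashMap<>();

    // Stand-in for ConsistentHashRouter.routeNode(userId); may return null.
    private String routeNode(String userId) {
        return null; // simulates an empty hash ring
    }

    public void handleUser(String userId) {
        // empty ring: no mapping recorded, no NPE thrown
        Optional.ofNullable(routeNode(userId))
                .ifPresent(node -> userToNode.put(userId, node));
    }

    public int mappedUsers() {
        return userToNode.size();
    }
}
```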

500 Server Error for HTTP GET "/discovery/naming" (endpoint call fails)

500 Server Error for HTTP GET "/discovery/naming"
Not using Docker; running Nacos, Redis, and RabbitMQ directly on Windows, with some configuration changed and the DockerController and DockerService classes commented out.


The services all start, but this endpoint call fails.
I've been at this for a while; I couldn't get the Docker setup working earlier, which is why I'm running everything locally.

2023-03-28 06:04:19.152 ERROR 20440 --- [ctor-http-nio-2] a.w.r.e.AbstractErrorWebExceptionHandler : [d929aa23-1]  500 Server Error for HTTP GET "/discovery/naming"

java.lang.NullPointerException: null
	at me.lawrenceli.gateway.discovery.DiscoveryController.getServerStatus(DiscoveryController.java:43) ~[classes/:na]
	Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException: 
Error has been observed at the following site(s):
	|_ checkpoint ⇢ org.springframework.web.cors.reactive.CorsWebFilter [DefaultWebFilterChain]
	|_ checkpoint ⇢ org.springframework.cloud.gateway.filter.WeightCalculatorWebFilter [DefaultWebFilterChain]
	|_ checkpoint ⇢ HTTP GET "/discovery/naming" [ExceptionHandlingWebHandler]
Stack trace:
		at me.lawrenceli.gateway.discovery.DiscoveryController.getServerStatus(DiscoveryController.java:43) ~[classes/:na]
		at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_333]
		at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_333]
		at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_333]
		at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_333]
		at org.springframework.web.reactive.result.method.InvocableHandlerMethod.lambda$invoke$0(InvocableHandlerMethod.java:148) ~[spring-webflux-5.2.13.RELEASE.jar:5.2.13.RELEASE]
		at reactor.core.publisher.FluxFlatMap.trySubscribeScalarMap(FluxFlatMap.java:151) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.MonoFlatMap.subscribeOrReturn(MonoFlatMap.java:53) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:57) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.MonoIgnoreThen$ThenIgnoreMain.drain(MonoIgnoreThen.java:153) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.MonoIgnoreThen.subscribe(MonoIgnoreThen.java:56) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:150) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:67) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.MonoNext$NextSubscriber.onNext(MonoNext.java:76) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.innerNext(FluxConcatMap.java:274) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.FluxConcatMap$ConcatMapInner.onNext(FluxConcatMap.java:851) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:121) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onNext(MonoPeekTerminal.java:173) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2393) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.request(MonoPeekTerminal.java:132) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.request(FluxMapFuseable.java:162) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2190) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onSubscribe(Operators.java:2064) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onSubscribe(FluxMapFuseable.java:90) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onSubscribe(MonoPeekTerminal.java:145) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:54) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.Mono.subscribe(Mono.java:4252) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.drain(FluxConcatMap.java:441) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.onSubscribe(FluxConcatMap.java:211) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.FluxIterable.subscribe(FluxIterable.java:161) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.FluxIterable.subscribe(FluxIterable.java:86) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.Mono.subscribe(Mono.java:4252) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.MonoIgnoreThen$ThenIgnoreMain.drain(MonoIgnoreThen.java:172) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.MonoIgnoreThen.subscribe(MonoIgnoreThen.java:56) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64) ~[reactor-core-3.3.14.RELEASE.jar:3.3.14.RELEASE]
		at reactor.netty.http.server.HttpServerHandle.onStateChange(HttpServerHandle.java:65) ~[reactor-netty-0.9.17.RELEASE.jar:0.9.17.RELEASE]
		at reactor.netty.ReactorNetty$CompositeConnectionObserver.onStateChange(ReactorNetty.java:537) ~[reactor-netty-0.9.17.RELEASE.jar:0.9.17.RELEASE]
		at reactor.netty.tcp.TcpServerBind$ChildObserver.onStateChange(TcpServerBind.java:278) ~[reactor-netty-0.9.17.RELEASE.jar:0.9.17.RELEASE]
		at reactor.netty.http.server.HttpServerOperations.onInboundNext(HttpServerOperations.java:475) ~[reactor-netty-0.9.17.RELEASE.jar:0.9.17.RELEASE]
		at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:96) ~[reactor-netty-0.9.17.RELEASE.jar:0.9.17.RELEASE]
		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-transport-4.1.59.Final.jar:4.1.59.Final]
		at reactor.netty.http.server.HttpTrafficHandler.channelRead(HttpTrafficHandler.java:191) ~[reactor-netty-0.9.17.RELEASE.jar:0.9.17.RELEASE]
		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-transport-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436) ~[netty-transport-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) ~[netty-codec-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) ~[netty-codec-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251) ~[netty-transport-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-transport-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[netty-transport-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[netty-transport-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[netty-transport-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) ~[netty-transport-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) ~[netty-transport-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) ~[netty-transport-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[netty-transport-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-common-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.59.Final.jar:4.1.59.Final]
		at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.59.Final.jar:4.1.59.Final]
		at java.lang.Thread.run(Thread.java:750) ~[na:1.8.0_333]

Nice work

Learned a lot; the code is much more disciplined than mine.

How do multiple deployed gateways keep their hash rings consistent?

Hi, I have a question.
If multiple gateways are deployed and one of them, due to a communication problem, misses the message that a websocket-server instance came online, and communication recovers a while later, how do you keep the hash rings consistent across the gateways?
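This issue is an open question. One common mitigation, sketched below as a hypothetical add-on rather than anything present in this repo, is to treat pub/sub events as hints and have each gateway periodically reconcile its ring against the authoritative instance list pulled from the registry (Nacos), so a missed message is repaired on the next sync instead of diverging permanently:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical reconciliation loop body: diff the registry snapshot
// against the nodes currently on this gateway's ring, add what is
// missing, and drop what has gone away.
class RingReconciler {
    private final Set<String> ringNodes = new HashSet<>();

    public void reconcile(Set<String> registrySnapshot) {
        // add instances this gateway's ring missed
        for (String node : registrySnapshot) {
            if (ringNodes.add(node)) {
                // in a real setup this would call consistentHashRouter.addNode(...)
            }
        }
        // drop instances that left while this gateway was partitioned
        ringNodes.removeIf(node -> !registrySnapshot.contains(node));
    }

    public Set<String> nodes() {
        return new HashSet<>(ringNodes);
    }
}
```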
