
jvm-libp2p's Introduction

jvm-libp2p


Libp2p implementation for the JVM, written in Kotlin πŸ”₯

Components

List of components in the Libp2p spec and their JVM implementation status

Component            | Protocol                   | Status
---------------------|----------------------------|-------
Transport            | tcp                        | 🍏
                     | quic                       | πŸ…
                     | websocket                  | πŸ‹
                     | webtransport               |
                     | webrtc-browser-to-server   |
                     | webrtc-private-to-private  |
Secure Communication | noise                      | 🍏
                     | tls                        | πŸ‹
                     | plaintext                  | πŸ‹
                     | secio (deprecated)         | 🍏
Protocol Select      | multistream                | 🍏
Stream Multiplexing  | yamux                      | πŸ‹
                     | mplex                      | 🍏
NAT Traversal        | circuit-relay-v2           | πŸ‹
                     | autonat                    | πŸ‹
                     | hole-punching              |
Discovery            | bootstrap                  |
                     | random-walk                |
                     | mdns-discovery             | πŸ‹
                     | rendezvous                 |
Peer Routing         | kad-dht                    |
Publish/Subscribe    | floodsub                   | πŸ‹
                     | gossipsub                  | 🍏
Storage              | record                     |
Other protocols      | ping                       | 🍏
                     | identify                   | 🍏

Legend:

  • 🍏 - tested in production
  • πŸ‹ - prototype or beta, not tested in production
  • πŸ… - in progress

Gossip simulator

A deterministic Gossip simulator that can simulate networks of up to 10,000 peers

Please check the Simulator README for more details

Android support

The library is developed with Android compatibility in mind. However, we are not aware of anyone using it in production on Android.

The examples/android-chatter module contains a working sample Android application. This module is ignored by the Gradle build when no Android SDK is installed. To include the Android module, define a valid SDK location either with the ANDROID_HOME environment variable or by setting the sdk.dir path in your project's local.properties file.

Importing the project into Android Studio should work out of the box.

Adding as a dependency to your project

Hosting of artefacts is graciously provided by Cloudsmith.

Latest version of 'jvm-libp2p' @ Cloudsmith

As an alternative, artefacts are also available on JitPack.

Using Gradle

Add the required repositories to the repositories section of your Gradle file.

repositories {
  // ...
  maven { url "https://dl.cloudsmith.io/public/libp2p/jvm-libp2p/maven/" }
  maven { url "https://jitpack.io" }  
  maven { url "https://artifacts.consensys.net/public/maven/maven/" }
}

Add the library to the implementation part of your Gradle file.

dependencies {
  // ...
  implementation 'io.libp2p:jvm-libp2p:X.Y.Z-RELEASE'
}

Using Maven

Add the required repositories to the repositories section of the pom file:

<repositories>
  <repository>
    <id>libp2p-jvm-libp2p</id>
    <url>https://dl.cloudsmith.io/public/libp2p/jvm-libp2p/maven/</url>
  </repository>
  <repository>
    <id>JitPack</id>
    <url>https://jitpack.io</url>
  </repository>
  <repository>
    <id>Consensys</id>
    <url>https://artifacts.consensys.net/public/maven/maven/</url>
  </repository>
</repositories>

Add the library to the dependencies section of the pom file:

<dependency>
  <groupId>io.libp2p</groupId>
  <artifactId>jvm-libp2p</artifactId>
  <version>X.Y.Z-RELEASE</version>
</dependency>

Building the project

To build the library you will need just

  • JDK (Java Development Kit) of version 11 or higher

For building a stable release version clone the master branch:

git clone https://github.com/libp2p/jvm-libp2p -b master

For building a version with the latest updates clone the develop (default) branch:

git clone https://github.com/libp2p/jvm-libp2p

To build the library from the jvm-libp2p folder, run:

./gradlew build

After the build is complete you may find the library .jar file here: jvm-libp2p/build/libs/jvm-libp2p-X.Y.Z-RELEASE.jar

Notable users

  • Teku - Ethereum Consensus Layer client
  • Nabu - minimal Java implementation of IPFS
  • Peergos - peer-to-peer encrypted global filesystem

(Please open a pull request if you want your project to be added here)

License

Dual-licensed under MIT and ASLv2, by way of the Permissive License Stack.


jvm-libp2p's Issues

Remove log4j2.xml from distributed jar

Currently jvm-libp2p includes a log4j2.xml config in src/main/resources/log4j2.xml which is then bundled into the production jar. Depending on the class path order, this config from jvm-libp2p may be picked up instead of the application configuration.

Since jvm-libp2p is intended to be used as a library by other applications, the logging configuration should be left to the embedding application, so no log4j config file should be included in the jar.
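
On the library side, one way to achieve this (a sketch, not a committed fix) is to keep the file in the source tree for local runs and tests but exclude it from the packaged artifact:

// build.gradle.kts (sketch)
tasks.jar {
    // keep src/main/resources/log4j2.xml out of the published jar so the
    // embedding application's logging configuration always wins
    exclude("log4j2.xml")
}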

Stream data routed to the wrong handler

Issue Description

I've been testing teku's syncing logic and have run into an interesting issue related to stream management. For reference, our stream-related setup logic is here.

I've been seeing instances where we'll receive stream responses that seem to be routed to the wrong stream handler. We'll open a new stream to request a single block, then follow-up with a new stream (same peer, same protocol) to request 200 blocks and it looks like sometimes that follow-up data will be routed to the original stream handler resulting in errors.

addProtocolHandler() doesn't work as expected

I wrote a custom stream protocol by extending ProtocolHandler and created a strict binding to my id.

The problem.

It works when I add it while building the host:

protocols {
    +myprotocolbinding()
}

and all stream ops work as expected.

But when I attach it at runtime with addProtocolHandler(), I get an exception while the protocol is negotiated (while resolving the controller future).

Am I missing something?

Revisit ByteBuf usages to find/fix buffer leaks

Netty's ResourceLeakDetector sometimes dumps warnings about buffer leaks.
We will need to switch it to PARANOID mode and run all the tests.
It makes sense to set up PARANOID mode for all tests and fail the tests if leaks are detected.
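
One way to wire this up (a sketch; the exact test-infrastructure hook is up to the project):

import io.netty.util.ResourceLeakDetector

// Force Netty's strictest leak tracking before any test allocates buffers,
// e.g. from a shared test base class or a JUnit extension.
fun enableParanoidLeakDetection() {
    ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID)
}

Alternatively, the Gradle test task can pass -Dio.netty.leakDetection.level=paranoid to the test JVM. Note that failing tests on detected leaks additionally requires watching the log output, since Netty reports leaks only as "LEAK:" error messages through its logger.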

Can we fund development maybe?

I'm very interested in seeing jvm-libp2p built out to cover the full extent of the libp2p spec, but I seem to see very little incentive beyond what is needed for Teku development.
I'm willing to invest some ETH in getting it built out and I'm sure others are too.
Can we perhaps bring together a funding round or something?

`HostImpl#removeConnectionHandler` Logic Incorrect

Both HostImpl#addConnectionHandler and HostImpl#removeConnectionHandler have the same logic:

    override fun addConnectionHandler(handler: ConnectionHandler) {
        connectionHandlers += handler
    }

    override fun removeConnectionHandler(handler: ConnectionHandler) {
        connectionHandlers += handler
    }

removeConnectionHandler should remove the given handler from the connectionHandlers list, instead of attempting to add it.
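
The fix is a one-character change:

    override fun removeConnectionHandler(handler: ConnectionHandler) {
        connectionHandlers -= handler
    }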

Large packets fail to transmit with WS transport

The disabled test variants fail with the exception below:

@DisabledIf("junitTags.contains('ws-transport')")

io.netty.handler.codec.CorruptedFrameException: Max frame length of 65536 has been exceeded.
	at io.netty.handler.codec.http.websocketx.WebSocket08FrameDecoder.protocolViolation(WebSocket08FrameDecoder.java:412) ~[netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.codec.http.websocketx.WebSocket08FrameDecoder.decode(WebSocket08FrameDecoder.java:277) ~[netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502) ~[netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441) ~[netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278) ~[netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_162]
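
A plausible direction, assuming the WS transport builds its pipeline from Netty's standard WebSocket codecs (the 65536 default is their maxFramePayloadLength): raise the limit when constructing the handshaker and protocol handler. The size constant below is an arbitrary example value, not a recommendation.

import io.netty.handler.codec.http.DefaultHttpHeaders
import io.netty.handler.codec.http.websocketx.WebSocketClientHandshakerFactory
import io.netty.handler.codec.http.websocketx.WebSocketServerProtocolHandler
import io.netty.handler.codec.http.websocketx.WebSocketVersion
import java.net.URI

const val MAX_WS_FRAME_BYTES = 10 * 1024 * 1024 // hypothetical 10 MiB cap

// Server side: the fourth constructor argument is maxFrameSize.
val serverWsHandler = WebSocketServerProtocolHandler("/", null, true, MAX_WS_FRAME_BYTES)

// Client side: the last newHandshaker argument is maxFramePayloadLength.
val clientHandshaker = WebSocketClientHandshakerFactory.newHandshaker(
    URI("ws://127.0.0.1:4001/"), WebSocketVersion.V13, null, true,
    DefaultHttpHeaders(), MAX_WS_FRAME_BYTES
)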

"SecIo handshake failed io.libp2p.security.InvalidRemotePubKey: null" Error

After locally applying the change from PR #96, I am able to print the full stack trace of the error from issue #95.

This is the stack trace:

2020-03-18 16:39:42.649+00:00 | nioEventLoopGroup-3-8 | ERROR | SecIoSecureChannel | SecIo handshake failed
io.libp2p.security.InvalidRemotePubKey: null
	at io.libp2p.security.secio.SecIoNegotiator.validateRemoteKey(SecIoNegotiator.kt:144) ~[jvm-libp2p-minimal-0.3.2-RELEASE.jar:?]
	at io.libp2p.security.secio.SecIoNegotiator.verifyRemoteProposal(SecIoNegotiator.kt:121) ~[jvm-libp2p-minimal-0.3.2-RELEASE.jar:?]
	at io.libp2p.security.secio.SecIoNegotiator.onNewMessage(SecIoNegotiator.kt:106) ~[jvm-libp2p-minimal-0.3.2-RELEASE.jar:?]
	at io.libp2p.security.secio.SecIoHandshake.channelRead0(SecIoSecureChannel.kt:61) ~[jvm-libp2p-minimal-0.3.2-RELEASE.jar:?]
	at io.libp2p.security.secio.SecIoHandshake.channelRead0(SecIoSecureChannel.kt:39) ~[jvm-libp2p-minimal-0.3.2-RELEASE.jar:?]
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) ~[netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.handlerRemoved(ByteToMessageDecoder.java:249) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.callHandlerRemoved(AbstractChannelHandlerContext.java:972) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.DefaultChannelPipeline.callHandlerRemoved0(DefaultChannelPipeline.java:638) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:481) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:421) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.libp2p.multistream.Negotiator$GenericHandler.channelRead0(Negotiator.kt:101) [jvm-libp2p-minimal-0.3.2-RELEASE.jar:?]
	at io.libp2p.multistream.Negotiator$GenericHandler.channelRead0(Negotiator.kt:70) [jvm-libp2p-minimal-0.3.2-RELEASE.jar:?]
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:426) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at java.lang.Thread.run(Thread.java:834) [?:?]

It's hard to pinpoint which wire logs lead to this error; however, here are some wire logs from right before the error.
logs.txt

Support dns4 multiaddrs

The Prysm Eth2 testnet uses a dns4 Multiaddr for its bootstrap node, which isn't currently supported by jvm-libp2p.

Multiaddr: /dns4/prylabs.net/tcp/30001/p2p/16Uiu2HAm7Qwe19vz9WzD2Mxn7fXd1vgHHp4iccuyq7TxwRXoAGfc

Error:

io.libp2p.core.Libp2pException: Missing IP4/IP6/DNSADDR in multiaddress /dns4/prylabs.net/tcp/30001/p2p/16Uiu2HAm7Qwe19vz9WzD2Mxn7fXd1vgHHp4iccuyq7TxwRXoAGfc

	at io.libp2p.transport.tcp.TcpTransport.fromMultiaddr(TcpTransport.kt:165)
	at io.libp2p.transport.tcp.TcpTransport.dial(TcpTransport.kt:129)
	at io.libp2p.network.NetworkImpl.connect(NetworkImpl.kt:77)
	at io.libp2p.core.Network$DefaultImpls.connect(Network.kt:55)
	at io.libp2p.network.NetworkImpl.connect(NetworkImpl.kt:15)
...

It would be good if jvm-libp2p supported this.
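
Until native support lands, a stop-gap sketch (not library API; it naively assumes the name resolves to an IPv4 address) is to resolve the host with the JDK resolver and rewrite the multiaddr before dialing:

import java.net.InetAddress

fun resolveDns4(multiaddr: String): String {
    val parts = multiaddr.trim('/').split("/")
    require(parts.size >= 2 && parts[0] == "dns4") { "not a dns4 multiaddr" }
    val ip = InetAddress.getByName(parts[1]).hostAddress
    return "/ip4/$ip/" + parts.drop(2).joinToString("/")
}

// resolveDns4("/dns4/prylabs.net/tcp/30001/p2p/16Uiu2HAm7Qwe...") would
// yield "/ip4/<resolved address>/tcp/30001/p2p/16Uiu2HAm7Qwe..."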

'Duplicate stream for peer' exceptions

I got a series of exceptions. There are two kinds of them, and they are most likely related:

2020-07-13 05:14:23.654+03:00 | P2PService-event-thread-0 | WARN  | AbstractRouter | P2PService internal error on message null from peer null
io.libp2p.core.BadPeerException: Duplicate steam for peer 16Uiu2HAmGrmArgsbrgQ8rPygdZknQ7jqUW5Wgs2LQQVq9aEqPaZN. Closing it silently
	at io.libp2p.etc.util.P2PServiceSemiDuplex.streamAdded(P2PServiceSemiDuplex.kt:40) ~[jvm-libp2p-minimal-0.6.0-RELEASE.jar:?]
	at io.libp2p.etc.util.P2PService$StreamHandler$handlerAdded$1.invoke(P2PService.kt:43) ~[jvm-libp2p-minimal-0.6.0-RELEASE.jar:?]
	at io.libp2p.etc.util.P2PService$StreamHandler$handlerAdded$1.invoke(P2PService.kt:36) ~[jvm-libp2p-minimal-0.6.0-RELEASE.jar:?]
	at io.libp2p.etc.util.P2PService$runOnEventThread$1.run(P2PService.kt:220) [jvm-libp2p-minimal-0.6.0-RELEASE.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
	at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:264) [?:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java) [?:?]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Unknown Source) [?:?]
2020-07-13 05:14:23.655+03:00 | P2PService-event-thread-0 | WARN  | AbstractRouter | P2PService internal error on message null from peer null
io.libp2p.core.InternalErrorException: [peerHandler] not initialized yet
	at io.libp2p.etc.util.P2PService$StreamHandler.getPeerHandler(P2PService.kt:81) ~[jvm-libp2p-minimal-0.6.0-RELEASE.jar:?]
	at io.libp2p.etc.util.P2PServiceSemiDuplex.streamActive(P2PServiceSemiDuplex.kt:55) ~[jvm-libp2p-minimal-0.6.0-RELEASE.jar:?]
	at io.libp2p.etc.util.P2PService$StreamHandler$channelActive$1.invoke(P2PService.kt:60) ~[jvm-libp2p-minimal-0.6.0-RELEASE.jar:?]
	at io.libp2p.etc.util.P2PService$StreamHandler$channelActive$1.invoke(P2PService.kt:36) ~[jvm-libp2p-minimal-0.6.0-RELEASE.jar:?]
	at io.libp2p.etc.util.P2PService$runOnEventThread$1.run(P2PService.kt:220) [jvm-libp2p-minimal-0.6.0-RELEASE.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
	at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:264) [?:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java) [?:?]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Unknown Source) [?:?]
2020-07-13 05:14:23.655+03:00 | P2PService-event-thread-0 | WARN  | AbstractRouter | P2PService internal error on message null from peer null
io.libp2p.core.InternalErrorException: [peerHandler] not initialized yet
	at io.libp2p.etc.util.P2PService$StreamHandler.getPeerHandler(P2PService.kt:81) ~[jvm-libp2p-minimal-0.6.0-RELEASE.jar:?]
	at io.libp2p.etc.util.P2PService.streamDisconnected(P2PService.kt:149) [jvm-libp2p-minimal-0.6.0-RELEASE.jar:?]
	at io.libp2p.etc.util.P2PService$StreamHandler$channelUnregistered$1.invoke(P2PService.kt:67) ~[jvm-libp2p-minimal-0.6.0-RELEASE.jar:?]
	at io.libp2p.etc.util.P2PService$StreamHandler$channelUnregistered$1.invoke(P2PService.kt:36) ~[jvm-libp2p-minimal-0.6.0-RELEASE.jar:?]
	at io.libp2p.etc.util.P2PService$runOnEventThread$1.run(P2PService.kt:220) [jvm-libp2p-minimal-0.6.0-RELEASE.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
	at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:264) [?:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java) [?:?]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Unknown Source) [?:?]
(The same BadPeerException / InternalErrorException pair then repeats every few seconds for the same peer; the remaining identical log entries are omitted.)

Unclear project status

It's very difficult to tell what the status of this project is. There's the warning "Heavy work in progress", but the roadmap lists the minimal phase as versions v0.x, which would put the current release 0.5.3 in the minimal phase, yet the Definition of Done has all items checked off.

This is a really cool project and I'd love to help out, but based on the README it's somewhat confusing where things stand.

Implement async Netty pipeline

Problem

The Netty API is solely callback based, and the next ChannelHandler must be added to the pipeline synchronously, immediately after the current handler completes some intermediate task (i.e. inside a ChannelHandler callback when the last event is received). Sometimes the next handler must be added even before the old handler is removed, since upon removal the old handler may forward cached but unprocessed bytes down the pipeline, and the next handler must already be in place to pick up those bytes.
This design makes for quite an inconvenient UX, especially when we are trying to deliver another API (libp2p in our case) on top of the Netty architecture. This limitation can't be worked around except with dirty solutions like blocking in ChannelHandler callbacks.

What we want

We want to add the next handler asynchronously, when the previous one completes its task. For example, the following code should be possible:

Future<Connection> connF = transport.dial(addr);
Future<SecureConnection> secConnF = connF
    .thenApply(conn -> conn.secure(new SecurityHandler()));
Future<MplexedConnection> mplexConnF = secConnF
    .thenApply(secConn -> secConn.multiplex(new MultiplexHandler()));
Future<Stream> streamF = mplexConnF
    .thenApply(mplexConn -> mplexConn.createStream());
streamF.thenAccept(stream -> stream.getChannel().pipeline().addLast(applicationHandler));

Possible solution

The main reason is that the default Netty pipeline simply drops unprocessed events when they reach the end of the pipeline. We can instead implement a pipeline which doesn't drop events but queues them, and replays them to a new handler when one is added. To prevent unbounded buffering, the pipeline should clear the channel's AUTO_READ flag from the moment the first unhandled event is queued until the next handler is added.
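
One way this could look (a rough sketch, not the implemented solution): a tail handler that queues inbound events instead of dropping them. Handlers appended later with pipeline.addLast() land after it, so startDraining() can forward the queued events to them (it should run on the channel's event loop):

import io.netty.channel.ChannelHandlerContext
import io.netty.channel.ChannelInboundHandlerAdapter

class BufferingTailHandler : ChannelInboundHandlerAdapter() {
    private val pending = ArrayDeque<Any>()
    private var passThrough = false
    private lateinit var ctx: ChannelHandlerContext

    override fun handlerAdded(ctx: ChannelHandlerContext) {
        this.ctx = ctx
    }

    override fun channelRead(ctx: ChannelHandlerContext, msg: Any) {
        if (passThrough) {
            ctx.fireChannelRead(msg) // a consumer exists now: forward directly
            return
        }
        // Nothing downstream can consume this yet: queue it and stop reading
        // from the socket so the queue stays bounded.
        pending += msg
        ctx.channel().config().isAutoRead = false
    }

    /** Call after the next real handler has been appended with pipeline.addLast(). */
    fun startDraining() {
        while (pending.isNotEmpty()) ctx.fireChannelRead(pending.removeFirst())
        passThrough = true
        ctx.channel().config().isAutoRead = true
    }
}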

"ERROR | SecIoSecureChannel | null" log message

When connecting to peers on Lighthouse's testnet 5, we get a number of unhelpful errors in the log:

2020-03-17 09:35:55.124+10:00 | nioEventLoopGroup-3-19 | ERROR | SecIoSecureChannel | null

Most likely a NullPointerException is being thrown somewhere. It would be much more helpful if the full stack trace was included with the error, instead of just the exception's message.

Skip validator duties instead of falling behind

When CPU is highly limited, the validator client may back up processing events from the beacon chain and wind up creating attestations multiple epochs after they were due. Instead, the validator should skip duties that are outdated in order to catch up with the beacon chain.

The basic premise being that it's better to skip some duties than to have all duties performed so late that they don't count anyway.
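
Illustratively (Duty here is a hypothetical stand-in, not Teku's actual type), the check could look like:

interface Duty {
    val slot: Long
    fun perform()
}

fun performOrSkip(duty: Duty, currentSlot: Long) {
    if (duty.slot < currentSlot) {
        // the duty is already outdated: skipping it lets us catch up
        println("Skipping outdated duty for slot ${duty.slot} (current slot $currentSlot)")
        return
    }
    duty.perform()
}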

Gossip outbound stream is not yet opened on heartbeat

20:55:47.258 WARN  - Exception in gossipsub heartbeat
io.libp2p.core.InternalErrorException: Outbound gossip stream not initialized or protocol is missing
        at io.libp2p.pubsub.gossip.GossipRouterKt.getOutboundProtocol(GossipRouter.kt:40) ~[jvm-libp2p-minimal-0.5.8-RELEASE.jar:?]
        at io.libp2p.pubsub.gossip.GossipRouterKt.getOutboundGossipProtocol(GossipRouter.kt:42) ~[jvm-libp2p-minimal-0.5.8-RELEASE.jar:?]
        at io.libp2p.pubsub.gossip.GossipRouter.enqueuePrune(GossipRouter.kt:457) ~[jvm-libp2p-minimal-0.5.8-RELEASE.jar:?]
        at io.libp2p.pubsub.gossip.GossipRouter.prune(GossipRouter.kt:442) ~[jvm-libp2p-minimal-0.5.8-RELEASE.jar:?]
        at io.libp2p.pubsub.gossip.GossipRouter.heartbeat(GossipRouter.kt:371) [jvm-libp2p-minimal-0.5.8-RELEASE.jar:?]

While the WARN log was fixed, the stack trace is suspicious: why is the gossip outbound stream not opened on heartbeat? Normally the outbound stream may not be opened yet at certain moments, when gossip is initiated by a remote peer that immediately sends some message (see #160). But the outbound stream should be opened promptly, and it's unlikely that it is still not opened during a heartbeat.

See the issue comment: Consensys/teku#2836 (comment)

Test Negotiator on responder side

The Negotiator was checked only in the initiator role.
We need to add a test which checks its functionality in the responder role and fix any issues found.

Release 0.3.0 doesn't support java 11

I just tried pulling down the 0.3.0 release, but it seems to be incompatible with Java 11:

java.lang.UnsupportedClassVersionError: crypto/pb/Crypto$KeyType has been compiled by a more recent version of the Java Runtime (class file version 56.0), this version of the Java Runtime only recognizes class file versions up to 55.0
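
Class file version 56.0 corresponds to Java 12, while a Java 11 runtime reads class files only up to version 55.0, so the artifact was evidently compiled by a newer JDK without a Java 11 target. On the build side the target could be pinned, for example with a Gradle toolchain (a sketch in Gradle Kotlin DSL):

// build.gradle.kts (sketch)
java {
    toolchain {
        // compile and test against JDK 11 regardless of the JDK running Gradle
        languageVersion.set(JavaLanguageVersion.of(11))
    }
}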

Except for 0.4.1, none of the project versions can run on Windows 10

This is really surprising to me, as it seems everyone except me can build and run this program successfully. There are many kinds of bugs, and the most terrible one is that a package cannot be found in the list: the netty package in 0.5.0. I had to import the package from 0.4.1, which is the only one that runs, and I had to change a line of code to make it work. Please have a look at the published code; good or not, at least let the code run.
(screenshot)
I tried hard; this is the only one that works.
(screenshot)

I have to do this, or it will create a bug.

Besides, do many people run this Android program on Linux or another OS? Please let me know, thanks.

Stream handler channelRead method invoked before channelActive

Issue Description

I've been testing teku's syncing logic and have noticed an issue with the ordering of events for stream handlers. For reference, our stream-related setup logic is here.

When a remote peer initiates a request stream with our node, we push a SimpleChannelInboundHandler to manage these requests and I've noticed that this handler's channelRead0 method is sometimes invoked before channelActive. From discussing with @Nashatyrev, it sounds like this may be a bug.

Connection issue

I use BuilderJKt.hostJ to create two nodes with Netty handlers. Using network.connect(peer.getMultiaddr()) does not trigger the handler. What is the reason? But when I use dial to build a connection, the Netty handler is activated.

Implement Gossip `seen_ttl` parameter

Currently the seen-message buffer is an LRU set with a hardcoded length of 10000.
We need more flexible configuration of this structure (ethereum/consensus-specs#1958); a sketch follows the list:

  • seen_ttl: for how many heartbeats a message ID shouldn't be pruned from the buffer
  • seen_max: limits the buffer size (to avoid running out of memory); on overflow, older entries are deleted, overriding the previous parameter's behavior
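
A minimal sketch of such a structure (not the library's actual class), counting age in heartbeats:

class SeenCache(private val seenTtl: Int, private val seenMax: Int) {
    // message ID -> heartbeat number at which it was added
    private val seen = LinkedHashMap<String, Long>()
    private var heartbeat = 0L

    fun add(msgId: String) {
        if (seen.size >= seenMax) {
            // seen_max overflow: evict the oldest entry, overriding seen_ttl
            seen.remove(seen.keys.first())
        }
        seen[msgId] = heartbeat
    }

    fun contains(msgId: String): Boolean = seen.containsKey(msgId)

    fun onHeartbeat() {
        heartbeat++
        // seen_ttl pruning: drop entries older than seenTtl heartbeats
        seen.entries.removeAll { heartbeat - it.value > seenTtl }
    }
}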

Can this project be run in IDEA? Or does it have to run in Android Studio?

help!!!

Here's the error I get when running it in IDEA:
A problem occurred evaluating project ':android-chatter'.

Failed to apply plugin [id 'com.android.application']
Minimum supported Gradle version is 5.4.1. Current version is 5.2.1. If using the gradle wrapper, try editing the distributionUrl in F:\apache-maven-3.6.3-bin\MavenRepository\daemon\5.2.1\gradle\wrapper\gradle-wrapper.properties to gradle-5.4.1-all.zip

GossipSub Peer Discovery without Kad-DHT?

Hi, I'm trying to implement a jvm-libp2p based application using GossipSub but I can't figure out how my app could find peers without Kad-DHT peer discovery... How should I do it?
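
Gossipsub itself doesn't discover peers; it needs initial connections handed to it. A common pattern while kad-dht is unimplemented (see the component table above) is a static bootstrap list. The connect() overload used below is an assumption; check the Network/Host API of your version. On a local network, mdns-discovery can serve the same purpose.

import io.libp2p.core.Host
import io.libp2p.core.multiformats.Multiaddr

fun connectBootstrapPeers(host: Host) {
    val bootstrapPeers = listOf(
        // hypothetical placeholder address
        Multiaddr("/ip4/203.0.113.7/tcp/4001/p2p/16Uiu2HAm...")
    )
    // dial each bootstrap peer; gossipsub builds its mesh from these links
    bootstrapPeers.forEach { addr -> host.network.connect(addr) }
}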

Uncaught exception in Netty Pipeline

When a remote peer disconnects unexpectedly, AbstractMuxHandler can wind up attempting to read from a closed stream (so it no longer exists in streamMap). This results in throwing a Libp2pException, but the mux handler is the last handler in the pipeline and does not override the exceptionCaught method, so Netty logs a warning as seen below.

Expected behaviour would be for a debug or trace level log message to be printed when a peer has unexpectedly disconnected.

My guess is that either AbstractMuxHandler or MuxHandler should override exceptionCaught, but allowing it to differentiate expected failures like disconnected clients from unexpected ones likely requires throwing something more specific than just Libp2pException.

2020-05-06 11:23:23.489+10:00 | nioEventLoopGroup-3-2 | WARN  | DefaultChannelPipeline | An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.libp2p.core.Libp2pException: Channel with id MuxId(id=17, initiator=true) not opened
	at io.libp2p.etc.util.netty.mux.AbstractMuxHandler.childRead(AbstractMuxHandler.kt:46) ~[jvm-libp2p-minimal-0.4.0-SNAPSHOT.jar:?]
	at io.libp2p.mux.MuxHandler.channelRead(MuxHandler.kt:39) ~[jvm-libp2p-minimal-0.4.0-SNAPSHOT.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:682) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:617) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:534) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:906) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.36.Final.jar:4.1.36.Final]
	at java.lang.Thread.run(Thread.java:834) [?:?]
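
Sketching the suggested direction (ClosedStreamException and the handler below are hypothetical, not existing library types): exceptionCaught could demote the expected disconnect case to debug while propagating everything else.

import io.netty.channel.ChannelHandlerContext
import io.netty.channel.ChannelInboundHandlerAdapter
import io.netty.util.internal.logging.InternalLoggerFactory

// In the real library this would presumably extend Libp2pException.
class ClosedStreamException(msg: String) : RuntimeException(msg)

class MuxExceptionHandler : ChannelInboundHandlerAdapter() {
    private val log = InternalLoggerFactory.getInstance(MuxExceptionHandler::class.java)

    override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
        if (cause is ClosedStreamException) {
            // Expected when the remote disconnects mid-stream: keep it quiet.
            log.debug("Peer disconnected unexpectedly: {}", cause.message)
        } else {
            ctx.fireExceptionCaught(cause) // genuinely unexpected: propagate
        }
    }
}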

Gossip should not respond to GRAFT with GRAFT

As per a chat with @vyzo, we should not respond with GRAFT when adding a peer to our mesh per its GRAFT request.

We should probably add an explicit note on that in the Gossip spec.

Anton Nashatyrev, [03.12.20 13:49]
Sorry for the off-topic, but I would like to hear vyzo's opinion on the question discussed in the neighbor channel.

If Peer1 sends initial GRAFT to Peer2, should the Peer2 respond with GRAFT if it added Peer1 to its mesh?

Though I didn't find any explicit statement on this in the spec, I think Peer2 SHOULD respond with GRAFT in this case. Otherwise Peer1 would not be informed that it was added to the mesh of Peer2.

As the spec states: 

The GRAFT informs a peer that it has been added to the local router's mesh view for the included topic id.

vyzo, [03.12.20 13:53]
yeah no need to respond with graft

vyzo, [03.12.20 13:53]
it would be racey

Anton Nashatyrev, [03.12.20 13:58]
[In reply to vyzo]
Thanks for clarification! 
Is there any spec place which can help to clarify this?

vyzo, [03.12.20 13:59]
sure feel free to open pr!

Anton Nashatyrev, [03.12.20 14:06]
[In reply to vyzo]
Just make it clear: 

- MAY NOT respond with graft 
or 
- SHOULD NOT respond with graft

vyzo, [03.12.20 14:30]
SHOULD NOT

vyzo, [03.12.20 14:30]
if two peers graft each other concurrently they may both send graft

vyzo, [03.12.20 14:30]
and thats ok

Anton Nashatyrev, [03.12.20 14:33]
[In reply to vyzo]
Yep, though in this case you wouldn't send the second GRAFT anyway. When you receive a GRAFT, the other peer is already in your mesh and you have nothing to do

vyzo, [03.12.20 14:35]
right

Anton Nashatyrev, [03.12.20 14:42]
[In reply to vyzo]
I actually meant the situation where you are sending GRAFT 'in response' to an initial GRAFT (that's not actually a response, but an opposite notification).

Anton Nashatyrev, [03.12.20 14:43]
It still seems to me that sending the GRAFT back is the more consistent behavior

vyzo, [03.12.20 15:20]
its not needed though

vyzo, [03.12.20 15:21]
and it would get in funny races with an immediate graft/ prune
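
The outcome of the discussion, sketched with hypothetical names:

val mesh = mutableMapOf<String, MutableSet<String>>() // topic -> peer IDs

fun onGraft(peerId: String, topic: String) {
    mesh.getOrPut(topic) { mutableSetOf() } += peerId
    // SHOULD NOT send a GRAFT back. If two peers graft each other
    // concurrently, both may send GRAFT and that's fine, but a reply
    // GRAFT invites races with an immediate GRAFT/PRUNE exchange.
}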

Test coverage of MultiplexHandler/Channel

Cover the MultiplexHandler/Channel classes with unit tests. These are pretty central and non-trivial classes, so they should be tested well.
Primary attention should be paid to closing child channels: once a channel is created it must eventually be closed. Child channels shouldn't leak.
There are the following close circumstances:

  • parent channel is closed
  • remote peer sends a RESET signal
  • local peer sends RESET (on channel.close() invoked)
  • remote and local peers send CLOSE (on channel.disconnect())

Child channels may be created with MultiplexHandler.createStream() asynchronously, from any thread, and before the parent channel is activated. Those cases should also be considered.

Gossip Message for indirect peers

Hello,

Interested in libp2p projects, I was experimenting with gossipsub. Only when a host is directly connected to the other host am I able to receive the message from the peer.

If the connections are as follows:

A -> B and B -> C
and A and C are subscribed to a topic:
A sends a message to the topic, and it is not received at C unless B also subscribes to the same topic.

The condition is floodPublish = false and the gossip peer score param isDirect = false.

I also tried with floodPublish = true and is_direct = true,
but the result is the same.
Is there something I am missing?
Any leads will be highly appreciated.

mplex: implement protocol Netty handler

Implement a codec which will encode/decode mplex frames to/from objects like the one below:

streamId: Long
flag: MplexFrameFlag
data: ByteBuf

Enforce the protocol rules, like the correct order of frame flags, writing to a closed stream, duplicate IDs, etc. A skeletal sketch follows.
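
A skeletal sketch of such a codec built on Netty's ByteToMessageCodec (names are hypothetical; the rule enforcement and reference-count management a real implementation needs are left out):

import io.netty.buffer.ByteBuf
import io.netty.channel.ChannelHandlerContext
import io.netty.handler.codec.ByteToMessageCodec
import io.netty.handler.codec.CorruptedFrameException

// Frame model from the issue text; flag values follow the mplex spec.
enum class MplexFrameFlag(val value: Int) {
    NEW_STREAM(0), MESSAGE_RECEIVER(1), MESSAGE_INITIATOR(2),
    CLOSE_RECEIVER(3), CLOSE_INITIATOR(4), RESET_RECEIVER(5), RESET_INITIATOR(6)
}

data class MplexFrame(val streamId: Long, val flag: MplexFrameFlag, val data: ByteBuf)

class MplexFrameCodec : ByteToMessageCodec<MplexFrame>() {

    override fun encode(ctx: ChannelHandlerContext, msg: MplexFrame, out: ByteBuf) {
        // Header: varint((streamId << 3) | flag), then varint(length), then payload.
        writeVarint(out, (msg.streamId shl 3) or msg.flag.value.toLong())
        writeVarint(out, msg.data.readableBytes().toLong())
        out.writeBytes(msg.data)
    }

    override fun decode(ctx: ChannelHandlerContext, buf: ByteBuf, out: MutableList<Any>) {
        buf.markReaderIndex()
        val header = readVarint(buf)
        val length = if (header != null) readVarint(buf) else null
        if (header == null || length == null || buf.readableBytes() < length) {
            buf.resetReaderIndex() // partial frame: wait for more bytes
            return
        }
        val flag = MplexFrameFlag.values().getOrNull((header and 0x07L).toInt())
            ?: throw CorruptedFrameException("Unknown mplex flag in header $header")
        out += MplexFrame(header ushr 3, flag, buf.readRetainedSlice(length.toInt()))
    }

    private fun writeVarint(out: ByteBuf, value: Long) {
        var v = value
        while (v and 0x7FL.inv() != 0L) {
            out.writeByte(((v and 0x7FL) or 0x80L).toInt())
            v = v ushr 7
        }
        out.writeByte(v.toInt())
    }

    // Returns null if the buffer doesn't yet contain a complete varint.
    private fun readVarint(buf: ByteBuf): Long? {
        var result = 0L
        var shift = 0
        while (true) {
            if (!buf.isReadable) return null
            val b = buf.readByte().toInt()
            result = result or ((b and 0x7F).toLong() shl shift)
            if (b and 0x80 == 0) return result
            shift += 7
        }
    }
}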

Add WebSocket transport

WebSocket transport layer.

Support for websockets (although not secure websockets) is complete and on my feature/ws-transport branch, and I'll raise a pull request in due course. I've verified it by interoperating with the go libp2p implementation.

Adopt interop tests to run on Windows

As a follow up for #53 PR

The Gradle interopTest task doesn't work on Windows.
Below is a draft patch which made GoPingClient pass; however, it would not work for JS.

Index: src/test/kotlin/io/libp2p/core/ClientInterOpTest.kt
IDEA additional info:
Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP
<+>UTF-8
===================================================================
--- src/test/kotlin/io/libp2p/core/ClientInterOpTest.kt	(revision 22b32793dbe250a0ee966b4040c6fbb94c25f833)
+++ src/test/kotlin/io/libp2p/core/ClientInterOpTest.kt	(date 1573648103471)
@@ -30,7 +30,13 @@
 data class ExternalClient(
     val clientCommand: String,
     val clientDirEnvVar: String
-)
+) {
+    fun getCommand() = when {
+        System.getProperty("os.name").toLowerCase().contains("win") -> clientCommand + ".exe"
+        else -> clientCommand
+    }
+
+}
 
 val GoPingClient = ExternalClient(
     "./ping-client",
@@ -88,9 +94,10 @@
     }
 
     fun startClient(serverAddress: String) {
-        val command = "${external.clientCommand} $serverAddress"
+        val clientDir = File(System.getenv(external.clientDirEnvVar))
+        val command = File( clientDir, external.getCommand()).absoluteFile.canonicalPath + " " + serverAddress
         val clientProcess = ProcessBuilder(*command.split(" ").toTypedArray())
-            .directory(File(System.getenv(external.clientDirEnvVar)))
+            .directory(clientDir)
             .redirectOutput(ProcessBuilder.Redirect.INHERIT)
             .redirectError(ProcessBuilder.Redirect.INHERIT)
         println("Starting $command")
Index: build.gradle.kts
IDEA additional info:
Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP
<+>UTF-8
===================================================================
--- build.gradle.kts	(revision 22b32793dbe250a0ee966b4040c6fbb94c25f833)
+++ build.gradle.kts	(date 1573647916324)
@@ -108,7 +108,7 @@
 fun findOnPath(executable: String): Boolean {
     return System.getenv("PATH").split(File.pathSeparator)
         .map { Paths.get(it) }
-        .any { Files.exists(it.resolve(executable)) }
+        .any { Files.exists(it.resolve(executable)) || Files.exists(it.resolve("$executable.exe"))}
 }
 val goOnPath = findOnPath("go")
 val nodeOnPath = findOnPath("node")
