
go-yamux's Introduction

Yamux

Yamux (Yet another Multiplexer) is a multiplexing library for Golang. It relies on an underlying connection, such as TCP or a Unix domain socket, to provide reliability and ordering, and it provides stream-oriented multiplexing on top. It is inspired by SPDY but is not interoperable with it.

Yamux features include:

  • Bi-directional streams
    • Streams can be opened by either client or server
    • Server-side push support
  • Flow control
    • Avoid starvation
    • Back-pressure to prevent overwhelming a receiver
  • Keep Alives
    • Enables persistent connections over a load balancer
  • Efficient
    • Enables thousands of logical streams with low overhead

Documentation

For complete documentation, see the associated Godoc.

Specification

The full specification for Yamux is provided in the spec.md file. It can be used as a guide to implementors of interoperable libraries.

Usage

Using Yamux is remarkably simple:

func client() {
    // Get a TCP connection
    conn, err := net.Dial(...)
    if err != nil {
        panic(err)
    }

    // Setup client side of yamux
    session, err := yamux.Client(conn, nil)
    if err != nil {
        panic(err)
    }

    // Open a new stream
    stream, err := session.Open()
    if err != nil {
        panic(err)
    }

    // Stream implements net.Conn
    stream.Write([]byte("ping"))
}

func server() {
    // Accept a TCP connection
    conn, err := listener.Accept()
    if err != nil {
        panic(err)
    }

    // Setup server side of yamux
    session, err := yamux.Server(conn, nil)
    if err != nil {
        panic(err)
    }

    // Accept a stream
    stream, err := session.Accept()
    if err != nil {
        panic(err)
    }

    // Listen for a message
    buf := make([]byte, 4)
    stream.Read(buf)
}

The last gx published version of this module was: 1.1.5: QmUNMbRUsVYHi1D14annF7Rr7pQAX7TNLwpRCa975ojKnw

go-yamux's People

Contributors

aarshkshah1992, armon, asutorufa, erikdubbelboer, evanphx, fatih, filipochnik, grubernaut, hannahhoward, hsanjuan, kanishkatn, libp2p-mgmt-read-write[bot], marcopolo, marten-seemann, preetapan, pymq, r0l1, raulk, rnapier, slackpad, stebalien, stuartcarnie, terryding77, vyzo, web-flow, web3-bot, whyrusleeping, willscott, wondertan, xtaci


go-yamux's Issues

Re-enable write coalescing

Write coalescing was disabled in #24 due to some downstream issues. We need to add more testing, including benchmarks, around write coalescing so that we can confidently re-enable it; a sketch of what coalescing means follows the success criteria below.

Success Criteria

  • Write coalescing is well tested and benchmarked
  • Write coalescing is enabled
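
For context, write coalescing here means batching several queued frames into a single write to the underlying connection. A minimal sketch of the idea using the standard library's net.Buffers (illustrative only, not the disabled implementation):

package main

import (
	"bytes"
	"net"
)

func main() {
	var sink bytes.Buffer // stands in for the underlying net.Conn

	// Several small frames queued for sending.
	frames := net.Buffers{
		[]byte("header1"), []byte("body1"),
		[]byte("header2"), []byte("body2"),
	}

	// WriteTo drains all buffers in as few writes as possible (writev
	// on real sockets) instead of issuing one syscall per frame.
	if _, err := frames.WriteTo(&sink); err != nil {
		panic(err)
	}
}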

flaky TestManyStreams test

=== RUN   TestManyStreams
2021/05/20 22:48:52 [WARN] yamux: frame for missing stream: Vsn:0 Type:1 Flags:4 StreamID:1 Length:0
2021/05/20 22:48:52 [WARN] yamux: frame for missing stream: Vsn:0 Type:1 Flags:4 StreamID:3 Length:0
[... the same warning repeats for odd stream IDs up to 89 ...]
--- PASS: TestManyStreams (0.31s)
=== RUN   TestManyStreams_PingPong
2021/05/20 22:48:53 [WARN] yamux: frame for missing stream: Vsn:0 Type:1 Flags:4 StreamID:31 Length:0
2021/05/20 22:48:53 [WARN] yamux: frame for missing stream: Vsn:0 Type:1 Flags:4 StreamID:3 Length:0
[... the same warning repeats for roughly forty more stream IDs ...]
2021/05/20 22:48:53 [ERR] yamux: keepalive failed: EOF
2021/05/20 22:48:53 [ERR] yamux: keepalive failed: connection write timeout
    session_test.go:746: err: stream reset
    session_test.go:746: err: stream reset
    session_test.go:795: err: stream reset
    session_test.go:795: err: stream reset
--- FAIL: TestManyStreams_PingPong (0.72s)

Data race on stream.memory access

We have a couple of test failures on a -race build due to yamux racing on stream.memory. Looking at the code, sendWindowUpdate does mention that it should be called under a lock, but since it is called from Read, I'm not sure any of the existing users (incl. swarm) synchronize Read/Close with a separate lock.

What do you think is the best approach for solving this race?

WARNING: DATA RACE
Read at 0x00c05798a548 by goroutine 1474:
  github.com/libp2p/go-yamux/v3.(*Session).Close()
      /home/runner/go/pkg/mod/github.com/libp2p/go-yamux/[email protected]/session.go:299 +0x3e7
  github.com/libp2p/go-yamux/v3.(*Session).exitErr()
      /home/runner/go/pkg/mod/github.com/libp2p/go-yamux/[email protected]/session.go:317 +0xce
  github.com/libp2p/go-yamux/v3.(*Session).send()
      /home/runner/go/pkg/mod/github.com/libp2p/go-yamux/[email protected]/session.go:489 +0x44
  github.com/libp2p/go-yamux/v3.newSession.func2()
      /home/runner/go/pkg/mod/github.com/libp2p/go-yamux/[email protected]/session.go:159 +0x39

Previous write at 0x00c05798a548 by goroutine 3271:
  github.com/libp2p/go-yamux/v3.(*Stream).sendWindowUpdate()
      /home/runner/go/pkg/mod/github.com/libp2p/go-yamux/[email protected]/stream.go:234 +0x651
  github.com/libp2p/go-yamux/v3.(*Stream).Read()
      /home/runner/go/pkg/mod/github.com/libp2p/go-yamux/[email protected]/stream.go:123 +0x224
  github.com/libp2p/go-libp2p/p2p/muxer/yamux.(*stream).Read()
      /home/runner/go/pkg/mod/github.com/libp2p/[email protected]/p2p/muxer/yamux/stream.go:17 +0x4e
  github.com/libp2p/go-libp2p/p2p/net/swarm.(*Stream).Read()
      /home/runner/go/pkg/mod/github.com/libp2p/[email protected]/p2p/net/swarm/swarm_stream.go:55 +0x93
  bufio.(*Reader).Read()
      /opt/hostedtoolcache/go/1.18.5/x64/src/bufio/bufio.go:222 +0x2b5
  io.ReadAtLeast()
      /opt/hostedtoolcache/go/1.18.5/x64/src/io/io.go:331 +0xdd
  io.ReadFull()
      /opt/hostedtoolcache/go/1.18.5/x64/src/io/io.go:350 +0x268
  github.com/libp2p/go-msgio/protoio.(*uvarintReader).ReadMsg()
      /home/runner/go/pkg/mod/github.com/libp2p/[email protected]/protoio/uvarint_reader.go:82 +0x238
  github.com/libp2p/go-libp2p-pubsub.(*PubSub).handleNewStream()
      /home/runner/go/pkg/mod/github.com/libp2p/[email protected]/comm.go:66 +0x397
  github.com/libp2p/go-libp2p-pubsub.(*PubSub).handleNewStream-fm()
      <autogenerated>:1 +0x4d
  github.com/libp2p/go-libp2p/p2p/host/basic.(*BasicHost).SetStreamHandler.func1()
      /home/runner/go/pkg/mod/github.com/libp2p/[email protected]/p2p/host/basic/basic_host.go:568 +0x86
  github.com/libp2p/go-libp2p/p2p/host/basic.(*BasicHost).newStreamHandler.func1()
      /home/runner/go/pkg/mod/github.com/libp2p/[email protected]/p2p/host/basic/basic_host.go:411 +0x74

Goroutine 1474 (running) created at:
  github.com/libp2p/go-yamux/v3.newSession()
      /home/runner/go/pkg/mod/github.com/libp2p/go-yamux/[email protected]/session.go:159 +0xb24
  github.com/libp2p/go-yamux/v3.Server()
      /home/runner/go/pkg/mod/github.com/libp2p/go-yamux/[email protected]/mux.go:119 +0x231
  github.com/libp2p/go-libp2p/p2p/muxer/yamux.(*Transport).NewConn()
      /home/runner/go/pkg/mod/github.com/libp2p/[email protected]/p2p/muxer/yamux/transport.go:44 +0x84
  github.com/libp2p/go-libp2p/p2p/muxer/muxer-multistream.(*Transport).NewConn()
      /home/runner/go/pkg/mod/github.com/libp2p/[email protected]/p2p/muxer/muxer-multistream/multistream.go:74 +0x2f9
  github.com/libp2p/go-libp2p/p2p/net/upgrader.(*upgrader).setupMuxer.func1()
      /home/runner/go/pkg/mod/github.com/libp2p/[email protected]/p2p/net/upgrader/upgrader.go:206 +0x128

Goroutine 3271 (running) created at:
  github.com/libp2p/go-libp2p/p2p/host/basic.(*BasicHost).newStreamHandler()
      /home/runner/go/pkg/mod/github.com/libp2p/[email protected]/p2p/host/basic/basic_host.go:411 +0x82e
  github.com/libp2p/go-libp2p/p2p/host/basic.(*BasicHost).newStreamHandler-fm()
      <autogenerated>:1 +0x4d
  github.com/libp2p/go-libp2p/p2p/net/swarm.(*Conn).start.func1.1()
      /home/runner/go/pkg/mod/github.com/libp2p/[email protected]/p2p/net/swarm/swarm_conn.go:133 +0x102
==================

Bug Report: Uint32 Overflow in sendWindow Calculation

Description:

A critical issue has been identified in how the go-yamux library handles the send window size. The send window is computed as the initialStreamWindow (256*1024) plus the remote window delta carried in the Window Update message. However, in the function incrSendWindow, the value is incremented using atomic.AddUint32 without checking whether the sum might overflow the uint32 data type. This can cause the send window to wrap around to 0, rendering the stream incapable of sending any messages.


Environment:

[email protected]
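
For illustration, a minimal, self-contained sketch of the wraparound; this is not the library's code, and the names mirror the description above:

package main

import (
	"fmt"
	"sync/atomic"
)

const initialStreamWindow = 256 * 1024

func main() {
	var sendWindow uint32 = initialStreamWindow

	// A window update whose delta pushes the sum past the uint32 maximum.
	delta := ^uint32(0) - initialStreamWindow + 1

	// atomic.AddUint32 wraps silently on overflow; no error is reported.
	atomic.AddUint32(&sendWindow, delta)
	fmt.Println(sendWindow) // prints 0: the stream can no longer send

	// A guarded increment could instead clamp the window at the uint32
	// maximum, e.g. via a CompareAndSwap loop that checks for wraparound.
}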

No close() function is called for recvNotifyCh in Stream?

The chan struct{} fields in Stream, such as recvNotifyCh, sendNotifyCh, readDeadline, and writeDeadline, are never closed?
The problem: on the server I call Host.NewStream (in go-libp2p-core) to get a stream, write something to it, and then close the stream. The client receives the message, prints it to the console, and closes its side too.
However, memory keeps growing. Using pprof to investigate, I see the memory occupied by the channels is large and still growing.
I printed the size of session.streams and it is 2 the whole time, so the stream is actually being deleted from that map (id -> Stream).
Then I tried adding close(recvNotifyCh) to the cleanup function in stream.go, but that causes a "send on closed channel" error, because asyncNotify is called after cleanup.
The channels in Stream seem not to be collected by the GC. How can this be fixed, and what is the cause? Or are there problems in my code, and how should I use libp2p correctly?
Thanks for your reply!
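
For what it's worth, Go does not require a channel to be closed before the garbage collector can reclaim it; close() only signals receivers. A small sketch demonstrating that unclosed channels are collected once unreferenced (so persistent growth usually means something still holds a reference to the Stream):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	for i := 0; i < 1_000_000; i++ {
		ch := make(chan struct{}, 1) // never closed
		ch <- struct{}{}
		_ = ch // unreferenced after this iteration; the GC can reclaim it
	}

	runtime.GC()
	var stats runtime.MemStats
	runtime.ReadMemStats(&stats)
	fmt.Printf("heap in use after GC: %d KiB\n", stats.HeapInuse/1024)
}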

Relationship to hashicorp/yamux?

How does go-yamux relate to the hashicorp one? What was the motivation not to just use that one, and how have the features diverged? Thx

flaky TestLotsOfWritesWithStreamDeadline

=== RUN   TestLotsOfWritesWithStreamDeadline
    session_test.go:1576: writes from the client should've expired; got: stream reset, bytes: []
--- FAIL: TestLotsOfWritesWithStreamDeadline (0.36s)

runtime error: racy use of timers

Got this while running IPFS, but I figured it probably belongs here:

panic: runtime error: racy use of timers

goroutine 247502 [running]:
time.resetTimer(0xc003d3ff48, 0x3fc2d5cfdd34)
        /usr/local/go/src/runtime/time.go:233 +0x35
time.(*Timer).Reset(0xc003d3ff40, 0x6fc23ac00, 0x0)
        /usr/local/go/src/time/sleep.go:126 +0x7d
github.com/libp2p/go-yamux.(*Session).startKeepalive.func1()
        /home/leo/go/pkg/mod/github.com/libp2p/[email protected]/session.go:345 +0x19e
created by time.goFunc
        /usr/local/go/src/time/sleep.go:168 +0x44

This is on go-ipfs commit 67741060c4332d27b6434559db3e6270444b1125, the daemon says this on startup:

go-ipfs version: 0.5.0-dev-67741060c
Repo version: 7
System version: amd64/linux
Golang version: go1.14

It's actually WSL, I built without CGO.

Here's another:

panic: runtime error: racy use of timers

goroutine 109005 [running]:
time.resetTimer(0xc002d589b8, 0x422c5ec82298)
        /usr/local/go/src/runtime/time.go:233 +0x35
time.(*Timer).Reset(0xc002d589b0, 0x6fc23ac00, 0xc002bd6300)
        /usr/local/go/src/time/sleep.go:126 +0x7d
github.com/libp2p/go-yamux.(*Session).recvLoop(0xc00393ef70, 0x0, 0x0)
        /home/leo/go/pkg/mod/github.com/libp2p/[email protected]/session.go:516 +0x302
github.com/libp2p/go-yamux.(*Session).recv(0xc00393ef70)
        /home/leo/go/pkg/mod/github.com/libp2p/[email protected]/session.go:483 +0x2b
created by github.com/libp2p/go-yamux.newSession
        /home/leo/go/pkg/mod/github.com/libp2p/[email protected]/session.go:129 +0x379

Dynamic Window Sizing

Currently, yamux uses a fixed-size window. Unfortunately, this means we can't account for variable latencies. For low-latency connections, the window is too large and we'll have to hold large buffers. For high-latency connections, the window is too small and we'll bottleneck on filling up the window.

Ideally, we'd extract the QUIC dynamic window sizing logic from https://github.com/lucas-clemente/quic-go and use it to figure out our window sizing.
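
For intuition, the window needed to keep a link busy is roughly the bandwidth-delay product, so with a fixed window the per-stream throughput is capped at window/RTT. A small sketch with illustrative numbers (the 256 KiB constant matches the initialStreamWindow mentioned elsewhere in this document):

package main

import "fmt"

func main() {
	const window = 256 * 1024 // fixed 256 KiB receive window

	for _, rttMs := range []float64{5, 50, 200} {
		// Maximum throughput achievable with a fixed window: window / RTT.
		throughput := float64(window) / (rttMs / 1000) // bytes per second
		fmt.Printf("RTT %5.0f ms -> at most ~%6.2f MiB/s per stream\n",
			rttMs, throughput/(1024*1024))
	}
}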

Get CI passing

The tests are flaky to the point where I'm not sure if the tests are bad or there are issues in the code.

hashicorp/yamux vs go-yamux

Simple question: what's the difference between the hashicorp project and this fork/implementation? Is there a recommendation on which version should be used?

Improve QoS

Currently, when sending a lot of data, a single stream can fill the queue with 64*64*1024 = 4MiB of data to send. Ideally, we'd have per-stream QoS and a stream that just wants to send a few bytes would be able to jump this queue.
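
One shape a fix could take (an illustrative sketch, not a design from the issue): schedule frames round-robin across streams, so a small write waits for at most one frame per busy stream rather than for a 4 MiB backlog:

package main

import "fmt"

func main() {
	// Per-stream send queues: stream 1 is a bulk sender, stream 3 just
	// wants to send a few bytes.
	queues := map[int][][]byte{
		1: {make([]byte, 64*1024), make([]byte, 64*1024)},
		3: {[]byte("ping")},
	}
	order := []int{1, 3}

	// Dequeue one frame per stream per round instead of draining stream 1
	// completely before stream 3 gets a turn.
	for {
		sent := false
		for _, id := range order {
			if q := queues[id]; len(q) > 0 {
				frame := q[0]
				queues[id] = q[1:]
				fmt.Printf("send stream %d: frame of %d bytes\n", id, len(frame))
				sent = true
			}
		}
		if !sent {
			break
		}
	}
}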

twice set for stream.readDeadline

go-yamux/stream.go

Lines 311 to 325 in 736c1cb

func (s *Stream) forceClose() {
	s.stateLock.Lock()
	switch s.state {
	case streamClosed:
		// Already successfully closed. It just hasn't been removed from
		// the list of streams yet.
	default:
		s.state = streamReset
	}
	s.stateLock.Unlock()
	s.notifyWaiting()
	s.readDeadline.set(time.Time{})
	s.readDeadline.set(time.Time{})
}

go-yamux/stream.go

Lines 328 to 333 in 736c1cb

func (s *Stream) cleanup() {
	s.session.closeStream(s.id)
	s.readDeadline.set(time.Time{})
	s.readDeadline.set(time.Time{})
}

I see that the Stream.forceClose and Stream.cleanup functions set readDeadline twice. Is there some trick with deadlines here, or is the second call a typo for writeDeadline?

Panic due to nil check on memory manager not succeeding

We are seeing the following panic with go-yamux 3.0.2:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x187da42]

goroutine 601154 [running]:
github.com/libp2p/go-libp2p-resource-manager.(*peerScope).ReserveMemory(0x0, 0x0, 0xa0)
        <autogenerated>:1 +0x22
github.com/libp2p/go-yamux/v3.(*Session).incomingStream(0xc00931ad80, 0x2)
        /home/estuary/go/pkg/mod/github.com/libp2p/go-yamux/v3@v3.0.2/session.go:765 +0x8c
github.com/libp2p/go-yamux/v3.(*Session).handleStreamMessage(0xc00931ad80, {0x0, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x2, 0x0, ...})
        /home/estuary/go/pkg/mod/github.com/libp2p/go-yamux/v3@v3.0.2/session.go:660 +0x86
github.com/libp2p/go-yamux/v3.(*Session).recvLoop(0xc00931ad80)
        /home/estuary/go/pkg/mod/github.com/libp2p/go-yamux/v3@v3.0.2/session.go:648 +0x148
github.com/libp2p/go-yamux/v3.(*Session).recv(0xc00314cf88)
        /home/estuary/go/pkg/mod/github.com/libp2p/go-yamux/v3@v3.0.2/session.go:602 +0x1e
created by github.com/libp2p/go-yamux/v3.newSession
        /home/estuary/go/pkg/mod/github.com/libp2p/go-yamux/v3@v3.0.2/session.go:157 +0x62f
exit status 2

It looks like this library already has code to deal with the memory manager being nil, but I'm guessing the nil check is failing due to the interface conversion from network.PeerScope to MemoryManager that happens here, along with Go's somewhat bizarre logic around nil equality (see https://codefibershq.com/blog/golang-why-nil-is-not-always-nil). A minimal demonstration follows the suggested fixes below.

Suggested possible fixes:

  • use reflect in the nil check in this repo
  • in the go-libp2p-yamux repo, change the transport initialization code to something like this:
func (t *Transport) NewConn(nc net.Conn, isServer bool, scope network.PeerScope) (network.MuxedConn, error) {
	var s *yamux.Session
	var err error
	var mm yamux.MemoryManager = scope
	if scope == nil {
	  mm = nil
	}
	if isServer {
		s, err = yamux.Server(nc, t.Config(), mm)
	} else {
		s, err = yamux.Client(nc, t.Config(), mm)
	}
	return (*conn)(s), err
}
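
For reference, a minimal demonstration of the typed-nil pitfall suspected above, with hypothetical stand-in types rather than the real network.PeerScope and resource-manager ones:

package main

import "fmt"

type MemoryManager interface {
	ReserveMemory(size int) error
}

type peerScope struct{}

func (p *peerScope) ReserveMemory(size int) error { return nil }

func main() {
	var scope *peerScope // a typed nil pointer
	var mm MemoryManager = scope

	// The interface value now carries a non-nil type descriptor wrapping a
	// nil pointer, so a plain `mm == nil` check passes it as non-nil and a
	// later method call can dereference nil.
	fmt.Println(mm == nil) // false
}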

Memory leak in Session

The inflight map in the Session isn't being cleared along with its stream, which leads to a memory leak.

During our load testing, it held more than 150 MB of data in the end.

Bug Report: Infinite Loop in yamux Write Function

Description:

An issue has been identified in the yamux library, specifically in the Write function, when writing a message to the remote peer via sendMsg. Before invoking sendMsg, write(b []byte) checks whether the sendWindow is equal to 0. However, when the connection state is halfOpen and the sendWindow is zero due to an excessive number of communications on a single stream, this check leads to an infinite loop between the Write(b []byte) and write(b []byte) functions.

Expected Outcome:

The Write function should handle the scenario of a zero "sendWindow" in the "halfOpen" connection state gracefully, without causing an infinite loop.

Actual Outcome:

The current implementation leads to an infinite loop in the described scenario.

Environment:

[email protected]

Avoid locking the receive buffer when reading from the network

Currently, we lock the receive buffer when reading from the network. However, this means that if we're reading a large message, the receiving stream can't read anything already in the buffer.

A simple solution would be to allocate a temporary buffer for reading from the network, then copy into the receive buffer. However, that's an extra copy.

Instead, I'd prefer to keep a read offset and a write offset into the same buffer, updating them after reading/writing. Critically, we wouldn't hold the lock while reading.
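
A sketch of that layout, as an assumption about the proposal rather than the library's implementation. A single recv loop calls reserve and commit, so the network read into the reserved region happens without the lock held, while stream Reads concurrently drain via consume:

package main

import (
	"fmt"
	"sync"
)

// recvBuf keeps one buffer with independent read/write offsets. The mutex
// guards only the offsets; it is not held during the network read itself.
type recvBuf struct {
	mu   sync.Mutex
	buf  []byte
	r, w int // read and write offsets into buf
}

// reserve returns space for n incoming bytes. Only the recv loop calls
// reserve/commit, so the returned slice is filled without holding mu.
func (b *recvBuf) reserve(n int) []byte {
	b.mu.Lock()
	defer b.mu.Unlock()
	if b.w+n > len(b.buf) {
		copy(b.buf, b.buf[b.r:b.w]) // compact already-consumed bytes
		b.w -= b.r
		b.r = 0
		if b.w+n > len(b.buf) { // still too small: grow
			grown := make([]byte, b.w+n)
			copy(grown, b.buf[:b.w])
			b.buf = grown
		}
	}
	return b.buf[b.w : b.w+n]
}

// commit publishes n bytes written into the reserved region.
func (b *recvBuf) commit(n int) {
	b.mu.Lock()
	b.w += n
	b.mu.Unlock()
}

// consume copies buffered bytes out for a stream Read.
func (b *recvBuf) consume(p []byte) int {
	b.mu.Lock()
	defer b.mu.Unlock()
	n := copy(p, b.buf[b.r:b.w])
	b.r += n
	return n
}

func main() {
	b := &recvBuf{buf: make([]byte, 8)}
	copy(b.reserve(5), "hello") // stands in for the network read
	b.commit(5)
	out := make([]byte, 5)
	fmt.Printf("%s\n", out[:b.consume(out)])
}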

consider removing / disabling keep-alives

NAT mappings are kept from expiring if the NAT occasionally sees the connection being used.

Architecturally, sending keep-alives is a responsibility of the underlying transport, not of the stream multiplexer. For TCP, we can set the SO_KEEPALIVE socket option in go-tcp-transport, and the kernel will take care of keeping the connection alive. When running yamux on top of any other transport, we can't make any assumptions about the necessity and the frequency of keep-alive intervals anyway.

I suggest adding SO_KEEPALIVE support to go-tcp-transport and removing the respective code from go-yamux. Open question: Do we only deprecate Config.EnableKeepAlive and Config.KeepAliveInterval, or do we remove them?

WDYT, @Stebalien and @willscott?
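
For reference, here is roughly what the transport-level alternative looks like with the standard library; SetKeepAlive and SetKeepAlivePeriod are the net package's wrappers over SO_KEEPALIVE, and the address below is illustrative:

package main

import (
	"net"
	"time"
)

func main() {
	conn, err := net.Dial("tcp", "127.0.0.1:8080") // illustrative address
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Let the kernel keep the connection alive instead of yamux pings.
	tcp := conn.(*net.TCPConn)
	if err := tcp.SetKeepAlive(true); err != nil {
		panic(err)
	}
	if err := tcp.SetKeepAlivePeriod(30 * time.Second); err != nil {
		panic(err)
	}
}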

Stream deadlock

I've got a deadlock using Stream

goroutine 1 took s.recvLock.Lock() and then s.stateLock.Lock()
goroutine 2 took s.stateLock.Lock() and then s.recvLock.Lock()

#	0x4431a6	sync.runtime_SemacquireMutex+0x46							/usr/local/go/src/runtime/sema.go:71
#	0x46f32b	sync.(*Mutex).lockSlow+0xfb								/usr/local/go/src/sync/mutex.go:138
#	0xa3e8a9	sync.(*Mutex).Lock+0xd9									/usr/local/go/src/sync/mutex.go:81
#	0xa3e80c	github.com/libp2p/go-yamux.(*Stream).sendFlags+0x3c					/go/pkg/mod/github.com/libp2p/[email protected]/stream.go:194
#	0xa3e92d	github.com/libp2p/go-yamux.(*Stream).sendWindowUpdate+0x6d				/go/pkg/mod/github.com/libp2p/[email protected]/stream.go:217
#	0xa3df49	github.com/libp2p/go-yamux.(*Stream).Read+0x2c9						/go/pkg/mod/github.com/libp2p/[email protected]/stream.go:109
#	0xe0e640	github.com/libp2p/go-libp2p-swarm.(*Stream).Read+0x60					/go/pkg/mod/github.com/libp2p/[email protected]/swarm_stream.go:64
#	0xd316e7	github.com/multiformats/go-multistream.(*lazyClientConn).Read+0xa7			/go/pkg/mod/github.com/multiformats/[email protected]/lazyClient.go:75
#	0xddeb31	github.com/libp2p/go-libp2p/p2p/host/basic.(*streamWrapper).Read+0x51			/go/pkg/mod/github.com/libp2p/[email protected]/p2p/host/basic/basic_host.go:765
...

#	0x4431a6	sync.runtime_SemacquireMutex+0x46					/usr/local/go/src/runtime/sema.go:71
#	0x46f32b	sync.(*Mutex).lockSlow+0xfb						/usr/local/go/src/sync/mutex.go:138
#	0xa3e09c	sync.(*Mutex).Lock+0x41c						/usr/local/go/src/sync/mutex.go:81
#	0xa3dd54	github.com/libp2p/go-yamux.(*Stream).Read+0xd4				/go/pkg/mod/github.com/libp2p/[email protected]/stream.go:84
#	0xe0e640	github.com/libp2p/go-libp2p-swarm.(*Stream).Read+0x60			/go/pkg/mod/github.com/libp2p/[email protected]/swarm_stream.go:64
#	0xd316e7	github.com/multiformats/go-multistream.(*lazyClientConn).Read+0xa7	/go/pkg/mod/github.com/multiformats/[email protected]/lazyClient.go:75
#	0xddeb31	github.com/libp2p/go-libp2p/p2p/host/basic.(*streamWrapper).Read+0x51	/go/pkg/mod/github.com/libp2p/[email protected]/p2p/host/basic/basic_host.go:765
#	0xd1ddb1	github.com/libp2p/go-libp2p-core/helpers.AwaitEOF+0xb1			/go/pkg/mod/github.com/libp2p/[email protected]/helpers/stream.go:46
#	0xd1dcd7	github.com/libp2p/go-libp2p-core/helpers.FullClose+0x97			/go/pkg/mod/github.com/libp2p/[email protected]/helpers/stream.go:30
...

incorrect use of buffer pool

In Read, we reslice the buffer:

s.b[0] = s.b[0][n:]

When returning this buffer to the pool, it will have a shorter length than the buffer we originally took from the pool. Even worse, as we're using a pool, the bytes at the beginning of the buffer will be wasted eternally.
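
A sketch of the pitfall and one possible remedy, using sync.Pool for illustration (the library's actual pool may differ): keep the original slice header so the full allocation, not the shrunken reslice, goes back to the pool:

package main

import "sync"

var bufPool = sync.Pool{
	New: func() any { return make([]byte, 64*1024) },
}

func main() {
	full := bufPool.Get().([]byte) // remember the original allocation
	view := full                   // working view, resliced as bytes are consumed

	n := 10
	view = view[n:] // what Read does with s.b[0]; view now hides n bytes
	_ = view

	// Putting view back would shrink the pooled buffer and strand its
	// first n bytes forever; restore the full slice before returning it.
	bufPool.Put(full[:cap(full)])
}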

Send control messages on a control queue

Currently, all messages go through the same send queue. Unfortunately, this means that low-latency, low-bandwidth, high-priority control messages get stuck behind data. We should send many control messages on a high-priority, out-of-order queue:

  • Reset. Just make sure we don't send this before we send the open message, or bad things might happen.
  • Window updates. This is actually really important. At the moment, we could slow down receiving data because we're busy sending it. That's not good.
  • Ping.

Importantly, we should not send stream open/close requests on a separate channel as order matters.
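
A common Go idiom for that kind of two-level queue, as a sketch rather than the library's send loop: always try the control channel first, and only fall through to data when no control frame is pending:

package main

import "fmt"

func main() {
	ctrl := make(chan string, 4)
	data := make(chan string, 4)
	ctrl <- "window update"
	data <- "64 KiB data frame"
	ctrl <- "ping"

	for i := 0; i < 3; i++ {
		select {
		case f := <-ctrl: // control frames win whenever one is queued
			fmt.Println("send:", f)
		default:
			select { // otherwise take whichever queue is ready first
			case f := <-ctrl:
				fmt.Println("send:", f)
			case f := <-data:
				fmt.Println("send:", f)
			}
		}
	}
}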

flaky TestManyStreams_PingPong

=== RUN   TestManyStreams_PingPong
2022/06/02 10:31:20 [WARN] yamux: frame for missing stream: Vsn:0 Type:1 Flags:4 StreamID:1 Length:0
2022/06/02 10:31:20 [WARN] yamux: frame for missing stream: Vsn:0 Type:1 Flags:4 StreamID:27 Length:0
[... the same warning repeats for roughly forty more stream IDs ...]
    session_test.go:747: err: stream reset
2022/06/02 10:31:20 [ERR] yamux: keepalive failed: EOF
2022/06/02 10:31:20 [ERR] yamux: keepalive failed: connection write timeout
    session_test.go:796: err: stream reset
--- FAIL: TestManyStreams_PingPong (0.68s)

flaky TestSendData_VeryLarge

=== RUN   TestSendData_VeryLarge
    session_norace_test.go:86: err: stream reset
    [... the same error repeats 14 more times at session_norace_test.go:86 ...]
2022/04/16 21:36:58 [WARN] yamux: frame for missing stream: Vsn:0 Type:1 Flags:0 StreamID:27 Length:131072
2022/04/16 21:36:58 [ERR] yamux: keepalive failed: EOF
    session_norace_test.go:114: err: stream reset
    [... the same error repeats 14 more times at session_norace_test.go:114 ...]
--- FAIL: TestSendData_VeryLarge (7.57s)

inconsistent locking order

Running our test suite with https://github.com/sasha-s/go-deadlock yields the following potential deadlock:

=== RUN   TestCloseRead
POTENTIAL DEADLOCK: Inconsistent locking. saw this ordering in one goroutine:
happened before
stream.go:396 v3.(*Stream).processFlags { s.stateLock.Lock() } <<<<<
stream.go:395 v3.(*Stream).processFlags {  }
stream.go:437 v3.(*Stream).incrSendWindow { // Increase window, unblock a sender }
session.go:706 v3.(*Session).handleStreamMessage { stream.incrSendWindow(hdr, flags) }
session.go:666 v3.(*Session).recvLoop {  }
session.go:614 v3.(*Session).recv { func (s *Session) recv() { }

happened after
session.go:865 v3.(*Session).establishStream { s.streamLock.Lock() } <<<<<
session.go:864 v3.(*Session).establishStream { func (s *Session) establishStream(id uint32) { }
stream.go:403 v3.(*Stream).processFlags { } }
stream.go:437 v3.(*Stream).incrSendWindow { // Increase window, unblock a sender }
session.go:706 v3.(*Session).handleStreamMessage { stream.incrSendWindow(hdr, flags) }
session.go:666 v3.(*Session).recvLoop {  }
session.go:614 v3.(*Session).recv { func (s *Session) recv() { }

in another goroutine: happened before
session.go:294 v3.(*Session).Close { s.streamLock.Lock() } <<<<<
session.go:293 v3.(*Session).Close {  }
session.go:316 v3.(*Session).exitErr { s.Close() }
session.go:489 v3.(*Session).send { } }

happened after
stream.go:363 v3.(*Stream).forceClose { s.stateLock.Lock() } <<<<<
stream.go:362 v3.(*Stream).forceClose { func (s *Stream) forceClose() { }
session.go:299 v3.(*Session).Close { stream.forceClose() }
session.go:316 v3.(*Session).exitErr { s.Close() }
session.go:489 v3.(*Session).send { } }

Other goroutines holding locks:
goroutine 3013287 lock 0xc0008d1888
session.go:278 v3.(*Session).Close { s.shutdownLock.Lock() } <<<<<
session.go:277 v3.(*Session).Close { func (s *Session) Close() error { }
session_test.go:841 v3.TestCloseRead { } }

goroutine 3013287 lock 0xc0008d1828
session.go:294 v3.(*Session).Close { s.streamLock.Lock() } <<<<<
session.go:293 v3.(*Session).Close {  }
session_test.go:841 v3.TestCloseRead { } }

goroutine 3013290 lock 0xc0008d1768
session.go:278 v3.(*Session).Close { s.shutdownLock.Lock() } <<<<<
session.go:277 v3.(*Session).Close { func (s *Session) Close() error { }
session.go:316 v3.(*Session).exitErr { s.Close() }
session.go:489 v3.(*Session).send { } }

goroutine 3013287 lock 0xc0165bc348
deadline.go:33 v3.(*pipeDeadline).set { d.mu.Lock() } <<<<<
deadline.go:32 v3.(*pipeDeadline).set { func (d *pipeDeadline) set(t time.Time) { }
stream.go:374 v3.(*Stream).forceClose { s.readDeadline.set(time.Time{}) }
session.go:299 v3.(*Session).Close { stream.forceClose() }
session_test.go:841 v3.TestCloseRead { } }

goroutine 3013290 lock 0xc0008d1708
session.go:294 v3.(*Session).Close { s.streamLock.Lock() } <<<<<
session.go:293 v3.(*Session).Close {  }
session.go:316 v3.(*Session).exitErr { s.Close() }
session.go:489 v3.(*Session).send { } }

When I open many streams quickly, an error log appears: [WARN] yamux: frame for missing stream: Vsn:0 Type:1 Flags:4 StreamID:517 Length:0

windows 10 x64
go version go1.21.6 windows/amd64
go-yamux: [email protected]

I'm trying to learn this library, but a lot of error logs like the one in the title appear during testing.

I checked my code but didn't find any obvious mistakes, though I'm just learning this library.

What could the problem be?
Thank you so much

package main

import (
	"context"
	"fmt"
	"github.com/libp2p/go-yamux/v4"
	"log/slog"
	"net"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()

	go func() {
		for {
			c, err := ln.Accept()
			if err != nil {
				panic(err)
			}

			go func(c net.Conn) {
				defer c.Close()
				// Setup server side of yamux
				session, err := yamux.Server(c, nil, nil)
				if err != nil {
					panic(err)
				}

				for i := uint64(0); ; i++ {
					s, err := session.Accept()
					if err != nil {
						slog.Error("session.Accept", "err", err, "i", i)
						continue
					}

					//_, err = s.Write([]byte(fmt.Sprintf("%v", i)))
					//if err != nil {
					//	_ = s.Close()
					//	slog.Error("s.Write", "err", err, "i", i)
					//	continue
					//}

					err = s.Close()
					if err != nil {
						slog.Error("s.Close", "err", err, "i", i)
						continue
					}

					if i%1000 == 0 {
						slog.Info("session.Accepi", "i", i)
					}
				}
			}(c)
		}
	}()

	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithCancelCause(context.Background())
	defer cancel(fmt.Errorf("client.defer"))

	sessionClient, err := yamux.Client(conn, nil, nil)
	if err != nil {
		panic(err)
	}
	for i := 0; ; i++ {

		// Open a new stream
		stream, err := sessionClient.OpenStream(ctx)
		if err != nil {
			slog.Error("sessionClient.OpenStream", "err", err, "i", i)
			continue
		}

		err = stream.Close()
		if err != nil {
			slog.Error("accept.Close", "err", err, "i", i)
			continue
		}

		if i%1000 == 0 {
			slog.Info("sessionClient.OpenStream", "i", i)
		}
	}
}

/*

2024/01/30 20:19:07 [WARN] yamux: frame for missing stream: Vsn:0 Type:1 Flags:4
 StreamID:517 Length:0
2024/01/30 20:19:07 [WARN] yamux: frame for missing stream: Vsn:0 Type:1 Flags:2
 StreamID:519 Length:0
2024/01/30 20:19:07 [WARN] yamux: Discarding data for stream: 519
2024/01/30 20:19:07 INFO sessionClient.OpenStream i=15000
2024/01/30 20:19:07 [WARN] yamux: frame for missing stream: Vsn:0 Type:1 Flags:1
 StreamID:16773 Length:0
2024/01/30 20:19:07 [WARN] yamux: frame for missing stream: Vsn:0 Type:1 Flags:4
 StreamID:16773 Length:0
2024/01/30 20:19:07 [WARN] yamux: frame for missing stream: Vsn:0 Type:1 Flags:4
 StreamID:519 Length:0
2024/01/30 20:19:07 [WARN] yamux: frame for missing stream: Vsn:0 Type:1 Flags:2
 StreamID:521 Length:0
2024/01/30 20:19:07 [WARN] yamux: Discarding data for stream: 521
2024/01/30 20:19:07 [WARN] yamux: backlog exceeded, forcing stream reset
2024/01/30 20:19:07 [WARN] yamux: frame for missing stream: Vsn:0 Type:1 Flags:4


....


2024/01/30 21:12:31 [WARN] yamux: frame for missing stream: Vsn:0 Type:1 Flags:2 StreamID:33797 Length:0
2024/01/30 21:12:31 [WARN] yamux: frame for missing stream: Vsn:0 Type:1 Flags:4 StreamID:33797 Length:0
[... the same Flags:2/Flags:4 warning pair repeats for stream IDs 33799 through 33813 ...]



*/

Connections cut under heavy load

Under heavy load, yamux connections fail to ping fast enough and get cut. We should just stop pinging when we're under load (because we know the peer is alive).

BUG: slice bounds out of range

It panics on Windows:

libp2p/go-libp2p#1108

panic: runtime error: slice bounds out of range [4203:726]

goroutine 3916 [running]:
internal/poll.(*FD).Write(0xc012d70c80, 0xc004cda800, 0x2d6, 0x400, 0x0, 0x0, 0x0)
        C:/Program Files/Go/src/internal/poll/fd_windows.go:645 +0x490
net.(*netFD).Write(0xc012d70c80, 0xc004cda800, 0x2d6, 0x400, 0x400, 0xc004cda800, 0x2d6)
        C:/Program Files/Go/src/net/fd_posix.go:73 +0x56
net.(*conn).Write(0xc004c119a8, 0xc004cda800, 0x2d6, 0x400, 0x0, 0x0, 0x0)
        C:/Program Files/Go/src/net/net.go:194 +0x95
github.com/libp2p/go-libp2p-noise.(*secureSession).writeMsgInsecure(...)
        D:/goWork/pkg/mod/github.com/libp2p/[email protected]/rw.go:155
github.com/libp2p/go-libp2p-noise.(*secureSession).Write(0xc0033438c0, 0xc0077da000, 0x2c4, 0x400, 0x0, 0x0, 0x0)
        D:/goWork/pkg/mod/github.com/libp2p/[email protected]/rw.go:123 +0x288
github.com/libp2p/go-yamux/v2.(*Session).sendLoop(0xc0131fdc20, 0x0, 0x0)
        D:/goWork/pkg/mod/github.com/libp2p/go-yamux/[email protected]/session.go:548 +0x2b5
github.com/libp2p/go-yamux/v2.(*Session).send(0xc0131fdc20)
        D:/goWork/pkg/mod/github.com/libp2p/go-yamux/[email protected]/session.go:452 +0x32
created by github.com/libp2p/go-yamux/v2.newSession
        D:/goWork/pkg/mod/github.com/libp2p/go-yamux/[email protected]/session.go:133 +0x41d
