Comments (18)
@coadler I think it's best that instead of modifying wsjson and wspb, we just modify the Read method on the conn to use a buffer pool only if the received message is greater than 128 bytes, and change the two packages to use it. The method I'm talking about is coming in #81.
from websocket.
Can a sync.Pool be passed through to Dial? gorilla/websocket handles this in its implementation with a private struct (https://github.com/gorilla/websocket/blob/6a67f44b69004415802d2e678e483065b8fff6c8/conn.go#L480). I'd prefer pool.Get().([]byte), so the buffer can be shared with other io.CopyBuffer calls.
from websocket.
I think it's unlikely I'll take an approach similar to gorilla/websocket. This is an internal implementation detail, I don't want to expose it unless absolutely necessary. At least until #65 is resolved.
from websocket.
In my case, I use websocket to transfer other protocol traffic. For the sake of maintainability, I keep nesting net.Conn implementations like Russian dolls. Each level allocates its own []byte, so I want to share one pool across the whole application.
from websocket.
I'm sorry I don't understand your use case. Can you elaborate on why you're nesting net.Conn and why you're allocating a []byte at each level?
from websocket.
Thoughts on https://github.com/valyala/bytebufferpool?
from websocket.
@coadler Would prefer to avoid the dependency on a library outside of the stdlib
See the benchmarks linked by that library https://omgnull.github.io/go-benchmark/buffer/
sync.Pool is only ~4.5 ns slower, so it's best to just use it instead.
The only buffers we can reuse are the bufio.Reader/Writers and the buffers used by the wspb/wsjson subpackages on client conns. The net/http hijacker always allocates for the server.
I've already tackled reuse of the bufio read/writers in #81.
@coadler if you have time, I'd appreciate you implementing reuse of buffers for wspb/wsjson.
from websocket.
The write side for wsjson already reuses buffers because it uses json.NewEncoder which uses a sync.Pool underneath. We'd have to switch json.NewDecoder to use json.Unmarshal if we want buffer reuse and for the wspb package, we'll have to use the proto.Buffer type for reuse.
from websocket.
Definitely a good idea to have some benchmarks as well.
from websocket.
Will tackle this tonight
from websocket.
Lol, we can't do that, my bad. When we return from Read(), the buffer is in the hands of application code, so there's no way to put it back into the pool: we can't tell when the app is done with it.
from websocket.
note: I've added this feature to the docs but I didn't actually implement it.
from websocket.
I'm confused why we would only want to buffer messages over 128 bytes. As far as I can tell there's not a way for us to know the message length before reading it in wsjson/wspb.
from websocket.
That's fair. I can't remember where, but I recall reading that sync.Pool isn't free and can be more expensive for small items than reallocating. We'd need benchmarks to be sure. You'd read the first 128 bytes into a slice, and if you don't get an io.EOF from the reader, you know you need to read the rest into a buffer from the sync.Pool.
from websocket.
👍
from websocket.
So the overhead of sync.Pool is in fact very low:
$ go test -bench=SyncPool -run=^$
goos: darwin
goarch: amd64
pkg: nhooyr.io/websocket
BenchmarkSyncPool/2/allocate-8 100000000 10.6 ns/op 2 B/op 1 allocs/op
BenchmarkSyncPool/2/pool-8 100000000 15.6 ns/op 0 B/op 0 allocs/op
BenchmarkSyncPool/16/allocate-8 100000000 18.2 ns/op 16 B/op 1 allocs/op
BenchmarkSyncPool/16/pool-8 100000000 15.1 ns/op 0 B/op 0 allocs/op
BenchmarkSyncPool/32/allocate-8 100000000 20.3 ns/op 32 B/op 1 allocs/op
BenchmarkSyncPool/32/pool-8 100000000 15.1 ns/op 0 B/op 0 allocs/op
BenchmarkSyncPool/64/allocate-8 50000000 24.1 ns/op 64 B/op 1 allocs/op
BenchmarkSyncPool/64/pool-8 100000000 15.0 ns/op 0 B/op 0 allocs/op
BenchmarkSyncPool/128/allocate-8 50000000 30.5 ns/op 128 B/op 1 allocs/op
BenchmarkSyncPool/128/pool-8 100000000 15.2 ns/op 0 B/op 0 allocs/op
BenchmarkSyncPool/256/allocate-8 30000000 44.4 ns/op 256 B/op 1 allocs/op
BenchmarkSyncPool/256/pool-8 100000000 15.1 ns/op 0 B/op 0 allocs/op
BenchmarkSyncPool/512/allocate-8 20000000 71.7 ns/op 512 B/op 1 allocs/op
BenchmarkSyncPool/512/pool-8 100000000 15.3 ns/op 0 B/op 0 allocs/op
BenchmarkSyncPool/4096/allocate-8 3000000 461 ns/op 4096 B/op 1 allocs/op
BenchmarkSyncPool/4096/pool-8 100000000 15.1 ns/op 0 B/op 0 allocs/op
BenchmarkSyncPool/16384/allocate-8 1000000 1361 ns/op 16384 B/op 1 allocs/op
BenchmarkSyncPool/16384/pool-8 100000000 15.4 ns/op 0 B/op 0 allocs/op
PASS
ok nhooyr.io/websocket 27.759s
func BenchmarkSyncPool(b *testing.B) {
	sizes := []int{
		2,
		16,
		32,
		64,
		128,
		256,
		512,
		4096,
		16384,
	}
	for _, size := range sizes {
		b.Run(strconv.Itoa(size), func(b *testing.B) {
			b.Run("allocate", func(b *testing.B) {
				b.ReportAllocs()
				for i := 0; i < b.N; i++ {
					buf := make([]byte, size)
					_ = buf
				}
			})
			b.Run("pool", func(b *testing.B) {
				b.ReportAllocs()
				p := sync.Pool{}
				b.ResetTimer()
				for i := 0; i < b.N; i++ {
					buf := p.Get()
					if buf == nil {
						buf = make([]byte, size)
					}
					p.Put(buf)
				}
			})
		})
	}
}
The few nanoseconds saved by skipping the pool at small sizes aren't worth the complexity; we should just use it all the time.
from websocket.
Going to get this in rn so the docs reflect the state of the lib.
from websocket.