Comments (4)
@gregw I'm unsure about this issue.
Pat suggests performing 16 4K writes rather than one 64K write.
However, that is 16x more write() system calls.
Right now, for large writes we generate all the frames that fit into the flow control window and then write them with a single write(), so it is indeed one 64K write.
Perhaps we need a parameter that stops the generation: rather than generating 4 16K frames as we do now (16K being the default max frame size) and then writing them all with a single 64K write, we would stop generating when reaching the value of this new parameter.
If the new parameter is set to 4K, we generate just one 4K frame and write it, and so on until the whole content is written or the flow control window is full.
Note that browsers do enlarge the flow control window to speed up downloads, so the flow control window may be way larger than 64K.
Should this parameter be a function of the flow control window? Rather than a byte value like 4K, should it be a fraction like 1/16 of the flow control window?
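To make the idea concrete, here is a minimal sketch of capping frame generation at a threshold before each write. The class, the method, and the `maxGenerateBytes` parameter name are all hypothetical illustrations of the proposal, not Jetty's actual API:

```java
import java.util.ArrayList;
import java.util.List;

class FrameBatcher {
    /**
     * Returns the sizes of the DATA frames to generate for the next write,
     * bounded by the remaining content, the flow control window, and a
     * hypothetical "maxGenerateBytes" threshold (e.g. 4K instead of 64K).
     */
    static List<Integer> nextBatch(int remaining, int flowControlWindow,
                                   int maxFrameSize, int maxGenerateBytes) {
        int budget = Math.min(remaining, Math.min(flowControlWindow, maxGenerateBytes));
        List<Integer> frames = new ArrayList<>();
        while (budget > 0) {
            int frame = Math.min(budget, maxFrameSize);
            frames.add(frame);
            budget -= frame;
        }
        return frames;
    }
}
```

With `maxGenerateBytes` at 64K and a 16K max frame size this reproduces the current behavior (four 16K frames per write); with it at 4K, each write carries a single 4K frame.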
from jetty.project.
I'd say that we have nowhere near enough data on this to decide. Currently the application can decide to push or not... if it decides to push, then we have to trust that it will make a good decision to push resources that are most likely wanted. If they are wanted, then we want to transfer them fast.
If they are not wanted, then it's best not to push them at all rather than push them slowly.
I'd say do nothing on this until we have data indicating we are wasting resources pushing streams that are closed early (perhaps the push filter should collect that info and learn from it?).
@gregw whether to write larger chunks or not applies not only to pushes, but also to large downloads.
Pat's worry is that when the implementation writes, it has to finish that write no matter how long it takes.
If the write takes a long time to finish, then the system may react slowly to other writes.
However, the only case where the write takes a long time to finish is when it is TCP congested.
But in HTTP/2 I would say that TCP congestion is a rare case, one that can only be triggered by flow control windows that are larger than the bandwidth-delay product for that connection.
And if the connection is TCP congested, other writes may not be written anyway.
In summary, I think writing smaller chunks is worse in that it increases the overhead (a few more bytes written, more system calls), and I don't see the benefit except in rare cases.
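The bandwidth-delay product argument is easy to check with arithmetic. A sketch, with illustrative numbers (100 Mbit/s, 20 ms RTT) that are not from the discussion above:

```java
class Bdp {
    /**
     * Bandwidth-delay product: the number of bytes the network path can
     * hold "in flight" (bandwidth in bytes/s times round-trip time in s).
     * A flow control window below this value cannot congest the connection.
     */
    static long bdpBytes(long bandwidthBitsPerSecond, double rttSeconds) {
        return (long) (bandwidthBitsPerSecond / 8.0 * rttSeconds);
    }
}
```

For a 100 Mbit/s link with a 20 ms round-trip time, `bdpBytes(100_000_000, 0.02)` gives 250,000 bytes, comfortably above HTTP/2's default 64 KiB flow control window, so flow-control-bounded writes would not trigger TCP congestion on such a path.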
In Jetty 9.4.x stream interleaving has been improved (#360) so that now the unit of interleaving is the frame size. This takes care of making the prioritization of the frames fair with respect to writes.
Closing the issue since it's basically undecided, and we will keep improving HTTP/2 anyway in the future.
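The frame-sized interleaving mentioned above can be sketched as a round-robin over streams, where each stream contributes at most one max-frame-size chunk per turn so a large download cannot monopolize the writes. This is an illustration of the idea behind #360, not Jetty's implementation:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class Interleaver {
    /**
     * Given pending byte counts per stream, returns the write order as
     * "stream:bytes" chunks, round-robin, one frame per stream per turn.
     */
    static List<String> interleave(LinkedHashMap<String, Integer> pending, int maxFrameSize) {
        List<String> order = new ArrayList<>();
        Deque<String> queue = new ArrayDeque<>(pending.keySet());
        Map<String, Integer> left = new HashMap<>(pending);
        while (!queue.isEmpty()) {
            String stream = queue.poll();
            int chunk = Math.min(left.get(stream), maxFrameSize);
            order.add(stream + ":" + chunk);
            int rest = left.get(stream) - chunk;
            if (rest > 0) {
                left.put(stream, rest);
                queue.add(stream); // stream gets another turn later
            }
        }
        return order;
    }
}
```

With a 32K response on one stream and a 16K response on another, the 16K-frame interleaving lets the smaller response complete after the first round rather than waiting for the whole 32K write.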