
Comments (4)

sbordet avatar sbordet commented on May 18, 2024

@gregw I'm unsure about this issue.

Pat suggests performing 16 4K writes rather than one 64K write.
However, that is 16x more write() system calls.

Right now for large writes we generate all the frames that fit into the flow control window, and then we write them in a single write(), so it is indeed 1 64K write.

Perhaps we need a parameter that stops the generation: rather than generating 4 16K frames as we do now (16K being the default max frame size) and then writing them all in a single 64K write, we would stop generating frames when their total size reaches the value of this new parameter.
If the new parameter is valued at 4K, we generate just one 4K frame and write it, and so on until the whole content is written or the flow control window is exhausted.

Note that browsers do enlarge the flow control window to speed up downloads, so the flow control window may be way larger than 64K.

Should this parameter be a function of the flow control window? Rather than an absolute value like 4K, should it be a fraction like 1/16 of the flow control window?
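The generation cap described above could be sketched roughly like this (a minimal illustration; `generate_write`, `write_chunk_limit` and the other names are hypothetical, not Jetty API):

```python
# Illustrative sketch of capping frame generation per write().
# MAX_FRAME_SIZE is the HTTP/2 default SETTINGS_MAX_FRAME_SIZE.
MAX_FRAME_SIZE = 16 * 1024

def generate_write(data_len, flow_window, write_chunk_limit):
    """Return the sizes of the DATA frames queued for a single write()."""
    budget = min(data_len, flow_window, write_chunk_limit)
    frames = []
    while budget > 0:
        frame = min(budget, MAX_FRAME_SIZE)
        frames.append(frame)
        budget -= frame
    return frames

# Current behavior: one 64K write made of 4 x 16K frames.
print(generate_write(64 * 1024, 64 * 1024, 64 * 1024))  # [16384, 16384, 16384, 16384]
# With a hypothetical 4K cap: a single 4K frame per write.
print(generate_write(64 * 1024, 64 * 1024, 4 * 1024))   # [4096]
```

A fractional parameter would simply compute the cap first, e.g. `write_chunk_limit = flow_window // 16`.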

from jetty.project.

gregw avatar gregw commented on May 18, 2024

I'd say that we have nowhere near enough data on this to decide. Currently the application can decide to push or not... if it decides to push, then we have to trust that it will make a good decision and push resources that are most likely wanted. If they are wanted, then we want to transfer them fast.

If they are not wanted, then best to just not push them rather than push them slowly.

I'd say do nothing on this until we have data indicating we are wasting resources pushing streams that are closed early (perhaps the push filter should collect that info and learn from it?).


sbordet avatar sbordet commented on May 18, 2024

@gregw whether to write larger chunks or not applies not only to pushes, but also to large downloads.

Pat's worry is that when the implementation writes, it has to finish that write no matter how long it takes.
If the write takes a long time to finish, then the system may react slowly to other writes.

However, the only case where the write takes a long time to finish is when the connection is TCP congested.
But in HTTP/2 I would say that TCP congestion is a rare case that can only be triggered by flow control windows larger than the bandwidth-delay product for that connection.
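A back-of-the-envelope check of that claim (the link speed and round-trip time below are made-up example numbers):

```python
# Bandwidth-delay product: the amount of data the network path can hold
# "in flight". A write can block on TCP congestion only if the flow
# control window lets us queue more than this.
bandwidth_bps = 100 * 1_000_000  # assumed 100 Mbit/s link
rtt_s = 0.050                    # assumed 50 ms round-trip time
bdp_bytes = bandwidth_bps / 8 * rtt_s
print(int(bdp_bytes))  # 625000 bytes, about 610 KiB
```

On such a path the 64K default window is an order of magnitude below the BDP, so a 64K write would normally complete without blocking.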

And if the connection is TCP congested, other writes may not be written anyway.

In summary, I think writing smaller chunks is worse in that it increases the overhead (a few more bytes written, more system calls), and I don't see the benefit except in rare cases.


sbordet avatar sbordet commented on May 18, 2024

In Jetty 9.4.x stream interleaving has been improved (#360) so that now the unit of interleaving is the frame size. This takes care of making the prioritization of the frames fair with respect to writes.
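Frame-sized round-robin interleaving can be sketched like this (an illustration of the idea, not the Jetty implementation):

```python
from collections import deque

def interleave(streams):
    """streams: {stream_id: [frame, ...]}. Emit one frame per stream per
    turn, so no single large download monopolizes the connection."""
    queue = deque(streams.items())
    order = []
    while queue:
        stream_id, frames = queue.popleft()
        order.append((stream_id, frames.pop(0)))
        if frames:  # stream still has frames: go to the back of the queue
            queue.append((stream_id, frames))
    return order

print(interleave({1: ["a", "b"], 3: ["c"]}))  # [(1, 'a'), (3, 'c'), (1, 'b')]
```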

Closing the issue since it's basically undecided, and we will keep improving HTTP/2 anyway in the future.

