
readahead's Introduction

readahead

Asynchronous read-ahead for Go readers

This package allows you to add read-ahead to any reader. A separate goroutine performs reads from your upstream reader, so reads from this reader are served from already-filled buffers without waiting on the upstream source.

This is helpful for splitting an input stream into concurrent processing, and also helps smooth out bursts of input or output.

This should be fully transparent, except that once an error has been returned from the Reader, it will not recover. A panic will be caught and returned as an error.

The readahead object also fulfills the io.WriterTo interface, which is likely to speed up io.Copy and other code that uses the interface.

See an introduction: An Async Read-ahead Package for Go


usage

To get the package use go get -u github.com/klauspost/readahead.

Here is a simple example that copies a file. Error handling has been omitted for brevity.

package main

import (
	"io"
	"os"

	"github.com/klauspost/readahead"
)

func main() {
	input, _ := os.Open("input.txt")
	output, _ := os.Create("output.txt")
	defer input.Close()
	defer output.Close()

	// Create a read-ahead Reader with default settings
	ra := readahead.NewReader(input)
	defer ra.Close()

	// Copy the content to our output
	_, _ = io.Copy(output, ra)
}

settings

You can fine-tune the read-ahead for your specific use case by adjusting the number of buffers and the size of each buffer.

By default, each buffer is 1 MB and there are 4 buffers. Do not make your buffers too small, since there is a small overhead for passing buffers between goroutines. Other than that, you are free to experiment with buffer sizes.

contributions

On this project, contributions in terms of new features are limited to:

  • Features that are widely usable and
  • Features that have extensive tests

This package is meant to be simple and stable, hence these strict requirements.

license

This package is released under the MIT license. See the supplied LICENSE file for more info.

readahead's People

Contributors

igungor · klauspost · mjgarton · xmister


readahead's Issues

Allow deferring a recover

Hello,

In case the underlying io.Reader panics, the host program will enter an unrecoverable state.

Would you consider adding an option to allow recovery of a panic?

Question: Partial downloads to an xml decoder?

I wonder, would this be useful for a case like:

Input: Partial downloads of say 16K chunks every iteration
ReadAhead: Buffers, say 32K chunks (or any configurable size)
xml.Decoder: Consumes the buffers to populate a struct.

Thanks.

Reads waiting until buffer is full

Hi!

I'm currently seeing that reads block until a buffer is completely full. This doesn't fit my use case (mocking a TLS handshake over connections from net.Pipe): most reads will never fill an entire buffer, and because the handshake protocol interleaves reads and writes, a read that fails to return immediately deadlocks the connection. This behavior seems expected as a result of #16; however, it directly contradicts the article introducing the library:

Finally, there is the case where the input reader does not deliver as many bytes as we request. That is fairly common, and can indicate a lot of things. But to keep input->output latency at a minimum we forward this to output at once, even though you might have requested bigger buffers.

I have resorted to one-byte buffers to make this work. I know this is less than ideal and that performance suffers as a result, but it does now fulfill the package's goal. Is this behavior intended, and if so, is there any other workaround you can see without knowing a minimum read size?
