
async-compression's Introduction

async-compression


This crate provides adaptors between compression crates and Rust's modern asynchronous IO types.

Development

When developing you will need to enable the appropriate features for the different test cases to run. The simplest approach is cargo test --all-features, but you can enable different subsets of features as appropriate for the code you are testing to avoid compiling all dependencies, e.g. cargo test --features tokio,gzip.

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you shall be dual licensed as above, without any additional terms or conditions.

async-compression's People

Contributors

ahornby, anatawa12, bors[bot], daxpedda, decathorpe, dependabot-preview[bot], dependabot[bot], eeeeeta, fairingrey, g2p, garypen, github-actions[bot], gnosek, hdhoang, indygreg, joshtriplett, lseelenbinder, mati865, maxburke, miam-miam, nemo157, nl5887, nobodyxu, nyurik, paolobarbolini, robjtede, sfackler, svenrademakers, taiki-e, wesleywiser



async-compression's Issues

Panic (slice indexes) when encoding with brotli

We just started getting this last night:

panic: slice index starts at 15827 but ends at 8192

Thread 0 Crashed:
0   core                            0x558771457352      core::slice::slice_index_order_fail (mod.rs:2758)
1   core                            0x558770cec47b      [inlined] core::ops::range::Range<T>::index (mod.rs:2917)
2   core                            0x558770cec47b      [inlined] core::ops::range::RangeFrom<T>::index (mod.rs:2996)
3   core                            0x558770cec47b      [inlined] core::slice::<T>::index (mod.rs:2732)
4   brotli                          0x558770cec47b      brotli::enc::compress_fragment::BrotliCompressFragmentFastImpl (compress_fragment.rs:979)
5   brotli                          0x558770ceafe1      brotli::enc::compress_fragment::BrotliCompressFragmentFast
6   brotli                          0x558770d36dee      brotli::enc::encode::BrotliEncoderCompressStreamFast (encode.rs:2762)
7   async_compression               0x558770c7ec25      [inlined] async_compression::codec::brotli::encoder::BrotliEncoder::encode (encoder.rs:39)
8   async_compression               0x558770c7ec25      async_compression::codec::brotli::encoder::BrotliEncoder::encode (encoder.rs:68)
9   async_compression               0x558770cefa53      [inlined] async_compression::stream::generic::encoder::Encoder<T>::poll_next::{{closure}} (encoder.rs:93)
10  async_compression               0x558770cefa53      async_compression::stream::generic::encoder::Encoder<T>::poll_next (encoder.rs:73)
11  async_compression               0x558770cecca6      [inlined] async_compression::stream::BrotliEncoder<T>::poll_next (encoder.rs:64)
12  futures_core                    0x558770cecca6      [inlined] futures_core::stream::TryStream::try_poll_next (stream.rs:193)
13  futures_util                    0x558770cecca6      [inlined] futures_util::stream::try_stream::into_stream::IntoStream<T>::poll_next (into_stream.rs:40)
14  futures_util                    0x558770cecca6      [inlined] futures_util::stream::stream::map::Map<T>::poll_next (map.rs:61)
15  futures_util                    0x558770cecca6      [inlined] futures_util::stream::try_stream::MapOk<T>::poll_next (lib.rs:118)
16  futures_core                    0x558770cecca6      [inlined] futures_core::stream::TryStream::try_poll_next (stream.rs:193)
17  futures_util                    0x558770cecca6      [inlined] futures_util::stream::try_stream::into_stream::IntoStream<T>::poll_next (into_stream.rs:40)
18  futures_util                    0x558770cecca6      futures_util::stream::stream::map::Map<T>::poll_next (map.rs:61)
19  futures_util                    0x558770db5419      futures_util::stream::try_stream::MapErr<T>::poll_next (lib.rs:118)
20  hyper                           0x5587712b128a      hyper::body::body::Body::poll_inner (body.rs:291)
21  hyper                           0x5587712b15a7      [inlined] hyper::body::body::Body::poll_eof (body.rs:253)
22  hyper                           0x5587712b15a7      hyper::body::body::Body::poll_data (body.rs:323)
23  fly_proxy                       0x558770db3f54      fly_proxy::connect::body::LoadGuardedBody::poll_data (body.rs:168)
24  hyper                           0x5587709357f5      hyper::proto::h2::PipeToSendStream<T>::poll (mod.rs:162)
25  hyper                           0x558770adba20      [inlined] hyper::proto::h2::server::H2Stream<T>::poll2 (server.rs:419)
26  hyper                           0x558770adba20      hyper::proto::h2::server::H2Stream<T>::poll (server.rs:437)
27  tokio                           0x558770b3f589      [inlined] tokio::util::trace::Instrumented<T>::poll (trace.rs:26)
28  tokio                           0x558770b3f589      [inlined] tokio::runtime::task::core::Core<T>::poll::{{closure}} (core.rs:173)
29  tokio                           0x558770b3f589      tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut (unsafe_cell.rs:14)
30  tokio                           0x558770b123f3      [inlined] tokio::runtime::task::core::Core<T>::poll (core.rs:158)
31  tokio                           0x558770b123f3      [inlined] tokio::runtime::task::harness::Harness<T>::poll::{{closure}} (harness.rs:107)
32  core                            0x558770b123f3      [inlined] core::ops::function::FnOnce::call_once (function.rs:232)
33  std                             0x558770b123f3      std::panic::AssertUnwindSafe<T>::call_once (panic.rs:318)
34  std                             0x5587708879df      [inlined] std::panicking::try::do_call (panicking.rs:297)
35  std                             0x5587708879df      [inlined] std::panicking::try (panicking.rs:274)
36  std                             0x5587708879df      [inlined] std::panic::catch_unwind (panic.rs:394)
37  tokio                           0x5587708879df      tokio::runtime::task::harness::Harness<T>::poll (harness.rs:89)
38  tokio                           0x558771310bf0      [inlined] tokio::runtime::task::raw::RawTask::poll (raw.rs:66)
39  tokio                           0x558771310bf0      [inlined] tokio::runtime::task::Notified<T>::run (mod.rs:169)
40  tokio                           0x558771310bf0      [inlined] tokio::runtime::thread_pool::worker::Context::run_task::{{closure}} (worker.rs:374)
41  tokio                           0x558771310bf0      [inlined] tokio::coop::with_budget::{{closure}} (coop.rs:127)
42  std                             0x558771310bf0      [inlined] std::thread::local::LocalKey<T>::try_with (local.rs:263)
43  std                             0x558771310bf0      std::thread::local::LocalKey<T>::with (local.rs:239)
44  tokio                           0x5587713315f2      [inlined] tokio::coop::with_budget (coop.rs:120)
45  tokio                           0x5587713315f2      [inlined] tokio::coop::budget (coop.rs:96)
46  tokio                           0x5587713315f2      tokio::runtime::thread_pool::worker::Context::run_task (worker.rs:352)
47  tokio                           0x558771330fbf      tokio::runtime::thread_pool::worker::Context::run (worker.rs:331)
48  tokio                           0x55877131f733      [inlined] tokio::runtime::thread_pool::worker::run::{{closure}} (worker.rs:309)
49  tokio                           0x55877131f733      tokio::macros::scoped_tls::ScopedKey<T>::set (scoped_tls.rs:63)
50  tokio                           0x5587713307f6      tokio::runtime::thread_pool::worker::run (worker.rs:306)
51  tokio                           0x5587713248d0      [inlined] tokio::runtime::thread_pool::worker::Launch::launch::{{closure}} (worker.rs:285)
52  tokio                           0x5587713248d0      [inlined] tokio::runtime::blocking::task::BlockingTask<T>::poll (task.rs:41)
53  tokio                           0x5587713248d0      [inlined] tokio::runtime::task::core::Core<T>::poll::{{closure}} (core.rs:173)
54  tokio                           0x5587713248d0      tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut (unsafe_cell.rs:14)
55  tokio                           0x5587713343f0      [inlined] tokio::runtime::task::core::Core<T>::poll (core.rs:158)
56  tokio                           0x5587713343f0      [inlined] tokio::runtime::task::harness::Harness<T>::poll::{{closure}} (harness.rs:107)
57  core                            0x5587713343f0      [inlined] core::ops::function::FnOnce::call_once (function.rs:232)
58  std                             0x5587713343f0      std::panic::AssertUnwindSafe<T>::call_once (panic.rs:318)
59  std                             0x558771323073      [inlined] std::panicking::try::do_call (panicking.rs:297)
60  std                             0x558771323073      [inlined] std::panicking::try (panicking.rs:274)
61  std                             0x558771323073      [inlined] std::panic::catch_unwind (panic.rs:394)
62  tokio                           0x558771323073      tokio::runtime::task::harness::Harness<T>::poll (harness.rs:89)
63  tokio                           0x558771313bc8      [inlined] tokio::runtime::task::raw::RawTask::poll (raw.rs:66)
64  tokio                           0x558771313bc8      [inlined] tokio::runtime::task::Notified<T>::run (mod.rs:169)
65  tokio                           0x558771313bc8      tokio::runtime::blocking::pool::Inner::run (pool.rs:230)
66  tokio                           0x5587713199fd      [inlined] tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}}::{{closure}} (pool.rs:210)
67  tokio                           0x5587713199fd      tokio::runtime::context::enter (context.rs:72)
68  tokio                           0x55877131c4a6      [inlined] tokio::runtime::handle::Handle::enter (handle.rs:76)
69  tokio                           0x55877131c4a6      [inlined] tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}} (pool.rs:209)
70  std                             0x55877131c4a6      std::sys_common::backtrace::__rust_begin_short_backtrace (backtrace.rs:130)
71  std                             0x5587713266bb      [inlined] std::thread::Builder::spawn_unchecked::{{closure}}::{{closure}} (mod.rs:475)
72  std                             0x5587713266bb      [inlined] std::panic::AssertUnwindSafe<T>::call_once (panic.rs:318)
73  std                             0x5587713266bb      [inlined] std::panicking::try::do_call (panicking.rs:297)
74  std                             0x5587713266bb      [inlined] std::panicking::try (panicking.rs:274)
75  std                             0x5587713266bb      [inlined] std::panic::catch_unwind (panic.rs:394)
76  std                             0x5587713266bb      [inlined] std::thread::Builder::spawn_unchecked::{{closure}} (mod.rs:474)
77  core                            0x5587713266bb      core::ops::function::FnOnce::call_once{{vtable.shim}} (function.rs:232)
78  alloc                           0x558771433d4a      [inlined] alloc::boxed::Box<T>::call_once (boxed.rs:1076)
79  alloc                           0x558771433d4a      [inlined] alloc::boxed::Box<T>::call_once (boxed.rs:1076)
80  std                             0x558771433d4a      std::sys::unix::thread::Thread::new::thread_start (thread.rs:87)
81  <unknown>                       0x7f070b8dc6db      start_thread
82  <unknown>                       0x7f070b3eda3f      __clone
83  <unknown>                       0x0                 <unknown>

I don't have many details about the actual body that's causing these issues. We're running a reverse proxy with varying responses and workloads.

zstd encoder default level says 0, while it's usually 3

Hello, the Level::Default variant for zstd returns 0i32, however the zstd command-line default as well as the zstd crate's default constant is 3i32.

In commit 62557bf we lost a comment explaining that:

     algos!(@algo zstd ["zstd"] ZstdDecoder ZstdEncoder<$inner> {
      /// The `level` argument here can range from 1-21. A level of `0` will use zstd's default, which is `3`

With 0 being outside the range 1-21, can we change it to the libzstd constant instead?

Thanks for your time!
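For reference, the zstd crate already exposes the library default as a public constant that Level::Default could map onto; a tiny check (assuming zstd is available as a direct dependency):

fn main() {
    // The zstd crate re-exports the library default (ZSTD_CLEVEL_DEFAULT), which is 3.
    println!("zstd default level: {}", zstd::DEFAULT_COMPRESSION_LEVEL);
}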

proposal: Export codec and `Encode`/`Decode` trait

We are using our own runtime in databend, which will:

  • Run async tasks on an async runtime (tokio here), for example IO tasks.
  • Run sync tasks on a sync runtime, for example CPU-bound tasks like decompression.

So we want to control the underlying behavior of async-compression:

  • Make IO happen on the async runtime: poll the futures
  • Make decoding happen on the sync runtime

Thus, we need direct access to the Encode/Decode traits in codec so we can build our own XzDecoder.

Does this idea make sense to you? I'm willing to start a PR for it.


This would also address #141.

'Decoder reached invalid state' panic from Reqwest crate

The async version of reqwest uses this crate for asynchronous Gzip decoding. I am receiving random 'Decoder reached invalid state' panics in the decoder's stream. Reading the code, I don't understand how that can happen unless the poll_next method is being invoked simultaneously from different threads...? Is that accurate? Or is there another condition that could cause this problem? Thanks in advance!

thread 'main' panicked at 'Decoder reached invalid state', /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/async-compression-0.2.0/src/stream/generic/decoder.rs:129:35
stack backtrace:
   0: backtrace::backtrace::libunwind::trace
             at /Users/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
   1: backtrace::backtrace::trace_unsynchronized
             at /Users/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
   2: std::sys_common::backtrace::_print_fmt
             at src/libstd/sys_common/backtrace.rs:84
   3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
             at src/libstd/sys_common/backtrace.rs:61
   4: core::fmt::write
             at src/libcore/fmt/mod.rs:1025
   5: std::io::Write::write_fmt
             at src/libstd/io/mod.rs:1426
   6: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:65
   7: std::sys_common::backtrace::print
             at src/libstd/sys_common/backtrace.rs:50
   8: std::panicking::default_hook::{{closure}}
             at src/libstd/panicking.rs:193
   9: std::panicking::default_hook
             at src/libstd/panicking.rs:210
  10: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:471
  11: bytes::abort
  12: <async_compression::stream::generic::decoder::Decoder<S,D> as futures_core::stream::Stream>::poll_next
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/reqwest-0.10.1/<::std::macros::panic macros>:3
  13: <async_compression::stream::GzipDecoder<S> as futures_core::stream::Stream>::poll_next
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/async-compression-0.2.0/src/stream/macros/decoder.rs:68
  14: <reqwest::async_impl::decoder::imp::Decoder as futures_core::stream::Stream>::poll_next
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/reqwest-0.10.1/src/async_impl/decoder.rs:176
  15: futures_util::stream::stream::StreamExt::poll_next_unpin
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.4/src/stream/stream/mod.rs:1184
  16: <futures_util::stream::stream::next::Next<St> as core::future::future::Future>::poll
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.4/src/stream/stream/next.rs:35
  17: std::future::poll_with_tls_context
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/future.rs:99
  18: reqwest::async_impl::response::Response::chunk::{{closure}}
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/reqwest-0.10.1/src/async_impl/response.rs:288
  19: <std::future::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/future.rs:43
  20: std::future::poll_with_tls_context
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/future.rs:99
  21: kube::client::APIClient::request_events::{{closure}}::{{closure}}::{{closure}}
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/kube-0.25.0/src/client/mod.rs:180
  22: <std::future::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/future.rs:43
  23: <futures_util::stream::unfold::Unfold<T,F,Fut> as futures_core::stream::Stream>::poll_next
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.4/src/stream/unfold.rs:111
  24: <futures_util::stream::stream::map::Map<St,F> as futures_core::stream::Stream>::poll_next
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.4/src/stream/stream/map.rs:92
  25: <futures_util::stream::stream::flatten::Flatten<St> as futures_core::stream::Stream>::poll_next
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.4/src/stream/stream/flatten.rs:96
  26: <core::pin::Pin<P> as futures_core::stream::Stream>::poll_next
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-core-0.3.4/src/stream.rs:121
  27: futures_util::stream::stream::StreamExt::poll_next_unpin
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.4/src/stream/stream/mod.rs:1184
  28: <futures_util::stream::stream::next::Next<St> as core::future::future::Future>::poll
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.4/src/stream/stream/next.rs:35
  29: std::future::poll_with_tls_context
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/future.rs:99
  30: kube::api::reflector::Reflector<K>::single_watch::{{closure}}
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/kube-0.25.0/src/api/reflector.rs:214
  31: <std::future::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/future.rs:43
  32: std::future::poll_with_tls_context
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/future.rs:99
  33: kube::api::reflector::Reflector<K>::poll::{{closure}}
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/kube-0.25.0/src/api/reflector.rs:120
  34: <std::future::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/future.rs:43
  35: std::future::poll_with_tls_context
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/future.rs:99
  36: k8s_aws_nlb_proxy_protocol_operator::poll_services::{{closure}}
             at src/main.rs:67
  37: <std::future::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/future.rs:43
  38: tokio::task::core::Core<T>::poll
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.11/src/task/core.rs:128
  39: tokio::task::harness::Harness<T,S>::poll::{{closure}}::{{closure}}
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.11/src/task/harness.rs:120
  40: core::ops::function::FnOnce::call_once
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libcore/ops/function.rs:232
  41: <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/panic.rs:318
  42: std::panicking::try::do_call
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/panicking.rs:292
  43: __rust_maybe_catch_panic
             at src/libpanic_unwind/lib.rs:78
  44: std::panicking::try
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/panicking.rs:270
  45: std::panic::catch_unwind
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/panic.rs:394
  46: tokio::task::harness::Harness<T,S>::poll::{{closure}}
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.11/src/task/harness.rs:101
  47: tokio::loom::std::causal_cell::CausalCell<T>::with_mut
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.11/src/loom/std/causal_cell.rs:41
  48: tokio::task::harness::Harness<T,S>::poll
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.11/src/task/harness.rs:100
  49: tokio::task::raw::poll
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.11/src/task/raw.rs:162
  50: tokio::task::raw::RawTask::poll
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.11/src/task/raw.rs:113
  51: time::format::parse_items::try_parse_fmt_string
  52: core::ptr::swap_nonoverlapping_bytes
  53: tokio::runtime::basic_scheduler::BasicScheduler<P>::block_on
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.11/src/runtime/basic_scheduler.rs:142
  54: tokio::runtime::Runtime::block_on::{{closure}}
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.11/src/runtime/mod.rs:411
  55: tokio::runtime::context::enter
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.11/src/runtime/context.rs:72
  56: tokio::runtime::handle::Handle::enter
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.11/src/runtime/handle.rs:34
  57: tokio::runtime::Runtime::block_on
             at /Users/jarred.nicholls/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.11/src/runtime/mod.rs:408
  58: k8s_aws_nlb_proxy_protocol_operator::main
             at src/main.rs:23
  59: std::rt::lang_start::{{closure}}
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/rt.rs:67
  60: std::rt::lang_start_internal::{{closure}}
             at src/libstd/rt.rs:52
  61: std::panicking::try::do_call
             at src/libstd/panicking.rs:292
  62: __rust_maybe_catch_panic
             at src/libpanic_unwind/lib.rs:78
  63: std::panicking::try
             at src/libstd/panicking.rs:270
  64: std::panic::catch_unwind
             at src/libstd/panic.rs:394
  65: std::rt::lang_start_internal
             at src/libstd/rt.rs:51
  66: std::rt::lang_start
             at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/rt.rs:67
  67: k8s_aws_nlb_proxy_protocol_operator::main
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

Make nightly build test against updated dependencies

#122 should have been picked up automatically when zstd 0.8.2 was released; there should be another job that runs nightly (but using the stable compiler), updates all dependencies, then runs the tests, and opens an issue if there is a failure.

tokio Zlib Encoder doesn't work and maybe others as well

Using async_compression::tokio_02::write::ZlibEncoder doesn't produce valid zlib data.
I verified this using Python and the flate2 crate.
Here is some sample code:

use async_compression::tokio_02::write::ZlibEncoder;
use flate2::write::ZlibEncoder as Flate2ZlibEncoder;
use flate2::Compression;
use std::io::Write;
use tokio::io::AsyncWrite;
use tokio::io::AsyncWriteExt;

#[tokio::main]
async fn main() {
   let x = b"example";
   let mut e = ZlibEncoder::new(Vec::new());
   e.write_all(x).await.unwrap();
   e.flush().await.unwrap();
   let temp = e.into_inner();
   println!("{:?}", temp);
   // [120, 156, 74, 173, 72, 204, 45, 200, 73, 5, 0, 0, 0, 255, 255]
   // import zlib; print(zlib.decompress(bytes([120, 156, 74, 173, 72, 204, 45, 200, 73, 5, 0, 0, 0, 255, 255])))
   // fail with:
   // Traceback (most recent call last):
   //   File "test.py", line 25, in <module>
   //     print(zlib.decompress(bytes([120, 156, 74, 173, 72, 204, 45, 200, 73, 5, 0, 0, 0, 255, 255])))
   // zlib.error: Error -5 while decompressing data: incomplete or truncated stream

   // Working code with flate2
   let mut encoder = Flate2ZlibEncoder::new(Vec::new(), Compression::Default);
   encoder.write_all(x).unwrap();
   encoder.flush().unwrap();
   let temp = encoder.finish().unwrap();
   println!("{:?}", temp);
   // [120, 1, 0, 7, 0, 248, 255, 101, 120, 97, 109, 112, 108, 101, 0, 0, 0, 255, 255, 3, 0, 11, 192, 2, 237]
   // import zlib; print(zlib.decompress(bytes([120, 1, 0, 7, 0, 248, 255, 101, 120, 97, 109, 112, 108, 101, 0, 0, 0, 255, 255, 3, 0, 11, 192, 2, 237])))
   // prints b'example'
}

my deps:

[dependencies]
tokio = { version = "0.2", features = ["full"] }
flate2 = { version = "0.2", features = ["tokio"] }
async-compression = { version = "0.3.5", features = ["tokio-02","deflate","stream","zlib"] }
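For what it's worth, here is a variant that finalizes the stream before taking the inner writer; my understanding (which may be wrong) is that flush() only performs a sync flush, while shutdown() is what writes the final deflate block and trailer:

use async_compression::tokio_02::write::ZlibEncoder;
use tokio::io::AsyncWriteExt;

#[tokio::main]
async fn main() {
    let mut e = ZlibEncoder::new(Vec::new());
    e.write_all(b"example").await.unwrap();
    // shutdown() finishes the zlib stream (final block + Adler-32 trailer);
    // flush() alone leaves the stream open, which matches the truncated-stream
    // error Python reports above.
    e.shutdown().await.unwrap();
    println!("{:?}", e.into_inner());
}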

Support vectored input

If a user attempts a vectored read/write it should be possible to more efficiently process this than the default implementation provided in futures-io.

Brotli implementation choice

Currently the brotli2 crate is used for brotli compression, which has a -sys dependency. These crates can be inconvenient to deal with because of links=, and brotli-sys also bundles a relatively outdated version of google/brotli. Would it be reasonable to support the brotli crate instead of or in addition to brotli2?
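For comparison, the pure-Rust brotli crate already exposes a std::io::Write-based compressor, so the synchronous side of such an adaptor could look roughly like this (the buffer size, quality, and window parameters are just illustrative values):

use std::io::Write;

fn main() -> std::io::Result<()> {
    let mut output = Vec::new();
    {
        // 4096-byte internal buffer, quality 5 (0-11), lg_window_size 22.
        let mut writer = brotli::CompressorWriter::new(&mut output, 4096, 5, 22);
        writer.write_all(b"hello brotli")?;
    } // dropping the writer flushes and finishes the brotli stream
    println!("{} compressed bytes", output.len());
    Ok(())
}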

bytes 1.0 support

bytes 1.0 came out back in December. It would be great to have support for it, such that this crate can be used with the latest versions of a number of crates.

No output when using GzipEncoder

This is likely user error, but I'm hoping for some assistance here.

The following compresses a local directory and outputs to a .tar archive successfully:

use async_std::{fs::File, path::Path};
use async_tar::Builder;
use futures::io::AsyncWrite;

async fn write_archive<W>(writer: W, src_directory: impl AsRef<Path>) -> std::io::Result<()>
where
    W: AsyncWrite + Unpin + Send + Sync,
{
    let mut archive_builder = Builder::new(writer);
    archive_builder.append_dir_all("", src_directory).await?;
    archive_builder.finish().await
}

#[async_std::main]
async fn main() {
    let file = File::create("foo.tar.gz").await.unwrap();
    write_archive(file, "sample-directory").await.unwrap();
}

I'm trying to add gzip compression and create a .tar.gz archive.

This is what I have:

use async_compression::futures::write::GzipEncoder;
use async_std::{fs::File, path::Path};
use async_tar::Builder;
use futures::io::AsyncWrite;

async fn write_archive<W>(writer: W, src_directory: impl AsRef<Path>) -> std::io::Result<()>
where
    W: AsyncWrite + Unpin + Send + Sync,
{
    let mut archive_builder = Builder::new(writer);
    archive_builder.append_dir_all("", src_directory).await?;
    archive_builder.finish().await
}

#[async_std::main]
async fn main() {
    let file = File::create("foo.tar.gz").await.unwrap();
    let encoder = GzipEncoder::new(file);
    write_archive(encoder, "sample-directory").await.unwrap();
}

This completes without error, but results in an empty archive.

What am I doing wrong?

(async-compression = {version = "0.3.6", features = ["futures-io", "gzip"] })
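In case it helps, a sketch of one likely fix, on the assumption that the empty archive comes from the encoder never being closed: dropping an async writer does not flush it, so the gzip stream is never finished. Keeping the encoder in main (write_archive unchanged from above) and closing it explicitly:

use async_compression::futures::write::GzipEncoder;
use async_std::fs::File;
use futures::io::AsyncWriteExt; // for close()

#[async_std::main]
async fn main() {
    let file = File::create("foo.tar.gz").await.unwrap();
    let mut encoder = GzipEncoder::new(file);
    // &mut GzipEncoder<File> still satisfies the AsyncWrite bound on write_archive.
    write_archive(&mut encoder, "sample-directory").await.unwrap();
    // close() finishes the gzip stream and writes the trailer; without it the
    // output is an empty/truncated archive.
    encoder.close().await.unwrap();
}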

GzipDecoder read_lines terminates early, but is fixed by deleting an empty line?!

I am having a weird interaction between async_compression::tokio::bufread::GzipDecoder and tokio::io::AsyncBufReadExt. This example shows that when I use AsyncBufReadExt to read the number of lines in an unzipped file I get 413484 lines, but if I use a BufReader wrapped around a GzipDecoder on a gzipped version of the same file I only get 65654 lines. I can fix this error by removing an empty line somewhere before the divergence point, at which point both files will report 413483 lines. This makes me think there is some edge case with the various buffers that causes the GzipDecoder read_lines to terminate early, and any small change (removing that one empty line) manages to get things working again. I can't share the files but would be happy to diagnose further if anyone has suggestions.

EDIT: This error does not occur if I use the synchronous flate2 decompression by the way, so it is something specific to the tokio/async_compression interactions.

Hide implementation details

Currently the flate2 and brotli2 (soon to be brotli, see #70) libraries are exposed publicly through the configuration arguments of the encoder constructors that use them. This should be hidden so that changing which library is used is not a breaking change.

Main idea would be to replace the re-exports with locally defined configuration parameters, then map those to the underlying implementation internally.
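A rough sketch of what that could look like: a crate-local level type with an internal mapping onto flate2, so swapping the backend only touches the mapping (names here are illustrative, not the crate's actual API):

// Crate-local compression level, independent of any backend's types.
pub enum Level {
    Fastest,
    Best,
    Default,
    Precise(u32),
}

impl Level {
    // Internal mapping onto the flate2 backend; swapping backends only changes this.
    fn into_flate2(self) -> flate2::Compression {
        match self {
            Level::Fastest => flate2::Compression::fast(),
            Level::Best => flate2::Compression::best(),
            Level::Default => flate2::Compression::default(),
            Level::Precise(quality) => flate2::Compression::new(quality),
        }
    }
}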

Add `http-body` support

I'd like to introduce http-body support to async-compression under a feature flag. This feature would provide a Body type that wraps another body type and will compress each chunk of data pulled from the body. This is to allow things like a tower service that can compress an entire http request via Content-Type.

cc @Nemo157

`write::XzDecoder` fails when passed data piecewise

use tokio::io::{AsyncReadExt, AsyncWriteExt};

#[tokio::main(flavor = "current_thread")]
async fn main() -> Result<(), tokio::io::Error> {
	let mut result = Vec::new();
	tokio::fs::File::open(r"bela_image_v0.3.8b.img.xz")
		.await?
		.read_to_end(&mut result)
		.await?;
	println!("{}", result.len());
	let file = tokio::fs::File::create("bela_image_v0.3.8b.img").await?;
	let mut decoder = async_compression::tokio::write::XzDecoder::new(file);
	//for chunk in (&result).chunks(123478) {
	//	decoder.write(chunk).await?;
	//}
	decoder.write_all(&result).await?;
	decoder.shutdown().await?;
	Ok(())
}

While the above write_all succeeds, the commented-out "chunked" version fails with Error: Custom { kind: Other, error: Data }. I believe it should work.
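For completeness, here is the chunked variant I would have expected to work, using write_all so that a partial write() cannot silently drop bytes (that may or may not be related to the error above):

use tokio::io::{AsyncReadExt, AsyncWriteExt};

#[tokio::main(flavor = "current_thread")]
async fn main() -> Result<(), tokio::io::Error> {
    let mut result = Vec::new();
    tokio::fs::File::open(r"bela_image_v0.3.8b.img.xz")
        .await?
        .read_to_end(&mut result)
        .await?;
    let file = tokio::fs::File::create("bela_image_v0.3.8b.img").await?;
    let mut decoder = async_compression::tokio::write::XzDecoder::new(file);
    for chunk in result.chunks(123478) {
        // write_all retries until the whole chunk has been accepted.
        decoder.write_all(chunk).await?;
    }
    decoder.shutdown().await?;
    Ok(())
}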

Investigate internal abstraction over codecs

Since all codecs are basically bytes -> bytes transforms, with optional header and trailer, it should be possible to have an internal abstraction that the different stream/bufread/write implementations use. That would greatly simplify implementing #21 as the only thing to do would be to wrap the abstract implementation in differently named concrete wrappers (alternatively the internal abstraction could be exposed and users could see the generic implementations, but I think it would be better to get a release out using it internally first under the same sort of layout before thinking about exposing it).
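A very rough sketch of the kind of trait this could be built around (names and signatures are purely illustrative):

use std::io;

// One synchronous bytes -> bytes transform; the async adaptors would drive it.
trait Codec {
    // Optional header bytes emitted before any data (e.g. the gzip header).
    fn header(&mut self) -> Vec<u8> {
        Vec::new()
    }

    // Consume some input, produce some output; returns (bytes_read, bytes_written).
    fn transform(&mut self, input: &[u8], output: &mut [u8]) -> io::Result<(usize, usize)>;

    // Optional trailer bytes emitted after the data (e.g. the gzip CRC + length).
    fn trailer(&mut self) -> Vec<u8> {
        Vec::new()
    }
}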

Multiple zstd frames within a single input stream not supported

The zstd spec allows for multiple frames within a single file: https://tools.ietf.org/id/draft-kucherawy-dispatch-zstd-00.html#comp_frames . This means that it is perfectly allowable to concatenate zstd files together e.g.:

echo "Hello World 1" >> file1
echo "Hello World 2" >> file2
zstd file1
zstd file2
cat file1.zst file2.zst > file3.zst

Based on my reading of the code, async-compression's zstd decoder uses the results of the underlying zstd::stream::raw::Operation::run_on_buffers (which just proxies to zstd::stream::raw::Operation::run). The return value of 0 from run is a "hint" that could indicate the end of a frame, but not necessarily the end of the input (i.e. EOF). In reading zstd files with multiple frames, async-compression (using async_compression::tokio::bufread::ZstdDecoder) only reads the first zstd frame and closes the output stream, as it seems to use this 0 return value from run as indicating the end of the input.

I'm new to Rust, but if there is a workaround, I'd love to hear it!
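A minimal Rust reproduction of what I'm seeing, assuming file3.zst was created by the concatenation above (features tokio and zstd enabled):

use async_compression::tokio::bufread::ZstdDecoder;
use tokio::io::{AsyncReadExt, BufReader};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let file = tokio::fs::File::open("file3.zst").await?;
    let mut decoder = ZstdDecoder::new(BufReader::new(file));
    let mut output = String::new();
    decoder.read_to_string(&mut output).await?;
    // Expected both lines, but only the first frame ("Hello World 1") is decoded.
    println!("{:?}", output);
    Ok(())
}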

References:

Gzip file created with async compression not decodable

Hello,

I am trying to create a stream-compressed gzip file on the fly while receiving chunks of data.
My issue is that the file cannot be decoded by gzip/gunzip :-(

I stripped it down to a test encoding that just encodes "test" to see what the difference is compared to a Node.js-based test encoding that works. But I do not understand why there is a difference, and whether this is really a bug inside the library or an issue with how I use the async file IO.

Here is my sample code:

use tokio::fs::File;
use tokio::io::AsyncWriteExt;
use async_compression::tokio_02::write::GzipEncoder;

let file = File::create("test.txt.gz").await?;
let mut writer = GzipEncoder::new(file);
writer.write("test".as_bytes()).await?;

Result:

xxd test.txt.gz
00000000: 1f8b 0800 0000 0000 00ff 2a49 2d2e 0100  ..........*I-...
00000010: 0000 ffff                                ....

This file is not decodable! Gunzip says it is corrupt!

When I create the file via this small Node.js script:

var zlib = require('zlib');
var fs = require('fs');
var gz = zlib.createGzip();
gz.pipe(fs.createWriteStream('test_node.txt.gz'));
gz.write('test');
gz.end();

the resulting file has the following content:

00000000: 1f8b 0800 0000 0000 0013 2b49 2d2e 0100  ..........+I-...
00000010: 0c7e 7fd8 0400 0000                      .~......

The issue is not the missing checksum and file length:
I added the checksum and length via the gzip-header crate. The file is still not decodable via gunzip.

The interesting bit seems to be the encoded stream; it differs between async-compression and the working Node.js:
RUST: 2a49 2d2e 0100 0000 ffff
Node: 2b49 2d2e 0100

Why are there 4 trailing bytes 0000 ffff?
And why is the first byte different?

Moving the repository to a new home

The rustasync org has officially been sunset, and all projects are moving to a new home. As the maintainers of the project you have a choice where to move it to.

I'd like to offer the option to join the http-rs org, joining many of the other network-related projects that used to be part of the rustasync org. Let me know, and I'll send you both an invite. But in the end, the choice is up to you.

Thanks for having been part of the Rust Async Ecosystem WG; it's been a pleasure!

cc/ @Nemo157 @fairingrey

Change the crate name?

The current name is literally the first thing that came to my mind. There may be something better, there may not; ideas welcome.

Move bufread/write under futures module

Until the whole split ecosystem around futures-io and tokio-io traits is sorted out it'd make sense to support both. The current bufread and write modules can be moved under a futures module (and a #[doc(hidden)] #[deprecated] re-export added) to allow adding tokio::{bufread, write} modules once a non-alpha tokio-io 0.2 release is available (and maybe some preview modules before that).

Brotli compression performance

Hello, I started to use the tower-http compression layer, and I noticed a significant slowdown in my server.
After some measurements, I found out that the Brotli encoder is able to compress my file (300 KB) in 10 ms, but after compression, the encoder state is changed to BROTLI_OPERATION_FLUSH, which takes 3.5 s.

Add examples

Hi,
examples would be a valuable asset for newcomers.
I am intending to add an example and open a PR.

Benchmarks!

Should be pretty easy to throw together some benchmarks comparing the different async variants against their synchronous counterparts to see how much overhead the async state machines introduce.
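Even a back-of-the-envelope comparison would show the rough overhead before proper criterion benchmarks exist; a sketch comparing flate2 directly against the futures-io bufread encoder (features futures-io and gzip assumed):

use std::io::Write;
use std::time::Instant;

use async_compression::futures::bufread::GzipEncoder;
use futures::io::AsyncReadExt;

fn main() {
    let data = vec![0u8; 10 * 1024 * 1024];

    // Synchronous baseline with flate2.
    let start = Instant::now();
    let mut enc = flate2::write::GzEncoder::new(Vec::new(), flate2::Compression::default());
    enc.write_all(&data).unwrap();
    let sync_out = enc.finish().unwrap();
    println!("flate2: {:?} ({} bytes)", start.elapsed(), sync_out.len());

    // Async adaptor driven on a single thread.
    let start = Instant::now();
    let async_out = futures::executor::block_on(async {
        let mut enc = GzipEncoder::new(&data[..]);
        let mut out = Vec::new();
        enc.read_to_end(&mut out).await.unwrap();
        out
    });
    println!("async-compression: {:?} ({} bytes)", start.elapsed(), async_out.len());
}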

ZstdDecoder is !Sync

ZstdDecoder is currently !Sync because zstd::stream::raw::Decoder is !Sync. This is limiting because some crates like hyper want + Send + Sync for their bytes streams.

As far as I know, it should be sound to add unsafe impl<S> Sync for ZstdDecoder<S> where S: Sync because we don't access the decoder in any &self method (except maybe Debug which would have to be checked).

If that is sound, it would probably be safer to wrap the decoder in a type that only gives unique access like:

struct Unshared<T>(T);

impl<T> Unshared<T> {
    pub fn new(inner: T) -> Self { Unshared(inner) }

    pub fn get_mut(&mut self) -> &mut T { &mut self.0 }

    pub fn into_inner(self) -> T { self.0 }
}

unsafe impl<T> Sync for Unshared<T> {}

Move header/footer handling to gzip codec

Don't know why I didn't think of this when implementing #34, blame sleep-deprivation during the first big hack night to get it working.

The adaptors don't need to know anything about the header/footer handling if the gzip decoder internally handled buffering up the header, checking it, moving to use the zlib decoder, then buffering up the footer and checking it. And similarly the gzip encoder should output the header, move to the zlib encoder, then output the footer.

Will make the gzip codec slightly more complicated, but will greatly simplify all the adaptor implementations and move the handling to just one location instead of being per-adaptor.
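Roughly, the decoder side would then become a small state machine inside the gzip codec itself; a sketch of the shape (not actual code):

// Hypothetical decoder states once header/footer handling lives in the gzip codec.
enum State {
    Header(Vec<u8>), // buffer and validate the gzip header (magic, flags, extra fields)
    Body,            // delegate to the raw deflate codec, updating CRC and length
    Footer(Vec<u8>), // buffer the 8-byte footer and verify CRC32 + ISIZE
    Done,
}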

Add tokio-io trait support

Add tokio::{bufread, write} modules based on traits from tokio-io 0.2. Until that has a non-alpha release the modules should be extra gated behind some unstable flag.

Rename `bzip` feature back to `bzip2`

I should really read the history of formats before making arbitrary changes like that. Will have to keep the feature until 0.2, but can deprecate it and have it just activate the correct bzip2 feature, and rename it everywhere in the tests.

Update to `brotli2`.

The usage of the brotli crate creates an ABI incompatibility with brotli2, which in turn creates an ABI incompatibility with actix-web unless compression is disabled. Alternatively, one of the two projects could mangle the C names of the statically linked library.

[gzip] CRC computed does not match

Thanks for this library, it's exactly what I was looking for!

Yet, it would seem decompressing gzip has some issue. Consider this code:

use async_compression::bufread::GzipDecoder;
use async_std::fs::File;
use async_std::io::{self, stdout, BufReader};
use async_std::prelude::*;
use async_std::task;
use std::env::args;

const LEN: usize = 16 * 1024; // 16 Kb

fn main() -> io::Result<()> {
    let path = args().nth(1).expect("missing path argument");

    task::block_on(async {
        println!("Attempting to read {}", path);
        let reader = BufReader::with_capacity(4096, File::open(&path).await?);
        let mut stdout = stdout();
        let mut buf = vec![0u8; LEN];
        let mut decompressor = GzipDecoder::new(reader);

        loop {
            // Read a buffer from the file.
            let n = decompressor.read(&mut buf).await?;

            // If this is the end of file, clean up and return.
            if n == 0 {
                stdout.flush().await?;
                return Ok(());
            }

            // Write the buffer into stdout.
            stdout.write_all(&buf[..n]).await?;
        }
    })
}

I get:

$ gzip -k Cargo.lock
$ cargo run -- Cargo.lock.gz 
[...]
Attempting to read Cargo.lock.gz
Error: Custom { kind: InvalidData, error: "CRC computed does not match" }

Corresponding Cargo.toml

[package]
name = "async-test"
version = "0.1.0"
edition = "2018"

[dependencies]
async-std = "0.99.5"
async-compression = "0.1.0-alpha.5"

Replacing gzip with Zstandard in the code, I can decompress without any issue. Any idea what might be wrong?

async_compression::tokio::bufread::ZstdDecoder: read_buf returns 0 after the first read

I am trying to use streaming compression and decompression using tokio. Compression works fine, but during streaming decompression it only reads one buffer's worth of data and returns 0 for all subsequent reads.

  1. How do I enable streaming reads, where it reads the file content, decompresses it, and allows the uncompressed data to be processed?
  2. tokio::io::BufReader supports the read_line() method, but I am not able to use it with async_compression::tokio::bufread::ZstdDecoder.
#[tokio::test]
async fn streaming_zstd_test() {

    let lru_content = include_bytes!("lru.txt");
    println!("Content size of LRU file:{}", lru_content.len());

    let file_path = "/tmp/compression_lru";
    if let Ok(file) = File::create(file_path).await {
        let data = BufWriter::with_capacity(1024, file);
        let mut enc = async_compression::tokio::write::ZstdEncoder::new(data);
        tokio::io::AsyncWriteExt::write_all(&mut enc, lru_content).await.unwrap();
        enc.flush().await.unwrap();
    }
    if let Ok(file) = File::open(file_path).await {
        if let Ok(meta) = file.metadata().await {
            println!("Size of compressed file:{} : {}", file_path, meta.len());
        }
        let  data = BufReader::with_capacity(4096, file);
        let mut dec = async_compression::tokio::bufread::ZstdDecoder::new(data);
        let mut buffer = bytes::BytesMut::with_capacity(4096);
        let mut expcted_bytes = 0;
        while let Ok(num_bytes) = dec.read_buf(&mut buffer).await {
            println!("num_bytes:{} read", num_bytes);
            expcted_bytes += num_bytes;
             buffer.clear();
            if num_bytes == 0 {
                println!("num_bytes read zero. exiting");
                println!("num_bytes:{} read, total expcted_bytes:{}", num_bytes, expcted_bytes);

                break;
            }
        }
        assert_eq!(lru_content.len(), expcted_bytes);
    }
    tokio::fs::remove_file(file_path).await.unwrap();
}

Here is the output:

Content size of LRU file:11356
Size of compressed file:/tmp/compression_lru : 4076
num_bytes:4096 read
num_bytes:0 read
num_bytes read zero. exiting
num_bytes:0 read, total expcted_bytes:4096
thread 'cache_updater::streaming_zstd_test' panicked at 'assertion failed: `(left == right)`
  left: `11356`,
 right: `4096`', kanudo/src/cache_updater.rs:352:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
test cache_updater::streaming_zstd_test ... FAILED

BrotliDecoder type is very large

I was trying to use reqwest, which has a dependency on async-compression, and was getting stack overflows on Windows with some very simple code:

async fn get_bytes_from_url(&self, url: &str) -> Option<Bytes> {
    let response = reqwest::get(url).await.ok()?.error_for_status().ok()?;
    response.bytes().await.ok()
}

After investigating, I discovered that bytes() returns a future which contains an enum that holds the appropriate decoder for the current protocol. That enum is over 2 KB in size because BrotliDecoder is 2592 bytes by itself:


Which is significantly larger than any of the other decoders offered by this crate:

Type           Size (bytes)
BrotliDecoder  2592
GzipDecoder    120
ZlibDecoder    32

Would it be possible to Box the decoder state to shrink the size of BrotliDecoder? I know BrotliDecoder really just wraps the brotli_decompressor crate but it doesn't seem very active so I thought I would start by asking here first. 🙂
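For anyone who wants to reproduce the measurement, something like this prints the sizes (the exact numbers will vary with crate versions; features brotli, gzip, zlib and tokio assumed):

use async_compression::tokio::bufread::{BrotliDecoder, GzipDecoder, ZlibDecoder};
use tokio::io::BufReader;

fn main() {
    // The inner reader type is arbitrary; the decoder state dominates the size.
    println!("BrotliDecoder: {}", std::mem::size_of::<BrotliDecoder<BufReader<&[u8]>>>());
    println!("GzipDecoder:   {}", std::mem::size_of::<GzipDecoder<BufReader<&[u8]>>>());
    println!("ZlibDecoder:   {}", std::mem::size_of::<ZlibDecoder<BufReader<&[u8]>>>());
}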

GzipDecoder chokes on extra headers

It seems that some gzip files will not decode with this library, even though the underlying flate2 library handles them correctly.

The trouble seems to be an extra field, which has tripped up other languages in the past as well:

https://forum.crystal-lang.org/t/error-when-read-extra-field-gzip-compressed-data/1840/9
https://bugs.python.org/issue17681

Cribbing from the first link, here is an easy way to make a simple gzip file that behaves improperly:

echo -n 'H4sIBAAAAAAA/wYAQkMCADIAS0ksTuNKSyxO4UqtSC4pSuQqLilNS+MCAI56o3cXAAAAH4sIBAAAAAAA/wYAQkMCABsAAwAAAAAAAAAAAA==' | base64 -d > test.gz

If you run gunzip on this or use the original flate2 library, it decodes correctly, including checksum verification.

But with this library, the reproducer program prints "unexpected end of file". Other files I can't share also report "deflate decompression error".

Reproducer program:

use std::io::Read;

use anyhow::Context;
// features: gzip tokio
use async_compression::tokio::bufread::GzipDecoder;
use tokio::fs::File;
use tokio::io::{AsyncBufRead, AsyncReadExt, BufReader};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let input_file = File::open("test.gz").await.context("failed to open file")?;
    let reader = BufReader::new(input_file);
    let mut decoder = GzipDecoder::new(reader);

    let mut buf = vec![0; 8192];
    let n = decoder.read(&mut buf).await.context("decompress failed")?;
    buf.truncate(n);

    println!("{}", String::from_utf8_lossy(&buf));

    Ok(())
}

Encoder does not support intermediate flushes before the reader has ended

disclaimer: this is about fixing apollographql/router#1572, for which I'm under tight timing constraints, so right now this issue's goal (and the related PR) is discussing and finding a quick fix that we can use directly in the router, and then I'll help find a proper solution.

I'm encountering an issue with the following setup:

  • I use axum with tower-http's CompressionLayer, which uses async-compression
  • the router sends a HTTP response with multipart
  • one multipart element is sent right away, the next one is sent after 5s
  • if the response is compressed, the router will wait for 5s to send both parts at the same time, while without compression, the first part comes immediately, and the second one after 5s

I tracked that down to async-compression, where in the tokio-based encoder, due to the use of ready!(), whenever the underlying reader returns Poll::Pending it is propagated directly to the caller, so there is no flush until all of the data has been read:

https://github.com/Nemo157/async-compression/blob/ada65c660bcea83dc6a0c3d6149e5fbcd039f739/src/tokio/bufread/generic/encoder.rs#L63-L74

I would like the encoder to send the data it already compressed when the reader returned Poll::Pending, and let the upper layer decide on buffering or sending the HTTP chunk directly.

Handle concatenated `.gz` files

I wanted to process a huge .gz file, without downloading it, which was actually a concatenation of multiple .gz files. This is a rather common practice to be able to utilize multiple cores while compressing large amounts of data.

GzipDecoder failed to handle it properly - it stopped after the first "chunk" and didn't even output an error.

The gzip utility decompresses the file without any problems. I think GzipDecoder should do the same.

`crate::stream::GzipDecoder` fails with "CRC computed does not match" error

use std::io;

use async_compression::stream::GzipDecoder;
use bytes::Bytes;
use flate2::bufread::GzDecoder;
use futures::*;

fn main() {
    let payload: &[u8] = include_bytes!("user_timeline.gz");

    io::copy(&mut GzDecoder::new(payload), &mut io::sink())
        .expect("flate2"); // OK (as expected)

    futures::executor::block_on({
        let input = stream::iter(payload.windows(1).map(|b| Ok(Bytes::from_static(b))));
        GzipDecoder::new(input).try_for_each(|_| future::ok(()))
    })
    .expect("async-compression (split)"); // OK (as expected)

    futures::executor::block_on({
        let input = stream::iter(Some(Ok(Bytes::from_static(payload))));
        GzipDecoder::new(input).try_for_each(|_| future::ok(()))
    })
    .expect("async-compression (single chunk)"); // ERROR (unexpected)
}

Test data: user_timeline.gz (a response body from https://api.twitter.com/1.1/statuses/user_timeline.json?count=4 with Content-Encoding: gzip)

The above code panics with the following message:

thread 'main' panicked at 'async-compression (single chunk): Custom { kind: InvalidData, error: "CRC computed does not match" }', src/libcore/result.rs:1084:5

Test WASM support in CI

Hello,

It would be nice, if possible, to bump the zstd dependency to 0.11 to be able to use this crate on WASM targets. To be honest I'm not entirely sure this is possible, or whether the bump would be enough. I'll probably try at some point in the future; I'm mostly making this issue to raise awareness.

Should all features be active by default?

My reason for doing this is to make the experience better when adding the crate as a dependency and initially trying it out. It could make sense to instead have everything disabled by default (as the documentation is about to recommend for dependents to do) and force the user to activate them; there could be a set of all-wrappers, all-algorithms, all features that activate everything to make testing easier.

Supporting additional compression formats

Just an idea that was fishing around in my head lately, but I think it would be awesome to support additional crates for use between the newest async types/traits.

One in particular that stands out to me (and could be useful for the near future) is the zstd crate. In particular, we could apply the same kind of byte-stream encoding/decoding logic that we use for Deflate/Zlib to the zstd raw compressor/decompressors here. Another thing to note is that while it does support AsyncRead/AsyncWrite through the tokio-io feature gate, that is still on futures 0.1.

For zstd, it shouldn't take too long -- I might actually start work on it soon!

Please support enabling zstd long-distance matching mode

async-compression only allows passing a numeric level. I'd love to enable the zstd "long-distance matching mode", which substantially improves compression on large files that may have similarity at long distances.

I'd be happy to supply a PR implementing this. Would you prefer to extend Level to contain internal flags for things like this, or would you prefer to add a method (or additional constructor) directly to the ZstdEncoder to set this parameter?
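To make the second option concrete, a hypothetical constructor that forwards raw zstd parameters might look like this; with_params is made up for illustration, and CParameter is the zstd crate's advanced-parameter type (path from memory):

use async_compression::tokio::write::ZstdEncoder;
use zstd::stream::raw::CParameter;

// Hypothetical API sketch, not something the crate currently provides: the point is
// just to show where long-distance matching parameters would flow in.
fn make_encoder<W: tokio::io::AsyncWrite>(writer: W) -> ZstdEncoder<W> {
    ZstdEncoder::with_params(
        writer,
        &[
            CParameter::EnableLongDistanceMatching(true),
            CParameter::WindowLog(27), // larger match window for long-range redundancy
        ],
    )
}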
