
quinn's People

Contributors

aochagavia, biagiofesta, bmwill, demi-marie, dependabot-preview[bot], dependabot-support, dependabot[bot], devsnek, djc, est31, evolix1, flub, geieredgar, gretchenfrage, imp, jean-airoldie, kwantam, lijunwangs, link2xt, liwenjiequ, matthias247, nemethf, ralith, sizumita, stammw, stormshield-damiend, stygianlightning, thekuwayama, thombles, timonpost


quinn's Issues

Dynamic stream ID flow control

Currently, the application must specify a maximum number of concurrently operating remotely-initiated streams at endpoint construction time. This is more strict than necessary. A connection's streams can be handled in the same manner as an endpoint's connections are: a small, configurable window of new streams is buffered by the implementation, advancing whenever the application consumes an entry to begin performing I/O on it. This would allow applications to dynamically and implicitly control the amount of stream-level concurrency they want, for example using FuturesUnordered.
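A minimal sketch of that sliding window, with hypothetical names (not quinn's actual API): the endpoint buffers a fixed window of new remote streams and advertises more credit each time the application consumes one.

```rust
// Hypothetical sketch of a sliding stream-concurrency window.
struct StreamWindow {
    window: u64,      // how many unconsumed remote streams we buffer
    consumed: u64,    // streams the application has accepted so far
    max_streams: u64, // highest stream count advertised to the peer
}

impl StreamWindow {
    fn new(window: u64) -> Self {
        StreamWindow { window, consumed: 0, max_streams: window }
    }

    // Called when the application takes a buffered incoming stream;
    // advancing the window implicitly issues credit for one more stream.
    fn consume(&mut self) -> u64 {
        self.consumed += 1;
        self.max_streams = self.consumed + self.window;
        self.max_streams // new limit to advertise to the peer
    }
}
```

The application then controls concurrency implicitly: draining accepted streams (e.g. via FuturesUnordered) is exactly what advances the window.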

Legit clippy::bad_bit_mask complaint

error: incompatible bit mask: `_ | 128` can never be equal to `0`
   --> quinn-proto/src/tests.rs:407:17
    |
407 |         assert!(packet[0] | 0x80 != 0);
    |                 ^^^^^^^^^^^^^^^^^^^^^
    |
    = note: #[deny(clippy::bad_bit_mask)] on by default
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#bad_bit_mask

So, this one is the only clippy lint that is flagged as an error.
This assert appears to be effectively assert!(true).

If the intention here was to check for the long header, perhaps it should be instead

assert_ne!(packet[0] & 0x80, 0);
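A quick illustration of why: OR-ing with 0x80 can never produce zero, so the original assert is vacuous, while AND actually isolates the long-header bit (the byte values below are arbitrary examples).

```rust
// Returns true when the QUIC long-header bit (0x80) is set in the
// packet's first byte. Note that `first_byte | 0x80 != 0` would be true
// for ANY input, which is what clippy's bad_bit_mask lint flags.
fn is_long_header(first_byte: u8) -> bool {
    first_byte & 0x80 != 0
}
```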

Fix failure of test_encoded_handshake

The test started failing because it depends on a fixed number of handshake packets being exchanged before the handshake completes. We can probably fix it by continuing to exchange messages until a short-header packet is sent.

Benchmark throughput

Quantitative, reproducible benchmarks are needed to judge the performance impact of changes, and to guide us towards supporting efficient high-bandwidth communications. A good start would be a criterion benchmark that uses the high-level quinn API to pass a blob of data end-to-end through two quinn endpoints and the host UDP stack.

Improve handling of Initial packet retransmission

Right now, the server does not keep track of the random destination connection ID (DCID) contained in the Initial packet. This means that retransmitted Initial packets are perceived as new Initial packets attempting to start a new connection. It probably makes sense to have a separate map for these handshake DCIDs to the related ConnectionStates, which could then be cleaned up once it's confirmed that the client knows about the server-chosen connection ID for that Initial packet.
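A hedged sketch of the proposed separate map, with illustrative names and types (quinn's real structures differ): the client's random handshake DCID routes retransmitted Initials to the already-created connection, and the entry is dropped once the client demonstrably uses the server-chosen CID.

```rust
use std::collections::HashMap;

// Illustrative stand-in for quinn's connection handle type.
type ConnectionHandle = usize;

// Separate map from the client-chosen random Initial DCID to the
// connection, kept alongside the endpoint's main CID map.
struct InitialDcidMap {
    map: HashMap<Vec<u8>, ConnectionHandle>,
}

impl InitialDcidMap {
    fn new() -> Self {
        InitialDcidMap { map: HashMap::new() }
    }

    // A retransmitted Initial with a known DCID is routed to the
    // existing connection instead of spawning a new one.
    fn route_initial(&mut self, dcid: &[u8], new_conn: ConnectionHandle) -> ConnectionHandle {
        *self.map.entry(dcid.to_vec()).or_insert(new_conn)
    }

    // Clean up once the client is confirmed to know the server-chosen CID.
    fn confirm(&mut self, dcid: &[u8]) {
        self.map.remove(dcid);
    }
}
```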

Connection panics on drop

I'm trying to make a unit test for a thing that runs a server, starts a client to talk to it, and then shuts down. For some reason the client panics on an Option::unwrap() deep in quinn when it is dropped and it tries to close the connection. When I run basically the same thing as a standalone program it appears to work? It might just not even attempt to shut down correctly when I ctrl-C it.

This code was ported from quicr so I may be doing something wrong now that quicr allowed. I haven't cut it down to a minimal reproduction yet but hope to soon; maybe you can suggest something while I do. The code is here: https://github.com/icefoxen/WorldDat/blob/7ee3babc58ea6e8584b7d1095c7c084a98140fb3/src/peer.rs#L276 , just run cargo test to reproduce.

Edit: oh yeah, adding the backtrace might help.

---- peer::tests::test_client_connection stdout ----
thread 'peer::tests::test_client_connection' panicked at 'called `Option::unwrap()` on a `None` value', libcore/option.rs:345:21
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
             at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
   1: std::sys_common::backtrace::print
             at libstd/sys_common/backtrace.rs:71
             at libstd/sys_common/backtrace.rs:59
   2: std::panicking::default_hook::{{closure}}
             at libstd/panicking.rs:211
   3: std::panicking::default_hook
             at libstd/panicking.rs:221
   4: std::panicking::rust_panic_with_hook
             at libstd/panicking.rs:475
   5: std::panicking::continue_panic_fmt
             at libstd/panicking.rs:390
   6: rust_begin_unwind
             at libstd/panicking.rs:325
   7: core::panicking::panic_fmt
             at libcore/panicking.rs:77
   8: core::panicking::panic
             at libcore/panicking.rs:52
   9: <core::option::Option<T>>::unwrap
             at /checkout/src/libcore/macros.rs:20
  10: quinn_proto::connection::Connection::make_close
             at /home/icefox/.cargo/registry/src/github.com-1ecc6299db9ec823/quinn-proto-0.1.0/src/connection.rs:2056
  11: quinn_proto::connection::Connection::close
             at /home/icefox/.cargo/registry/src/github.com-1ecc6299db9ec823/quinn-proto-0.1.0/src/connection.rs:2082
  12: quinn_proto::endpoint::Endpoint::close
             at /home/icefox/.cargo/registry/src/github.com-1ecc6299db9ec823/quinn-proto-0.1.0/src/endpoint.rs:934
  13: <quinn::ConnectionInner as core::ops::drop::Drop>::drop
             at /home/icefox/.cargo/registry/src/github.com-1ecc6299db9ec823/quinn-0.1.0/src/lib.rs:887
  14: core::ptr::drop_in_place
             at /checkout/src/libcore/ptr.rs:59
  15: core::ptr::drop_in_place
             at /checkout/src/libcore/ptr.rs:59
  16: core::ptr::drop_in_place
             at /checkout/src/libcore/ptr.rs:59
  17: core::ptr::drop_in_place
             at /checkout/src/libcore/ptr.rs:59
  18: core::ptr::drop_in_place
             at /checkout/src/libcore/ptr.rs:59
  19: core::ptr::drop_in_place
             at /checkout/src/libcore/ptr.rs:59
  20: core::ptr::drop_in_place
             at /checkout/src/libcore/ptr.rs:59
  21: core::ptr::drop_in_place
             at /checkout/src/libcore/ptr.rs:59
  22: core::ptr::drop_in_place
             at /checkout/src/libcore/ptr.rs:59
  23: core::ptr::drop_in_place
             at /checkout/src/libcore/ptr.rs:59
  24: core::ptr::drop_in_place
             at /checkout/src/libcore/ptr.rs:59
  25: core::mem::drop
             at /checkout/src/libcore/mem.rs:795
  26: tokio_current_thread::scheduler::release_node
             at /home/icefox/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-current-thread-0.1.3/src/scheduler.rs:386
  27: <tokio_current_thread::scheduler::Scheduler<U> as core::ops::drop::Drop>::drop
             at /home/icefox/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-current-thread-0.1.3/src/scheduler.rs:419
  28: core::ptr::drop_in_place
             at /checkout/src/libcore/ptr.rs:59
  29: core::ptr::drop_in_place
             at /checkout/src/libcore/ptr.rs:59
  30: core::ptr::drop_in_place
             at /checkout/src/libcore/ptr.rs:59
  31: core::ptr::drop_in_place
             at /checkout/src/libcore/ptr.rs:59
  32: worlddat::peer::tests::test_client_connection
             at src/peer.rs:290
  33: worlddat::__test::TESTS::{{closure}}
             at src/peer.rs:276
  34: core::ops::function::FnOnce::call_once
             at /checkout/src/libcore/ops/function.rs:223
  35: <F as alloc::boxed::FnBox<A>>::call_box
             at libtest/lib.rs:1451
             at /checkout/src/libcore/ops/function.rs:223
             at /checkout/src/liballoc/boxed.rs:642
  36: __rust_maybe_catch_panic
             at libpanic_unwind/lib.rs:105

Random feedback on examples

Random notes on things I find in the examples that are kind of hard to parse out, as someone only moderately familiar with QUIC and tokio/futures:

  • A single-line description at the top of each would be nice, even if all it says is "see the readme for how to run the client and server"
  • The server example breaks out pieces into handle_connection() and handle_request(), which is nice, but the client example shoves everything into one big long gnarly future, which is hard to figure out. When trying to understand how futures work, I often find breaking them up so you can actually see the type signatures to be really useful.
  • Sometimes the server uses tokio_current_thread::spawn() and sometimes runtime.spawn() which, since you use the single-threaded runtime, I think are equivalent? It's not entirely clear though.
  • Also it appears that you must use the current_thread tokio Runtime, which sort of makes some sense I think but again isn't obvious to someone who doesn't know this. Maybe it's documented somewhere, but I didn't find it.

These are all just the sorts of things that are invisible to someone who knows what they're doing and really opaque to someone who doesn't. I'll semi-happily update the examples to try to improve some of these things, if you guys want.

Cache Sealing and Opening keys in secrets

The crypto module currently has a PacketKey type that reconstructs the relevant SealingKey and OpeningKey for every seal or open operation. There should be a smarter way to structure the types in the crypto module so that a Secret has direct access to long-lived Key instances.

Process TLS messages in a more regular way

Currently, ConnectionState::process_tls() handles stream 0 content directly, both reading it from the incoming stream frame and queueing new outgoing messages. This should be changed to be more generic:

  1. Make sure the Streams object has a stream 0
  2. Move incoming TLS messages into the received buffer for that stream
  3. Change process_tls() to read and write from/into that stream object

(Writing into the stream object depends on also having #3 implemented.)

Improve ease-of-use for untrusted communications

Currently, it is only possible to connect to a server whose certificate is signed by a trusted authority. While this is ideal for web services, for some applications there is no expectation of or even feasible mechanism for this level of trust. Examples include tests, P2P applications with transient peers, and systems where trust is managed externally. Quinn should support these gracefully, without undermining security for applications where useful certificate authorities exist.

To accomplish this, we need:

  • Support for selectively disabling trust checks on outgoing connections (#57)
  • Helpers to generate a self-signed (or unsigned?) certificate for a server
    This API should make it convenient, but optional, to cache generated certificates in persistent storage, for convenience in systems which can take advantage of a persistent identity, such as in pseudonymous or trust-on-first-use models. Blocked by rustls missing support for generating certificates.

Factor out common error handling code

There is a lot of repetition in error handling code in methods of the Connection impl. There are multiple ways this could be improved, in increasing order of complexity:

  • Factor them out into one or more utility methods
  • Create a nice error type that is returned to the Endpoint so it can do the proper handling in one place
  • In doing so, make it so that the Connection no longer has to know its ConnectionHandle

@Ralith does that make sense to you? @twilco this is a little more open-ended, but I think you can ramp up on it by taking it one step at a time to gain some familiarity with the issues.

Dependabot couldn't find a Cargo.toml for this project

Dependabot couldn't find a Cargo.toml for this project.

Dependabot requires a Cargo.toml to evaluate your project's current Rust dependencies. It had expected to find one at the path: /rustls/Cargo.toml.

If this isn't a Rust project, or if it is a library, you may wish to disable updates for it from within Dependabot.

You can mention @dependabot in the comments below to contact the Dependabot team.

Implement event-based packet sending for server side

Currently, Quinn only sends packets synchronously in response to received packets. This has been shown to fail on the server side, where it should send multiple Handshake packets in response to an Initial packet. I think the solution here should be, instead of storing a ConnectionState directly in the Server::connections map, there should be a Connection type (in server) that holds the ConnectionState and is connected to the server by means of some channels (futures::sync::mpsc?). In particular, the channel from Connection to Server should share the receiver across all the connections to prevent having to poll.

Implement proper packetization

This means changing how packets make it into the ConnectionState send queue. There needs to be a control Frame VecDeque both in ConnectionState (for connection-global control frames) and Streams (for stream-related control frames). ConnectionState::queued() should then be changed to pull frames from these as well as all sending streams in the Streams instance, filling up a packet to maximum capacity. That should be 1232 bytes minus the size of the header -- which will especially differ based on whether we're sending long or short headers.
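The queued-frame pulling could look roughly like this sketch, treating frames as opaque byte blobs and ignoring encoding details (names are hypothetical, not quinn's API):

```rust
use std::collections::VecDeque;

// Fill one packet to capacity from the connection-level control frame
// queue, then the stream-related queue; leftover frames wait for the
// next packet.
fn fill_packet(
    conn_frames: &mut VecDeque<Vec<u8>>,
    stream_frames: &mut VecDeque<Vec<u8>>,
    capacity: usize, // e.g. 1232 minus the (long or short) header size
) -> Vec<u8> {
    let mut packet = Vec::with_capacity(capacity);
    for queue in [conn_frames, stream_frames] {
        while let Some(f) = queue.front() {
            if packet.len() + f.len() > capacity {
                return packet; // full; remaining frames go in the next packet
            }
            let f = queue.pop_front().unwrap();
            packet.extend_from_slice(&f);
        }
    }
    packet
}
```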

Coalesce outgoing packets

During handshakes, it's common for multiple small QUIC packets to be transmitted in rapid succession. These can be concatenated into a single UDP packet to reduce overhead. This could be done by deferring the actual UDP transmit until at least a full MTU of QUIC packets is available to send or no more need to be immediately sent.
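Sketched concretely (the MTU value and types are illustrative, not quinn's actual code): defer the UDP send and append queued QUIC packets into one datagram until the next packet would overflow the MTU.

```rust
// Illustrative sketch of coalescing queued QUIC packets into one UDP
// datagram.
fn coalesce(queue: &mut Vec<Vec<u8>>, mtu: usize) -> Vec<u8> {
    let mut datagram = Vec::with_capacity(mtu);
    while let Some(p) = queue.first() {
        // Stop once the next packet would overflow; an oversized first
        // packet still goes out alone.
        if !datagram.is_empty() && datagram.len() + p.len() > mtu {
            break;
        }
        datagram.extend_from_slice(p);
        queue.remove(0);
    }
    datagram
}
```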

Once 0-RTT support is implemented, this could further reduce overhead by allowing outgoing connections with 0-RTT data to use that data in place of the bulk of the Initial packet's padding.

Change `Codec::decode()` to return `QuicResult<T>`

Decodes can obviously fail on invalid input. Currently some of them don't handle this at all; others sometimes panic. The explicit panics should definitely be converted to returning QuicError; there should also be some minimal checking that the input is sane (for example, length of input slice).
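For instance, a decode that validates input length up front and returns a Result instead of panicking (the QuicResult/QuicError names below are simplified stand-ins):

```rust
// Stand-in for quinn's QuicResult/QuicError pair.
type QuicResult<T> = Result<T, String>;

// Fallible decode with an explicit length sanity check.
fn decode_u32(buf: &[u8]) -> QuicResult<u32> {
    if buf.len() < 4 {
        return Err(format!("need 4 bytes, got {}", buf.len()));
    }
    Ok(u32::from_be_bytes([buf[0], buf[1], buf[2], buf[3]]))
}
```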

Create C/CPP bindings

Hi. Thanks for quinn. I want to call quinn from C/C++. How can I create C/C++ bindings?

Implement version negotiation

This is explained in section 6.2 of the draft 11 transport spec:

https://tools.ietf.org/html/draft-ietf-quic-transport-11#section-6.2

The code processing incoming handshake packets in ConnectionState::handle_packet() should check that the version in the Initial packet matches what's supported per QUIC_VERSION. If it doesn't match, it should queue an appropriate response using ConnectionState::build_long_packet() (or something similar if version negotiation has slightly different needs).
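A hedged sketch of the version check itself; the QUIC_VERSION constant below uses the draft-11 version number (0xff00000b, i.e. 0xff000000 plus the draft number) for illustration, and the action type is hypothetical:

```rust
// Assumed draft-11 version number for illustration.
const QUIC_VERSION: u32 = 0xff00_000b;

enum Action {
    Proceed,
    QueueVersionNegotiation, // respond with a version negotiation packet
}

// Check the version field of an incoming Initial packet.
fn check_version(initial_version: u32) -> Action {
    if initial_version == QUIC_VERSION {
        Action::Proceed
    } else {
        Action::QueueVersionNegotiation
    }
}
```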

Adopt tokio_timer::DelayQueue

We're currently using an awkward heavyweight hack to schedule timers; see the quinn crate's EndpointInner struct's timer member for details. DelayQueue was literally purpose-built for our requirements, so let's put it to work.

Detect sequential key updates and abort connection on this condition

When the KEY_PHASE is flipped twice across consecutive received packets, the connection must be aborted. From draft-ietf-quic-tls-16:

An endpoint does not always need to send packets when it detects that
its peer has updated keys. The next packet that it sends will simply
use the new keys. If an endpoint detects a second update before it
has sent any packets with updated keys, it indicates that its peer
has updated keys twice without awaiting a reciprocal update. An
endpoint MUST treat consecutive key updates as a fatal error and
abort the connection.
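The detection logic can be sketched as tracking whether we have sent anything under the current keys when the peer's KEY_PHASE bit flips (types and names hypothetical, not quinn's actual structures):

```rust
// Tracks the peer's key phase and whether we've sent under current keys.
struct KeyPhaseTracker {
    current_phase: bool,
    sent_in_phase: bool, // have we sent any packet with the current keys?
}

impl KeyPhaseTracker {
    fn on_packet_sent(&mut self) {
        self.sent_in_phase = true;
    }

    fn on_packet_received(&mut self, phase: bool) -> Result<(), &'static str> {
        if phase != self.current_phase {
            if !self.sent_in_phase {
                // Peer updated keys twice without awaiting a reciprocal
                // update: fatal per draft-ietf-quic-tls-16.
                return Err("consecutive key updates; abort connection");
            }
            self.current_phase = phase;
            self.sent_in_phase = false;
        }
        Ok(())
    }
}
```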

Duplicate data in stream frames can lead to unbounded resource consumption

Quinn performs stream assembly lazily: incoming data is not flattened into a single linear buffer until read. Additionally, flow control credit is issued on read. This leads to two problems:

  • When the application is slow, a sender transmitting stream frames containing duplicate data can cause an arbitrarily large amount of memory to be used for buffering without incurring transport-level backpressure.
  • read_unordered skips stream assembly, and hence may issue flow control credit for the same data multiple times. This can lead to arbitrarily large flow control windows, which can lead to arbitrarily large buffer resource consumption and impair performance.

These issues can be fixed by eagerly tracking which bytes of a stream have already been received, and discarding duplicate bytes immediately upon receipt.
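A simplified sketch of that eager tracking: record received [start, end) ranges and count only genuinely new bytes, which is what flow control credit should be issued for. The structure is illustrative; a real implementation would use a more efficient range set.

```rust
// Tracks which byte ranges of a stream have already been received.
struct RecvTracker {
    ranges: Vec<(u64, u64)>, // sorted, non-overlapping [start, end)
}

impl RecvTracker {
    fn new() -> Self {
        RecvTracker { ranges: Vec::new() }
    }

    // Returns the number of genuinely new bytes in [start, end);
    // duplicate bytes can be discarded immediately on receipt.
    fn receive(&mut self, start: u64, end: u64) -> u64 {
        let already: u64 = self
            .ranges
            .iter()
            .map(|&(s, e)| e.min(end).saturating_sub(s.max(start)))
            .sum();
        self.ranges.push((start, end));
        self.ranges.sort();
        // Merge overlapping ranges back into sorted, disjoint form.
        let mut merged: Vec<(u64, u64)> = Vec::new();
        for &(s, e) in &self.ranges {
            match merged.last_mut() {
                Some(last) if s <= last.1 => last.1 = last.1.max(e),
                _ => merged.push((s, e)),
            }
        }
        self.ranges = merged;
        (end - start).saturating_sub(already)
    }
}
```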

Stateless retry support

Blocked on rustls/rustls#151

edit: On review, it looks like the involvement of the TLS stack in stateless retries has been factored out of recent drafts, so this may no longer be blocked.

Audit uses of `FnvHashMap` for collision attack potential

We currently use FnvHashMap in a number of places, presumably because it is faster than the std HashMap. However, the reason FnvHashMap is not the default, as I understand it, is because it is possible for attackers to generate input data that will cause denial of service through worst-case performance of hash map algorithms.

In Endpoint, we currently use FnvHashMap for (1) connection_ids_initial, (2) connection_ids, and (3) connection_remotes. (1) seems definitely vulnerable to this style of attack for server endpoints, since clients can randomly pick initial CIDs for their packets, thus triggering collisions. (2) seems safe, since local CIDs are generated by our own code. For (3), I'm not sure how easy it is to spoof IP addresses these days.

Investigate panic on decoding of TransportParameters

My test server has been crashing from this panic:

thread 'main' panicked at 'invalid transport parameter tag 65280', src/parameters.rs:187:22

The byte sequence leading to this problem (from printing the contents of sub) is this:

[0, 0, 0, 4, 0, 0, 64, 0, 0, 1, 0, 4, 0, 0, 128, 0, 0, 2, 0, 2, 0, 1, 0, 8, 0, 2,
0, 1, 0, 3, 0, 2, 0, 10, 255, 0, 0, 2, 255, 0, 255, 1, 0, 2, 255, 1, 255, 2, 0, 2, 255, 2, 255, 3, 0, 2, 255, 3, 255, 4, 0, 2, 255, 4, 255, 5, 0, 2, 255, 5, 255, 6, 0, 2, 255, 6, 255, 7, 0, 2, 255, 7, 255, 8, 0, 2, 255, 8, 255, 9, 0, 2, 255, 9, 255, 10, 0, 2, 255, 10, 255, 11, 0, 2, 255, 11, 255, 12, 0, 2, 255, 12, 255, 13, 0, 2, 255, 13, 255, 14, 0, 2, 255, 14, 255, 15, 0, 2, 255, 15]

Make sure any bugs in the TransportParameters::decode() routine are fixed.
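One possible hardening, sketched with hypothetical types: parse the parameters as length-prefixed TLVs, validate lengths, and skip unknown tags rather than panicking. (Tag 65280 is 0xff00, which appears to fall in a reserved/private-use range, so tolerating unknown tags seems required anyway.)

```rust
// Tolerant TLV parse: each parameter is a big-endian u16 tag, u16
// length, then `length` value bytes. Unknown/reserved tags (>= 0xff00
// here, as an assumption) are skipped instead of causing a panic.
fn decode_params(mut buf: &[u8]) -> Result<Vec<(u16, Vec<u8>)>, &'static str> {
    let mut known = Vec::new();
    while buf.len() >= 4 {
        let tag = u16::from_be_bytes([buf[0], buf[1]]);
        let len = u16::from_be_bytes([buf[2], buf[3]]) as usize;
        buf = &buf[4..];
        if buf.len() < len {
            return Err("truncated transport parameter");
        }
        let (value, rest) = buf.split_at(len);
        buf = rest;
        if tag < 0xff00 {
            known.push((tag, value.to_vec()));
        }
    }
    if buf.is_empty() {
        Ok(known)
    } else {
        Err("trailing bytes")
    }
}
```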

Refactor `Connection::transmit_handshake()` to take care of TLS writing

transmit_handshake() is currently used in a number of places that go like this:

let mut outgoing = Vec::new();
self.tls.write_tls(&mut outgoing).unwrap();
self.transmit_handshake(&outgoing);

We should reevaluate the API to see if transmit_handshake() can take the to-be-transmitted handshake bytes directly out of the TlsSession.

Design API for 0-RTT and 0.5-RTT data

0-RTT data can be replayed by an attacker, and 0.5-RTT data (data sent by the server using 1-RTT keys before the client's TLS FIN is received) can be intercepted by a MITM. For some applications (e.g. fetching public data), these possibilities are harmless, and the reduction in latency versus 1-RTT is desirable. For others (e.g. performing non-idempotent operations or fetching private data), these are dangerous security vulnerabilities.

A good API should support applications like the former, while making it difficult for applications like the latter to inadvertently be insecure. The simplest solution would be to not support 0/0.5-RTT data, but hopefully we can do better. Perhaps separate feature-gated APIs?

Support for async/await

Hello, thanks for quinn!

Is there a way to use the new async/await facilities in quinn? I was trying to make a basic proof-of-concept, but it seems quinn's types are !Send:

#![feature(await_macro, async_await, futures_api, pin)]

extern crate quinn;
extern crate tokio;

use tokio::prelude::*;

fn main() {
    tokio::run_async(async move {
        let mut builder = quinn::Endpoint::new();
        let (endpoint, driver, incoming) = builder.bind("0.0.0.0:9393").unwrap();

        while let Some(Ok(conn)) = await!(incoming.next()) {
            while let Some(byte_stream) = await!(conn.incoming.next()) {
                match byte_stream {
                    Ok(quinn::NewStream::Bi(byte_stream)) => {
                        println!("byte stream!");
                    },
                    Ok(quinn::NewStream::Uni(_)) => {
                        // config.max_remote_uni_streams is defaulted to 0
                        unreachable!();
                    }
                    Err(err) => {
                        eprintln!("error: {}", err);
                    }
                }
            }
        }
    });
}

Fails with:

   Compiling quinn-minimal v0.1.0 (/Users/yusuf/projects/quinn-minimal)
error[E0277]: `std::rc::Rc<std::cell::RefCell<quinn::EndpointInner>>` cannot be sent between threads safely
 --> src/main.rs:9:5
  |
9 |     tokio::run_async(async move {
  |     ^^^^^^^^^^^^^^^^ `std::rc::Rc<std::cell::RefCell<quinn::EndpointInner>>` cannot be sent between threads safely
  |
  = help: within `impl std::future::Future`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<std::cell::RefCell<quinn::EndpointInner>>`
  = note: required because it appears within the type `quinn::Endpoint`
  = note: required because it appears within the type `for<'r, 's, 't0> {quinn::EndpointBuilder<'r>, quinn::Endpoint, quinn::Driver, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>, tokio_async_await::stream::next::Next<'s, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>>, (), std::option::Option<std::result::Result<quinn::NewConnection, ()>>, quinn::NewConnection, tokio_async_await::stream::next::Next<'t0, quinn::IncomingStreams>}`
  = note: required because it appears within the type `[static generator@src/main.rs:9:33: 29:6 for<'r, 's, 't0> {quinn::EndpointBuilder<'r>, quinn::Endpoint, quinn::Driver, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>, tokio_async_await::stream::next::Next<'s, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>>, (), std::option::Option<std::result::Result<quinn::NewConnection, ()>>, quinn::NewConnection, tokio_async_await::stream::next::Next<'t0, quinn::IncomingStreams>}]`
  = note: required because it appears within the type `std::future::GenFuture<[static generator@src/main.rs:9:33: 29:6 for<'r, 's, 't0> {quinn::EndpointBuilder<'r>, quinn::Endpoint, quinn::Driver, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>, tokio_async_await::stream::next::Next<'s, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>>, (), std::option::Option<std::result::Result<quinn::NewConnection, ()>>, quinn::NewConnection, tokio_async_await::stream::next::Next<'t0, quinn::IncomingStreams>}]>`
  = note: required because it appears within the type `impl std::future::Future`
  = note: required by `tokio::run_async`

error[E0277]: `std::rc::Rc<quinn::ConnectionInner>` cannot be sent between threads safely
 --> src/main.rs:9:5
  |
9 |     tokio::run_async(async move {
  |     ^^^^^^^^^^^^^^^^ `std::rc::Rc<quinn::ConnectionInner>` cannot be sent between threads safely
  |
  = help: within `impl std::future::Future`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<quinn::ConnectionInner>`
  = note: required because it appears within the type `quinn::Connection`
  = note: required because it appears within the type `quinn::NewConnection`
  = note: required because it appears within the type `for<'r, 's, 't0> {quinn::EndpointBuilder<'r>, quinn::Endpoint, quinn::Driver, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>, tokio_async_await::stream::next::Next<'s, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>>, (), std::option::Option<std::result::Result<quinn::NewConnection, ()>>, quinn::NewConnection, tokio_async_await::stream::next::Next<'t0, quinn::IncomingStreams>}`
  = note: required because it appears within the type `[static generator@src/main.rs:9:33: 29:6 for<'r, 's, 't0> {quinn::EndpointBuilder<'r>, quinn::Endpoint, quinn::Driver, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>, tokio_async_await::stream::next::Next<'s, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>>, (), std::option::Option<std::result::Result<quinn::NewConnection, ()>>, quinn::NewConnection, tokio_async_await::stream::next::Next<'t0, quinn::IncomingStreams>}]`
  = note: required because it appears within the type `std::future::GenFuture<[static generator@src/main.rs:9:33: 29:6 for<'r, 's, 't0> {quinn::EndpointBuilder<'r>, quinn::Endpoint, quinn::Driver, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>, tokio_async_await::stream::next::Next<'s, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>>, (), std::option::Option<std::result::Result<quinn::NewConnection, ()>>, quinn::NewConnection, tokio_async_await::stream::next::Next<'t0, quinn::IncomingStreams>}]>`
  = note: required because it appears within the type `impl std::future::Future`
  = note: required by `tokio::run_async`

error[E0277]: `std::rc::Rc<std::cell::RefCell<futures::unsync::mpsc::Shared<quinn::NewConnection>>>` cannot be sent between threads safely
 --> src/main.rs:9:5
  |
9 |     tokio::run_async(async move {
  |     ^^^^^^^^^^^^^^^^ `std::rc::Rc<std::cell::RefCell<futures::unsync::mpsc::Shared<quinn::NewConnection>>>` cannot be sent between threads safely
  |
  = help: within `impl std::future::Future`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<std::cell::RefCell<futures::unsync::mpsc::Shared<quinn::NewConnection>>>`
  = note: required because it appears within the type `futures::unsync::mpsc::State<quinn::NewConnection>`
  = note: required because it appears within the type `futures::unsync::mpsc::Receiver<quinn::NewConnection>`
  = note: required because it appears within the type `futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>`
  = note: required because it appears within the type `for<'r, 's, 't0> {quinn::EndpointBuilder<'r>, quinn::Endpoint, quinn::Driver, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>, tokio_async_await::stream::next::Next<'s, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>>, (), std::option::Option<std::result::Result<quinn::NewConnection, ()>>, quinn::NewConnection, tokio_async_await::stream::next::Next<'t0, quinn::IncomingStreams>}`
  = note: required because it appears within the type `[static generator@src/main.rs:9:33: 29:6 for<'r, 's, 't0> {quinn::EndpointBuilder<'r>, quinn::Endpoint, quinn::Driver, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>, tokio_async_await::stream::next::Next<'s, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>>, (), std::option::Option<std::result::Result<quinn::NewConnection, ()>>, quinn::NewConnection, tokio_async_await::stream::next::Next<'t0, quinn::IncomingStreams>}]`
  = note: required because it appears within the type `std::future::GenFuture<[static generator@src/main.rs:9:33: 29:6 for<'r, 's, 't0> {quinn::EndpointBuilder<'r>, quinn::Endpoint, quinn::Driver, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>, tokio_async_await::stream::next::Next<'s, futures::unsync::mpsc::UnboundedReceiver<quinn::NewConnection>>, (), std::option::Option<std::result::Result<quinn::NewConnection, ()>>, quinn::NewConnection, tokio_async_await::stream::next::Next<'t0, quinn::IncomingStreams>}]>`
  = note: required because it appears within the type `impl std::future::Future`
  = note: required by `tokio::run_async`

error: aborting due to 3 previous errors

I checked if I could just swap out Rc's for Arc's, but there's of course more non-Send types than just that...

Panic when trying to shutdown stream

My code is actually almost well modularized now, so it's easier to post issue reports. Huzzah!

This panics with "thread 'main' panicked at 'unknown stream', libcore/option.rs:989:5". This appears to be happening in the tokio::io::shutdown(stream) line.

    fn receive_message(stream: quinn::Stream) -> impl Future<Item = (), Error = ()> {
        quinn::read_to_end(stream, 1024 * 64)
            .map_err(|e| warn!("failed to read response: {}", e))
            .and_then(move |(stream, req)| {
                let msg: ::std::result::Result<Message, rmp_serde::decode::Error> =
                    rmp_serde::from_slice(&req);
                debug!("Got message: {:?}", msg);
                let to_do_next: Box<dyn Future<Item = quinn::Stream, Error = ()>> = match msg {
                    Ok(Message::Ping { id }) => {
                        info!("Got ping, trying to send pong");
                        let message = Message::Pong { id };
                        let to_send = rmp_serde::to_vec(&message)
                            .expect("Could not serialize message; should never happen!");
                        Box::new(
                            tokio::io::write_all(stream, to_send)
                                .map_err(|e| warn!("Failed to send request: {}", e))
                                .map(|(stream, _vec)| stream),
                        )
                    }
                    Ok(val) => {
                        info!("Got message: {:X?}, not doing anything with it", val);
                        Box::new(future::ok(stream))
                    }
                    Err(e) => {
                        info!("Got unknown message: {:X?}, error {:?}", &req, e);
                        Box::new(future::ok(stream))
                    }
                };
                to_do_next
                    .and_then(|stream| {
                        trace!("Trying to shut down stream");
                        tokio::io::shutdown(stream)
                            .and_then(|v| {
                                trace!("Done!");
                                future::ok(v)
                            }).map_err(|e| warn!("Failed to shut down stream: {}", e))
                    }).map(move |_| info!("request complete"))
            })
    }

This appears to correspond to this code in quinn-proto/src/connection.rs:

    pub fn finish(&mut self, id: StreamId) {
        let ss = self
            .streams
            .get_mut(&id)
            .expect("unknown stream")
            .send_mut()
            .expect("recv-only stream");
        ....
    }

Keepalive

Quinn should provide an option to automatically send PING frames whenever the connection has been idle for more than some large fraction of the negotiated idle timeout, allowing time for retransmission if lost. Per the guidance in draft 15 §7.9, this should be supported for both incoming and outgoing connections.
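The timing rule can be sketched as follows; the 7/8 fraction is an illustrative choice to leave retransmission headroom, not a value taken from quinn or the draft:

```rust
use std::time::Duration;

// Send a PING once the connection has been idle for this long, leaving
// headroom before the negotiated idle timeout in case the PING is lost.
fn keepalive_deadline(idle_timeout: Duration) -> Duration {
    idle_timeout * 7 / 8
}
```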

Discuss how TLS `ClientConfig` should be passed around

@Ralith made the argument that this should be an argument to connect() instead of a global configuration setting in Config.

I'm actually not quite convinced. We currently change the following things:

  • Toggle key log
  • Trust anchors
  • TLS protocol version
  • ALPN protocol

I would argue that it makes sense to set these things on a per-Endpoint basis.

Congestion Control

Hi,
do you already have congestion control implemented?

In general, how far along would you say your implementation is?

Implement separate types for client and server TransportParameters

The current API is pretty confusing because it requires passing in a Side, but it's not obvious if that is the API user's side or the message originator's side. My original implementation had explicit ClientTransportParameters and ServerTransportParameters types, which is more obvious, and can also directly implement the Value/Codec trait.
