tokio-rustls's People

Contributors

benesch, briansmith, cpu, ctz, djc, erickt, fanatid, ffuugoo, hawkw, jarema, jayvdb, jbr, jjnicola, jwodder, lastlightsith, luciofranco, neolegends, nickelc, noah-kennedy, paolobarbolini, pzread, quininer, schien, selyatin, sergiobenitez, stevefan1999-personal, taiki-e, tottoto, w-henderson, zh-jq

tokio-rustls's Issues

Support for transparent SNI proxying

This is more of a PR than an issue, since I've implemented the code already, but it's hosted on a separate server:

https://code.betamike.com/micropelago/tokio-rustls/commit/18fd688b335430e17e054e15ff7d6ce073db2419

Implement TransparentConfigAcceptor

The goal of the TransparentConfigAcceptor is to support an SNI-based
reverse-proxy, where the server reads the SNI and then transparently
forwards the entire TLS session, ClientHello included, to a backend
server, without terminating the TLS session itself.

This isn't possible with the current LazyConfigAcceptor, which only
allows you to pick a different ServerConfig depending on the SNI, but
will always terminate the session.

The TransparentConfigAcceptor will buffer all bytes read from the
connection (the ClientHello) internally, and then replay them if the
user decides they want to hijack the connection.

The TransparentConfigAcceptor supports all functionality that the
LazyConfigAcceptor does, but due to the internal buffering of the
ClientHello I did not want to add it to the LazyConfigAcceptor, since
it's possible someone wouldn't want to incur that extra cost.

I'm very much open to feedback on these changes. I'm relatively new to Rust, and would not be surprised if I've overcomplicated this a bit. In particular, the types I added to common seem like things that should already exist in some standard crate somewhere, but I couldn't find them, and my implementation is probably lacking. The API I've introduced also seems kind of ugly to me... there's probably some way to clean it up.
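For a sense of the shape of the idea, here is a hypothetical standalone sketch (not the proposed TransparentConfigAcceptor API, and not the linked implementation) of a transparent SNI forwarder built directly on rustls's server::Acceptor: buffer everything read from the client, parse the ClientHello to get the SNI, then replay the buffered bytes to a backend and splice the rest of the connection. The pick_backend helper and the addresses in it are assumptions, and error handling is simplified.

use std::io::Cursor;

use tokio::io::{copy_bidirectional, AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;
use tokio_rustls::rustls::server::Acceptor;

async fn forward_by_sni(mut client: TcpStream) -> std::io::Result<()> {
    let mut acceptor = Acceptor::default();
    let mut buffered = Vec::new(); // everything read so far, replayed to the backend later

    // Read until rustls has seen a complete ClientHello.
    let sni = loop {
        let mut chunk = [0u8; 4096];
        let n = client.read(&mut chunk).await?;
        if n == 0 {
            return Err(std::io::ErrorKind::UnexpectedEof.into());
        }
        buffered.extend_from_slice(&chunk[..n]);

        // Feed only the newly read bytes into the acceptor.
        acceptor.read_tls(&mut Cursor::new(&chunk[..n]))?;
        if let Some(accepted) = acceptor.accept().map_err(|_| {
            std::io::Error::new(std::io::ErrorKind::InvalidData, "invalid ClientHello")
        })? {
            break accepted.client_hello().server_name().map(str::to_owned);
        }
    };

    // Hypothetical routing decision based on the SNI.
    let mut backend = TcpStream::connect(pick_backend(sni.as_deref())).await?;

    // Replay the buffered ClientHello, then splice the rest of the TLS session
    // through untouched -- no TLS termination happens here.
    backend.write_all(&buffered).await?;
    copy_bidirectional(&mut client, &mut backend).await?;
    Ok(())
}

fn pick_backend(sni: Option<&str>) -> &'static str {
    match sni {
        Some("example.com") => "10.0.0.10:443",
        _ => "10.0.0.99:443",
    }
}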

Acceptor timeout config

Is there a way to limit how long the acceptor will wait for the TLS handshake from the client side? If a malicious client never sends its part of the TLS handshake, will this block forever?

If so, the example should probably be updated to use tokio's timeout.
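Assuming accept() does indeed wait indefinitely on a silent client, a minimal sketch of bounding the handshake with tokio::time::timeout could look like this (the ten-second budget is arbitrary):

use std::time::Duration;

use tokio::net::TcpStream;
use tokio::time::timeout;
use tokio_rustls::{server::TlsStream, TlsAcceptor};

async fn accept_with_timeout(
    acceptor: &TlsAcceptor,
    stream: TcpStream,
) -> std::io::Result<TlsStream<TcpStream>> {
    match timeout(Duration::from_secs(10), acceptor.accept(stream)).await {
        Ok(handshake_result) => handshake_result,
        // The timer fired before the handshake finished; drop the connection.
        Err(_elapsed) => Err(std::io::Error::new(
            std::io::ErrorKind::TimedOut,
            "TLS handshake timed out",
        )),
    }
}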

Crypto provider support

Given that rustls/rustls#1405 is nearing completion, we need to think about crypto provider support.

We could still provide the ring provider out of the box (I think rustls will in the end ship ring as part of the default options to prevent chaos), but this would render custom crypto providers useless, because tokio-rustls is usually a transitive dependency (for example, axum_server uses tokio-rustls), and I don't think there is a way to override a transitive dependency of axum_server without forking it.
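For reference, here is a sketch using the provider API that eventually shipped in rustls 0.22/0.23 (assuming the aws-lc-rs provider feature is enabled): the provider is baked into the ClientConfig, and tokio-rustls only wraps the finished config, which is why a wrapper like axum_server would need to accept a caller-supplied config rather than building its own.

use std::sync::Arc;

use tokio_rustls::rustls::{self, crypto::aws_lc_rs, RootCertStore};
use tokio_rustls::TlsConnector;

fn connector_with_provider(roots: RootCertStore) -> Result<TlsConnector, rustls::Error> {
    // The crypto provider is chosen here, when the config is built;
    // tokio-rustls itself is provider-agnostic.
    let config =
        rustls::ClientConfig::builder_with_provider(Arc::new(aws_lc_rs::default_provider()))
            .with_safe_default_protocol_versions()?
            .with_root_certificates(roots)
            .with_no_client_auth();
    Ok(TlsConnector::from(Arc::new(config)))
}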

SECURITY: please cut new release

Is it possible to create a new release?

The current release uses a rustls version that still depends on the vulnerable, unmaintained webpki crate instead of rustls-webpki.

Version tags

It would be really convenient if this repository had tags corresponding to each release on crates.io, since it's a lot easier to read source code here. Is it feasible to push the tags?

Triggering renegotiation

I plan to use tokio_rustls for a long-running task that should never shut down. Specifically, there is one persistent connection from the client to the server, so technically speaking the session key will be the same the entire time and I will lose forward secrecy. I've seen some clues that rustls may have renegotiation built in, so I was wondering if there is a way to trigger it through this crate.

Error: badRecordMac

Hello,
After a few poll_write calls I get a badRecordMac alert message in poll_read on the server side.
This error only happens on Apple platforms; the program works correctly on Windows.

Consider providing a changelog and git tags

Hi! Thank you for this crate.

I have this crate in my dependency tree and was just about to upgrade it from 0.23 to 0.24. When upgrading dependencies I always check what has changed, in case there is something important I should know about. But either I'm unable to find this information, or it's not available for this crate. There are no git tags marking which commits the releases on crates.io actually correspond to, nor any changelog file outlining the changes. Could you consider adding these things? I believe it improves the overall quality of a crate quite significantly.

does TLSAcceptor read early data?

Hi, I was wondering whether TlsAcceptor is able to read TLS 1.3 early data? It looks like the early data test does not use TlsAcceptor.
If so, is there a reason why?

Override `poll_write_vectored`?

At present, poll_write_vectored is not overridden.

I think it would make sense to do so, though. At the moment, the implementation of poll_write performs IO as soon as the buffer is written (which makes sense). However, if the application has more data to send sitting in different buffers, performing IO after every poll_write might lead to multiple TCP packets being sent.

The only way around this (I think?) would be to redesign the upper layer to provide a single byte buffer. That is not always feasible and/or desirable. Additionally, that is exactly what poll_write_vectored is for. A proper implementation of poll_write_vectored would send all buffers down to the rustls session and only perform IO once that buffer is full or all the buffers passed to us have been written.

I've created a basic draft in #25.
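To make the intent concrete, here is a rough synchronous sketch over rustls's blocking API (not tokio-rustls internals, and not what the draft PR does verbatim): queue every slice into the session's plaintext writer first, then drive write_tls, so the encrypted records can be flushed with fewer socket writes.

use std::io::{self, Write};
use std::net::TcpStream;

fn write_vectored_once(
    conn: &mut rustls::ServerConnection,
    sock: &mut TcpStream,
    bufs: &[io::IoSlice<'_>],
) -> io::Result<usize> {
    let mut queued = 0;
    for buf in bufs {
        // Buffer plaintext into the session without touching the socket yet.
        queued += conn.writer().write(buf)?;
    }
    // Perform socket IO once for everything that was queued above.
    while conn.wants_write() {
        conn.write_tls(sock)?;
    }
    Ok(queued)
}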

Proposal: Share code between tokio-rustls and futures-rustls

Currently, tokio-rustls and futures-rustls share very similar code duplicated across two different repositories. Every PR that is opened against one of them must be manually duplicated in the other to ensure parity. As it currently stands, the two test suites have diverged, and futures-rustls has significantly less coverage.

It seems like it would make maintenance easier if the two crates shared as much implementation code as possible. I could imagine a few ways of doing this, and would happily contribute any of them.

The easiest path forward would probably be introducing a third crate that has feature flags for tokio and futures. The two existing crates would reexport everything from the third crate, allowing users to continue depending on tokio-rustls and futures-rustls as is, but they would be light shims on top of the unified crate.

If the smol team were willing to part with it, the common crate could take the name async-rustls and the shim crates of tokio-rustls and futures-rustls could be deprecated in preference to just setting the desired feature flags on the new common-implementation async-rustls.

If all three crates were in a shared workspace, as an additional distinct step from the above, CI could ensure that every change to the shared implementation crate worked correctly with both flavors of async trait. The challenge here would be retaining history, but it seems like the histories of the two repositories are very similar, so someone looking into the past could find answers just as well by looking at tokio-rustls' history if the futures-rustls repository were archived.

Detailed Error Messages

Hi, I hope this time I'm in the right place for the right subject.

If problems can be analysed in a more precise way under the hood, please expose that to the developer.

What I mean is, we get errors like bad certificate, unknown CA, etc. These are generic errors, I know; other languages and TLS stacks also use them. But if you could attach at least a little extra detail to these messages, to distinguish between the same error message arising from different problems, that would be great.

A lot of different situations can be collapsed into one error message, and sometimes it's hard to find the exact reason for an error.

For example, in my situation I got a bad certificate error and thought there was a problem with my server certificate, but no: the only problem was that the client should connect using the domain name rather than the IP address. It cost me a couple of days; apparently this is because I'm not experienced enough.

If there is any chance to show errors in a more detailed way, please add this feature. There are new learners like me out there.

How to get peer certificates as server?

I currently accept the connection with TlsAcceptor.accept and then call get_ref(), which gets me a reference to the ServerConnection that has everything I need. The problem is that it can return None for peer_certificates/sni/alpn until the handshake is finished. It provides complete_io() to finish the handshake, but that's not async. Is there an async version of this, or should this be done another way?

I need to validate the client certificate before I do any IO on it, and don't see another way to guarantee the handshake is complete.
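A minimal sketch of the async pattern that should avoid complete_io, assuming the ServerConfig was built with a client certificate verifier: the Accept future returned by TlsAcceptor::accept drives the handshake to completion, so once it resolves, peer_certificates()/sni/alpn should be populated.

use tokio::net::TcpStream;
use tokio_rustls::{server::TlsStream, TlsAcceptor};

async fn accept_and_require_client_cert(
    acceptor: &TlsAcceptor,
    tcp: TcpStream,
) -> std::io::Result<TlsStream<TcpStream>> {
    // The handshake is fully driven here, asynchronously.
    let tls = acceptor.accept(tcp).await?;

    let (_io, conn) = tls.get_ref();
    match conn.peer_certificates() {
        // Validate the client certificate before doing any application IO.
        Some(certs) if !certs.is_empty() => Ok(tls),
        _ => Err(std::io::Error::new(
            std::io::ErrorKind::PermissionDenied,
            "client did not present a certificate",
        )),
    }
}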

Large percentage of cpu time in memset when reading with a larger buffer

Reproduced using a slightly modified version of examples/client.rs that allows specifying a path and reading into a user-provided buffer. The problem was originally noticed via the rusoto S3 client, which has a configuration option for read buffer size that gets mapped to hyper's http1_read_buf_exact_size. Since AWS S3 recommends fetching large ranges, using a larger buffer seemed like a good idea and was not expected to cause any CPU overhead.

flamegraph

use std::fs::File;
use std::io;
use std::io::BufReader;
use std::net::ToSocketAddrs;
use std::path::PathBuf;
use std::sync::Arc;

use argh::FromArgs;
use tokio::io::{split, AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;
use tokio_rustls::TlsConnector;

/// Tokio Rustls client example
#[derive(FromArgs)]
struct Options {
    /// host
    #[argh(positional)]
    host: String,

    /// path
    #[argh(positional)]
    path: Option<String>,

    /// port
    #[argh(option, short = 'p', default = "443")]
    port: u16,

    /// domain
    #[argh(option, short = 'd')]
    domain: Option<String>,

    /// cafile
    #[argh(option, short = 'c')]
    cafile: Option<PathBuf>,
}

#[tokio::main]
async fn main() -> io::Result<()> {
    let options: Options = argh::from_env();

    let addr = (options.host.as_str(), options.port)
        .to_socket_addrs()?
        .next()
        .ok_or_else(|| io::Error::from(io::ErrorKind::NotFound))?;
    let domain = options.domain.unwrap_or(options.host);
    let path = options.path.as_ref().map(|p| p.as_str()).unwrap_or("/");
    let content = format!(
        "GET {} HTTP/1.0\r\nConnection: close\r\nHost: {}\r\n\r\n",
        path, domain
    );

    let mut root_cert_store = rustls::RootCertStore::empty();
    if let Some(cafile) = &options.cafile {
        let mut pem = BufReader::new(File::open(cafile)?);
        for cert in rustls_pemfile::certs(&mut pem) {
            root_cert_store.add(cert?).unwrap();
        }
    } else {
        root_cert_store.extend(webpki_roots::TLS_SERVER_ROOTS.iter().cloned());
    }

    let config = rustls::ClientConfig::builder()
        .with_root_certificates(root_cert_store)
        .with_no_client_auth(); // i guess this was previously the default?
    let connector = TlsConnector::from(Arc::new(config));

    let stream = TcpStream::connect(&addr).await?;

    let domain = pki_types::ServerName::try_from(domain.as_str())
        .map_err(|_| io::Error::new(io::ErrorKind::InvalidInput, "invalid dnsname"))?
        .to_owned();

    let mut stream = connector.connect(domain, stream).await?;
    stream.write_all(content.as_bytes()).await?;

    let (mut reader, _writer) = split(stream);

    let mut buffer = Vec::with_capacity(4 * 1024 * 1024);
    let mut total_len = 0_usize;

    loop {
        match reader.read_buf(&mut buffer).await {
            Ok(len) => {
                total_len += len;
                buffer.clear();
                if len == 0 {
                    break;
                }
            }
            Err(e) => {
                eprintln!("{:?}", e);
                break;
            }
        }
    }

    println!("Size: {}", total_len);

    Ok(())
}

Error `NotConnected` when serving a `TlsStream<TcpStream>` when using `hyper`

When serving a TlsStream<TcpStream> using a hyper server, the stream's poll_shutdown method sometimes returns an io::ErrorKind::NotConnected error. This happens when calling the server using curl with HTTP/1.1, but not with HTTP/1.0. It also happens when calling the server using openssl s_client and pressing Ctrl+C instead of Ctrl+D.

It seems that when a client doesn't explicitly close the connection, but simply hangs up, the stream ends with an error.

This only happens with tokio_rustls and not with tokio_native_tls.

Here is a somewhat minimal reproduction of the error:

use std::{
	fs::File,
	io::BufReader,
	net::{Ipv4Addr, SocketAddrV4},
	sync::Arc,
};

use hyper::{Request, Response};
use hyper_util::rt::TokioIo;
use tokio::net::TcpListener;
use tokio_native_tls::native_tls::Identity;
use tokio_rustls::{rustls::pki_types::CertificateDer, TlsAcceptor};

#[tokio::main]
async fn main() {
	let bind_addr = SocketAddrV4::new(Ipv4Addr::new(127, 0, 0, 1), 8080);
	let listener = TcpListener::bind(bind_addr).await.unwrap();
	let service = hyper::service::service_fn(move |_request: Request<_>| async {
		Result::<_, std::io::Error>::Ok(Response::new("".to_string()))
	});

	println!("Listening on {bind_addr}");
	loop {
		let (stream, _addr) = listener.accept().await.unwrap();
		tokio::spawn(async move {
			let tls_acceptor = get_tls_acceptor();
			// Uncomment to use native_tls
			// let tls_acceptor = get_tls_acceptor_native();

			let tls_stream = tls_acceptor.accept(stream).await.unwrap();
			hyper::server::conn::http1::Builder::new()
				.preserve_header_case(true)
				.title_case_headers(true)
				.serve_connection(TokioIo::new(tls_stream), service)
				.with_upgrades()
				.await
				.unwrap()
		});
	}
}

fn get_tls_acceptor_native() -> tokio_native_tls::TlsAcceptor {
	let key = std::fs::read("certs/key.pem").unwrap();
	let cert = std::fs::read("certs/certs.pem").unwrap();
	let identity = Identity::from_pkcs8(&cert, &key).unwrap();
	let acceptor = tokio_native_tls::native_tls::TlsAcceptor::builder(identity).build().unwrap();
	tokio_native_tls::TlsAcceptor::from(acceptor)
}

fn get_tls_acceptor() -> tokio_rustls::TlsAcceptor {
	let mut key_reader = BufReader::new(File::open("certs/key.pem").unwrap());
	let mut cert_reader = BufReader::new(File::open("certs/certs.pem").unwrap());

	let key = rustls_pemfile::private_key(&mut key_reader).unwrap().unwrap();
	let certs = rustls_pemfile::certs(&mut cert_reader)
		.map(|cert| cert.map(CertificateDer::from))
		.collect::<std::result::Result<_, std::io::Error>>()
		.unwrap();

	let config = tokio_rustls::rustls::ServerConfig::builder()
		.with_no_client_auth()
		.with_single_cert(certs, key)
		.unwrap();

	TlsAcceptor::from(Arc::new(config))
}

To reproduce using curl:

curl -k -v https://localhost:8080 --http1.0 gives no error.
curl -k -v https://localhost:8080 --http1.1 gives a NotConnected error.

To reproduce using openssl s_client:

openssl s_client -connect localhost:8080 then Ctrl+D gives no error.
openssl s_client -connect localhost:8080 then Ctrl+C gives a NotConnected error.
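A possible caller-side mitigation (a sketch, not a fix in tokio-rustls itself) is to treat an abrupt hangup during shutdown as a clean close; with hyper the io::Error would first have to be dug out of the hyper::Error source chain.

fn is_benign_disconnect(err: &std::io::Error) -> bool {
    matches!(
        err.kind(),
        std::io::ErrorKind::NotConnected
            | std::io::ErrorKind::UnexpectedEof
            | std::io::ErrorKind::ConnectionReset
    )
}

fn tolerate_disconnect(shutdown_result: std::io::Result<()>) -> std::io::Result<()> {
    match shutdown_result {
        // The client hung up without a close_notify; treat it as a clean close.
        Err(ref e) if is_benign_disconnect(e) => Ok(()),
        other => other,
    }
}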

Test failure in common::test_stream::stream_handshake_regression_issues_77

First seen in a scheduled CI run:

https://github.com/rustls/tokio-rustls/actions/runs/8843164500

I was able to reproduce it locally -- the test passed at first, but failed after cargo update:

djc-2021 main tokio-rustls $ cargo update
    Updating crates.io index
    Updating aho-corasick v1.1.2 -> v1.1.3
    Updating autocfg v1.1.0 -> v1.2.0
    Updating aws-lc-fips-sys v0.12.5 -> v0.12.7
    Updating aws-lc-rs v1.6.2 -> v1.7.0
    Updating aws-lc-sys v0.13.2 -> v0.15.0
    Updating backtrace v0.3.69 -> v0.3.71
    Updating base64 v0.21.5 -> v0.22.0
    Removing bitflags v1.3.2
    Removing bitflags v2.4.2
      Adding bitflags v2.5.0
    Updating bytes v1.5.0 -> v1.6.0
    Updating cc v1.0.83 -> v1.0.95
    Updating either v1.10.0 -> v1.11.0
    Updating futures-core v0.3.29 -> v0.3.30
    Updating futures-macro v0.3.29 -> v0.3.30
    Updating futures-task v0.3.29 -> v0.3.30
    Updating futures-util v0.3.29 -> v0.3.30
    Updating getrandom v0.2.11 -> v0.2.14
    Updating gimli v0.28.0 -> v0.28.1
    Updating hermit-abi v0.3.3 -> v0.3.9
      Adding jobserver v0.1.31
    Updating libc v0.2.150 -> v0.2.153
    Updating libloading v0.8.1 -> v0.8.3
    Updating lock_api v0.4.11 -> v0.4.12
    Updating log v0.4.20 -> v0.4.21
    Updating memchr v2.6.4 -> v2.7.2
    Updating miniz_oxide v0.7.1 -> v0.7.2
    Updating mio v0.8.9 -> v0.8.11
    Updating object v0.32.1 -> v0.32.2
    Updating parking_lot v0.12.1 -> v0.12.2
    Updating parking_lot_core v0.9.9 -> v0.9.10
    Updating pin-project-lite v0.2.13 -> v0.2.14
    Updating prettyplease v0.2.16 -> v0.2.19
    Updating proc-macro2 v1.0.78 -> v1.0.81
    Updating quote v1.0.35 -> v1.0.36
    Updating redox_syscall v0.4.1 -> v0.5.1
    Updating regex v1.10.3 -> v1.10.4
    Updating regex-automata v0.4.5 -> v0.4.6
    Updating regex-syntax v0.8.2 -> v0.8.3
    Updating ring v0.17.5 -> v0.17.8
    Updating rustix v0.38.28 -> v0.38.34
    Updating rustls v0.23.1 -> v0.23.5
    Updating rustls-pemfile v2.0.0 -> v2.1.2
    Updating rustls-pki-types v1.3.1 -> v1.5.0
    Updating rustls-webpki v0.102.2 -> v0.102.3
    Updating serde v1.0.193 -> v1.0.198
    Updating serde_derive v1.0.193 -> v1.0.198
    Updating signal-hook-registry v1.4.1 -> v1.4.2
    Updating smallvec v1.11.2 -> v1.13.2
    Updating socket2 v0.5.5 -> v0.5.6
    Updating syn v2.0.52 -> v2.0.60
    Updating tokio v1.34.0 -> v1.37.0
    Updating webpki-roots v0.26.0 -> v0.26.1
    Updating windows-targets v0.52.4 -> v0.52.5
    Updating windows_aarch64_gnullvm v0.52.4 -> v0.52.5
    Updating windows_aarch64_msvc v0.52.4 -> v0.52.5
    Updating windows_i686_gnu v0.52.4 -> v0.52.5
      Adding windows_i686_gnullvm v0.52.5
    Updating windows_i686_msvc v0.52.4 -> v0.52.5
    Updating windows_x86_64_gnu v0.52.4 -> v0.52.5
    Updating windows_x86_64_gnullvm v0.52.4 -> v0.52.5
    Updating windows_x86_64_msvc v0.52.4 -> v0.52.5

Sending plaintext response for non-TLS connection attempts

This is my current attempt, among others. Both print statements print. I have had intermittent success... it's just not stable. What's the correct way to approach this within the lib itself?


pub struct TlsListener(
    Vec<CertificateDer<'static>>,
    PrivateKeyDer<'static>,
    TlsAcceptor,
    TcpListener,
);

impl TlsListener {
    pub async fn bind(address: SocketAddr) -> Self {
        let listener = TcpListener::bind(address).await.unwrap();

        let certs = Path::new(CERTS_PATH);
        let cert = load_certs(&certs).unwrap();

        let key = Path::new(KEY_PATH);
        let key = load_keys(&key).unwrap();

        let config = rustls::ServerConfig::builder()
            .with_no_client_auth()
            .with_single_cert(cert.clone(), key.clone_key())
            .map_err(|err| io::Error::new(io::ErrorKind::InvalidInput, err))
            .unwrap();
        let acceptor = TlsAcceptor::from(Arc::new(config));

        TlsListener(cert, key, acceptor, listener)
    }

    pub async fn redirect(&self) -> io::Result<()> {
        println!("Redirecting");
        let (mut stream, _peer_addr) = self.3.accept().await?;
        let redirect =
            b"HTTP/1.1 301 Moved Permanently\r\nLocation: https://localhost:4010/\r\n\r\n";
        stream.write_all(redirect).await?;
        stream.flush().await?;
        println!("Redirected");
        Ok(())
    }

    pub async fn accept(&self) -> io::Result<(TlsStream<TcpStream>, SocketAddr)> {
        let listener = &self.3;
        let (stream, peer_addr) = listener.accept().await?;

        match self.2.accept(stream).await {
            Ok(stream) => Ok((stream, peer_addr)),
            Err(error) => {
                self.redirect().await?;
                Err(error)
            }
        }
    }
}
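An alternative sketch (one workable approach under stated assumptions, not necessarily what the lib should do internally): peek at the first byte before starting the handshake. A TLS ClientHello begins with record type 0x16, so anything else can be answered with a plaintext redirect on the same connection instead of a second accept() call. The redirect target mirrors the one in the code above.

use tokio::io::AsyncWriteExt;
use tokio::net::TcpStream;
use tokio_rustls::{server::TlsStream, TlsAcceptor};

const REDIRECT: &[u8] =
    b"HTTP/1.1 301 Moved Permanently\r\nLocation: https://localhost:4010/\r\n\r\n";

async fn accept_or_redirect(
    acceptor: &TlsAcceptor,
    mut stream: TcpStream,
) -> std::io::Result<Option<TlsStream<TcpStream>>> {
    // Peek without consuming: a TLS handshake record starts with 0x16.
    let mut first = [0u8; 1];
    let n = stream.peek(&mut first).await?;

    if n == 1 && first[0] == 0x16 {
        Ok(Some(acceptor.accept(stream).await?))
    } else {
        // Plaintext (or empty) connection: answer in plain HTTP and close.
        stream.write_all(REDIRECT).await?;
        stream.shutdown().await?;
        Ok(None)
    }
}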

README and examples use unavailable API

The README has this example code:

tokio-rustls/README.md

Lines 22 to 23 in 63b8d6f

let mut root_cert_store = RootCertStore::empty();
root_cert_store.add_trust_anchors(webpki_roots::TLS_SERVER_ROOTS.0.iter().map(|ta| {

This API was added in rustls 0.21.6. However, Cargo.toml (for the latest published release) only requires rustls 0.21.0, leading to compilation errors if someone updates tokio-rustls without also updating the rustls point release (and the error message does not make it obvious that this is the solution).
