http-rs / http-client
Types and traits for http clients
License: Apache License 2.0
I was trying to use surf on a tokio 1.0 executor. It seems the tokio library family needs to be upgraded to 1.0, including hyper 0.14, for this library.
For instance, the following code doesn't print the full expected content of https://raw.githubusercontent.com/http-rs/surf/6627d9fc15437aea3c0a69e0b620ae7769ea6765/LICENSE-MIT
let url = "https://raw.githubusercontent.com/http-rs/surf/6627d9fc15437aea3c0a69e0b620ae7769ea6765/LICENSE-MIT";
let req = http_types::Request::new(http_types::Method::Get, Url::parse(url).unwrap());
let mut res: http_types::Response = http_client::isahc::IsahcClient::new().send(req).await?;
println!("{}", res.body_string().await?);
The cause of the problem is here:
Lines 51 to 52 in 749e374
As specified in https://docs.rs/isahc/0.9.13/isahc/struct.Body.html#method.len :
Since the length may be determined totally separately from the actual bytes, even if a value is returned it should not be relied on as always being accurate, and should be treated as a "hint"
The code shouldn't rely on it to read the response's Body.
Since HttpClients are themselves not clone-able, it would be reasonable to wrap them in reference-counting smart pointers, but the wrapped type does not impl HttpClient.
- ttl (does not appear to be accessible from hyper)
- max_connections - a slightly haphazard API exposed by the h1 client
Should be done before http-rs/surf#310
"unstable-config" should be added to the "docs" feature, and everything behind that flag should also have the doc-cfg feature attribute applied to it.
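For reference, the attribute pattern being described might look like the following (a sketch; the item name is assumed, and the doc(cfg(...)) form requires the nightly doc_cfg feature when building docs):

```rust
// Hypothetical item gated behind the flag. With this attribute pair,
// docs.rs output would show the required "unstable-config" feature.
#[cfg(feature = "unstable-config")]
#[cfg_attr(feature = "docs", doc(cfg(feature = "unstable-config")))]
pub struct Config { /* ... */ }
```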
Hi! Any chance you might want to expose the mod fetch publicly or make it a tiny crate? I started doing something similar and later just copy-pasted the module, since it does everything I need. I'm getting some requests that I want to convert to http_types::Request, same with responses, and potentially converting the other way around, from http_types to the JS equivalents. Even the WindowOrWorker is useful.
I added more config options for the hyper backend (more builder and connector options).
https://github.com/wv0m56/http-client/blob/main/src/hyper.rs#L19-L48
Please let me know if I'm off-base. Otherwise, I'll create a PR and possibly add some https tests.
At the moment it seems like the WASM backend is only receiving headers, but all the headers in the request are discarded. Additionally the body is discarded too.
IsahcClient::new() creates a client with a timeout set to 60 seconds. However, this client does not actually return a timeout error after 60 seconds. An IsahcClient created with Config and try_into does not have this problem and returns a timeout error after 60 seconds.
Here are some additional explanations.
let client: Client = Config::new()
.try_into()
.unwrap();
client.get("...").send().await;
surf::get("...").send().await;
use async_std::task;
use futures::prelude::*;
use http_client::{isahc, Config, HttpClient};
use std::time::Duration;
#[async_std::main]
async fn main() {
    let mut tasks = vec![];
    tasks.push(task::spawn(async {
        let client: isahc::IsahcClient = Config::new().try_into().unwrap();
        assert_eq!(client.config().timeout, Some(Duration::from_secs(60)));
        run_test(client, "isahc: created with config").await;
    }));
    tasks.push(task::spawn(async {
        let client = isahc::IsahcClient::new();
        assert_eq!(client.config().timeout, Some(Duration::from_secs(60)));
        run_test(client, "isahc: created by new()").await;
    }));
    println!("start tasks");
    future::join_all(tasks).await;
}

async fn run_test(client: impl HttpClient, process_name: &str) {
    let timeout = client.config().timeout.unwrap();
    let sleep_duration = timeout + Duration::from_secs(1);
    let result = communicate(client, sleep_duration).await;
    let timeout_secs = timeout.as_secs();
    let sleep_secs = sleep_duration.as_secs();
    println!(
        "{process_name}: config.timeout={timeout_secs} sleep={sleep_secs} result={:?}",
        result
    );
}

// Make a request to the server, then read the body only after the specified
// time has elapsed; written to provoke a timeout.
// If communication succeeds, return Ok; otherwise (including a timeout), return Err.
async fn communicate(client: impl HttpClient, sleep_duration: Duration) -> Result<(), std::io::Error> {
    // The body must be large enough that it is not read to the end internally;
    // otherwise no timeout occurs.
    const BODY_BYTES: usize = 80 * 1024;
    let url = format!("http://httpbin.org/bytes/{BODY_BYTES}");
    let url = http_types::Url::parse(&url).unwrap();
    let req = http_types::Request::get(url);
    let mut res = client.send(req).await.unwrap();
    let mut body = res.take_body().into_reader();
    task::sleep(sleep_duration).await;
    let mut total = vec![];
    let read_bytes = body.read_to_end(&mut total).await?;
    assert_eq!(read_bytes, BODY_BYTES);
    Ok(())
}
isahc: created with config: config.timeout=60 sleep=61 result=Err(Kind(TimedOut))
isahc: created by new(): config.timeout=60 sleep=61 result=Ok(())
When I try to compile this for the WASM backend I get the following error:
cargo check --features wasm_client --target wasm32-unknown-unknown
Checking http-client v1.0.0 (/Users/pwoolcock/code/http-client)
error: implementation of an `unsafe` trait
--> src/wasm.rs:63:1
|
63 | unsafe impl Send for InnerFuture {}
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
note: lint level defined here
--> src/lib.rs:7:11
|
7 | #![forbid(unsafe_code, future_incompatible, rust_2018_idioms)]
| ^^^^^^^^^^^
error: usage of an `unsafe` block
--> src/wasm.rs:72:9
|
72 | unsafe { Pin::new_unchecked(&mut self.fut).poll(cx) }
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
error: aborting due to 2 previous errors
error: could not compile `http-client`.
To learn more, run the command again with --verbose.
Ref http-rs/surf#110, so folks can downcast if desired.
Version: 2.0.0
Rust version: 1.42 stable
Report:
Hi! Currently, when used in a wasm context, it's only possible to fire requests under windowGlobalScope:
let window = web_sys::window().expect("failed at non window scope");
However, the fetch API should be usable under workerGlobalScope as well.
I've created PR #23 to relax this by choosing the global scope at runtime. This should enable http-client to be used in a webWorker or serviceWorker, such as Cloudflare Workers.
I would like feedback on how I can make it better. Thanks.
This is also an async-h1 client issue, but ideally it would be exposed at both the surf and http-client levels as well; users may want to upgrade a request made with http-client to a websocket. This is especially relevant in a reverse-proxy scenario, but there may be other use cases as well.
See #53
I think ideally it would be nice to have feature flags for the async-h1 client: h1-client plus native-tls or rustls.
This should be acceptable because by the time of the next major, it seems likely we may be switching to using async-h1 by default anyways: http-rs/surf#223 (comment)
http-rs/surf#111 is a patch to port the client code over to [email protected], which supports [email protected]. We should apply the same patch here.
It would be nice if this crate implemented the streaming Body trait as provided by http-body.
For context, I'm in the midst of writing a client for a specific REST API, and I would like it to be agnostic to the HTTP client implementation; e.g. any consumer of my crate should be able to use any HTTP client they want. Of all the HTTP client crates I've checked, only isahc uses the http crate as part of its public API, meaning it's the only generic client a user could choose. The REST API needs to stream both request and response bodies.
I'm hoping efforts like this crate can help push the ecosystem toward standardizing on a minimal set of types for making HTTP requests, so that end users are not tied directly to a specific style (async or sync), client implementation (isahc vs reqwest vs surf vs ...), or executor framework (tokio vs async-std vs std::thread vs ...).
See #35 (comment)
Instead of using InnerFuture we should probably use send_wrapper instead. I think we can then convert the send method's body to be along the lines of:
async fn send(&self, req: Request) -> Result<Response, Error> {
    let fut = SendWrapper::new(async move {
        // perform the `fetch` call here
    });
    fut.await
}
If a website returns an invalid response, the line https://github.com/http-rs/http-client/blob/main/src/isahc.rs#L58 will panic.
See http-rs/surf#224 for more info.
My project is using http-client v6.5.3. As of yesterday, the compiler has been issuing a deprecation warning.
I've traced the problem to http-client using an old version of the deadpool crate (v0.7.0).
Would it be possible to upgrade the deadpool crate to something up to date?
My mistake, easy to fix but just needs actually doing.
For Surf 3.0 I am almost certainly going to merge http-client back into it. The feature flag situation is currently hellish and maintaining it is a bad experience. Most of this would be solved by not having surf set downstream feature flags.
If you use http-client for a client other than surf, you either need to step up and help me maintain this, or lose it. Thanks.
For testing Surf it'd be great if we could create a mock backend to automatically handle HTTP responses. We could in turn, for example, use tide with the mock frontend to create mock responses.
The benefit of this is that unit tests should become really fast, and we can provide extra methods to assert values, as seen in http-rs/tide#273. Thanks!
I believe the timeout that can be set through http_client refers to the connection timeout. And I think the process shown below sets a timeout on isahc. According to the isahc documentation, it is not the connection timeout that can be set with timeout(). The connection timeout can be set with connect_timeout().
Line 90 in 3c0ed9a
Line 118 in 3c0ed9a
I've made the following reproducible sample code using mockito (though the problem happens with other servers too) that gives me the Connection refused (os error 111) error:
#[async_std::main]
async fn main() {
    use http_client::http_types::*;
    use http_client::HttpClient;

    let client = http_client::h1::H1Client::new();
    let _mock_guard = mockito::mock("GET", "/report")
        .with_status(200)
        .expect_at_least(2)
        .create();

    // Skip the initial "http://127.0.0.1:" to get only the port number
    let mock_port = &mockito::server_url()[17..];

    let url = &format!("http://0.0.0.0:{}/report", mock_port);
    let req = Request::new(Method::Get, Url::parse(url).unwrap());
    dbg!(client.send(req.clone()).await.unwrap());

    let url = &format!("http://localhost:{}/report", mock_port);
    let req = Request::new(Method::Get, Url::parse(url).unwrap());
    dbg!(client.send(req.clone()).await.unwrap());
}
[package]
name = "test-h1"
version = "0.1.0"
authors = ["Jonathas-Conceicao <[email protected]>"]
edition = "2018"
[dependencies]
async-std = { version = "1", features = ["attributes"] }
http-client = { version = "6", default-features = false, features = ["h1_client"] }
mockito = "0.29"
The same example does work as expected if I swap to the curl_client feature with the NativeClient.
Running a wget against the running mockito server gave this output:
$ wget -O- http://localhost:1234/report
--2021-03-08 16:13:12-- http://localhost:1234/report
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:1234... failed: Connection refused.
Connecting to localhost (localhost)|127.0.0.1|:1234... connected.
HTTP request sent, awaiting response... 200 OK
Maybe the h1-client is not trying the IPv4 address for the hostname after the IPv6 connection fails?
Internal error. message: io::copy failed Caused by: An established connection was aborted by the software in your host machine. (os error 10053), error_type: Some("std::io::error::Error"),
See https://docs.microsoft.com/en-us/windows/win32/winsock/windows-sockets-error-codes-2 (search for 10053)
Software caused connection abort.
An established connection was aborted by the software in your host computer, possibly due to a data transmission time-out or protocol error.
I suspect this is an issue with the newly merged connection pooling support for the async-h1 client. I think it needs proper recycle checks.
When I try to compile the WASM backend I get the following error:
cargo check --features wasm_client --target wasm32-unknown-unknown
Checking http-client v1.0.0 (/Users/pwoolcock/code/http-client)
error[E0432]: unresolved import `wasm_bindgen_futures::futures_0_3`
--> src/wasm.rs:79:31
|
79 | use wasm_bindgen_futures::futures_0_3::JsFuture;
| ^^^^^^^^^^^ could not find `futures_0_3` in `wasm_bindgen_futures`
warning: unused import: `wasm_bindgen::JsCast`
--> src/wasm.rs:78:9
|
78 | use wasm_bindgen::JsCast;
| ^^^^^^^^^^^^^^^^^^^^
|
= note: `#[warn(unused_imports)]` on by default
error: aborting due to previous error
For more information about this error, try `rustc --explain E0432`.
error: could not compile `http-client`.
To learn more, run the command again with --verbose.
The following code uses the same H1Client instance to perform the same (http) request twice. The second request hangs for ~5 seconds and then returns an Err with "connection closed". The same code using IsahcClient doesn't fail.
use http_client::h1::H1Client as DefaultClient;
// use http_client::isahc::IsahcClient as DefaultClient;
use http_client::HttpClient;
use http_client::http_types::*;
use async_std::task;

fn main() {
    task::block_on(async {
        env_logger::init();
        let url = "http://httpd.apache.org/";
        let req = Request::new(Method::Get, Url::parse(url).unwrap());
        let client = DefaultClient::new();

        println!("GET {} #1", url);
        let mut res: Response = client.send(req.clone()).await.unwrap();
        if res.status() != StatusCode::Ok {
            println!("ERROR in result: {:?}", res);
            panic!();
        }
        let msg = res.body_bytes().await.unwrap();
        let msg = String::from_utf8_lossy(&msg);
        if !msg.contains("The Apache HTTP Server Project") {
            panic!();
        }
        println!("#1 OK");

        println!("GET {} #2", url);
        let mut res: Response = client.send(req.clone()).await.unwrap();
        if res.status() != StatusCode::Ok {
            println!("ERROR in result: {:?}", res);
            panic!();
        }
        let msg = res.body_bytes().await.unwrap();
        let msg = String::from_utf8_lossy(&msg);
        if !msg.contains("The Apache HTTP Server Project") {
            panic!();
        }
        println!("#2 OK");
    });
}
Cargo.toml:
[package]
name = "http-client-test"
version = "0.0.1"
edition = "2018"
[dependencies]
http-client = { version = "6.3.3", default-features = false, features = ["h1_client"] }
# http-client = { version = "6.3.3", default-features = false, features = ["curl_client"] }
async-std = "1.9.0"
env_logger = "0.8"
[[bin]]
name = "test"
From: http-rs/surf#237
Right now the Client::with_http_client method has the following signature:

pub fn with_http_client(http_client: Arc<dyn HttpClient>) -> Self;

This means that even if a backend we pass implements Clone, we must wrap it in another Arc. This leads to constructs such as:

let mut app = tide::new();
let mut client = Client::with_http_client(Arc::new(app));
client.set_base_url("http://example.com/");

Instead I'd like us to change the signature to:

pub fn connect_with<C>(http_client: C) -> crate::Result<Self>
where
    C: HttpClient + Clone;

Which will enable us to write:

let mut app = tide::new();
let mut client = Client::connect_with("http://example.com", app)?;
Following on from #53 being merged, what would be the most ergonomic way of supporting a customizable rustls::ClientConfig?
I'm looking to do something similar to: async-tls/example
async fn connector_for_ca_file(cafile: &Path) -> io::Result<TlsConnector> {
    let mut config = ClientConfig::new();
    let file = async_std::fs::read(cafile).await?;
    let mut pem = Cursor::new(file);
    config
        .root_store
        .add_pem_file(&mut pem)
        .map_err(|_| io::Error::new(io::ErrorKind::InvalidInput, "invalid cert"))?;
    Ok(TlsConnector::from(Arc::new(config)))
}
The wasm backend has a bunch of unwraps where proper propagation of the errors should be happening.
In https://github.com/http-rs/surf/blob/8cf05e347404abbb091b80f809a5e1a354c6c735/src/http_client/hyper.rs we have a hyper 0.12 backend that unfortunately doesn't quite work. I fumbled something up during release, and never quite fixed it. But now it seems there may still be a few issues to resolve.
It'd be nice if we could port it back here and restore it under the hyper-client flag. Note that this should be based on Hyper 0.12, and not any of the alphas, as a lot of it still seems to be in flux. Thanks!