
multipart's Introduction

Multipart

Client- and server-side abstractions for HTTP file uploads (POST requests with Content-Type: multipart/form-data).

Supports several different (synchronous API) HTTP crates. Asynchronous (i.e. futures-based) API support will be provided by multipart-async.
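For reference, here is what such a request looks like on the wire, per RFC 7578 (the boundary and field names below are illustrative):

```
POST /upload HTTP/1.1
Content-Type: multipart/form-data; boundary=XBOUNDARY

--XBOUNDARY
Content-Disposition: form-data; name="text"

Hello, world!
--XBOUNDARY
Content-Disposition: form-data; name="file"; filename="lorem_ipsum.txt"
Content-Type: text/plain

Lorem Ipsum
--XBOUNDARY--
```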

Minimum supported Rust version: 1.36.0
Maintenance Status: Passive

As the web ecosystem in Rust moves towards asynchronous APIs, the need for this crate in its synchronous form becomes dubious. The crate is usable enough in its current form, so as of June 2020 it is in passive-maintenance mode: bug reports will be addressed as time permits and PRs will be accepted, but otherwise no new development of the existing API is taking place.

Look for a release of multipart-async soon which targets newer releases of Hyper.

Integrations

Example files demonstrating how to use multipart with these crates are available under examples/.

Hyper

via the hyper feature (enabled by default).

Note: Hyper 0.9, 0.10 (synchronous API) only; support for asynchronous APIs will be provided by multipart-async.

Client integration includes support for regular hyper::client::Request objects via multipart::client::Multipart, as well as integration with the new hyper::Client API via multipart::client::lazy::Multipart (new in 0.5).

Server integration for hyper::server::Request via multipart::server::Multipart.

Iron

via the iron feature.

Provides regular server-side integration with iron::Request via multipart::server::Multipart, as well as a convenient BeforeMiddleware implementation in multipart::server::iron::Intercept.

Nickel (returning to multipart in 0.14!)

via the nickel feature.

Provides server-side integration with &mut nickel::Request via multipart::server::Multipart.

tiny_http

via the tiny_http feature.

Provides server-side integration with tiny_http::Request via multipart::server::Multipart.

Rocket

Direct integration is not provided, as the Rocket folks seem to want to handle multipart/form-data behind the scenes, which would supersede any integration with multipart. However, an example is available showing how to use multipart on a Rocket server: examples/rocket.rs

⚡ Powered By ⚡

Customizable drop-in std::io::BufReader replacement, created to be used in this crate. It is needed because it can read more bytes into the buffer even when the buffer is not yet empty, which is necessary when a boundary falls across two reads. (It was easier to author a new crate than to try to get this added to std::io::BufReader.)
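As a std-only illustration of the problem (not the crate's actual code): if the boundary straddles two fixed-size reads, searching each chunk independently misses it, so the reader must be able to keep unconsumed bytes and read more into a non-empty buffer before searching again:

```rust
// Naive substring search over a byte buffer, for demonstration only.
fn find_boundary(haystack: &[u8], boundary: &[u8]) -> Option<usize> {
    haystack.windows(boundary.len()).position(|w| w == boundary)
}

fn main() {
    let body = b"field data--abc";
    let boundary = b"--abc";

    // Two fixed-size reads that happen to split the boundary across chunks:
    let (chunk1, chunk2) = body.split_at(12); // "field data--" / "abc"
    assert_eq!(find_boundary(chunk1, boundary), None); // missed
    assert_eq!(find_boundary(chunk2, boundary), None); // missed

    // Searching the accumulated buffer finds it:
    assert_eq!(find_boundary(body, boundary), Some(10));
}
```

This is why a plain std::io::BufReader, which only refills when its buffer is drained, is not sufficient here.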

Fast, zero-copy HTTP header parsing, used to read field headers in multipart/form-data request bodies.

Fast string and byte-string search. Used to find boundaries in the request body. Uses SIMD acceleration when possible.

License

Licensed under either of

  • Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
  • MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

multipart's People

Contributors

abonander, atouchet, azyobuzin, bachue, chpio, coolreader18, dariost, erickt, fauxfaux, hatsunearu, iamsebastian, iptq, jessevermeulen123, jseyfried, kardeiz, little-bobby-tables, little-dude, llogiq, lukaskalbertodt, mbme, mitsuhiko, nicolas-cherel, nipunn1313, oherrala, pseitz, puhrez, realcundo, robinst, spk, white-oak


multipart's Issues

MultipartData is not composable due to lifetime restrictions

Hello!

I'm considering using your library in Rocket, but the lifetime restrictions imposed by MultipartData make any kind of meaningful composition or abstraction impossible. For instance, it is not possible to call a function that creates a Multipart structure, reads a MultipartData entry from a stream, and returns a MultipartFile. This same restriction means that MultipartData cannot be abstracted away in any meaningful manner.

The issue is caused by the lifetime in MultipartData being derived from the read_entry method in Multipart, making it impossible for a MultipartData object to outlive its Multipart parent object. There are two references in MultipartData:

  • A reference to a str in the Text variant.
  • A reference to a BoundaryReader<B> in MultipartFile in the File variant.

The first can be easily removed by using a String instead of an &str. The &str seems wholly unnecessary, as does line_buf in Multipart.

The reference to the reader is more interesting. There are many ways to avoid this reference, but one particularly viable solution is to have an alternate API that moves the stream into the MultipartField. You can accomplish this by having a MultipartField::next() method that consumes the current object and moves the internal stream to an optionally newly created one. Using this API to iterate through all fields would look something like:

let mut multipart = Multipart::with_body(data, boundary);

let mut field = multipart.field();
loop {
    /* use `field` here */

    match field.next() {
        Some(next) => field = next,
        None => break
    }
}

This is easy to use and is significantly more flexible. This allows for an extract_reader method to extract the underlying reader, which can be necessary.
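A std-only mock of this proposal (all names here are invented for illustration, not this crate's API) shows the ownership-threading pattern compiling with no lifetime ties to a parent object:

```rust
// Mock of the proposed API: each field owns the stream; `next()` consumes
// `self` and moves the stream into the next field. Nothing borrows from a
// parent `Multipart`, so fields can be returned from functions freely.
struct Stream(Vec<&'static str>); // stand-in for the request body reader

struct Field {
    data: &'static str,
    rest: Stream,
}

impl Field {
    fn first(mut stream: Stream) -> Option<Field> {
        if stream.0.is_empty() {
            return None;
        }
        let data = stream.0.remove(0);
        Some(Field { data, rest: stream })
    }

    // Consuming `self` is what frees the result from the parent's lifetime.
    fn next(self) -> Option<Field> {
        Field::first(self.rest)
    }
}

fn main() {
    let mut seen = Vec::new();
    let mut field = Field::first(Stream(vec!["a", "b", "c"]));
    while let Some(f) = field {
        seen.push(f.data);
        field = f.next();
    }
    assert_eq!(seen, ["a", "b", "c"]);
}
```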

Please consider this API or other solutions that allow your library to be used in more contexts.

Support async Hyper

This might necessitate creating a new crate architected around async requests.

Different behavior from `curl -T`

Hello,

I have a curl request which works (uploading jars to remote Artifactory repo):

curl -i -k -X PUT my_url -T path/to/my/file

With hyper and multipart, with the following code, I get a response back from Artifactory (HTTP 201) but the file is never uploaded:

extern crate hyper;
extern crate multipart;

use hyper::client::Request;
use hyper::method::Method;
use hyper::net::Streaming;

use self::multipart::client::Multipart;

use std::path::PathBuf;

header! { (XJfrogArtApi, "X-Jfrog-Art-Api") => [String] }

pub fn upload_jar(base_url: &str, api_key: &str, jar_path: PathBuf) {
    let url = base_url.parse().unwrap();
    let mut request = Request::new(Method::Put, url).unwrap();

    {
    let headers = request.headers_mut();
    headers.set(XJfrogArtApi(api_key.to_string()));
    }

    let mut multipart = Multipart::from_request(request).unwrap();

    write_body(&mut multipart, &jar_path.to_string_lossy()).unwrap();

    let _response = multipart.send().unwrap();

    println!("{:?}", _response);
}

fn write_body(multi: &mut Multipart<Request<Streaming>>, jar_path: &str) -> hyper::Result<()> {
    multi.write_file("content", jar_path) 
        .and(Ok(()))
}

So, have I missed a header that curl -T sets?

Thanks.

Implement nested boundaries

This is a feature I've been putting off for a while because I wasn't sure how to approach it. Still not quite sure, but I needed a reminder somewhere so I don't forget it.

First entry is being skipped

Hi there,

I was trying out multipart on a form with a single file field, but I was not being able to get the result back.

I've created an example project to reproduce the issue. In this project, there are 3 file fields that I submit in the request, but the first is getting skipped.
https://github.com/bltavares/multipart-skipping-field-bug-report

I was trying to pinpoint why it was happening, and the closest I got was that we were reaching this line:
https://github.com/cybergeek94/multipart/blob/master/src/server/mod.rs#L166

I'm not so sure why it is getting there, but I would be open to help with the fix if you could give me some guidance.

Cheers (:

Is this dead?

Hi there,

it seems like this repo died a few months ago. Any plans to continue this project? Or are there any good alternatives?

HTTPS/TLS support

I know it's not an issue directly connected to multipart, but it's the last link in the chain...

I need to send a multipart/form-data submission to an https address. I searched for a working example without luck, so I tried to merge different examples from hyper_native_tls and multipart.
I haven't been able to find a way to specify the request method (GET, POST, PUT, etc...).

You can find my code here: https://github.com/nappa85/Rustegram/blob/master/client_lib/src/lib.rs
The test is everything you need to make it work, or fail, because it fails with
thread 'tests::it_works' panicked at 'Error sending multipart request: Io(Error { repr: Custom(Custom { kind: WriteZero, error: StringError("failed to write whole buffer") }) })', src/libcore/result.rs:906:4

Maybe my code could be helpful to create a working example about multipart over HTTPS

Support returning MultiPart as a server reply

I have a server that needs to return JSON metadata and several related binary blobs (Float32Arrays that are directly passed on to the GPU) in one reply. So far I return the blobs as base64 encoded strings in the JSON metadata, but decoding base64 into an array in the browser is slow.

I am thinking that through the fetch API I can interpret a response as formData() which could allow me to get the binary data directly as arrays. For this I need to return multipart/* in my server.

I tried doing this by using multipart::client::Multipart, but I cannot construct it without a request - which I do not have, since this is the reply in the server. Why can I not create a Multipart, add my binary data to it and ask it for the encoded body as a string?
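For what it's worth, such a body can be assembled by hand with only the standard library. This is a sketch, not this crate's API; the boundary, part names, and content types are arbitrary examples. The server would send the result with a matching Content-Type: multipart/form-data; boundary=... header:

```rust
// Build a multipart body from (name, data, content_type) parts.
// Each part: boundary line, headers, blank line, raw data, CRLF.
fn multipart_body(boundary: &str, parts: &[(&str, &[u8], &str)]) -> Vec<u8> {
    let mut body = Vec::new();
    for (name, data, content_type) in parts {
        body.extend_from_slice(format!("--{}\r\n", boundary).as_bytes());
        body.extend_from_slice(
            format!("Content-Disposition: form-data; name=\"{}\"\r\n", name).as_bytes());
        body.extend_from_slice(format!("Content-Type: {}\r\n\r\n", content_type).as_bytes());
        body.extend_from_slice(data);
        body.extend_from_slice(b"\r\n");
    }
    // Closing boundary terminates the whole body.
    body.extend_from_slice(format!("--{}--\r\n", boundary).as_bytes());
    body
}

fn main() {
    let body = multipart_body("XBOUNDARY", &[
        ("metadata", br#"{"n":2}"#, "application/json"),
        ("blob0", &[0u8, 1, 2, 3], "application/octet-stream"),
    ]);
    assert!(body.starts_with(b"--XBOUNDARY\r\n"));
    assert!(body.ends_with(b"--XBOUNDARY--\r\n"));
}
```

A browser-side fetch could then call response.formData() on such a reply and read the binary parts directly.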

Hyper client vs lazy Hyper client issue

I have the following 'lazy' code:

let ssl = NativeTlsClient::new().map_err(|e| ::hyper::Error::Ssl(Box::new(e)))?;
let connector = HttpsConnector::new(ssl);
let client = Client::with_connector(connector);

let mut res = Multipart::new().add_text("chat_id", "220614325")
   .add_file("document", "/home/steven/hello.jpg")
   .client_request(&client, &url)?;

println!("{:?}", &res);

which returns a 400 error.

And the following 'non-lazy' code:


let ssl = NativeTlsClient::new().map_err(|e| ::hyper::Error::Ssl(Box::new(e)))?;
let connector = HttpsConnector::new(ssl);

let request = hyper::client::Request::with_connector(Method::Post,
                                                             Url::parse(&url).unwrap(),
                                                             &connector)?;

let mut multipart = Multipart::from_request(request)?;

multipart.write_text("chat_id", "220614325")?;
multipart.write_file("document", "/home/steven/hello.jpg")?;

let mut res = multipart.send()?;

println!("{:?}", &res);

which works fine.

So what am I doing wrong with the 'lazy' version?

Incompatibility between 0.13.4 and 0.13.3

The recent release of 0.13.4 changes the API for interacting with an entries struct:

As per Cargo docs, crates with major version 0 are expected to be compatible across patch versions:

^0.2.3 := >=0.2.3 <0.3.0

This change has temporarily broken the rust2 variant of swagger-codegen. Should I fix codegen to use 0.13.4's API (entries.fields.into_map().get("field_name")) or is this an accidental incompatibility you're going to revert and I should stick with 0.13.3's (entries.fields.get("field_name"))?
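As a stopgap, a downstream crate hit by this can opt out of the semver-compatible range by pinning an exact version with Cargo's `=` requirement (standard Cargo syntax, shown here with the pre-break release):

```toml
[dependencies]
multipart = "=0.13.3"
```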

examples/iron.rs fails to build with rustc 1.12.1 (d4f39402a 2016-10-19)

The build fails with this error:

error[E0277]: the trait bound `&mut iron::Request<'_, '_>: multipart::server::HttpRequest` is not satisfied
 --> src/main.rs:17:11
   |
17 |     match Multipart::from_request(request) {
   |           ^^^^^^^^^^^^^^^^^^^^^^^
   |
   = note: required by `<multipart::server::Multipart<()>>::from_request`

error: aborting due to previous error

compiling multipart v0.5.0 fails

I was trying to get this working and I'm having issues compiling multipart. Have you seen this before?

~/D/h/src (master) $ cargo run
   Compiling multipart v0.5.0
/home/hatsunearu/.cargo/registry/src/github.com-88ac128001ac3a9a/multipart-0.5.0/src/client/mod.rs:227:46: 227:88 error: type mismatch resolving `<fn() -> mime::Mime {mime_guess::octet_stream} as core::ops::FnOnce<()>>::Output == mime::Mime`:
 expected struct `mime::Mime`,
    found a different struct `mime::Mime` [E0271]
/home/hatsunearu/.cargo/registry/src/github.com-88ac128001ac3a9a/multipart-0.5.0/src/client/mod.rs:227         let content_type = Some(content_type.unwrap_or_else(::mime_guess::octet_stream));
                                                                                                                                                    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/hatsunearu/.cargo/registry/src/github.com-88ac128001ac3a9a/multipart-0.5.0/src/client/mod.rs:227:46: 227:88 help: run `rustc --explain E0271` to see a detailed explanation
/home/hatsunearu/.cargo/registry/src/github.com-88ac128001ac3a9a/multipart-0.5.0/src/client/mod.rs:264:6: 264:18 error: mismatched types:
 expected `mime::Mime`,
    found `mime::Mime`
(expected struct `mime::Mime`,
    found a different struct `mime::Mime`) [E0308]
/home/hatsunearu/.cargo/registry/src/github.com-88ac128001ac3a9a/multipart-0.5.0/src/client/mod.rs:264     (content_type, filename)
                                                                                                            ^~~~~~~~~~~~
/home/hatsunearu/.cargo/registry/src/github.com-88ac128001ac3a9a/multipart-0.5.0/src/client/mod.rs:264:6: 264:18 help: run `rustc --explain E0308` to see a detailed explanation
/home/hatsunearu/.cargo/registry/src/github.com-88ac128001ac3a9a/multipart-0.5.0/src/client/mod.rs:264:6: 264:18 note: Perhaps two different versions of crate `mime` are being used?
/home/hatsunearu/.cargo/registry/src/github.com-88ac128001ac3a9a/multipart-0.5.0/src/client/mod.rs:264     (content_type, filename)
                                                                                                            ^~~~~~~~~~~~
/home/hatsunearu/.cargo/registry/src/github.com-88ac128001ac3a9a/multipart-0.5.0/src/server/mod.rs:301:5: 301:70 error: mismatched types:
 expected `mime::Mime`,
    found `mime::Mime`
(expected struct `mime::Mime`,
    found a different struct `mime::Mime`) [E0308]
/home/hatsunearu/.cargo/registry/src/github.com-88ac128001ac3a9a/multipart-0.5.0/src/server/mod.rs:301     cont_type.parse().ok().unwrap_or_else(::mime_guess::octet_stream)
                                                                                                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/hatsunearu/.cargo/registry/src/github.com-88ac128001ac3a9a/multipart-0.5.0/src/server/mod.rs:301:5: 301:70 help: run `rustc --explain E0308` to see a detailed explanation
/home/hatsunearu/.cargo/registry/src/github.com-88ac128001ac3a9a/multipart-0.5.0/src/server/mod.rs:301:5: 301:70 note: Perhaps two different versions of crate `mime` are being used?
/home/hatsunearu/.cargo/registry/src/github.com-88ac128001ac3a9a/multipart-0.5.0/src/server/mod.rs:301     cont_type.parse().ok().unwrap_or_else(::mime_guess::octet_stream)
                                                                                                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
error: aborting due to 3 previous errors
Could not compile `multipart`.

To learn more, run the command again with --verbose.

Testing

The library seems to be devoid of any meaningful testing. What is the testing story? How confident are you that the library works?

[Easy/Mentored] Create Sample Projects

I started to work on sample projects for each of the features in this release, but I realized it would be a perfect opportunity to mentor some beginners to Rust webdev!

If you are interested in tackling any of these, let me know on this PR and we'll coordinate on the #rust-webdev channel of the Mozilla IRC or another communication channel of your choosing.

Each of the following items will be created as a new Cargo binary project cargo new --bin [name] under a new samples/ directory in this repo. When done, they should be submitted via a PR. Each solution should have sufficient documentation in comments (not necessarily step-by-step, but perhaps a high-concept overview of each stage of request submission/reception).

  • hyper_client: use multipart::client::Multipart and hyper::client::Request to make a multipart request to localhost:80 with a text field, a file field (as myfile.txt in the root of the sample project with a paragraph of lorem ipsum text), and a stream field (with a vector of random/arbitrary bytes as the source).
  • hyper_reqbuilder: use multipart::client::lazy::Multipart and hyper::Client to make the same request as above.
  • hyper_server: host a server with hyper::Server on localhost:80 and use multipart::server::Multipart to intercept multipart requests and read out all fields (text and file/stream) to stdout.
  • iron: do the same as above, but with iron::Request instead of Hyper.
  • iron_intercept: create an iron::Chain coupling the multipart::server::iron::Intercept before-middleware and a handler which extracts the multipart::server::Entries from the request and reads the fields to stdout.
  • tiny_http (completed by @White-Oak): do the same as hyper_server but with tiny_http.
  • nickel: do the same as hyper_server but with Nickel.

Content-Disposition does not need to be the first entry

As the title states, multipart/form-data does not require Content-Disposition to be the first header of a part. The optional Content-Type might also be the first one.

Relevant Sections of the RFC:
https://tools.ietf.org/html/rfc2388#section-3
https://tools.ietf.org/html/rfc2388#section-5.5

multipart's inability to handle such requests recently caused breakage in one of my projects:
MPIB/hazel#5

As a result of one of the primary clients changing its default order:
NuGet/Home#2661

multipart is an indirect dependency via https://github.com/iron/params in my project and although I am using a currently outdated version (0.7), I do not think this issue can be resolved by updating my dependencies after looking at your source code:

https://docs.rs/multipart/0.8.0/src/multipart/.cargo/registry/src/github.com-1ecc6299db9ec823/multipart-0.8.0/src/server/mod.rs.html#427
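For illustration, per the RFC both of the following parts are valid, though only the first matches the header order multipart currently expects (boundary and field names are made up):

```
--bound
Content-Disposition: form-data; name="field1"
Content-Type: text/plain

value
--bound
Content-Type: text/plain
Content-Disposition: form-data; name="field2"

value
--bound--
```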

Chunked Requests do not work in Iron Framework

Not sure where the problem lies here, but it appears that the multipart intercept in Iron gets stuck on a chunked boundary.

Not normally an issue if you use this via curl, but the multipart hyper client appears to send the body in chunks, which is not read correctly by the multipart iron handler.

Steps to test:

  • Submit to an iron request with the intercept enabled and a println!("{:?}", entries) statement above the response:
sudo cargo run --features all --example iron_intercept

Watch the println output of the iron request:

Entries { fields: {}, files: {}, dir: Temp(TempDir { path: "/tmp/multipart.fcIrXAW1ERnN" }) }

Notice that there are no files.

With curl (curl -F "file=@lorem_ipsum.txt" localhost) this works ok:

Entries { fields: {}, files: {"file": SavedFile { path: "/tmp/multipart.GgWgbNoThb1U/kJiP8pJ5MRNr", filename: Some("lorem_ipsum.txt"), size: 5 }}, dir: Temp(TempDir { path: "/tmp/multipart.GgWgbNoThb1U" }) }

Separate into client and server crates

Generally, you want either one or the other. The client and server sides should be separated into their own crates (but potentially still under this repo).

multipart can remain as a shell crate that reexports the client and server crates as modules, so its usage doesn't change for existing projects.

Hyper Client type mismatch

When I call client_request on mp, whose type is multipart::client::lazy::Multipart, I get this error. I don't understand: isn't &hyper::Client the same as &hyper::client::Client?

src/main.rs:32:23: 32:52 error: mismatched types [E0308]
src/main.rs:32     mp.client_request(&hyper::client::Client::new(),"http://localhost:1234");
                                     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/main.rs:32:23: 32:52 help: run `rustc --explain E0308` to see a detailed explanation
src/main.rs:32:23: 32:52 note: expected type `&hyper::client::Client`
src/main.rs:32:23: 32:52 note:    found type `&hyper::Client`

Request-for-Comments (RFC) Discussions

I have left a number of Request-for-Comments (RFC) questions on various APIs and other places
in the code as there are some cases where I'm not sure what the desirable behavior is.

This issue is a place to collect responses and discussions for these questions.

Please quote the RFC-statement (and/or link to its source line) and provide your feedback here.

Multipart not working between Iron server and Hyper client

Sort of related to ticket #54: the data doesn't appear to be decoded correctly between the examples.

Steps to replicate:

  • Create a lorem_ipsum.txt file:
echo "Lorem Ipsum" > lorem_ipsum.txt
  • In one terminal run the iron example:
cargo run --example iron
  • In another terminal run the hyper_client example:
cargo run --example hyper_client

What you should see

In the iron terminal:

Field "text": "Hello, world!"
Field "file" has 1 files:
Lorem Ipsum

What you see

Just the text field is processed:

Field "text": "Hello, world!"

Relicense under dual MIT/Apache-2.0

This issue was automatically generated. Feel free to close without ceremony if
you do not agree with re-licensing or if it is not possible for other reasons.
Respond to @cmr with any questions or concerns, or pop over to
#rust-offtopic on IRC to discuss.

You're receiving this because someone (perhaps the project maintainer)
published a crates.io package with the license as "MIT" xor "Apache-2.0" and
the repository field pointing here.

TL;DR the Rust ecosystem is largely Apache-2.0. Being available under that
license is good for interoperation. The MIT license as an add-on can be nice
for GPLv2 projects to use your code.

Why?

The MIT license requires reproducing countless copies of the same copyright
header with different names in the copyright field, for every MIT library in
use. The Apache license does not have this drawback. However, this is not the
primary motivation for me creating these issues. The Apache license also has
protections from patent trolls and an explicit contribution licensing clause.
However, the Apache license is incompatible with GPLv2. This is why Rust is
dual-licensed as MIT/Apache (the "primary" license being Apache, MIT only for
GPLv2 compat), and doing so would be wise for this project. This also makes
this crate suitable for inclusion and unrestricted sharing in the Rust
standard distribution and other projects using dual MIT/Apache, such as my
personal ulterior motive, the Robigalia project.

Some ask, "Does this really apply to binary redistributions? Does MIT really
require reproducing the whole thing?" I'm not a lawyer, and I can't give legal
advice, but some Google Android apps include open source attributions using
this interpretation. Others also agree with
it
.
But, again, the copyright notice redistribution is not the primary motivation
for the dual-licensing. It's stronger protections to licensees and better
interoperation with the wider Rust ecosystem.

How?

To do this, get explicit approval from each contributor of copyrightable work
(as not all contributions qualify for copyright, due to not being a "creative
work", e.g. a typo fix) and then add the following to your README:

## License

Licensed under either of

 * Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
 * MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)

at your option.

### Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any
additional terms or conditions.

and in your license headers, if you have them, use the following boilerplate
(based on that used in Rust):

// Copyright 2016 multipart Developers
//
// Licensed under the Apache License, Version 2.0, <LICENSE-APACHE or
// http://apache.org/licenses/LICENSE-2.0> or the MIT license <LICENSE-MIT or
// http://opensource.org/licenses/MIT>, at your option. This file may not be
// copied, modified, or distributed except according to those terms.

It's commonly asked whether license headers are required. I'm not comfortable
making an official recommendation either way, but the Apache license
recommends it in their appendix on how to use the license.

Be sure to add the relevant LICENSE-{MIT,APACHE} files. You can copy these from the Rust repo for a plain-text version.

And don't forget to update the license metadata in your Cargo.toml to:

license = "MIT OR Apache-2.0"

I'll be going through projects which agree to be relicensed and have approval by the necessary contributors and making these changes, so feel free to leave the heavy lifting to me!

Contributor checkoff

To agree to relicensing, comment with:

I license past and future contributions under the dual MIT/Apache-2.0 license, allowing licensees to chose either at their option.

Or, if you're a contributor, you can check the box in this repo next to your
name. My scripts will pick this exact phrase up and check your checkbox, but
I'll come through and manually review this issue later as well.

multipart content type ignored, probably due to boundary parameter detection problem

I tried to investigate why my multipart request was not detected. Step by step, I finally found what might be the problem, and it seems to be in the mime parameter handling here: https://github.com/cybergeek94/multipart/blob/4ceeb41cb4a3f015ccb8f7aeb74b124e96313764/src/server/hyper.rs#L86

I'm not sure what triggered the regression; probably the introduction of Boundary as a variant of the enum. When I wrote a small test to figure out what was happening, the Attr::Ext(_) branch of the match was not matched.

Test
mod test {
  use mime::{Mime, TopLevel, SubLevel, Attr, Value};

  #[test]
  fn mime_parse_test() {
    "multipart/form-data; boundary=------------------------674d2efb3fca3c58".parse::<Mime>().and_then(|m| {
      match m {
        Mime(_, _, ref params) => {
          for &(ref known, ref value) in params {
            println!("{:?} {:?}", known, value);
            match *known {
              Attr::Ext(ref name) => println!("named string {:?}", name),
              _ => ()
            };

            match *known {
              Attr::Boundary => println!("known as boundary"),
              _ => ()
            };
          }
        }
      };
      Ok(())
    });
  }
}
Result
running 1 test
Boundary Ext("------------------------674d2efb3fca3c58")
known as boundary
test test::mime_parse_test ... ok

By the way, I'm also very interested in multipart handling. Thanks for refreshing the code and crates!

Missing field data when field contains multiple headers

I'm having trouble using the multipart crate to parse message bodies where a multipart field contains multiple headers. Fields that only have a Content-Disposition header are successfully parsed but fields that have multiple headers do not seem to be able to extract the field data.

It's possible that I'm misusing multipart but I have a minimal repro that tries to show the problem. The repro attempts to parse a multipart body with two fields. Part 1 has a Content-Disposition header and a Content-Type header, whilst part 2 has just a Content-Disposition header. The project prints the name and data of each MultipartField, with the data for part 2 being printed successfully but the data for part 1 missing.
cargo run gives:

part1:
part2:
        Part 2 body
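As a sanity check on the expected behavior, here is a minimal std-only sketch (this helper is hypothetical, not the crate's parser): everything up to the first blank line belongs to the part's headers, however many there are, and the data begins only after it:

```rust
// Split one multipart part into (headers, data) at the first CRLF CRLF.
fn split_part(part: &[u8]) -> Option<(&[u8], &[u8])> {
    part.windows(4)
        .position(|w| w == b"\r\n\r\n")
        .map(|i| (&part[..i], &part[i + 4..]))
}

fn main() {
    // Part 1 carries two headers: Content-Disposition and Content-Type.
    let part1 = b"Content-Disposition: form-data; name=\"part1\"\r\nContent-Type: text/plain\r\n\r\nPart 1 body";
    let (headers, data) = split_part(part1).unwrap();
    assert!(headers.ends_with(b"Content-Type: text/plain"));
    assert_eq!(data, b"Part 1 body"); // the body must still be recoverable
}
```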

Windows Text File Parsing Problem

When uploading text files created on Windows (CR+LF line endings), a CR+LF at the end of the file interacts badly with BoundaryReader::read_to_boundary, where it decides whether to back up two bytes:

if self.boundary_read && self.search_idx >= 2 {
    let two_bytes_before = &buf[self.search_idx - 2 .. self.search_idx];
    trace!("Two bytes before: {:?} ({:?}) (\"\\r\\n\": {:?})",
           String::from_utf8_lossy(two_bytes_before), two_bytes_before, b"\r\n");
    if two_bytes_before == &*b"\r\n" {
        debug!("Subtract two!");
        self.search_idx -= 2;
    }
}

When this happens, the header parsing fails with a confusing "invalid header name" error:

2017-10-17T14:47:47.294530130-07:00 DEBUG multipart::server::field - string buf: "77\r\nContent-Disposition: form-data; name="comment"\r\n\r\ndd"
2017-10-17T14:47:47.294537891-07:00 ERROR virtua_rasa::middleware::wiki - multi threw error Error { repr: Custom(Custom { kind: InvalidData, error: Invalid("invalid header name") }) } for page `simple.svg'
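The intended behavior can be illustrated with a std-only sketch (a hypothetical helper, not the crate's code): only the single CRLF that frames the boundary should be trimmed, so a Windows file whose content ends in its own CR+LF keeps it:

```rust
// Given the index where the boundary starts, return the field data,
// trimming exactly one framing CRLF if present.
fn field_data(buf: &[u8], boundary_idx: usize) -> &[u8] {
    let mut end = boundary_idx;
    if end >= 2 && &buf[end - 2..end] == b"\r\n" {
        end -= 2; // drop the CRLF that frames the boundary, nothing more
    }
    &buf[..end]
}

fn main() {
    // The file's content is "line1\r\n" (it ends in its own CRLF),
    // followed by the framing CRLF and the boundary "--bound".
    let buf = b"line1\r\n\r\n--bound";
    let idx = 9; // where "--bound" starts
    // Only the framing CRLF is removed; the file's trailing CRLF survives.
    assert_eq!(field_data(buf, idx), b"line1\r\n");
}
```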

Trait HttpRequest not implemented for iron::Request

I have these dependencies.

[dependencies]
chrono = "^0.3"
cookie = { version = "^0.6", features = [ "percent-encode" ] }
hyper = "^0.10"
iron = "^0.5"
multipart = { version = "^0.9", features = [ "iron", "server" ] }
protobuf = "^1.2"
ring = "^0.7"
rustc-serialize = "^0.3"
time = "^0.1"
untrusted = "^0.3"
urlencoded = "^0.5"

But I am getting this error.

error[E0277]: the trait bound `iron::Request<'_, '_>: multipart::server::HttpRequest` is not satisfied
   --> src/csrf.rs:386:9
    |
386 |         Multipart::from_request(*request);
    |         ^^^^^^^^^^^^^^^^^^^^^^^ the trait `multipart::server::HttpRequest` is not implemented for `iron::Request<'_, '_>`
    |
    = note: required by `<multipart::server::Multipart<()>>::from_request`

I know this is related to issue #47, but messing with the dependencies and their versions still won't compile. I feel like this is non-intuitive. What does my Cargo.toml need to look like to make the compiler happy?

Add some sort of benchmarks

  • Benchmark boundary reading using types from the mock module.
  • Show comparison with sse feature to see if it provides a speedup.

Usage with hyper's RequestBuilder

I'm finding it quite challenging to retrofit multipart into a client library that was built using hyper's RequestBuilder. I've been trying to implement HttpRequest and HttpStream for a wrapper around hyper's RequestBuilder (I can't see an obvious way to implement it directly against RequestBuilder), and perhaps my Rust is still just inadequate, but I keep hitting walls. I've worked around a few limitations of RequestBuilder by handling it very much like SizedRequest in this library (buffer input to a Vec and defer setting headers until finish is called since RequestBuilder consumes self to set headers).

However, I can't seem to call RequestBuilder's body method inside finish given the trait's lifetimes: RequestBuilder only borrows the buffer, and (if I understand correctly), I can't ensure that self (which owns the buffer) outlives the response (I'm not sure I grok why it even needs to outlive the response, but the borrow checker has defeated me for today.)

I thought I'd reach out and see if there was an obvious answer that I've been overlooking all along. Are there any examples of multipart used in conjunction with hyper's RequestBuilder? Is this even in-scope for this lib?

// Imports assumed for hyper 0.9-era APIs; adjust to your versions.
use hyper::client::{RequestBuilder, Response};
use hyper::header::{ContentLength, ContentType};
use hyper::mime::{Attr, Mime, SubLevel, TopLevel, Value};
use hyper::{Error as HyperError, Url};
use multipart::client::{HttpRequest, HttpStream};

pub struct MultipartBuilder<'a> {
    inner: RequestBuilder<'a, Url>,
    boundary: String,
    buffer: Vec<u8>,
}

impl<'a> HttpRequest for MultipartBuilder<'a> {
    type Stream = Self;
    type Error = HyperError;

    fn apply_headers(&mut self, boundary: &str, _content_len: Option<u64>) -> bool {
        // Can't actually set headers, because RequestBuilder's header method consumes self
        self.boundary = boundary.into();
        true
    }
    fn open_stream(mut self) -> Result<Self::Stream, Self::Error> {
        // Can't actually get RequestBuilder's stream, so we'll buffer it and handle it in `finish`
        self.buffer.clear();
        Ok(self)
    }
}

impl<'a> HttpStream for MultipartBuilder<'a> {
    type Request = Self;
    type Response = Response;
    type Error = HyperError;

    fn finish(mut self) -> Result<Self::Response, Self::Error> {
        self.inner
            .header(ContentType(Mime(
                TopLevel::Multipart, SubLevel::Ext("form-data".into()),
                vec![(Attr::Ext("boundary".into()), Value::Ext(self.boundary))]
            )))
            .header(ContentLength(self.buffer.len() as u64))
            .body(&*self.buffer)  // BAD: &*self.buffer doesn't live long enough
            .send()
    }
}

Iron dependency is outdated

https://crates.io/crates/iron is on 0.3, while multipart depends on 0.2. They are most likely not compatible.

However, when I tried Multipart::from_request on an Iron Request, rustc was terribly angered, complaining about traits not being implemented. The issue went away when I specified iron 0.2 as a dependency of my sample.

unable to compile with iron 0.6

Unable to compile with iron 0.6.0

error[E0277]: the trait bound `&mut iron::Request<'_, '_>: multipart::server::HttpRequest` is not satisfied
   --> src/bin/store-backend\server\routes\mod.rs:485:31
    |
485 |     let mut form_data = match Multipart::from_request(req) {
    |                               ^^^^^^^^^^^^^^^^^^^^^^^ the trait `multipart::server::HttpRequest` is not implemented for `&mut iron::Request<'_, '_>`
    |
    = note: required by `<multipart::server::Multipart<()>>::from_request`

hyper `write_stream` with `ContentLength`

The use case I am trying to accomplish is creating a client which pushes a file and some meta parameters using multipart.
The remote requires the ContentLength header to be set.

How do I achieve this? How do I calculate the total length from the two streams, given that both have finite, known lengths? I need to use add_stream to set the mime and name manually.

let request = Request::with_connector(Method::Post, url, &connector);
let mut multipart = Multipart::from_request(request)?;

multipart.write_stream::<_, _>("metadata", &mut meta_json.as_bytes(), None, Some(mime_json))?;
multipart.write_stream::<_, _>("srpm", &mut reader, name, mime)?;

`Entries` discards values for repeated fields

Because Entries stores its fields (non-file form data) as HashMap<String, String>, all but the last value of a repeated field (e.g., subjects[]) are discarded, which is really bad.

I think the best solution would be to store these as Vec<(String, String)>, which could easily be collected into a HashMap<String, String> or HashMap<String, Vec<String>> as necessary. However, this would have to be a breaking change, since fields is a public field of Entries.

Support Hyper 0.9

Tried using a version range to keep the Nickel feature functional, but it doesn't work.

Blocked on Iron and Nickel upgrading to hyper 0.9.

nickel is missing HttpRequest trait?

I've tried to integrate this module into a fresh API written in Rust, but I can't get it to compile successfully. If I could get it to run, I would write an example nickel integration for this project.

Maybe there's a typo somewhere?

Cargo.toml

[package]
authors = ["Sebastian Blei <...>"]
name = "try_multipart"
version = "0.1.0"

[[bin]]
doc = false
name = "multipart"

[dependencies.multipart]
version = "0.7"
# Maybe a typo, because in the source code is a `nickel` cfg feature flag, NOT A `nickel_` (with underscore)
features = ["nickel", "nickel_"]

multipart.rs

#[macro_use]
extern crate nickel;
extern crate hyper;
extern crate multipart;

use multipart::server::{Entries, Multipart, SaveResult};
use multipart::server::nickel as nickel_;
use nickel::{Nickel, HttpRouter, MiddlewareResult, Request, Response};

// Try integration as full fn.
fn handle_post<'mw>(req: &mut Request, mut res: Response<'mw>) -> MiddlewareResult<'mw> {
    match Multipart::from_request(req) {
        Ok(mut multipart) => res.send("Ok"),
        Err(_) => res.send("error"),
    }
}

fn main() {
    let mut srv = Nickel::new();

    srv.post("/multipart/", middleware! { |req, res|
        match Multipart::from_request(req) {
            Ok(mut multipart) => "upload is multipart",
            Err(_) => "error",
        }
    });

    // Try integration via macro.
    // The macro and the declared fn (handle_post) actually result in a similar fn.
    srv.post("/alternative_multipart/", handle_post);

    srv.listen("127.0.0.1:6868");
}

cargo run --bin multipart will result in:

❯ cargo run --bin multipart
   Compiling try_multipart v0.1.0 (file:///home/sblei/Projects/macsi-rust-api)
src/bin/main.rs:62:11: 62:34 error: the trait bound `&mut nickel::Request<'_, '_>: multipart::server::HttpRequest` is not satisfied [E0277]
src/bin/main.rs:62     match Multipart::from_request(req) {
                             ^~~~~~~~~~~~~~~~~~~~~~~
src/bin/main.rs:62:11: 62:34 help: run `rustc --explain E0277` to see a detailed explanation
src/bin/main.rs:62:11: 62:34 note: required by `multipart::server::Multipart::from_request`
error: aborting due to previous error
error: Could not compile `diesel_pg_test`.

To learn more, run the command again with --verbose.

Thanks in advance for your help.
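Per the README, the crate's feature flag is `nickel` with no underscore; `nickel_` in the example code above is only the user's local module alias (`use multipart::server::nickel as nickel_;`), not a crate feature. The dependency section should therefore look like this (same version as above, feature list corrected):

```toml
[dependencies.multipart]
version = "0.7"
features = ["nickel"]
```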

Feature flags for conditional compilation

  • RFC: make hyper a default feature
  • Make client module compile conditionally with default client feature
  • Make server module compile conditionally with default server feature
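A sketch of how such conditional compilation could look in Cargo.toml; names other than `hyper` are assumptions:

```toml
[dependencies]
# Optional dependencies double as implicit feature flags.
hyper = { version = "0.10", optional = true }

[features]
default = ["client", "server", "hyper"]
client = []
server = []
```

The modules would then be gated in lib.rs with `#[cfg(feature = "client")] pub mod client;` and `#[cfg(feature = "server")] pub mod server;`.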

Problems with limited inner buffers for BoundaryReader.

I'm not 100% sure what's happening here, since I can't reproduce the bug reliably. It happens while handling requests on a hyper server: sometimes read_to_boundary() fails to read the boundary properly.

For the cases I managed to inspect, the boundary was not entirely in the buffer and got truncated, with the end of the boundary visible on the next fill_buf(). From what I see, the code cannot handle this case, but I'm not sure whether the bug is elsewhere and the buffer is actually supposed to be large enough to fit the boundary every time. I tried forcing the buffer capacity to the request's given content length, but that didn't fix it; it's pretty confusing.

"Could not read field headers" with specific body

I have an application built with Rocket which lets users upload files via a form using the multipart format.
It has more than 100 successful uploads, but there is a single file which triggers the "Could not read field headers" error. I've uploaded this file with a simple use of multipart (it is entirely possible that I'm using multipart the wrong way); see test_error.zip. Running cargo run results in

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { repr: Custom(Custom { kind: InvalidData, error: Other("Could not read field headers") }) }', /checkout/src/libcore/result.rs:860:4


Doesn't work with recent hyper

I've tried to copy the hyper_reqbuilder.rs into my project, but this seems to fail due to:

error[E0308]: mismatched types
  --> src/main.rs:53:25
   |
53 |         .client_request(&Client::new(), "http://go-ipfs:5001/api/v0/add")
   |                         ^^^^^^^^^^^^^^ expected struct `hyper::client::Client`, found struct `hyper::Client`
   |
   = note: expected type `&hyper::client::Client`
              found type `&hyper::Client`

error: aborting due to previous error

I've tried to downgrade to hyper 0.9, but this fails as well due to openssl 0.7.14 being unable to compile.
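The "expected struct `hyper::client::Client`, found struct `hyper::Client`" mismatch typically means two copies of hyper ended up in the build: the one multipart was compiled against and the one the project depends on directly, and types from different versions of a crate are never interchangeable. A hedged Cargo.toml sketch; the version numbers are assumptions, so check Cargo.lock for the hyper version multipart actually pulls in and pin your own dependency to match it:

```toml
[dependencies]
hyper = "0.10"
multipart = { version = "0.13", features = ["hyper"] }
```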
