durch / rust-s3
Rust library for interfacing with S3 API compatible services
License: MIT License
The new Stockholm region is missing from Region enum (it's eu-north-1). Thankfully we have the custom variant.
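As a stopgap, the Custom variant covers it; a sketch using a stand-in Region enum (the real one lives in s3::region), assuming the standard s3.&lt;region&gt;.amazonaws.com endpoint pattern:

```rust
// Stand-in for s3::region::Region, to show the Custom-variant stopgap.
// The endpoint URL is an assumption based on the usual
// s3.<region>.amazonaws.com naming pattern.
#[derive(Debug)]
enum Region {
    Custom { region: String, endpoint: String },
}

fn stockholm() -> Region {
    Region::Custom {
        region: "eu-north-1".to_string(),
        endpoint: "https://s3.eu-north-1.amazonaws.com".to_string(),
    }
}

fn main() {
    println!("{:?}", stockholm());
}
```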
I'm trying to use rust-s3 in a method returning a Future to asynchronously get data from an S3 bucket, and am currently using version 0.16.0-alpha2. Using get_object_async is already working fine for me; however, I would now like to list the contents of the bucket to download further objects in the same async function. If I call mybucket.list, it seems that the list method is not currently compatible with asynchronous operation:
executor failed to spawn task: tokio::spawn failed (is a tokio runtime running this future?)
Would it be possible to add a list_async function similar to the get_object_async method?
Many thanks for providing this very useful library!
use s3::bucket::Bucket;
use s3::creds::Credentials;
use s3::region::Region;
use s3::S3Error;
use std::fs::File;

struct Storage {
    name: String,
    region: Region,
    credentials: Credentials,
    bucket: String,
    location_supported: bool,
}

fn main() -> Result<(), S3Error> {
    // region_name = "nl-ams".to_string();
    // endpoint = "https://s3.nl-ams.scw.cloud".to_string();
    // region = Region::Custom { region: region_name, endpoint };
    let aws = Storage {
        name: "aws".into(),
        region: Region::Custom {
            region: "us-east-1".to_string(),
            endpoint: "http://114.55.145.17:5432".to_string(),
        },
        credentials: Credentials::new(Some("admin"), Some("admin123"), None, None, None).unwrap(),
        // credentials: Credentials::from_env_specific(Some("admin"), Some("admin123"), None, None)?,
        bucket: "test".to_string(),
        location_supported: true,
    };

    let bucket = Bucket::new(&aws.bucket, aws.region, aws.credentials)?;
    let (_, code) =
        bucket.put_object_blocking("test_file", "I want to go to S3".as_bytes(), "text/plain")?;
    // println!("{}", bucket.presign_get("test_file", 604801)?);
    assert_eq!(200, code);
    Ok(())
}
I'm attempting to list a bucket's contents; however, Bucket::list doesn't handle AWS's pagination, so the results are truncated. This seems to be intentional, though as a library consumer I would really love not to have to implement pagination myself.
Moreover, even aware of this, Bucket::list doesn't take a marker as an argument, so there's no good way to get later pages of data. We can use add_query, but this again feels like something the library should handle more straightforwardly for us. Lastly, even if I use that to provide a marker, like so:
let mut bucket = Bucket::new(bucket, region.clone(), credentials.clone());
if let Some(mark) = marker {
    bucket.add_query("marker", &mark);
}
let (list_output, _) = bucket.list("", None)?;
I get nonsensical results: is_truncated is true, but next_marker is None:
if !list_output.is_truncated {
    break;
} else {
    println!("is_truc = {:?}, marker = {:?}", list_output.is_truncated, list_output.next_marker);
    marker = list_output.next_marker;
}
is_truc = true, marker = None
After some more digging: rust-s3 is using list-type=2, i.e. v2 of the list API; that version doesn't return a marker; rather, it has been renamed ContinuationToken. (I think they did this because there are cases in the v1 API where the marker was None but the response was truncated; you figured out the next marker yourself from the last listed key. It sounds from the docs like they did away with all that: if your response is truncated, you get a continuation token, end of story, which is simpler.)
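For reference, a truncated v2 response surfaces the token as NextContinuationToken (element names from the AWS ListObjectsV2 response; the token value here is a placeholder):

```xml
<ListBucketResult>
  <IsTruncated>true</IsTruncated>
  <!-- v2 equivalent of v1's NextMarker; pass it back as the
       continuation-token query parameter on the next request -->
  <NextContinuationToken>placeholder-token</NextContinuationToken>
</ListBucketResult>
```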
When relying on loading a Credentials object from a specific profile, you should include the token as well if it is present within the credentials file. For example, this is the format of my credentials file; aws_session_token and aws_security_token are both the same value. This format works with the current AWS S3 CLI, but I get an access-denied error when it's processed by this library.
[teamname/account_name]
aws_access_key_id=<redacted>
aws_secret_access_key=<redacted>
aws_session_token=<redacted>
aws_security_token=<redacted>
I'm trying to use rust-s3 to talk to the wasabi object storage, unfortunately I'm getting a http 400 reply with an empty response body. Also rust-s3 is having issues with the empty response and reacts like this:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: S3Error(SerdeXML(Error { pos: 1:1, kind: Syntax("Unexpected end of stream: no root element found") }), State { next_error: None, backtrace: None })', src/libcore/result.rs:999:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
This is with 0.15.0, using this config (edited into src/bin/simple_crud.rs):
const BUCKET: &str = "REDACTED";
let region_name = "eu-central-1".to_string();
let endpoint = "https://s3.wasabisys.com".to_string();
let region = Region::Custom { region: region_name, endpoint };
let credentials = Credentials::new(Some("REDACTED".into()), Some("REDACTED".into()), None, None);
let bucket = Bucket::new(BUCKET, region, credentials)?;
I've tested both list and put; both of them fail in the same way.
Line 36 in e5aef2d
Error: BucketPut { source: RequestError { source: ReqwestFuture } }
I'm newer to Rust, and wanted to confirm whether this misspelling was intentional or possibly related to the issue above.
I just tried upgrading from 0.20.1 to the latest release, only to realize that path-style bucket access is no longer supported.
This is bad for Minio compatibility. Minio does accept DNS-based bucket access, but in a Kubernetes context it's simply never used, as there's no easy way to properly generate those DNS records there.
It'd be great if there was a way to configure the behavior here.
Can this be considered?
At the moment rust-s3's backend uses reqwest and its native-tls TLS/SSL feature. native-tls isn't as portable as rustls; it would be a nice enhancement to support compiling reqwest with different TLS backends via feature flags.
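A sketch of what such a feature layout could look like in rust-s3's Cargo.toml (the feature names here are illustrative, not the crate's; the reqwest features native-tls and rustls-tls are real):

```toml
# Illustrative feature layout, not the crate's actual manifest
[features]
default = ["native-tls"]
native-tls = ["reqwest/native-tls"]
rustls = ["reqwest/rustls-tls"]
```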
Using the rust-s3 crate (0.18.0), Bucket::list_all hangs indefinitely when using a Wasabi (or any) custom endpoint.
let region = Region::Custom {
    region: config.s3_region.unwrap(),
    endpoint: config.s3_endpoint.unwrap(),
};
let credentials = Credentials::new(config.s3_key, config.s3_secret, None, None);
let bucket = Bucket::new(&config.s3_bucket.unwrap(), region, credentials)?;
let results = bucket.list_all(config.s3_prefix.unwrap(), Some("/".to_string()))?;
Hi there!
Thanks for your work on this crate :) I really like its simplicity compared to other existing solutions in Rust!
I actually enjoyed integrating S3 into an app, which was definitely not the case using rusoto...
Anyway, I would like to suggest making the path-style cargo feature a configuration flag. The problem with using it as a cargo feature flag is that one cannot use multiple backends in the same app, which is my case at the moment. Perhaps it would be possible to add an optional boolean value to Region to specify whether it is path-based or subdomain-based.
I will try to make something work locally and try to send a PR in!
First, thanks for writing this crate! I tried it after being disappointed with the get object performance in rusoto
, and your implementation is significantly faster*.
Is your feature request related to a problem? Please describe.
Listing objects does not work against at least one non-AWS S3 implementation that I tested. rust-s3 fails to deserialize the results because the response XML contains Contents elements interleaved with CommonPrefixes elements. This is valid XML; however, it is not supported by serde.
rusoto supports this format through its own custom deserializer, which you can see in this file; search for "impl ListObjectsV2OutputDeserializer".
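For illustration, a response of the shape described (keys made up), which a derive-based serde deserializer cannot map into two separate Vec fields because the repeated elements alternate:

```xml
<ListBucketResult>
  <Contents><Key>photos/a.jpg</Key></Contents>
  <CommonPrefixes><Prefix>photos/2019/</Prefix></CommonPrefixes>
  <Contents><Key>photos/b.jpg</Key></Contents>
  <CommonPrefixes><Prefix>photos/2020/</Prefix></CommonPrefixes>
</ListBucketResult>
```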
Describe the solution you'd like
I fiddled around with it for a while, and was eventually able to get it to work by changing the serialization library to yaserde, a serde fork that specializes in XML.
I am happy to submit a PR using yaserde, but I'm not going to waste my time if there is no interest in changing crates. I understand this is likely not an issue when interacting with AWS S3, and is therefore unlikely to be very high priority.
Describe alternatives you've considered
I also tried using quick-xml integrated with serde, but it doesn't work either because the limitation is in the core serde crate.
[Edit] *Turns out the performance issues I noticed with rusoto are only present in debug builds. rusoto and rust-s3 perform similarly when rusoto is compiled as release.
The code of this crate is nicely split up into modules, but with each module implementing just one thing, the docs and paths, that is, the outward-facing API, are a bit awkward. There aren't a lot of types, so it would make sense to regroup them into larger units and possibly expose them directly at the root.
The facade pattern would help with this: the modules would be private to the crate, and the root module would expose the APIs. This would make the important types visible at a glance in the documentation, plus help with importing things; at the moment everything has its own path, so it's a bit painful.
Of course, this is quite a big breaking change.
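A minimal sketch of the facade idea, with stand-in types rather than the crate's real ones: modules stay private, and the crate root re-exports the names consumers need.

```rust
// Facade sketch with stand-in types: modules are private,
// the crate root re-exports the important names.
mod bucket {
    pub struct Bucket {
        pub name: String,
    }
}
mod region {
    pub struct Region(pub String);
}

pub use bucket::Bucket;
pub use region::Region;

fn main() {
    // Consumers would write `use s3::{Bucket, Region};` instead of
    // spelling out each module path.
    let b = Bucket { name: "test".to_string() };
    let r = Region("eu-west-1".to_string());
    println!("{} in {}", b.name, r.0);
}
```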
It would be nice to be able to feed it a custom URL for the S3 endpoint. That way it could be pointed at something like Ceph or Cleversafe.
The way it is now, I need to edit region.rs and add custom endpoints there that get referenced via the region.
As of today, requests that were working before (such as the simple_crud example) are now failing with 403 errors. The following is an example of the output I'm seeing:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>NOTMYREALKEY</AWSAccessKeyId><StringToSign>AWS4-HMAC-SHA256
20170103T174025Z
20170103/us-east-1/s3/aws4_request
b77e1d8c5570a3e788088cabbd6ac1d71b87158b41d313518e3f41df3a980a9e</StringToSign><SignatureProvided>96d4a42b2e14b8ca31514fa6d64dd300e0c8242ab0d7d5b7ffdc892131e3b348</SignatureProvided><StringToSignBytes>41 57 53 34 2d 48 4d 41 43 2d 53 48 41 32 35 36 0a 32 30 31 37 30 31 30 33 54 31 37 34 30 32 35 5a 0a 32 30 31 37 30 31 30 33 2f 75 73 2d 65 61 73 74 2d 31 2f 73 33 2f 61 77 73 34 5f 72 65 71 75 65 73 74 0a 62 37 37 65 31 64 38 63 35 35 37 30 61 33 65 37 38 38 30 38 38 63 61 62 62 64 36 61 63 31 64 37 31 62 38 37 31 35 38 62 34 31 64 33 31 33 35 31 38 65 33 66 34 31 64 66 33 61 39 38 30 61 39 65</StringToSignBytes><CanonicalRequest>DELETE
/bitcurry-s3-test/test_file
content-length:0
content-type:text/plain
date:Tue, 3 Jan 2017 17:40:25 +0000
host:s3.amazonaws.com
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20170103T174025Z
content-length;content-type;date;host;x-amz-content-sha256;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855</CanonicalRequest><CanonicalRequestBytes>44 45 4c 45 54 45 0a 2f 62 69 74 63 75 72 72 79 2d 73 33 2d 74 65 73 74 2f 74 65 73 74 5f 66 69 6c 65 0a 0a 63 6f 6e 74 65 6e 74 2d 6c 65 6e 67 74 68 3a 30 0a 63 6f 6e 74 65 6e 74 2d 74 79 70 65 3a 74 65 78 74 2f 70 6c 61 69 6e 0a 64 61 74 65 3a 54 75 65 2c 20 33 20 4a 61 6e 20 32 30 31 37 20 31 37 3a 34 30 3a 32 35 20 2b 30 30 30 30 0a 68 6f 73 74 3a 73 33 2e 61 6d 61 7a 6f 6e 61 77 73 2e 63 6f 6d 0a 78 2d 61 6d 7a 2d 63 6f 6e 74 65 6e 74 2d 73 68 61 32 35 36 3a 65 33 62 30 63 34 34 32 39 38 66 63 31 63 31 34 39 61 66 62 66 34 63 38 39 39 36 66 62 39 32 34 32 37 61 65 34 31 65 34 36 34 39 62 39 33 34 63 61 34 39 35 39 39 31 62 37 38 35 32 62 38 35 35 0a 78 2d 61 6d 7a 2d 64 61 74 65 3a 32 30 31 37 30 31 30 33 54 31 37 34 30 32 35 5a 0a 0a 63 6f 6e 74 65 6e 74 2d 6c 65 6e 67 74 68 3b 63 6f 6e 74 65 6e 74 2d 74 79 70 65 3b 64 61 74 65 3b 68 6f 73 74 3b 78 2d 61 6d 7a 2d 63 6f 6e 74 65 6e 74 2d 73 68 61 32 35 36 3b 78 2d 61 6d 7a 2d 64 61 74 65 0a 65 33 62 30 63 34 34 32 39 38 66 63 31 63 31 34 39 61 66 62 66 34 63 38 39 39 36 66 62 39 32 34 32 37 61 65 34 31 65 34 36 34 39 62 39 33 34 63 61 34 39 35 39 39 31 62 37 38 35 32 62 38 35 35</CanonicalRequestBytes><RequestId>EA64B25CAC07F68C</RequestId><HostId>YBqFkt8SBfs8Q9Jd5Z+kheykAjmxbh/ZhcOvrmjgp069MFENdFpBI7MoUQ7wTWOeWjFVy2pzKtc=</HostId></Error>
thread 'main' panicked at 'assertion failed: `(left == right)` (left: `204`, right: `403`)', examples/simple_crud.rs:37
It's possible, since the issue seems to be somewhat related to the turn of the year.
Hi, by following the examples I can get objects:
let (data, code) = bucket.get_object("/test.file").unwrap();
println!("Code: {}\nData: {:?}", code, data);
But can't list the contents of the bucket, I get this error:
no method named `list` found for type `std::result::Result<s3::bucket::Bucket, s3::error::S3Error>` in the current scope
I am testing with rust-s3 = "0.15.0".
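The error indicates that the value returned by Bucket::new is still wrapped in a Result, so `list` is being looked up on std::result::Result rather than on Bucket. The same shape reproduced with plain std types (Bucket here is a stand-in, not the crate's type):

```rust
// Stand-in types reproducing the error: `list` exists on Bucket,
// not on the Result that wraps it.
struct Bucket;

impl Bucket {
    fn list(&self) -> Vec<String> {
        vec!["test.file".to_string()]
    }
}

fn new_bucket() -> Result<Bucket, String> {
    Ok(Bucket)
}

fn main() {
    let wrapped = new_bucket();
    // wrapped.list() would fail to compile: no method `list` on Result.
    // Unwrap first (or use `?` in a function returning a Result):
    let bucket = wrapped.unwrap();
    assert_eq!(bucket.list().len(), 1);
    println!("listed {} key(s)", bucket.list().len());
}
```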
Is there a way to get all the metadata from a file?
Thanks in advance
Lines 11 to 13 in 2f684e2
Describe the bug
I tried to upload a piece of video to my S3 bucket using this method:
bucket.tokio_put_object_stream(&mut mp4_file, &s3_video_path).await?
When I checked the uploaded object, I found that it had zero length. I dug into the implementation and found the following code:
pub async fn tokio_put_object_stream<R: TokioAsyncRead + Unpin, S: AsRef<str>>(
    &self,
    reader: &mut R,
    s3_path: S,
) -> Result<u16> {
    let mut bytes = Vec::new();
    reader.read(&mut bytes).await?;
    let command = Command::PutObject {
        content: &bytes[..],
        content_type: "application/octet-stream",
    };
    let request = Request::new(self, s3_path.as_ref(), command);
    Ok(request.response_data_future(false).await?.1)
}
However, according to the documentation of tokio::io::AsyncRead::read:
Pulls some bytes from this source into the specified buffer, returning how many bytes were read.
If the return value of this method is Ok(n), then it must be guaranteed that 0 <= n <= buf.len(). A nonzero n value indicates that the buffer buf has been filled in with n bytes of data from this source.
This function tries to fill the given slice. Since an empty vector is used, the length of the slice is always 0, and thus nothing is sent. Logs from hyper also show this problem:
[2020-09-17T07:54:43Z DEBUG reqwest::connect] starting new connection: https://******
[2020-09-17T07:54:43Z DEBUG hyper::client::connect::dns] resolving host="******"
[2020-09-17T07:54:43Z DEBUG hyper::client::connect::http] connecting to 54.222.**.**:443
[2020-09-17T07:54:43Z DEBUG hyper::client::connect::http] connected to 54.222.**.**:443
[2020-09-17T07:54:43Z DEBUG hyper::proto::h1::io] flushed 636 bytes
[2020-09-17T07:54:43Z DEBUG hyper::proto::h1::io] read 261 bytes
[2020-09-17T07:54:43Z DEBUG hyper::proto::h1::io] parsed 6 headers
[2020-09-17T07:54:43Z DEBUG hyper::proto::h1::conn] incoming body is empty
[2020-09-17T07:54:43Z DEBUG hyper::client::pool] pooling idle connection for ("https", ******)
[2020-09-17T07:54:43Z DEBUG reqwest::async_impl::client] response '200 OK' for https://******
To Reproduce
See above.
Expected behavior
This function should correctly upload data.
Environment
1.47 nightly
0.23.0
Additional context
put_object_stream of version 0.22 seems to have the same problem.
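The zero-length read can be reproduced with the blocking std::io API; a minimal sketch showing why a single `read` into an empty Vec transfers nothing, while `read_to_end` drains the source:

```rust
use std::io::{Cursor, Read};

fn main() {
    let mut src = Cursor::new(vec![1u8, 2, 3]);

    // read() fills the provided slice; an empty Vec is a zero-length
    // slice, so zero bytes are read -- the same pitfall as passing an
    // empty buffer to AsyncRead::read in the upload code above.
    let mut empty = Vec::new();
    let n = src.read(&mut empty).unwrap();
    assert_eq!(n, 0);

    // read_to_end() grows the buffer until EOF, which is what the
    // body needs to be before it is handed to PutObject.
    let mut all = Vec::new();
    src.read_to_end(&mut all).unwrap();
    assert_eq!(all, vec![1, 2, 3]);
    println!("read {} bytes", all.len());
}
```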
Hi, thanks for this crate. I have a small feature request: currently, Request::execute can either write the response body to a writer or return it as a blob. It would be useful if, as a third option, it could return the response as-is. This would allow "pull" streaming of the response, i.e. reading from it.
Hi, I was using rust-s3 in the Elastic Container Service, but since #64 it is available only in EC2. Shall we allow it to be used in other services as well?
Currently the library only supports downloading into a vector. I have some large files that I want to write directly to the file system at the least, or better, pipe into an XzDecoder. I do not see a good way to make this work at the moment.
Ideally there would be a fn get_stream which returns an iterator or a file you can read from.
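The requested pull-based interface amounts to exposing the body as something implementing std::io::Read; a sketch with a stand-in body showing how that composes with io::copy (and equally with an XzDecoder wrapping the reader):

```rust
use std::io::{self, Cursor, Read};

// Stand-in for a streamed S3 response body: anything that implements
// Read can be piped onward without buffering the whole object in memory.
fn save_stream<R: Read, W: io::Write>(mut body: R, out: &mut W) -> io::Result<u64> {
    io::copy(&mut body, out)
}

fn main() -> io::Result<()> {
    let body = Cursor::new(b"object bytes".to_vec()); // pretend S3 body
    let mut sink = Vec::new(); // stand-in for a File or XzDecoder
    let n = save_stream(body, &mut sink)?;
    assert_eq!(n, 12);
    println!("copied {} bytes", n);
    Ok(())
}
```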
Describe the bug
When adding rust-s3 = "0.22.12" to my dependencies in a rust-wasm project, the build fails. Mainly it says: unresolved import 'sys'.
To Reproduce
Create a rust-wasm project and add rust-s3; you can follow this. Then:
1 - cargo generate --git https://github.com/rustwasm/wasm-pack-template
2 - Enter any name
3 - Add rust-s3 to Cargo.toml
4 - wasm-pack build
Expected behavior
For it to build.
Environment
1.44
Are there any current plans to move to std::future or futures 0.3? async/await has recently landed on stable, and both tokio and reqwest have alpha builds porting code to the new futures model.
Describe the bug
Call to put_object (with await) in an async function consistently never finishes. Upon inspection via lsof -i, I see that my process has indeed opened a connection to the blobstore (in this case the S3-compatible endpoint for a GCS bucket):
my_binary 87130 me 23u IPv4 0xc9ce8b2d93753fb7 0t0 TCP X.X.X.X:62088->172.217.164.112:https (ESTABLISHED)
But it then consistently remains stuck, having not actually closed the underlying client's end of the TCP socket for the HTTP connection, as I see it stuck in the CLOSE_WAIT state (at least for a few minutes, after which point I kill my binary's process):
my_binary 87130 me 23u IPv4 0xc9ce8b2d93753fb7 0t0 TCP X.X.X.X:62088->172.217.164.112:https (CLOSE_WAIT)
To Reproduce
Snippet of code:
let app_bits_bucket = s3::bucket::Bucket::new(
    "my-bucket",
    s3::region::Region::Custom {
        region: "us-west2".to_owned(),
        endpoint: "https://storage.googleapis.com".to_owned(),
    },
    s3::creds::Credentials::new(
        Some("access-key"),
        Some("secret-key"),
        None,
        None,
        None,
    )
    .await
    .unwrap(),
)
.unwrap();

debug!("uploading to blobstore...");
app_bits_bucket
    .put_object(
        format!("some-object/{}", guid),
        &raw_data[..],
        "application/octet-stream",
    )
    .await
    .unwrap();
debug!("finished uploading to blobstore!");
debug!("finished uploading to blobstore!");
Expected behavior
The TCP connection to close, subsequently causing the HTTP connection to close, and then to at least receive some sort of response from the call to put_object, be it erroneous or not.
Environment
Additional context
Would love any advice on how to debug this further. Perhaps inspecting the response that has been streamed back via tcpdump or wireshark? Admittedly my knowledge of both is very limited, especially when it comes to intercepting SSL/TLS-encrypted requests.
Describe the bug
When running cargo test
for the s3 crate in an environment without the EU_AWS_ACCESS_KEY_ID & EU_AWS_SECRET_ACCESS_KEY environment variables, the tests fail.
To Reproduce
unset EU_AWS_ACCESS_KEY_ID
unset EU_AWS_SECRET_ACCESS_KEY
cd s3
cargo test
Expected behavior
Tests should succeed.
Environment
1.46
I've tried to download a binary file from S3. Here's my code:
let obj_req = self.s3handle.get_object(format!("artifacts/{}/{}.bin", self._name, n).as_str());
let obj = obj_req.unwrap();
println!("got {}", obj.0.len());
but this code has the same error:
let obj_req = self.s3handle.get_object_stream(format!("artifacts/{}/{}.bin", self._name, n).as_str(), &mut fh);
This file is exactly 3072 bytes, generated using this command:
$ dd if=/dev/urandom of=7.bin bs=1024 count=3
3+0 records in
3+0 records out
3072 bytes (3.1 kB, 3.0 KiB) copied, 0.000438891 s, 7.0 MB/s
$ ls -l 7.bin
-rw-r--r-- 1 volfco users 3072 Dec 25 18:00 7.bin
When I run my program, I get 5535 bytes back (the output from the first code snippet: got 5535). I'm expecting exactly 3072 bytes. I'm not sure where the extra cruft is coming from, but this is causing my code to break.
For public buckets/objects, how can I use anonymous credentials? Every method I try seems to throw an error.
Hi!
I've tried to update rust-s3 from 0.19.3 to 0.19.4, and the new version changed the return type of Credentials::new from Credentials to a future (which is a breaking change). As far as I can see, it was introduced here: 9c4ecf1.
So, would you be happy to revert it for 0.19.x, or to add another non-async initializer?
I was trying to see if there is support for EC2 instance roles via the instance metadata service?
Hello,
The crate does not seem to be aware of the eu-west-3 region, which is what the Paris one is called.
Here is sample code which uploads a file to Yandex Object Storage, then downloads that file and compares the content (rust-s3 version 0.19.4):
use s3::bucket::Bucket;
use s3::credentials::Credentials;
use s3::error::S3Error;

const MESSAGE: &str = "Example message";
const REGION: &str = "storage.yandexcloud.net";
const BUCKET: &str = "<MY_BUCKET>";
const ACCESS_KEY: &str = "<MY_ACCESS_KEY>";
const SECRET_KEY: &str = "<MY_SECRET_KEY>";

fn main() -> Result<(), S3Error> {
    let region = REGION.parse()?;
    let credentials = Credentials::new_blocking(Some(ACCESS_KEY.into()), Some(SECRET_KEY.into()), None, None)?;
    let bucket = Bucket::new(BUCKET, region, credentials)?;

    let (_, code) = bucket.put_object_blocking("test_file", MESSAGE.as_bytes(), "text/plain")?;
    assert_eq!(200, code); // ok

    // Get the "test_file" contents and make sure that the returned message matches what we sent.
    let (data, code) = bucket.get_object_blocking("test_file")?;
    let string = std::str::from_utf8(&data).unwrap();
    assert_eq!(200, code); // fail, code is 403
    assert_eq!(MESSAGE, string);
    Ok(())
}
The put_object request completes successfully (and the file is actually uploaded), but the get_object request fails with the following error:
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided.</Message>
<Resource>/MY_BUCKET/test_file</Resource>
<RequestId>2c57a.......</RequestId>
</Error>
Yandex Object Storage promises to support some of the Amazon S3 HTTP API methods. Also, I am able to successfully download the file from Yandex Object Storage using the rusoto library (version 0.42), with the following code:
use std::io::Read;
use rusoto_s3::{GetObjectRequest, PutObjectRequest, S3, S3Client};

const MESSAGE: &str = "Example message";
const REGION: &str = "storage.yandexcloud.net";
const BUCKET: &str = "<MY_BUCKET>";
const ACCESS_KEY: &str = "<MY_ACCESS_KEY>";
const SECRET_KEY: &str = "<MY_SECRET_KEY>";

fn main() {
    let credentials_provider = rusoto_credential::StaticProvider::new(ACCESS_KEY.into(), SECRET_KEY.into(), None, None);
    let region = rusoto_core::Region::Custom { name: "us-east-1".into(), endpoint: REGION.into() };
    let s3_client = S3Client::new_with(rusoto_core::HttpClient::new().unwrap(), credentials_provider, region);

    s3_client.put_object(PutObjectRequest {
        bucket: BUCKET.into(),
        key: "test_file_rusoto".into(),
        content_type: Some("text/plain".to_owned()),
        body: Some(MESSAGE.to_owned().into_bytes().into()),
        ..Default::default()
    }).sync().expect("could not upload");

    let get_object_result = s3_client.get_object(GetObjectRequest {
        bucket: BUCKET.into(),
        key: "test_file_rusoto".into(),
        ..Default::default()
    }).sync().expect("could not download");

    let mut message_stream = get_object_result.body.unwrap().into_blocking_read();
    let mut message = String::new();
    message_stream.read_to_string(&mut message).unwrap();
    assert_eq!(MESSAGE, message);
}
The dependency update published in 0.6.2 contains multiple breaking changes. 0.6.2 should be yanked, and 0.7.0 released instead.
Here are the breaking changes I noticed (there may be others):
- chrono 0.2 was updated to 0.4. This breaks the DateTime types used in the API of this crate.
- hmac and sha2 both depend on generic-array, and they were bumped to 0.8. It seems that even though the API isn't exposed, having generic-array 0.7 and 0.8 in the same build breaks it. This makes rust-s3 break when used together with any crate that depends on generic-array 0.7.
I get that somebody else may want to use aws-region/aws-creds without s3, hence the need for separate crates.
However, it's hard to use rust-s3 without these crates, and they have to be of the same version.
For this use case, you usually want to re-export aws-region and aws-creds, making it easier on your users: they don't have to keep track of these dependencies, and the docs work better.
Amazon AWS is deprecating "path-style" access to S3 buckets, as detailed in this blog post. Prior to the sunset in September 2020, the rust-s3 URL handling logic will need to be updated to use "subdomain style".
I am trying to create a stream server with actix-web. When a user visits a page, I need to make a request to my S3 and send them the result.
But I get an S3Error(Reqwest(Error(BlockingClientInFutureContext, "http://localhost:9000/mybucket/")), State { next_error: None, backtrace: None }).
I think it's because the lib doesn't use the async reqwest client. Is there any plan to return futures?
It's going to be faster for large files download and also use significantly less memory.
Describe the bug
When using put_object_stream_blocking, I went to check the results of the file upload, only to find a 0B file of the correct name.
To Reproduce
Using a String or Path, open the file
// Open and upload file
let path = Path::new(&s.file);
if !path.exists() {
    error!("{} not found!", path.to_str().unwrap());
    std::process::exit(1);
}
let mut file = File::open(path).expect("Unable to open file!");

let status_code = bucket
    .put_object_stream_blocking(&mut file, &server_filename)
    .unwrap_or_else(|e| {
        error!("Unable to upload {}! Error: {}", &server_filename, e);
        std::process::exit(1);
    });
println!("Status code: {}", status_code);
Expected behavior
The file is currently 180KB. I expect that to be reflected in what's uploaded to AWS.
Environment
Additional context
Context: I'm uploading a firmware binary file.
Hello, is there any plan to generate presigned URLs for arbitrary operations (GET, PUT, etc.)? Also, the readme mentions that the put operation returns a presigned URL, but I get an empty vec instead; is this a common issue?
Error: Request failed with code 403
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>AKIAIPZ5PCIILIHZ6MWA</AWSAccessKeyId><StringToSign>AWS4-HMAC-SHA256
20200625T071020Z
20200625/eu-west-2/s3/aws4_request
4a910798f4489414a2213d64486f6d4bd82753d67a4d6b85c99f5429a75ccfbd</StringToSign><SignatureProvided>7c580d2dbdb751ee700ab64a8e17c4d301c837e9e86f9ee905832746eaf091ae</SignatureProvided><StringToSignBytes>41 57 53 34 2d 48 4d 41 43 2d 53 48 41 32 35 36 0a 32 30 32 30 30 36 32 35 54 30 37 31 30 32 30 5a 0a 32 30 32 30 30 36 32 35 2f 65 75 2d 77 65 73 74 2d 32 2f 73 33 2f 61 77 73 34 5f 72 65 71 75 65 73 74 0a 34 61 39 31 30 37 39 38 66 34 34 38 39 34 31 34 61 32 32 31 33 64 36 34 34 38 36 66 36 64 34 62 64 38 32 37 35 33 64 36 37 61 34 64 36 62 38 35 63 39 39 66 35 34 32 39 61 37 35 63 63 66 62 64</StringToSignBytes><CanonicalRequest>PUT
/backup/today.txt
content-length:18
content-md5:uiCyA1I5IyVHaZ0OePdvSg==
content-type:text/plain
host:valinde.s3-eu-west-2.amazonaws.com
x-amz-content-sha256:9aba03e14018e548c115fe1f4554805b270a5abad3bb7531b5a59a44b6fadaef
x-amz-date:20200625T071020Z
content-length;content-md5;content-type;host;x-amz-content-sha256;x-amz-date
9aba03e14018e548c115fe1f4554805b270a5abad3bb7531b5a59a44b6fadaef</CanonicalRequest><CanonicalRequestBytes>50 55 54 0a 2f 62 61 63 6b 75 70 2f 74 6f 64 61 79 2e 74 78 74 0a 0a 63 6f 6e 74 65 6e 74 2d 6c 65 6e 67 74 68 3a 31 38 0a 63 6f 6e 74 65 6e 74 2d 6d 64 35 3a 75 69 43 79 41 31 49 35 49 79 56 48 61 5a 30 4f 65 50 64 76 53 67 3d 3d 0a 63 6f 6e 74 65 6e 74 2d 74 79 70 65 3a 74 65 78 74 2f 70 6c 61 69 6e 0a 68 6f 73 74 3a 76 61 6c 69 6e 64 65 2e 73 33 2d 65 75 2d 77 65 73 74 2d 32 2e 61 6d 61 7a 6f 6e 61 77 73 2e 63 6f 6d 0a 78 2d 61 6d 7a 2d 63 6f 6e 74 65 6e 74 2d 73 68 61 32 35 36 3a 39 61 62 61 30 33 65 31 34 30 31 38 65 35 34 38 63 31 31 35 66 65 31 66 34 35 35 34 38 30 35 62 32 37 30 61 35 61 62 61 64 33 62 62 37 35 33 31 62 35 61 35 39 61 34 34 62 36 66 61 64 61 65 66 0a 78 2d 61 6d 7a 2d 64 61 74 65 3a 32 30 32 30 30 36 32 35 54 30 37 31 30 32 30 5a 0a 0a 63 6f 6e 74 65 6e 74 2d 6c 65 6e 67 74 68 3b 63 6f 6e 74 65 6e 74 2d 6d 64 35 3b 63 6f 6e 74 65 6e 74 2d 74 79 70 65 3b 68 6f 73 74 3b 78 2d 61 6d 7a 2d 63 6f 6e 74 65 6e 74 2d 73 68 61 32 35 36 3b 78 2d 61 6d 7a 2d 64 61 74 65 0a 39 61 62 61 30 33 65 31 34 30 31 38 65 35 34 38 63 31 31 35 66 65 31 66 34 35 35 34 38 30 35 62 32 37 30 61 35 61 62 61 64 33 62 62 37 35 33 31 62 35 61 35 39 61 34 34 62 36 66 61 64 61 65 66</CanonicalRequestBytes><RequestId>8D6E7966CF915695</RequestId><HostId>93N6t1NsQn1B55UvmTDuT2LcPbmNTbTflersciGoFHgYgQl32LQi5JsBSUaEHayBDqtGthZFwrg=</HostId></Error>
This is the code in question
fn run(settings: &Config) -> Result<()> {
    let bucket = Bucket::new(
        &settings.get_str("aws-bucket-name")?,
        "eu-west-2".parse()?,
        Credentials::new_blocking(None, None, None, None, None)?,
    )?;

    bucket.put_object_blocking(
        "/backup/today.txt",
        "I want to go to S3".as_bytes(),
        "text/plain",
    )?;
    Ok(())
}
0.21 works:
rust-s3 = { version = "0.21" }
aws-creds = "0.21"
This is my toml for 0.22
rust-s3 = { version = "0.22", features = ["fail-on-err"] }
aws-creds = "0.22.1"
AWS has an implementation:
aws s3api get-bucket-location --bucket <bucketname>
Due to breaking changes in the hmac and hash functions in openssl versions newer than 0.7.14, we are stuck on that version; this also means curl is stuck.
If anyone wants to take a stab at this, they are more than welcome :)
Upgrading from 0.18.3 to 0.18.5 resulted in broken error boxing.
error[E0277]: the trait bound `s3::error::S3Error: std::error::Error` is not satisfied
--> src/routecache/mod.rs:184:82
|
184 | let bucket = Bucket::new(&config.s3_bucket.unwrap(), region, credentials)?;
| ^ the trait `std::error::Error` is not implemented for `s3::error::S3Error`
|
= note: required because of the requirements on the impl of `std::convert::From<s3::error::S3Error>` for `std::boxed::Box<dyn std::error::Error>`
= note: required by `std::convert::From::from`
error[E0277]: the trait bound `s3::error::S3Error: std::error::Error` is not satisfied
--> src/routecache/mod.rs:192:57
|
192 | let results = bucket.list_all(base_prefix, None)?;
| ^ the trait `std::error::Error` is not implemented for `s3::error::S3Error`
|
= note: required because of the requirements on the impl of `std::convert::From<s3::error::S3Error>` for `std::boxed::Box<dyn std::error::Error>`
= note: required by `std::convert::From::from`
Full code context is here; it's just a simple Box:
https://github.com/haiku/hpkgbouncer/blob/master/src/routecache/mod.rs#L184
Similar to #10, but for uploads.
Hi, I found that it sometimes crashes with "Not an EC2 instance", even on EC2 instances.
According to scylladb/scylladb@2a4ba88#diff-23bb822fd3271daf086e185150f31517, it's because some EC2 instances don't have /sys/hypervisor/uuid, but do have /sys/class/dmi/id/board_vendor.
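The fix described can be sketched as falling back to the DMI vendor string when the hypervisor uuid file is absent (the helpers below are hypothetical, not the crate's code):

```rust
use std::fs;

// Hypothetical helpers, not the crate's code: fall back to reading
// board_vendor when /sys/hypervisor/uuid is absent, since some EC2
// instances only expose the DMI vendor string.
fn vendor_is_amazon(vendor: &str) -> bool {
    vendor.trim().starts_with("Amazon")
}

fn looks_like_ec2() -> bool {
    if fs::metadata("/sys/hypervisor/uuid").is_ok() {
        return true; // Xen-based instances expose the hypervisor uuid
    }
    fs::read_to_string("/sys/class/dmi/id/board_vendor")
        .map(|v| vendor_is_amazon(&v))
        .unwrap_or(false)
}

fn main() {
    println!("ec2-like: {}", looks_like_ec2());
}
```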
It would be nice to be able to disable SSL verification. There are situations in an internal corporate environment with proxies or alternate S3 implementations that either perform MITM attacks or just have self-signed certificates, and it would be nice to have an option to just blindly accept these scenarios.
My current solution is a monkey patch on request.rs to add handle.ssl_verify_peer(false); to the execute function.
If a client error occurs (e.g. trying to download from an invalid bucket) while calling get_object_stream(), it will save the XML-formatted error message from S3 into the file. I would expect the function to return an Err() so my program can handle this case, instead of continuing as if everything is okay.
It can be replicated by changing the bucket name to something bogus in the code snippet from the documentation: https://durch.github.io/rust-s3/s3/bucket/struct.Bucket.html#method.get_object_stream
Disclaimer: I'm new to Rust so I might miss something :)..
On a new computer where I have no credential setup for AWS when I call Credentials::default(), it hangs forever until a timeout. I've debugged the problem to the fact that it tries a request to http://169.254.169.254/latest/meta-data/iam/info. This service does not exist for us and it does not work.
Just doing a list to see if a file (directory) exists, both at the root level:

let (list, code) = bucket.list("user/", Some("/")).unwrap(); // This directory exists on the bucket
let (list, code) = bucket.list("test/", Some("/")).unwrap(); // This one does not

It appears Amazon S3 responses may have changed [1].
When attempting to list a directory that doesn't exist:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: S3Error(SerdeXML(missing field `NextMarker`), State { next_error: None, backtrace: Some(stack backtrace:
0: 0x10f117f14 - backtrace::backtrace::trace<closure>
When attempting to list a directory that does exist:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: S3Error(SerdeXML(missing field `Owner`), State { next_error: None, backtrace: Some(stack backtrace:
0: 0x10b960f14 - backtrace::backtrace::trace<closure>
[1] Here's a documentation page where it appears NextMarker isn't sent if it isn't needed http://docs.aws.amazon.com/AmazonS3/latest/dev/ListingKeysHierarchy.html