

ExAws.S3


Service module for https://github.com/ex-aws/ex_aws

Installation

The package can be installed by adding :ex_aws_s3 to your list of dependencies in mix.exs along with :ex_aws, your preferred JSON codec / HTTP client, and optionally :sweet_xml to support operations like list_objects that require XML parsing.

def deps do
  [
    {:ex_aws, "~> 2.0"},
    {:ex_aws_s3, "~> 2.0"},
    {:poison, "~> 3.0"},
    {:hackney, "~> 1.9"},
    {:sweet_xml, "~> 0.6.6"} # optional dependency
  ]
end

Operations on AWS S3

Basic Operations

The vast majority of operations here represent a single operation on S3.

Examples

S3.list_objects("my-bucket") |> ExAws.request! #=> %{body: [list, of, objects]}
S3.list_objects("my-bucket") |> ExAws.stream! |> Enum.to_list #=> [list, of, objects]

S3.put_object("my-bucket", "path/to/object", contents) |> ExAws.request!

Higher Level Operations

There are also some operations which operate at a higher level to make it easier to download and upload very large files.

Multipart uploads

"path/to/big/file"
|> S3.Upload.stream_file
|> S3.upload("my-bucket", "path/on/s3")
|> ExAws.request #=> {:ok, :done}

See ExAws.S3.upload/4 for options

Download large file to disk

S3.download_file("my-bucket", "path/on/s3", "path/to/dest/file")
|> ExAws.request #=> {:ok, :done}

More high level functionality

Task.async_stream makes some high level flows so easy you don't need explicit ExAws support.

For example, here is how to concurrently upload many files.

upload_file = fn {src_path, dest_path} ->
  S3.put_object("my_bucket", dest_path, File.read!(src_path))
  |> ExAws.request!
end

paths = %{"path/to/src0" => "path/to/dest0", "path/to/src1" => "path/to/dest1"}

paths
|> Task.async_stream(upload_file, max_concurrency: 10)
|> Stream.run

Bucket as host functionality

Examples

opts = [virtual_host: true, bucket_as_host: true]

ExAws.Config.new(:s3)
|> S3.presigned_url(:get, "bucket.custom-domain.com", "foo.txt", opts)

{:ok, "https://bucket.custom-domain.com/foo.txt"}

Configuration

The scheme, host, and port can be configured to hit alternate endpoints.

For example, this is how to use a local minio instance:

# config.exs
config :ex_aws, :s3,
  scheme: "http://",
  host: "localhost",
  port: 9000

An alternate content_hash_algorithm can be specified as well. The default is :md5. It may be necessary to change this when operating in a FIPS-compliant environment where MD5 isn't available, for instance. At this time, only :sha256, :sha, and :md5 are supported by both Erlang and S3.

# config.exs
config :ex_aws_s3, :content_hash_algorithm, :sha256

License

The MIT License (MIT)

Copyright (c) 2014 CargoSense, Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

ex_aws_s3's People

Contributors

almirsarajcic, aseigo, barnabasj, bcat-eu, benwilson512, bernardd, d3caf, davidwebster48, dcdourado, dependabot-preview[bot], dependabot[bot], dprbook, firx, gazler, geofflane, gr8adakron, holsee, ishikawa, jeffreyplusplus, kianmeng, lostkobrakai, mgwidmann, mindreframer, mmzx, nathanl, nathany-copia, sammarten, stwf, thehosepipe, tylerpachal


ex_aws_s3's Issues

Issue with Regions

  • Do not use the issues tracker for help or support (try Elixir Forum, Slack, IRC, etc.)
  • Questions about how to contribute are fine.

Environment

% elixir --version
Erlang/OTP 21 [erts-10.0.8] [source] [64-bit] [smp:12:12] [ds:12:12:10] [async-threads:1] [hipe]

Elixir 1.7.3 (compiled with Erlang/OTP 21)
% mix deps |grep ex_aws

  • ex_aws 2.1.0 (Hex package) (mix)
    locked at 2.1.0 (ex_aws) b9265152
  • ex_aws_s3 2.0.1 (Hex package) (mix)
    locked at 2.0.1 (ex_aws_s3) 9e09366e

% mix deps | grep hackney

  • hackney 1.14.0 (Hex package) (rebar3)
    locked at 1.14.0 (hackney) 66e29e78

Current behavior

So my default config is

config :ex_aws,
  access_key_id: [{:system, "AWS_ACCESS_KEY_ID"}, {:awscli, "default", 30}, :instance_role],
  secret_access_key: [
    {:system, "AWS_SECRET_ACCESS_KEY"},
    {:awscli, "default", 30},
    :instance_role
  ],
  region: "us-west-2"

When I try a different region, I get some odd behavior:

ExAws.S3.list_objects("outreach-insights")  |> ExAws.request(region: "us-west-1") 

10:50:41.129 [warn]  ExAws: Received redirect, did you specify the correct region?
{:error, {:http_error, 301, "redirected"}}
ExAws.S3.put_object("outreach0-insights", "yo", "hello") |> ExAws.request(region: "us-west-1")
{:error,
 {:http_error, 400,
  %{
    body: "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>AuthorizationHeaderMalformed</Code><Message>The authorization header is malformed; the region 'us-west-2' is wrong; expecting 'us-west-1'</Message><Region>us-west-1</Region><RequestId>ABB891594B7B1A67</RequestId><HostId>pW/8POxk5rVecmj/9rJuAJBhrY/g2NZjG6Hs+RVxoUU4f7XmkyETQuSMnAMrLcAom6T+jLQfdqA=</HostId></Error>",
    headers: [
      {"x-amz-request-id", "ABB891594B7B1A67"},
      {"x-amz-id-2",
       "pW/8POxk5rVecmj/9rJuAJBhrY/g2NZjG6Hs+RVxoUU4f7XmkyETQuSMnAMrLcAom6T+jLQfdqA="},
      {"Content-Type", "application/xml"},
      {"Transfer-Encoding", "chunked"},
      {"Date", "Thu, 20 Sep 2018 17:50:53 GMT"},
      {"Connection", "close"},
      {"Server", "AmazonS3"}
    ],
    status_code: 400
  }}}

Expected behavior

I expected it to list my bucket or put the object in the region specified in the request. What am I missing?

ex-aws configuration is not used for presigned_url/4

Environment

  • ExAws version 2.0.1

Current behavior

Currently, ExAws.S3.presigned_url requires a config map to be manually passed in to access things like our secret access key.

S3.presigned_url(
  ExAws.Config.new(:s3),
  :get,
  ...
)

It seems like this should happen implicitly using our ExAws S3 configuration, since other functions use our configuration automatically. For example,

ExAws.S3.list_buckets

discovers the configuration internally.

Expected behavior

presigned_url should behave like other functions and discover the configuration automatically.
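Until then, a thin application-level wrapper can hide the config step. A minimal sketch (the MyApp.S3 module and its function are hypothetical, not part of ExAws):

```elixir
defmodule MyApp.S3 do
  # Hypothetical helper: discovers the ExAws config the same way other
  # calls do, so callers don't have to build and pass it explicitly.
  def presigned_url(method, bucket, key, opts \\ []) do
    :s3
    |> ExAws.Config.new()
    |> ExAws.S3.presigned_url(method, bucket, key, opts)
  end
end
```

Calling MyApp.S3.presigned_url(:get, "my-bucket", "foo.txt") then returns {:ok, url} using the configured credentials, matching the ergonomics of ExAws.S3.list_buckets.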

Issues with object name with characters that should be url encoded

Environment

  • Elixir & Erlang versions (elixir --version):
    Erlang/OTP 22 [erts-10.4.4] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:1] [hipe] [dtrace]

Elixir 1.9.1 (compiled with Erlang/OTP 20)

  • ExAws version mix deps |grep ex_aws

  • ex_aws 2.1.1 (Hex package) (mix)
    locked at 2.1.1 (ex_aws) 1e4de210

  • ex_aws_sqs 2.0.1 (Hex package) (mix)
    locked at 2.0.1 (ex_aws_sqs) 42b19229

  • ex_aws_s3 2.0.2 (Hex package) (mix)
    locked at 2.0.2 (ex_aws_s3) c0258bbd

  • HTTP client version. IE for hackney do mix deps | grep hackney

  • hackney 1.15.1 (Hex package) (rebar3)
    locked at 1.15.1 (hackney) 9f8f471c

Current behavior

I'm trying to perform a head request on an object in a bucket I have access to:

ExAws.S3.head_object("test-bucket", "simples.wav") |> ExAws.request(aws_config)

which returns me the headers just fine.

If the request has a character that should be url encoded:

ExAws.S3.head_object("test-bucket", "b_+22222222_20190726T045247+0000_bf18e58a-0098-99fb-7dbb-be3539d8327a.wav") |> ExAws.request(aws_config)

I get a 404 error

If I URL-encode the object name:

ExAws.S3.head_object("test-bucket", "b_%2B22222222_20190726T045247%2B0000_bf18e58a-0098-99fb-7dbb-be3539d8327a.wav") |> ExAws.request(aws_config)

I then get a 403 error

Expected behavior

I should get a 200 OK with the header details:

{:ok, %{ headers: [ {"x-amz-id-2", "B9cWAN2B0szSy+Z4kw52gtzdqCxi70LmCN+g7nw/KNZtLWDv2CyU1MHdrdpzMpbiFk7pGv7RiU4="}, {"x-amz-request-id", "AF68FC0E0BBE2125"}, {"Date", "Fri, 26 Jul 2019 07:54:50 GMT"}, {"Last-Modified", "Fri, 26 Jul 2019 06:43:11 GMT"}, {"ETag", "\"8d754d4e52dbb4e6613446f0d9832d91\""}, {"x-amz-server-side-encryption", "AES256"}, {"Accept-Ranges", "bytes"}, {"Content-Type", "audio/x-wav"}, {"Content-Length", "1918444"}, {"Server", "AmazonS3"} ], status_code: 200 }}

Can't use stream upload

Environment

$ elixir --version
Erlang/OTP 21 [erts-10.1.2] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [hipe]

Elixir 1.7.4 (compiled with Erlang/OTP 20)
$ mix deps | grep ex_aws
* ex_aws 2.0.2 (Hex package) (mix)
  locked at 2.1.0 (ex_aws) b9265152
* ex_aws_s3 2.0.0 (Hex package) (mix)
  locked at 2.0.1 (ex_aws_s3) 9e09366e
  • HTTP client version. IE for hackney do mix deps | grep hackney
* hackney 1.10.1 (Hex package) (rebar3)
  locked at 1.10.1 (hackney) c38d0ca5

Current behavior

After trying to follow the documentation for uploading a big file in a streaming fashion:

path = "/path/to/file"
key  = "/path/to/s3"
bucket = "bucket.name"

path
    |> S3.Upload.stream_file
    |> S3.upload(bucket, key)
    |> ExAws.request!

I get the following stacktrace:

** (FunctionClauseError) no function clause matching in :raw_file_io_raw.open_layer/3    
    
    The following arguments were given to :raw_file_io_raw.open_layer/3:
    
        # 1
        "/path/to/file"
    
        # 2
        [:read, {:read_ahead, 65536}, :binary, :binary]
    
        # 3
        [:raw, :raw]
     
    (kernel) raw_file_io_raw.erl:24: :raw_file_io_raw.open_layer/3
    (elixir) lib/file/stream.ex:78: anonymous fn/3 in Enumerable.File.Stream.reduce/3
    (elixir) lib/stream.ex:1362: anonymous fn/5 in Stream.resource/3
    (elixir) lib/stream.ex:1553: Enumerable.Stream.do_each/4
    (elixir) lib/task/supervised.ex:306: Task.Supervised.stream_reduce/7
    (elixir) lib/enum.ex:2979: Enum.reverse/1
    (elixir) lib/enum.ex:2613: Enum.to_list/1
    (ex_aws_s3) lib/ex_aws/s3/upload.ex:89: ExAws.Operation.ExAws.S3.Upload.perform/2
    (ex_aws) lib/ex_aws.ex:61: ExAws.request!/2
    (elixir) lib/enum.ex:1314: Enum."-map/2-lists^map/1-0-"/2

Expected behavior

I'd like for the file to just be uploaded in chunks.

Support for mox

Just wondering: is it possible to add a behaviour for each public API? I am trying to test a function that is integrated with this awesome library. Unfortunately, the library does not define a behaviour, which means Mox cannot mock it.

(FunctionClauseError) no function clause matching in :raw_file_io_raw.open_layer/3

When I am trying to upload to S3 I get:
(FunctionClauseError) no function clause matching in :raw_file_io_raw.open_layer/3

It used to work before I upgraded to OTP 22 and Elixir 1.9, though.

  defp deps do
    [
      {:ex_aws, "~> 2.0"},
      {:ex_aws_s3, "~> 2.0"},
      {:hackney, "~> 1.9"},
      {:sweet_xml, "~> 0.6"}
    ]
  end

Here is the code that does it:

cloud_s3.path
      |> S3.Upload.stream_file
      |> S3.upload(cloud_s3.s3_bucket, cloud_s3.s3_output)
      |> ExAws.request
      |> case do
        {:ok, result } ->
          Logger.warn "S3: Done for #{inspect(result)}"
        _ ->
          Logger.warn "S3: Err for #{inspect(cloud_s3)}, not sure why!"
      end

Now I logged cloud_s3.path |> S3.Upload.stream_file out and here is the result:

%File.Stream{
  line_or_bytes: 5242880,
  modes: [:raw, :raw, {:read_ahead, 65536}, :binary, :binary],
  path: "/var/folders/3m/n6p202494w15b5xx0fc15y580000gn/T//plug-1562/multipart-1562739090-118134755902720-4",
  raw: true
}

I think raw: true is the reason I get the error. I am not sure how to fix it.

When S3 object's path contains double slash, S3 download_file throws an error

Environment

  • Elixir & Erlang versions (elixir --version):

Erlang/OTP 21 [erts-10.3.5] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:1] [hipe]

Elixir 1.8.2 (compiled with Erlang/OTP 20)

  • ExAws version mix deps | grep ex_aws

ex_aws (Hex package) (mix)
locked at 2.1.0 (ex_aws) b9265152
ex_aws_s3 (Hex package) (mix)
locked at 2.0.1 (ex_aws_s3) 9e09366e

  • HTTP client version. IE for hackney do mix deps | grep hackney

hackney 1.15.1 (Hex package) (rebar3)
locked at 1.15.1 (hackney) 9f8f471c

Current behaviour

Given: an S3 object has a double slash (//) in its full S3 path, like:

audit-function-test//dima-elixir/file-to-test

between bucket

audit-function-test

and prefix

dima-elixir/file-to-test

When calling

ExAws.S3.download_file("audit-function-test", "/dima-elixir/file-to-test", "/home/my_home/temp/file-to-test") |> ExAws.request

I get a 404 error (resource does not exist):

** (ExAws.Error) ExAws Request Error!

{:error, {:http_error, 404, %{headers: [{"x-amz-request-id", "A13E052839646A65"}, {"x-amz-id-2", "E0FrYVT1t79h5w4OC2Yam7t9q6ktdDde3SYzwsuYLKveBTa1pv5LvIidRXSCKSQM+fsD31fbkhY="}, {"Content-Type", "application/xml"}, {"Transfer-Encoding", "chunked"}, {"Date", "Fri, 03 May 2019 06:59:01 GMT"}, {"Server", "AmazonS3"}], status_code: 404}}}

(ex_aws) lib/ex_aws.ex:66: ExAws.request!/2
(ex_aws_s3) lib/ex_aws/s3/download.ex:51: ExAws.S3.Download.get_file_size/3
(ex_aws_s3) lib/ex_aws/s3/download.ex:28: ExAws.S3.Download.build_chunk_stream/2
(ex_aws_s3) lib/ex_aws/s3/download.ex:68: ExAws.Operation.ExAws.S3.Download.perform/2

Expected behaviour

The object file-to-test from full S3 path

audit-function-test//dima-elixir/file-to-test

has to be successfully downloaded.

For example with the aws CLI command

aws s3 cp s3://audit-function-test//dima-elixir/file-to-test

I can do it successfully

Debug Information

from S3 access logs

2019-05-03-07-47-26-92C1FCD19BE95A1A:a6df35be316ba91b3ea41c653408ae300a0a037341e498644e690cdefb6f0c8c audit-function-test [03/May/2019:06:59:02 +0000] 76.28.221.2 arn:aws:iam::111111111111:user/some_user A13E052839646A65 REST.HEAD.OBJECT dima-elixir/file-to-test "HEAD /audit-function-test/dima-elixir/file-to-test HTTP/1.1" 404 NoSuchKey 295 - 16 - "-" "hackney/1.15.1" - E0FrYVT1t79h5w4OC2Yam7t9q6ktdDde3SYzwsuYLKveBTa1pv5LvIidRXSCKSQM+fsD31fbkhY= SigV4 ECDHE-RSA-AES128-GCM-SHA256 AuthHeader s3.amazonaws.com TLSv1.2

400 MalformedXML error when object key contains ampersand

Environment

  • Elixir & Erlang versions (elixir --version):
Erlang/OTP 22 [erts-10.5.2] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1]
Elixir 1.9.2 (compiled with Erlang/OTP 22)
  • ExAws version mix deps |grep ex_aws
* ex_aws 2.1.1 (Hex package) (mix)
  locked at 2.1.1 (ex_aws) 1e4de210
* ex_aws_s3 2.0.2 (Hex package) (mix)
  locked at 2.0.2 (ex_aws_s3) c0258bbd
  • HTTP client version. IE for hackney do mix deps | grep hackney
* hackney 1.15.2 (Hex package) (rebar3)
  locked at 1.15.2 (hackney) 07e33c79

Current behavior

delete_multiple_objects/3 gives a 400 MalformedXML error when an object key contains an ampersand.

{:error,
 {:http_error, 400,
  %{
    body: "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>MalformedXML</Code><Message>The XML you provided was not well-formed or did not validate against our published schema</Message><RequestId>...</RequestId><HostId>...</HostId></Error>",
    headers: [
      {"x-amz-request-id", "..."},
      {"x-amz-id-2", "..."},
      {"Content-Type", "application/xml"},
      {"Transfer-Encoding", "chunked"},
      {"Date", "Fri, 06 Mar 2020 22:54:25 GMT"},
      {"Connection", "close"},
      {"Server", "AmazonS3"}
    ],
    status_code: 400
  }}}

Expected behavior

The request should succeed.

It looks like the key is just being inserted straight into the XML without any character entity encoding (e.g. & => &amp;):

{key, version} -> ["<Object><Key>", key, "</Key><VersionId>", version, "</VersionId></Object>"]
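A plausible fix is to escape the predefined XML entities in the key (and version id) before splicing them into the request body. A minimal sketch of such an escaper (XmlEscape and xml_escape/1 are hypothetical names, not part of the library):

```elixir
defmodule XmlEscape do
  # Replace "&" first so already-produced entities aren't re-escaped.
  @entities [{"&", "&amp;"}, {"<", "&lt;"}, {">", "&gt;"}, {"\"", "&quot;"}, {"'", "&apos;"}]

  def xml_escape(string) do
    Enum.reduce(@entities, string, fn {char, entity}, acc ->
      String.replace(acc, char, entity)
    end)
  end
end

IO.puts XmlEscape.xml_escape("tom&jerry.txt") #=> tom&amp;jerry.txt
```

With this, a key like "tom&jerry.txt" becomes "tom&amp;jerry.txt" inside the `<Key>` element and the request validates against the schema.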

SignatureDoesNotMatch for S3 key with space

Environment

  • Elixir & Erlang versions (elixir --version):
Erlang/OTP 23 [erts-11.0.1] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [hipe]

Elixir 1.10.4 (compiled with Erlang/OTP 23)
  • ExAws version mix deps |grep ex_aws
* ex_aws 2.1.4 (Hex package) (mix)
  locked at 2.1.4 (ex_aws) 18eae006
* ex_aws_s3 2.0.2 (Hex package) (mix)
  locked at 2.0.2 (ex_aws_s3) 0569f5b2
  • HTTP client version. IE for hackney do mix deps | grep hackney
* hackney 1.16.0 (Hex package) (rebar3)
  locked at 1.16.0 (hackney) 3bf0bebb

Current behavior

ExAws.S3.list_objects("some-bucket", marker: "key_WITH SPACE") |> ExAws.request()
{:error,
 {:http_error, 403,
  %{
    body: "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>

This can be worked around with:

ExAws.S3.list_objects("some-bucket", marker: URI.encode("key_WITH SPACE")) |> ExAws.request()

But the issue also happens during streaming (ExAws.stream!) when the continuation marker has spaces. The marker is processed internally by ExAws.S3 and can't be encoded by the caller.

Expected behavior

ExAws.S3.list_objects("some-bucket", marker: "key_WITH SPACE") |> ExAws.request()
{:ok, 

Using a S3 request call without bang still raises an exception

Environment

  • ExAws version 2.0.1, ExAwsS3 version 2.0.0

Current behavior

I'm uploading files to S3 over a poor internet connection, and my intention is that this code does not raise an exception (that's why I use ExAws.request without a bang):

file
    |> ExAws.S3.Upload.stream_file
    |> ExAws.S3.upload(bucket_name, path, timeout: :infinity)
    |> ExAws.request

However, in the flow, upload_chunk! is called, which has a request call with a bang inside, and in some cases, this error occurs:

** (EXIT from #PID<0.444.0>) evaluator process exited with reason: an exception was raised:
    ** (ExAws.Error) ExAws Request Error!

{:error, {:http_error, 400, %{body: "<?xml version=\"1.0\" encoding=\"UTF-8\"?><Error><Code>RequestTimeout</Code><Message>Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.</Message><RequestId>C55655BDD4AA47EF</RequestId><HostId>ltnii8mbF9VrPPUaXUlaFZi10d9oi1B2u2ZRLjEqHfL0XlLPzDpQL7Lms991yg4ylsSNo/Vwz32XWcOgsW+wa0GAMqoIL0R1</HostId></Error>", headers: [{"Date", "Tue, 14 Nov 17 13:07:20 GMT"}, {"Connection", "close"}, {"Transfer-Encoding", "chunked"}, {"x-amz-id-2", "ltnii8mbF9VrPPUaXUlaFZi10d9oi1B2u2ZRLjEqHfL0XlLPzDpQL7Lms991yg4ylsSNo/Vwz32XWcOgsW+wa0GAMqoIL0R1"}, {"x-amz-request-id", "C55655BDD4AA47EF"}, {"Content-Type", "application/xml"}], status_code: 400}}}

        (ex_aws) lib/ex_aws.ex:46: ExAws.request!/2
        (ex_aws) lib/ex_aws/s3/upload.ex:67: ExAws.S3.Upload.upload_chunk!/3
        (elixir) lib/task/supervised.ex:85: Task.Supervised.do_apply/2
        (elixir) lib/task/supervised.ex:36: Task.Supervised.reply/5
        (stdlib) proc_lib.erl:247: :proc_lib.init_p_do_apply/3

Is there a workaround for this, or am I missing something?

Expected behavior

If the request fails, it should return a tuple {:error, error}.
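Until the internal upload_chunk! stops raising, one workaround is to catch the exception at the call site and turn it back into a tuple. A sketch (this wraps the whole upload, so any partial progress is lost on failure):

```elixir
# Hypothetical wrapper: converts the exception raised by the internal
# upload_chunk! back into an {:error, _} tuple for the caller.
safe_upload = fn file, bucket, path ->
  try do
    file
    |> ExAws.S3.Upload.stream_file()
    |> ExAws.S3.upload(bucket, path, timeout: :infinity)
    |> ExAws.request()
  rescue
    e in ExAws.Error -> {:error, e}
  end
end
```

Note that because the chunk uploads run in Task processes, an exit from a linked task may still need to be trapped or the upload supervised separately; this sketch only covers exceptions raised in the calling process.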

Support for Requester Pays S3 Buckets

Linked: #7

It would be a nice feature to support "requester pays" at a more general level across the S3 API.

I am not sure what the best hook would be for applying the header and query params across the ExAws.S3 API surface, so I opened this issue to start the discussion.

I think this feature would warrant being enabled via static config as well as at runtime.

AWS Docs:

x-amz-acl not signed in presigned_url

S3 requires x-amz-acl to be signed. Without it, the request will 403.

I'm generating a presigned URL with this:

    key = Ecto.UUID.generate()

    query_params = [{"x-amz-acl", "public-read"}]
    presign_options = [query_params: query_params]

    with {:ok, url} <-
           ExAws.Config.new(:s3)
           |> ExAws.S3.presigned_url(:put, @bucket, key, presign_options) do
      {:ok, %FileUpload{url: url, key: key}}
    end

Environment

  • Elixir 1.9.0 & Erlang 22
  • ExAws 2.0.2
  • Hackney 1.15.1

Current behavior

The generated URL doesn't sign the x-amz-acl header:

https://s3.amazonaws.com/REDACTED?Content-Type=image%2Fpng&x-amz-acl=public-read&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=REDACTED%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20190820T212049Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=REDACTED

Note that the only signed header is host (per X-Amz-SignedHeaders=host).

<Error>
  <Code>AccessDenied</Code>
  <Message>There were headers present in the request which were not signed</Message>
  <HeadersNotSigned>x-amz-acl</HeadersNotSigned>
  <RequestId>REDACTED</RequestId><HostId>REDACTED</HostId>
</Error>

Expected behavior

The generated URL should work for a file upload.

New release

Please cut a new release with the recent fixes.
Thanks!

Uploading files broken in Elixir 1.10+

Environment

  • Elixir & Erlang versions (elixir --version): Erlang/OTP 22, Elixir 1.10.2
  • ExAws version mix deps |grep ex_aws: 2.0.2
  • HTTP client version. hackney: 1.15.2

Current behavior

When trying to upload an image to S3 using Elixir 1.10+, it breaks because of what seems like a change in URL formatting between 1.9 and 1.10.

Uploading a file with put_object/4 yields:

** (exit) an exception was raised:
    ** (ArgumentError) :path in URI must be nil or an absolute path if :host or :authority are given, got: %URI{authority: nil, fragment: nil, host: "s3.amazonaws.com", path: "XXXXX.png", port: 443, query: "", scheme: "https", userinfo: nil}
        (elixir 1.10.2) lib/uri.ex:662: String.Chars.URI.to_string/1
        (ex_aws 2.1.2) lib/ex_aws/request/url.ex:16: ExAws.Request.Url.build/2
        (ex_aws 2.1.2) lib/ex_aws/operation/s3.ex:29: ExAws.Operation.ExAws.Operation.S3.perform/2
...

Expected behavior

In 1.9.4 the file uploads as expected.

S3.upload not returning what I'm expecting from the docs


Environment

  • Elixir & Erlang versions (elixir --version):
    Elixir 1.6.4
  • ExAws version: ex_aws 2.0.2
  • HTTP client version: hackney 1.12.1

Current behavior

"path/to/big/file"
|> S3.Upload.stream_file
|> S3.upload("my-bucket", "path/on/s3")
|> ExAws.request! #=> %{body: "", headers: [], status_code: 200}


Expected behavior

"path/to/big/file"
|> S3.Upload.stream_file
|> S3.upload("my-bucket", "path/on/s3")
|> ExAws.request! #=> {:ok, :done}

How to check if bucket is writable.

Given a bucket name, AWS key, AWS secret, and region, how can I tell whether the bucket is writable? Is there a function to achieve this?

download_file with ExAws.request! does not raise even though file was not downloaded completely

Environment

  • Elixir & Erlang versions (elixir --version):
    Elixir 1.6.4 (compiled with OTP 19)
  • ExAws version mix deps |grep ex_aws
    ex_aws 2.1.0
    ex_aws_s3 2.0.1
  • HTTP client version. IE for hackney do mix deps | grep hackney
    hackney 1.15.1

Current behavior

Sometimes, if you do

ExAws.S3.download_file(bucket, filename, tmp_file)
|> ExAws.request!

no error is raised even though the file has not been downloaded completely.

Expected behavior

It should raise an ExAws.Error.

We noticed this issue because we save our files with their hash as the filename and we check these hashes after downloading the file.

I think this is related to the :delayed_write option used in download.ex:66, see also https://hexdocs.pm/elixir/File.html#close/1 and http://erlang.org/doc/man/file.html#open-2 . It should be possible to fix this by checking that File.close actually returns :ok in download.ex:84.
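The suggested fix amounts to checking the File.close/1 result instead of discarding it, since with :delayed_write a buffered write error may only surface at close time. A sketch of the idea (not the library's actual code; the path is illustrative):

```elixir
# With :delayed_write, data is buffered and flushed lazily, so a write
# failure can be reported by File.close/1 rather than by IO.binwrite/2.
{:ok, file} = File.open("/tmp/example.bin", [:write, :delayed_write, :binary])
:ok = IO.binwrite(file, <<0, 1, 2>>)

case File.close(file) do
  :ok -> :ok
  {:error, reason} -> raise "download incomplete: #{inspect(reason)}"
end
```

Pattern-matching `:ok = File.close(file)` (or raising on `{:error, reason}`) in download.ex would make the incomplete-download case fail loudly, as the reporter expects.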

Feature request, support endpoint based buckets (e.g. eu-west-2)

Environment

  • Elixir & Erlang versions (elixir --version):

Elixir 1.10.0 (compiled with Erlang/OTP 22)

  • ExAws version mix deps |grep ex_aws
* ex_aws 2.1.5 (Hex package) (mix)
  locked at 2.1.5 (ex_aws) 0f0357f1
* ex_aws_s3 2.0.2 (Hex package) (mix)
  locked at 2.0.2 (ex_aws_s3) 0569f5b2
  • HTTP client version. IE for hackney do mix deps | grep hackney
* hackney 1.16.0 (Hex package) (rebar3)
  locked at 1.16.0 (hackney) 3bf0bebb

Current behaviour

The specified region is ignored and the US endpoint is used, causing a redirect, which ExAws returns as an error along with a warning about setting the region; but setting the region has no effect.

# Copying file within the same bucket.
config = ExAws.Config.new(:s3) |> Map.put(:region, "eu-west-2")
ExAws.request(ExAws.S3.put_object_copy(bucket(), destination_path, bucket(), source_path, config))

Produces:

# 22:13:19.849 [warn] ExAws: Received redirect, did you specify the correct region?
# {:error, {:http_error, 301, "redirected"}}

Expected behavior

The operation succeeds without redirecting because ExAws has used the correct endpoint.

e.g. Operations need to use https://my-bucket.s3.amazonaws.com and not https://s3.amazonaws.com/my-bucket

Support `mv` ?

The AWS CLI supports a mv command for S3 - https://docs.aws.amazon.com/cli/latest/reference/s3/mv.html

Maybe that would be a useful function for this lib?

My use case: I'd like to use ExAws.S3 to move an object in S3 to a different key (to represent that the data has been processed). I can simulate this by using put_object_copy/5 and deleting the old one, which isn't bad.

No pressure or guilt; do it if you want. Thanks for the nice library! 😄
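A sketch of what such a helper could look like, built from the two existing calls mentioned above (S3Extra and mv/4 are hypothetical names; note the copy-then-delete pair is not atomic, so the copy can succeed while the delete fails):

```elixir
defmodule S3Extra do
  # Hypothetical mv: copy the object to its new key, then delete the original.
  def mv(src_bucket, src_key, dest_bucket, dest_key) do
    with {:ok, _} <-
           ExAws.S3.put_object_copy(dest_bucket, dest_key, src_bucket, src_key)
           |> ExAws.request(),
         {:ok, _} <-
           ExAws.S3.delete_object(src_bucket, src_key) |> ExAws.request() do
      :ok
    end
  end
end
```

The with chain returns the first {:error, _} tuple untouched, so a failed copy leaves the source object in place.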

list_buckets/0 crashes if there are no buckets visible

When made to an account with no buckets:

iex(1)>     ExAws.S3.list_buckets |> ExAws.request
** (Protocol.UndefinedError) protocol Enumerable not implemented for nil. This protocol is implemented for: DBConnection.PrepareStream, DBConnection.Stream, Date.Range, Ecto.Adapters.SQL.Stream, File.Stream, Floki.HTMLTree, Function, GenEvent.Stream, HashDict, HashSet, IO.Stream, List, Map, MapSet, Postgrex.Stream, Range, Stream, Timex.Interval
    (elixir) lib/enum.ex:1: Enumerable.impl_for!/1
    (elixir) lib/enum.ex:141: Enumerable.reduce/3
    (elixir) lib/stream.ex:933: Stream.do_enum_transform/7
    (elixir) lib/stream.ex:857: Stream.do_transform/5
    (sweet_xml) lib/sweet_xml.ex:582: anonymous fn/4 in SweetXml.continuation_opts/2
    (xmerl) xmerl_scan.erl:568: :xmerl_scan.scan_document/2
    (xmerl) xmerl_scan.erl:291: :xmerl_scan.string/2
    (sweet_xml) lib/sweet_xml.ex:237: SweetXml.parse/2
    (sweet_xml) lib/sweet_xml.ex:421: SweetXml.xpath/2
    (sweet_xml) lib/sweet_xml.ex:451: SweetXml.xpath/3
    (sweet_xml) lib/sweet_xml.ex:526: anonymous fn/4 in SweetXml.xmap/3
    (elixir) lib/map.ex:791: Map.get_and_update/3
    (sweet_xml) lib/sweet_xml.ex:526: SweetXml.xmap/3
    (sweet_xml) lib/sweet_xml.ex:525: SweetXml.xmap/3
    (ex_aws_s3) lib/ex_aws/s3/parsers.ex:51: ExAws.S3.Parsers.parse_all_my_buckets_result/1

This is due to the XML parser returning nil instead of []; the empty case is ambiguous in the XML.

spec for ExAws.S3.upload/4 does not account for pass through options

Environment

  • Elixir 1.12.0, Eralng OTP 24
  • ExAws version 2.2.0

Current behavior

Specifying pass-through upload options, such as content_type:, in a call to the S3.upload/4 function causes dialyzer to fail.

The documentation states that options such as content_type will be passed through, see here.

But the fourth argument in the spec only allows opts :: upload_opts, which is defined here and doesn't allow for the pass-through options.

I made a simple project that demonstrates the problem on this line. You can run dialyzer on the project to reproduce the problem.

Expected behavior

The spec for the S3.upload/4 function should allow the pass-through options, such as content_type:.

put_object_copy function does not properly encode the src_bucket and src_object


Environment

  • Elixir & Erlang versions (elixir --version):
Erlang/OTP 22 [erts-10.4.4] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:1] [hipe]

Elixir 1.9.1 (compiled with Erlang/OTP 20)
  • ExAws version mix deps |grep ex_aws
* ex_aws 2.1.1 (Hex package) (mix)
  locked at 2.1.1 (ex_aws) 1e4de210
* ex_aws_s3 2.0.2 (Hex package) (mix)
  locked at 2.0.2 (ex_aws_s3) c0258bbd
  • HTTP client version. IE for hackney do mix deps | grep hackney
* hackney 1.15.1 (Hex package) (rebar3)
  locked at 1.15.1 (hackney) 9f8f471c

Current behavior

The put_object_copy/5 function does not work with objects that have + in the name. It returns a 404 error, as it cannot find the source object in the source bucket.

Expected behavior

The object should be copied across to the new bucket.

This is happening because of this line:

|> Map.put("x-amz-copy-source", URI.encode "/#{src_bucket}/#{src_object}")

The URI.encode "/#{src_bucket}/#{src_object}" does not escape the + character

I think the solution here is replace:
URI.encode "/#{src_bucket}/#{src_object}"

with
"/#{URI.encode_www_form(src_bucket)}/#{URI.encode_www_form(src_object)}"

This will also require this PR: ex-aws/ex_aws#648, as this ensures that objects with a + in the name are properly escaped in the main library
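The difference between the two encoders is easy to see in IEx: URI.encode/1 leaves + alone because it is a valid (reserved) URI character, while URI.encode_www_form/1 percent-encodes it. (Note that encode_www_form also turns spaces into +, a form-encoding convention, which is worth keeping in mind when encoding path segments.)

```elixir
IO.puts URI.encode("/bucket/a+b.wav")   #=> /bucket/a+b.wav (the "+" survives)
IO.puts URI.encode_www_form("a+b.wav")  #=> a%2Bb.wav (the "+" is escaped)
```

So a copy source built with URI.encode/1 leaves the literal + in the header value, which S3 then interprets as an encoded space when resolving the source key.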

rm_rf/1 for directory removal.

I'm looking for an S3 directory-removal operation. Locally, File.rm_rf!/1 was used.

Unfortunately, there seem to be no directory operations in ex_aws_s3.

If there's no direct way to do this, then how about a function that receives a path and removes all the files in it recursively? I think I can make a PR after some discussion.
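Such a function can already be composed from existing operations. A sketch (S3Dir and rm_rf/2 are hypothetical names, and :sweet_xml must be installed so list_objects can be parsed and streamed):

```elixir
defmodule S3Dir do
  # Hypothetical rm_rf: stream every key under the prefix and
  # delete them in batches of up to 1000 (the S3 per-request limit).
  def rm_rf(bucket, prefix) do
    bucket
    |> ExAws.S3.list_objects(prefix: prefix)
    |> ExAws.stream!()
    |> Stream.map(& &1.key)
    |> Stream.chunk_every(1000)
    |> Enum.each(fn keys ->
      ExAws.S3.delete_multiple_objects(bucket, keys) |> ExAws.request!()
    end)
  end
end
```

Because ExAws.stream! paginates lazily, this handles prefixes with more objects than a single list_objects page without loading every key into memory at once.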

presigned_url/4 Doesn't allow default overrides

I am trying to specify a custom X-Amz-Expires value when generating a presigned_url for s3.

Calling the function like this:
presigned_url(:get, bucket, object, query_params: [{"X-Amz-Expires", 60 * 5}])

This adds the query param, but also keeps the original. I am getting a signature mismatch when trying to use the link. I believe this is because the request is signed before the query_params are added?

Pre-signing doesn't work for multipart uploads

I'm trying to re-create the AWS JS SDK code from here: https://github.com/transloadit/uppy/blob/7a137de07989f161bd16f6b56696bba9523da66e/packages/%40uppy/companion/src/server/controllers/s3.js#L178-L185

  def initiate_upload(key, type) do
    config = Application.fetch_env!(:ex_aws, :s3)

    response =
      ExAws.S3.initiate_multipart_upload(config[:bucket], key, [
        content_type: type,
        expires: 3600 * 15,
        acl: :public_read,
      ])
      |> ExAws.request!()
    {:ok, response.body}
  end

  def prepare_upload_part(key, upload_id, part_number) do
    config = Application.fetch_env!(:ex_aws, :s3)

    ExAws.S3.presigned_url(ExAws.Config.new(:s3), :put, config[:bucket], key, [
      expires_in: 3600 * 15,
      query_params: [
        partNumber: part_number,
        uploadId: upload_id,
      ]
    ])
  end

Generating a presigned URL for a part of a multipart upload:

{:ok, response} = initiate_upload("text.txt", "text/plain")
{:ok, url} = prepare_upload_part("text.txt", response.upload_id, 1)

Then trying to upload a part to that URL via the browser:

curl '<url>' -X PUT -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:64.0) Gecko/20100101 Firefox/64.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: http://localhost:8080/about' -H 'Origin: http://localhost:8080' -H 'DNT: 1' -H 'Connection: keep-alive' --data $'hello world\n'

Fails with SignatureDoesNotMatch.

Removing the query_params works, but then we're just doing a regular upload via PUT instead of a multipart upload (the part parameters are missing).

S3 Bucket name should be part of the host (not path) for compliance reasons

Environment

  • Elixir & Erlang versions (elixir --version): 1.5.2
  • ExAws version mix deps |grep ex_aws: 2.0.2
  • HTTP client version. IE for hackney do mix deps | grep hackney: 1.6.5

Current behavior

I'm not able to use an S3 bucket name in the subdomain.

https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html

How I want it:

<bucket-name>.<AWS-region>.amazonaws.com

How it actually comes out:

<AWS-region>.amazonaws.com/<bucket-name>/

Desired behavior

Either allow an option to put the bucket name in the subdomain, or always do it that way. Currently it is set up as part of the path; I'd like the option to have the bucket name be part of the host instead (if that's how you'd like it solved).

%ExAws.Operation.S3{body: "", bucket: "bucket-name", headers: %{},
 http_method: :get, params: %{"" => 1},
 parser: &ExAws.S3.Parsers.parse_list_objects/1, path: "/bucket-name//",
 resource: "", service: :s3,
 stream_builder: #Function<2.27133219/1 in ExAws.S3.list_objects/2>}
scheme, host, port: %{host: "s3-us-west-2.amazonaws.com", port: 443, scheme: "https://"}
query: %{host: "s3-us-west-2.amazonaws.com", port: 443, query: "", scheme: "https://"}

Reason

When working in an environment with compliance concerns (like PCI compliance), we want to block all outbound requests except those that we explicitly white-list. When the bucket is part of the subdomain, we can see it and allow/block it as appropriate. When it is part of the URL path, it gets encrypted inside the TLS stream, so we can't see it. This means we have to allow our environment to access ALL of S3, which is a data exfiltration risk: if someone found a vulnerability in our system, they could steal/exfiltrate data to any S3 bucket/account in AWS.

Proposed Solution

I'm happy to help solve the issue if you are open to help and could provide me some initial direction.

My first thought is to add a new option (a configuration option at the project level) that defaults to the same behavior, where the bucket name is part of the path. But when the new option is set, the bucket name would become a subdomain of the host.
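For presigned URLs at least, presigned_url accepts a :virtual_host option that moves the bucket into the hostname; a sketch (bucket and key are illustrative):

```elixir
# With virtual_host: true the bucket name becomes part of the host
# instead of the path:
{:ok, url} =
  ExAws.Config.new(:s3)
  |> ExAws.S3.presigned_url(:get, "my-bucket", "path/to/key", virtual_host: true)
# url is of the form https://my-bucket.s3.<region>.amazonaws.com/path/to/key?...
```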

[Feature Request] Multipart copy support

I'd love to see this library support a high level operation, similar to S3.upload, that would do a copy in parts transparently to the end user, like the Ruby SDK's AWS::S3::ObjectMultipartCopier.

We already have the upload_part_copy primitive; we just need to coordinate all the parts, calculate start and end ranges, and kick off a batch of upload_part_copy commands, similar to the defimpl in https://github.com/ex-aws/ex_aws_s3/blob/v2.0.1/lib/ex_aws/s3/upload.ex.

I'll try to start a PR, but I'll probably need polishing help. My plan:

  1. If user provides the size, use it; otherwise do a head_object to get the size.
  2. Create a list of {start, stop} tuples, based on given chunk size (or the standard 5MB)
  3. Use Task.async_stream to iterate through the ranges, calling upload_part_copy for each one.
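Step 2 can be sketched in a few lines (chunk size and object size below are illustrative):

```elixir
# Build {start, stop} byte-range tuples for a copy, 5 MB per part:
chunk = 5 * 1024 * 1024
size = 12_000_000

ranges =
  0..div(size - 1, chunk)
  |> Enum.map(fn i -> {i * chunk, min((i + 1) * chunk, size) - 1} end)

# => [{0, 5242879}, {5242880, 10485759}, {10485760, 11999999}]
```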

list_objects/1 with stream!/0 fails at end of "page" (after first 1000)

Environment

  • Elixir & Erlang versions 1.10.2
  • ExAws version 2.1.3
  • HTTP client version 1.16.0

Current behavior

ExAws.S3.list_objects("mybucket")
|> ExAws.stream!()
|> Stream.filter(&has_key?(&1.key))
|> Stream.map(&get_and_process_objects(&1.key, "mybucket"))
|> Enum.to_list()

After we finish processing the 1000th object, we get:

** (ExAws.Error) ExAws Request Error!
{:error, {:http_error, 403, %{body: "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>******</AWSAccessKeyId><StringToSign>AWS4-HMAC-SHA256\n20200807T233025Z\n20200807/us-east-2/s3/aws4_request\n7669d942e1f83a2c6061b9bad8a7755f6e441ec952cfc01de30327cadef84a45</StringToSign><SignatureProvided>93f03b3e14236e620bfcfcf78cfc95e6deaff6a8870687b54834998596e47b3b</SignatureProvided><StringToSignBytes>41 57 53 34 2d 48 4d 41 43 2d 53 48 41 32 35 36 0a 32 30 32 30 30 38 30 37 54 32 33 33 30 32 35 5a 0a 32 30 32 30 30 38 30 37 2f 75 73 2d 65 61 73 74 2d 32 2f 73 33 2f 61 77 73 34 5f 72 65 71 75 65 73 74 0a 37 36 36 39 64 39 34 32 65 31 66 38 33 61 32 63 36 30 36 31 62 39 62 61 64 38 61 37 37 35 35 66 36 65 34 34 31 65 63 39 35 32 63 66 63 30 31 64 65 33 30 33 32 37 63 61 64 65 66 38 34 61 34 35</StringToSignBytes><CanonicalRequest>GET\n/cariq-calamp-logs/\nmarker=2019-07-26%2B03%3A18%3A08.345831\nhost:s3.us-east-2.amazonaws.com\nx-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\nx-amz-date:20200807T233025Z\n\nhost;x-amz-content-sha256;x-amz-date\ne3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855</CanonicalRequest><CanonicalRequestBytes>47 45 54 0a 2f 63 61 72 69 71 2d 63 61 6c 61 6d 70 2d 6c 6f 67 73 2f 0a 6d 61 72 6b 65 72 3d 32 30 31 39 2d 30 37 2d 32 36 25 32 42 30 33 25 33 41 31 38 25 33 41 30 38 2e 33 34 35 38 33 31 0a 68 6f 73 74 3a 73 33 2e 75 73 2d 65 61 73 74 2d 32 2e 61 6d 61 7a 6f 6e 61 77 73 2e 63 6f 6d 0a 78 2d 61 6d 7a 2d 63 6f 6e 74 65 6e 74 2d 73 68 61 32 35 36 3a 65 33 62 30 63 34 34 32 39 38 66 63 31 63 31 34 39 61 66 62 66 34 63 38 39 39 36 66 62 39 32 34 32 37 61 65 34 31 65 34 36 34 39 62 39 33 34 63 61 34 39 35 39 39 31 62 37 38 35 32 62 38 35 35 0a 78 2d 61 6d 7a 2d 64 61 74 65 3a 32 30 32 30 30 38 30 37 54 32 33 33 30 32 35 5a 0a 0a 68 6f 73 74 3b 78 2d 61 6d 7a 2d 63 6f 6e 74 65 6e 74 2d 73 68 61 32 35 36 3b 78 2d 61 6d 7a 2d 64 61 74 65 0a 65 33 62 30 63 34 34 32 39 38 66 63 31 63 31 34 39 61 66 62 66 34 63 38 39 39 36 66 62 39 32 34 32 37 61 65 34 31 65 34 36 34 39 62 39 33 34 63 61 34 39 35 39 39 31 62 37 38 35 32 62 38 35 35</CanonicalRequestBytes><RequestId>A192E91CD4669824</RequestId><HostId>MC9cG0H9QIUgT39rQjnArUgfDoX55P9PEAJTZMYG4KXS5lJVmCxEg7UIm+NGzBo4Nqlxd5VrSpI=</HostId></Error>", headers: [{"x-amz-bucket-region", "us-east-2"}, {"x-amz-request-id", "A192E91CD4669824"}, {"x-amz-id-2", "MC9cG0H9QIUgT39rQjnArUgfDoX55P9PEAJTZMYG4KXS5lJVmCxEg7UIm+NGzBo4Nqlxd5VrSpI="}, {"Content-Type", "application/xml"}, {"Transfer-Encoding", "chunked"}, {"Date", "Fri, 07 Aug 2020 23:30:28 GMT"}, {"Server", "AmazonS3"}], status_code: 403}}}
    (ex_aws 2.1.3) lib/ex_aws.ex:66: ExAws.request!/2
    (ex_aws_s3 2.0.2) lib/ex_aws/s3/lazy.ex:7: anonymous fn/4 in ExAws.S3.Lazy.stream_objects!/3
    (ex_aws_s3 2.0.2) lib/ex_aws/s3/lazy.ex:14: anonymous fn/2 in ExAws.S3.Lazy.stream_objects!/3
    (elixir 1.10.2) lib/stream.ex:1421: Stream.do_resource/5
    (elixir 1.10.2) lib/stream.ex:1609: Enumerable.Stream.do_each/4
    (elixir 1.10.2) lib/enum.ex:3383: Enum.reverse/1
    (elixir 1.10.2) lib/enum.ex:2982: Enum.to_list/1
    (myapp 0.1.0) lib/myapp.ex:27: MyApp.retrieve_messages/0

Expected behavior

I would expect the stream to keep fetching objects until they've all been processed.

Support for S3 Select

Do you guys plan to support S3 Select any time soon?

I've hacked at it a bit and thought it was working, but once I started validating what I get back I realised the response is chunked and streaming, and to be honest it's over my head at this stage.

It would be fantastic if you could add this functionality.
The API could simply pass through the request XML and expression. The challenging part is parsing the response.

The project is much appreciated, with or without this. Thanks!

Download: get_file_size does not support HTTP/2 lowercased header names

Environment

  • Elixir & Erlang versions (elixir --version):
    Erlang/OTP 21
    Elixir (1.7.4)
  • ExAws version mix deps |grep ex_aws:
    ex_aws_s3 2.0.1
    ex_aws 2.0.2
  • HTTP client version:
    machine-gun 0.1.5

Current behavior

Error thrown on ExAws.S3.download_file

** (ArgumentError) argument error
:erlang.element(2, nil)
(ex_aws_s3) lib/ex_aws/s3/download.ex:61: ExAws.S3.Download.get_file_size/3
(ex_aws_s3) lib/ex_aws/s3/download.ex:33: ExAws.S3.Download.build_chunk_stream/2
(ex_aws_s3) lib/ex_aws/s3/download.ex:74: ExAws.Operation.ExAws.S3.Download.perform/2
(ex_aws) lib/ex_aws.ex:61: ExAws.request!/2

The reason is headers |> List.keyfind("Content-Length", 0, nil)

Expected behavior

HTTP/2 headers should also be supported: header retrieval should work for the lowercased "content-length" name as well.
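A sketch of a case-insensitive lookup (module and function names are hypothetical):

```elixir
defmodule HeaderUtil do
  # Find Content-Length regardless of header-name casing (HTTP/2 lowercases it).
  def content_length(headers) do
    Enum.find_value(headers, fn {name, value} ->
      if String.downcase(name) == "content-length", do: String.to_integer(value)
    end)
  end
end

HeaderUtil.content_length([{"content-length", "1024"}])
# => 1024
```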

Issue using list_buckets

Environment

Erlang/OTP 21 [erts-10.0.8] [source] [64-bit] [smp:12:12] [ds:12:12:10] [async-threads:1] [hipe]

Elixir 1.7.3 (compiled with Erlang/OTP 21)

Here are my project dependencies so far:

mix deps
* parse_trans 3.3.0 (Hex package) (rebar3)
  locked at 3.3.0 (parse_trans) 09765507
  ok
* mimerl 1.0.2 (Hex package) (rebar3)
  locked at 1.0.2 (mimerl) 993f9b0e
  ok
* nimble_parsec 0.4.0 (Hex package) (mix)
  locked at 0.4.0 (nimble_parsec) ee261bb5
  ok
* makeup 0.5.5 (Hex package) (mix)
  locked at 0.5.5 (makeup) 9e08dfc4
  ok
* metrics 1.0.1 (Hex package) (rebar3)
  locked at 1.0.1 (metrics) 25f094de
  ok
* bunt 0.2.0 (Hex package) (mix)
  locked at 0.2.0 (bunt) 951c6e80
  ok
* unicode_util_compat 0.4.1 (Hex package) (rebar3)
  locked at 0.4.1 (unicode_util_compat) d869e4c6
  ok
* idna 6.0.0 (Hex package) (rebar3)
  locked at 6.0.0 (idna) 689c46cb
  ok
* gen_stage 0.14.0 (Hex package) (mix)
  locked at 0.14.0 (gen_stage) 65ae7850
  ok
* poison 3.1.0 (Hex package) (mix)
  locked at 3.1.0 (poison) d9eb6366
  ok
* ssl_verify_fun 1.1.4 (Hex package) (mix)
  locked at 1.1.4 (ssl_verify_fun) f0eafff8
  ok
* configparser_ex 2.0.1 (Hex package) (mix)
  locked at 2.0.1 (configparser_ex) 71a002cb
  ok
* certifi 2.4.2 (Hex package) (rebar3)
  locked at 2.4.2 (certifi) 75424ff0
  ok
* hackney 1.14.0 (Hex package) (rebar3)
  locked at 1.14.0 (hackney) 66e29e78
  ok
* ex_aws 2.1.0 (Hex package) (mix)
  locked at 2.1.0 (ex_aws) b9265152
  ok
* ex_aws_s3 2.0.1 (Hex package) (mix)
  locked at 2.0.1 (ex_aws_s3) 9e09366e
  ok
* earmark 1.2.6 (Hex package) (mix)
  locked at 1.2.6 (earmark) b6da42b3
  ok
* credo 0.8.10 (Hex package) (mix)
  locked at 0.8.10 (credo) 261862bb
  ok
* makeup_elixir 0.8.2 (Hex package) (mix)
  locked at 0.8.2 (makeup_elixir) ecc130aa
  ok
* ex_doc 0.19.1 (Hex package) (mix)
  locked at 0.19.1 (ex_doc) 519bb9c1
  ok
* artificery 0.2.6 (Hex package) (mix)
  locked at 0.2.6 (artificery) f6029097
  ok
* distillery 2.0.9 (Hex package) (mix)
  locked at 2.0.9 (distillery) 1e03e1bd
  ok

Current behavior

ExAws.S3.list_buckets([content_type: "application/json"]) |> ExAws.request
** (ExAws.Error) Missing XML parser. Please see docs
(ex_aws_s3) lib/ex_aws/s3/parsers.ex:102: ExAws.S3.Parsers.missing_xml_parser/0

Expected behavior

I want to be able to use JSON, but I'm not sure how to specify it. It looks like you can only specify params, not headers, on that function. Or am I missing something?
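For what it's worth, S3 only returns XML for list_buckets, so the "Missing XML parser" error is fixed by adding the optional :sweet_xml dependency rather than by requesting JSON:

```elixir
# mix.exs — sweet_xml is the optional XML parser that ex_aws_s3 looks for
def deps do
  [
    {:sweet_xml, "~> 0.6"}
  ]
end
```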

Hex Docs `2.0.0` missing links to source.

Environment

  • Hex Docs 2.0.0

Current behavior

Hex docs links return 404

Expected behavior

Hex docs links carry through to the source code

Example - try source link from the page below:

https://hexdocs.pm/ex_aws_s3/ExAws.S3.html#delete_object/3

MatchErrors during upload are not handled, Task.async_stream terminates during upload_chunk

Environment

  • Elixir & Erlang versions (elixir --version):
  • Erlang/OTP 21
  • Elixir (1.7.4)
  • ExAws version mix deps |grep ex_aws: ex_aws_s3 2.0.1, ex_aws 2.0.2
  • HTTP client version: machine-gun 0.1.5

Current behaviour

Errors in ExAws.S3.upload() cannot be handled in the parent process.
Failure examples for uploads of larger files (> 5 MB); these can be reproduced in environments with high latency when uploading to AWS, and the scenarios may succeed after a retry:

[error] Task #PID<0.2306.0> started from #PID<0.1758.0> terminating
** (ExAws.Error) ExAws Request Error!
{:error, {:http_error, 403, %{body: "\nSignatureDoesNotMatchThe request signature we calculated does not match the signature you provided. Check your key and signing method.

Expected behaviour

Handle internal task errors and return {:error, :failed_to_upload} from ExAws.S3.Upload.complete.
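A containment sketch in the meantime: run the upload under a Task.Supervisor so a crash in the library's internal Task.async_stream does not bring down the caller (MyApp.TaskSupervisor is a hypothetical, already-started supervisor; paths are illustrative):

```elixir
task =
  Task.Supervisor.async_nolink(MyApp.TaskSupervisor, fn ->
    "path/to/big/file"
    |> ExAws.S3.Upload.stream_file()
    |> ExAws.S3.upload("my-bucket", "path/on/s3")
    |> ExAws.request()
  end)

# Wait up to five minutes, then shut the task down and report failure.
case Task.yield(task, :timer.minutes(5)) || Task.shutdown(task) do
  {:ok, {:ok, _resp}} -> :ok
  _ -> {:error, :failed_to_upload}
end
```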

Current process crashes when downloading file

Environment

  • Elixir & Erlang versions (elixir --version):
Erlang/OTP 22 [erts-10.5] [source] [64-bit] [smp:12:12] [ds:12:12:10] [async-threads:1] [hipe]

Elixir 1.10.2 (compiled with Erlang/OTP 22)
  • ExAws version
* ex_aws 2.1.3 (Hex package) (mix)
  locked at 2.1.3 (ex_aws) 0bdbe2ae
* ex_aws_s3 2.0.2 (Hex package) (mix)
  locked at 2.0.2 (ex_aws_s3) 0569f5b2
  • HTTP client version. IE for hackney do mix deps | grep hackney
* hackney 1.15.2 (Hex package) (rebar3)
  locked at 1.15.2 (hackney) e0100f8e

Current behavior

Hi, when trying to download multiple files at once I'm getting the following error. The problem is that it seems to cause the current process to crash, since it does not return an error tuple. I think this is because the download operation uses Task.async_stream, which links to the current process, but I'm not sure if that's the reason; see https://github.com/ex-aws/ex_aws_s3/blob/master/lib/ex_aws/s3/download.ex#L71-L93 and from the docs:

The tasks will be linked to the current process, similarly to async/1.

https://hexdocs.pm/elixir/Task.html#async_stream/5

Besides that, I'm not seeing any other stack trace, error log, or anything that would help me diagnose the problem. In the current process I'm logging errors, and I also tried rescuing without success, which is why I think this may be the reason.

May 12 15:35:12 titan-media-parser-01 media_parser[1307]:     Args: [#Function<0.39970933/1 in ExAws.Operation.ExAws.S3.Download.download_to/3>, [%{end_byte: 16252927999, start_byte: 16200499200}]]
May 12 15:35:12 titan-media-parser-01 media_parser[1307]: Function: &:erlang.apply/2
May 12 15:35:12 titan-media-parser-01 media_parser[1307]:     (stdlib 3.12) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
May 12 15:35:12 titan-media-parser-01 media_parser[1307]:     (elixir 1.10.2) lib/task/supervised.ex:35: Task.Supervised.reply/5
May 12 15:35:12 titan-media-parser-01 media_parser[1307]:     (elixir 1.10.2) lib/task/supervised.ex:90: Task.Supervised.invoke_mfa/2
May 12 15:35:12 titan-media-parser-01 media_parser[1307]:     (ex_aws_s3 2.0.2) lib/ex_aws/s3/download.ex:76: anonymous fn/4 in ExAws.Operation.ExAws.S3.Download.download_to/3
May 12 15:35:12 titan-media-parser-01 media_parser[1307]:     (ex_aws_s3 2.0.2) lib/ex_aws/s3/download.ex:21: ExAws.S3.Download.get_chunk/3
May 12 15:35:12 titan-media-parser-01 media_parser[1307]:     (ex_aws 2.1.3) lib/ex_aws.ex:66: ExAws.request!/2
May 12 15:35:12 titan-media-parser-01 media_parser[1307]: {:error, :checkout_timeout}
May 12 15:35:12 titan-media-parser-01 media_parser[1307]: ** (ExAws.Error) ExAws Request Error!
May 12 15:35:12 titan-media-parser-01 media_parser[1307]: 15:35:12.780 [error] Task #PID<0.7083.0> started from #PID<0.8148.0> terminating
May 12 15:35:12 titan-media-parser-01 media_parser[1307]: 15:35:12.779 [warn]  ExAws: HTTP ERROR: :checkout_timeout for URL: "..." ATTEMPT: 10

Expected behavior

The current process should not crash; an error tuple should be returned instead.

Cannot upload file to Google Cloud Storage bucket

Environment

  • Elixir v1.9.4
  • ex_aws v2.1.0, ex_aws_s3 v2.0.1
  • hackney v1.15.2

Current behavior

When uploading a file to a GCS bucket, a 400 is returned. The code and the returned response are shown below.

upload.tempfile
|> ExAws.S3.Upload.stream_file
|> ExAws.S3.upload(bucket, s3_name, [
  {:content_type, upload.content_type},
])
|> ExAws.request
{:error, {:http_error, 400, %{body: "<?xml version='1.0' encoding='UTF-8'?><Error><Code>InvalidArgument</Code><Message>Invalid argument.</Message><Details>POST object expects Content-Type multipart/form-data</Details></Error>", headers: [{"X-GUploader-UploadID","XXX"}, {"Content-Type", "application/xml; charset=UTF-8"}, {"Content-Length", "188"}, {"Vary", "Origin"}, {"Date", "Mon, 20 Jan 2020 17:08:20 GMT"}, {"Server","UploadServer"}], status_code: 400}}}

Expected behavior

The file is uploaded and a 2xx is returned.
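The error suggests GCS's S3-compatible XML layer is treating the initiate-multipart POST as a browser form upload. A workaround sketch, at the cost of holding the whole file in memory, is a single PUT:

```elixir
# Single-request upload avoids the S3 multipart POST that GCS rejects:
ExAws.S3.put_object(bucket, s3_name, File.read!(upload.tempfile),
  content_type: upload.content_type
)
|> ExAws.request()
```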

Support AWS S3 Accelerated Transfer

Environment

$ elixir -v
Erlang/OTP 21 [erts-10.1] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [hipe] [dtrace]

Elixir 1.7.3 (compiled with Erlang/OTP 21)

$ mix deps | grep "ex_aws\|hackney"
* hackney 1.14.3 (Hex package) (rebar3)
  locked at 1.14.3 (hackney) b5f6f5dc
* ex_aws 2.0.2 (Hex package) (mix)
  locked at 2.0.2 (ex_aws) 8df2f96f
* ex_aws_sns 2.0.0 (Hex package) (mix)
  locked at 2.0.0 (ex_aws_sns) c51dda8c
* ex_aws_firehose 2.0.0 (Hex package) (mix)
  locked at 2.0.0 (ex_aws_firehose) afb1b252
* ex_aws_s3 2.0.1 (Hex package) (mix)
  locked at 2.0.1 (ex_aws_s3) 9e09366e
* ex_aws_elastic_transcoder 2.0.0 (Hex package) (mix)
  locked at 2.0.0 (ex_aws_elastic_transcoder) d51ad7b7

Current behavior

AWS S3 Accelerated Transfer is not supported

Expected behavior

Refer: ex-aws/ex_aws#418

put_bucket_policy argument error

Environment

  • Elixir & Erlang versions (elixir --version)
    Elixir 1.7.4 (compiled with Erlang/OTP 21)
  • ExAws version mix deps |grep ex_aws
    ex_aws 2.1.0 (Hex package) (mix)
  • HTTP client version. IE for hackney do mix deps | grep hackney
    hackney 1.12.1 (Hex package) (rebar3)

Current behavior

I believe the map is not serialized to a string. Therefore the content hash cannot be calculated, because :crypto.hash/2 needs a string (iodata).

action: ExAws.S3.put_bucket_policy("testing", %{"a" => "b"}) |> ExAws.request!

result: ** (ArgumentError) argument error :erlang.iolist_to_binary(%{"a" => "b"}) (crypto) crypto.erl:368: :crypto.hash/2 (ex_aws) lib/ex_aws/auth/utils.ex:20: ExAws.Auth.Utils.hash_sha256/1 (ex_aws) lib/ex_aws/operation/s3.ex:31: ExAws.Operation.ExAws.Operation.S3.perform/2

Expected behavior

action: ExAws.S3.put_bucket_policy("testing", %{"a" => "b"}) |> ExAws.request!

This result was obtained by breaking the contract and supplying a string instead of a map:
ExAws.S3.put_bucket_policy("testing", "testing") |> ExAws.request!

result: {:error, {:http_error, 400, %{ body: "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>MalformedPolicy</Code><Message>Policy has invalid resource.</Message><Resource>/testing/</Resource><RequestId>1568816DC6089534</RequestId><HostId>3L137</HostId></Error>", headers: [ {"Accept-Ranges", "bytes"}, {"Content-Security-Policy", "block-all-mixed-content"}, {"Content-Type", "application/xml"}, {"Server", "Minio/RELEASE.2018-10-25T01-27-03Z (linux; amd64)"}, {"Vary", "Origin"}, {"X-Amz-Request-Id", "1568816DC6089534"}, {"X-Xss-Protection", "1; mode=block"}, {"Date", "Mon, 19 Nov 2018 10:59:40 GMT"}, {"Transfer-Encoding", "chunked"} ], status_code: 400 }}}
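A workaround sketch: serialize the policy yourself before calling put_bucket_policy (Jason is assumed as the JSON codec; the policy below is illustrative):

```elixir
policy = %{
  "Version" => "2012-10-17",
  "Statement" => [
    %{
      "Effect" => "Allow",
      "Principal" => "*",
      "Action" => "s3:GetObject",
      "Resource" => "arn:aws:s3:::testing/*"
    }
  ]
}

# Pass a JSON string, not a map, so hashing and signing can work on iodata:
ExAws.S3.put_bucket_policy("testing", Jason.encode!(policy))
|> ExAws.request!()
```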

Release of Download Stream functionality

My team is interested in using the recently added ExAws.stream!() functionality for ExAws.S3.download_file. For now we're pulling from this repo with a pinned commit, but we'd like to use a proper hex version if possible.

When do you expect to release this functionality?

If this is the wrong place to ask this kind of question, just let me know where I should. The submission guidelines didn't seem to cover this.

#60

SignatureDoesNotMatch "Regression" with "marker" on Digital Ocean Spaces

The following request worked great with ex_aws 1.15 but seems to be broken on 2.1.3/2.0.2.

Note, I'm using Digital Ocean spaces. I can't speak to whether this works with AWS, but since it used to work with spaces hopefully we can get it working again.

Environment

  • Elixir & Erlang versions:
    Erlang/OTP 22 [erts-10.7.2] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:1]

Elixir 1.10.3 (compiled with Erlang/OTP 22)

  • ExAws version:

ex_aws (Hex package) (mix)
locked at 2.1.3 (ex_aws) 0bdbe2ae
ex_aws_s3 (Hex package) (mix)
locked at 2.0.2 (ex_aws_s3) 0569f5b2

  • HTTP client version:

hackney 1.16.0 (Hex package) (rebar3)
locked at 1.16.0 (hackney) 3bf0bebb

Current behavior

> ExAws.S3.list_objects("my-bucket", marker: "Iceland 2017/_DSC3120.jpg") |> ExAws.request()
[debug] ExAws: Request URL: "https://sfo2.digitaloceanspaces.com/my-bucket/?marker=Iceland%2B2017%2F_DSC3120.jpg" HEADERS: [{"Authorization", "AWS4-HMAC-SHA256 Credential=XXXX/20200609/us-east-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=fdec7497df0b8ad5b627585eabca09cb90edba71dda981b27bf12c55214af00f"}, {"host", "sfo2.digitaloceanspaces.com"}, {"x-amz-date", "20200609T123143Z"}, {"x-amz-content-sha256", "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}] BODY: "" ATTEMPT: 1
{:error,
 {:http_error, 403,
  %{
    body: "<?xml version=\"1.0\" encoding=\"UTF-8\"?><Error><Code>SignatureDoesNotMatch</Code><RequestId>tx000000000000132977532-005edf8130-443fd0-sfo2a</RequestId><HostId>443fd0-sfo2a-sfo</HostId></Error>",
    headers: [
      {"Content-Length", "190"},
      {"x-amz-request-id", "tx000000000000132977532-005edf8130-443fd0-sfo2a"},
      {"Accept-Ranges", "bytes"},
      {"Content-Type", "application/xml"},
      {"Date", "Tue, 09 Jun 2020 12:31:44 GMT"},
      {"Strict-Transport-Security",
       "max-age=15552000; includeSubDomains; preload"}
    ],
    status_code: 403
  }}}

As you can see this worked OK on 1.15:

> ExAws.S3.list_objects("my-bucket", marker: "Iceland 2017/_DSC3120.jpg") |> ExAws.request()                                                                                                
[debug] Request URL: "https://sfo2.digitaloceanspaces.com/my-bucket/?marker=Iceland+2017%2F_DSC3120.jpg"                                                                                          
[debug] Request HEADERS: [{"Authorization", "AWS4-HMAC-SHA256 Credential=XXXX/20200609/us-east-1/s3/aws4_request,SignedHeaders=content-length;host;x-amz-content-sha256;x-amz-date,Signature
=57df96fd870c65f88b53f86873f689dbd8aa5c2078ea2e57f9990054b1c0d2c3"}, {"host", "sfo2.digitaloceanspaces.com"}, {"x-amz-date", "20200609T123318Z"}, {"content-length", 0}, {"x-amz-content-sha256", "e3b0c4429
8fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}]                                                                                                                                                  
[debug] Request BODY: ""                                                                              
{:ok,                                                                                                 
 %{                                                                                                   
   body: %{.....

The only small difference I notice between the requests is how the URL is encoded in the debugging output: Iceland%2B2017 vs. Iceland+2017. However, that difference may exist only in the debug output.

I'm not sure where to look here to debug more, but any help is appreciated.

New Release

Could we cut a new release with recent changes when possible?

Thanks Again

get_object doesn't support an object version

Environment

  • Elixir & Erlang versions (elixir --version):
    Elixir 1.9.1 (compiled with Erlang/OTP 20)

  • ExAws version mix deps | grep ex_aws

  * ex_aws 2.1.1 (Hex package) (mix)
    locked at 2.1.1 (ex_aws) 1e4de210
  * ex_aws_s3 2.0.2 (Hex package) (mix)
    locked at 2.0.2 (ex_aws_s3) c0258bbd

Current behavior

If you have versioning enabled on your S3 objects, there is no way to specify an object version when using get_object.

Expected behavior

Be able to pass an object_version option to fetch something other than the latest version.

Please take a look at my proposed solution here: #80
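Until an option exists, a workaround sketch is to reach into the returned ExAws.Operation.S3 struct and add the versionId query param directly (this depends on struct internals, so it may break across versions; the version id is illustrative):

```elixir
version_id = "3HL4kqtJlcpXroDTDmJ"

op = ExAws.S3.get_object("my-bucket", "path/to/key")
# The operation's params map becomes the request's query string:
op = %{op | params: Map.put(op.params, "versionId", version_id)}
ExAws.request(op)
```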

upload/4 result is not parsed

Environment

  • Elixir & Erlang versions (elixir --version):
Erlang/OTP 22 [erts-10.5] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:1] [hipe]

Elixir 1.9.1 (compiled with Erlang/OTP 21)
  • ExAws version mix deps |grep ex_aws
* ex_aws 2.1.1 (Hex package) (mix)
  locked at 2.1.1 (ex_aws) 1e4de210
* ex_aws_s3 2.0.2 (Hex package) (mix)
  locked at 2.0.2 (ex_aws_s3) c0258bbd
  • HTTP client version. IE for hackney do mix deps | grep hackney
* hackney 1.15.1 (https://github.com/benoitc/hackney.git) (rebar3)

Current behavior

When we use the upload/4 function, the response body is not parsed, because parse_complete_multipart_upload does nothing:

def parse_complete_multipart_upload(val), do: val

Expected behavior

Response body is parsed

Proposed Solution

Right now the parse_upload function parses CompleteMultipartUploadResult but is not used anywhere:

def parse_upload({:ok, resp = %{body: xml}}) do

So, as I see it, there are two options:

  1. Delete the current implementation of parse_complete_multipart_upload, because it does nothing right now, and rename parse_upload to parse_complete_multipart_upload
  2. Pass &Parsers.parse_upload/1 as parser here
    request(:post, bucket, object, [params: %{"uploadId" => upload_id}, body: body], %{parser: &Parsers.parse_complete_multipart_upload/1})

I will submit a PR with whichever solution you consider right. Personally, I prefer the first one.

MalformedXML for S3 upload, but only for empty file

NOTE: I am using the latest ex_aws and ex_aws_s3 from github, to get the 2.0 functionality. If this is related to my problem and the code is a work in progress, feel free to close this ticket. 😄

I am trying to upload a file with the following code.

"filename"
|> ExAws.S3.Upload.stream_file
|> ExAws.S3.upload("bucket-name", "test/filename")
|> ExAws.request(ExAws.Config.new(:s3, [region: "my-region"]))

In my small amount of testing so far, this works for files with content but fails if I try to upload an empty file (i.e. one I created with touch).

** (ExAws.Error) ExAws Request Error!

{:error, {:http_error, 400, %{body: "<Error><Code>MalformedXML</Code><Message>The XML you provided was not well-formed or did not validate against our published schema</Message><RequestId>...</RequestId><HostId>...</HostId></Error>", headers: [{"x-amz-request-id", "..."}, {"x-amz-id-2", "..."}, {"Content-Type", "application/xml"}, {"Transfer-Encoding", "chunked"}, {"Date", "Tue, 07 Nov 2017 20:20:55 GMT"}, {"Connection", "close"}, {"Server", "AmazonS3"}], status_code: 400}}}

    (ex_aws) lib/ex_aws.ex:46: ExAws.request!/2

Relevant deps versions in case this is important...

* sweet_xml (Hex package) (mix)
  locked at 0.6.5 (sweet_xml) dd9cde44
  the dependency build is outdated, please run "mix deps.compile"
* ex_aws (https://github.com/ex-aws/ex_aws) (mix)
  locked at 74c97c2
  the dependency build is outdated, please run "mix deps.compile"
* ex_aws_s3 (https://github.com/ex-aws/ex_aws_s3) (mix)
  locked at 1dfb158

Any ideas what might be going wrong? I was wondering if possibly some unique AWS codes happened to have special chars in them or something, but I didn't notice anything beyond /, + and =.
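One guess: for an empty file the stream yields zero parts, so completing the multipart upload sends an empty part list, producing malformed CompleteMultipartUpload XML. A hypothetical wrapper that falls back to put_object for zero-byte files:

```elixir
defmodule SafeUpload do
  # Multipart upload for non-empty files; plain PUT for empty ones.
  def upload(path, bucket, key) do
    case File.stat!(path).size do
      0 ->
        ExAws.S3.put_object(bucket, key, "") |> ExAws.request()

      _ ->
        path
        |> ExAws.S3.Upload.stream_file()
        |> ExAws.S3.upload(bucket, key)
        |> ExAws.request()
    end
  end
end
```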

"The AWS Access Key Id you provided does not exist in our records" when listing objects in bucket

Environment

Erlang/OTP 20 [erts-9.2.1] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:10] [hipe] [kernel-poll:false] [dtrace]

Elixir 1.6.1 (compiled with OTP 20)

ex_aws_s3 v2.0.0

Current behavior

I am getting an unexpected error from AWS when trying to do a simple list operation on a bucket:

iex(2)> ExAws.S3.list_objects("my-bucket") |> ExAws.request!(region: "us-west-2")
** (UndefinedFunctionError) function Poison.decode/1 is undefined (module Poison is not available)
    Poison.decode("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>REMOVED</AWSAccessKeyId><RequestId>REMOVED</RequestId><HostId>REMOVED</HostId></Error>")
    (ex_aws) lib/ex_aws/request.ex:58: ExAws.Request.client_error/2
    (ex_aws) lib/ex_aws/request.ex:42: ExAws.Request.request_and_retry/7
    (ex_aws) lib/ex_aws/operation/s3.ex:40: ExAws.Operation.ExAws.Operation.S3.perform/2
    (ex_aws) lib/ex_aws.ex:61: ExAws.request!/2

When I use the AWS cli on the same terminal session (AWS session set through env variables with aws-vault) I can list the objects without any issues.

aws s3 ls s3://my-bucket --region us-west-2

Am I missing something? It seems like ex_aws_s3 is performing the request with unexpected parameters. Any ideas on what I am missing?

Thanks!
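Note that the stack trace shows the immediate failure is a missing JSON codec (Poison is ex_aws's default), which masks the underlying InvalidAccessKeyId response; the credentials problem is separate. A sketch of the codec fix:

```elixir
# Either add {:poison, "~> 3.0"} to deps, or point ex_aws at another codec
# in config.exs (Jason assumed to be in deps):
config :ex_aws, json_codec: Jason
```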
