martide / arc_gcs
Provides an Arc backend for Google Cloud Storage
License: Apache License 2.0
Hi, I'm probably missing something obvious, but I can't get my files deleted; everything else works (arc 0.11.0, arc_gcs 0.2.3).
Here's my uploader:
defmodule Forluxury.Image do
  use Arc.Definition
  use Arc.Ecto.Definition

  @versions [:original, :thumb]
  @acl :public_read

  def transform(:thumb, _) do
    {:convert, "-resize 75x75 -background white -gravity center -extent 75x75"}
  end

  def transform(:original, _) do
    {:convert, "-resize 350x350 -background white -gravity center -extent 350x350"}
  end

  def filename(version, _) do
    version
  end

  def storage_dir(_version, {_file, scope}) do
    if scope do
      "images/#{scope.auction_id}/#{scope.image_uuid}"
    end
  end

  def __storage, do: Arc.Storage.GCS

  def gcs_object_headers(_, {file, _scope}) do
    [content_type: MIME.from_path(file.file_name)]
  end
end
The fields auction_id and image_uuid exist in the database and are populated (the generated URLs are correct); I don't delete the database record before trying to delete the file.
In iex:
iex> path = Forluxury.Image.url({img.image, img}, :thumb)
"https://storage.googleapis.com/ddd-images-dev/images/6b00f8c4-805b-4c7d-be4a-b3c34f60e7f8/e59279d8-37b1-11eb-aaa3-7c6d628fbb76/thumb.JPG?v=63774471479"
iex> Image.delete({path, img})
:ok
I'm not sure what I'm missing, but the delete is simply ignored. I've also tried removing the "?v=xxxx" part, with the same result; likewise for the :original version or with no version specified.
My service account has "Storage Object Admin" and "Storage Object Creator" permissions.
Any pointers would be greatly appreciated, as would suggestions for diagnosing this further. I'm not sure what else to try.
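For reference, Arc's delete/1 generally expects the same {file, scope} shape that was stored, i.e. the stored file value that arc_ecto keeps in the schema field, rather than a full public URL. A minimal sketch, assuming img.image holds that stored value:

```elixir
# Pass the stored Arc file value, not the output of url/2;
# delete/1 reconstructs the remote path from filename/2 and storage_dir/2.
Forluxury.Image.delete({img.image, img})
```

If a URL is passed instead, the derived remote path won't match any object, which could explain a delete that "succeeds" without removing anything.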
I'm currently using arc_ecto 0.11.3 and arc_gcs 0.2.3. The error looks something like:
"Value 'null' in content does not agree with value '<my-test-bucket>'. This can happen when a value set through a parameter is inconsistent with a value set in the request."
Looking at the differences between 0.1 and this version, it looks like you're now using Tesla for binary uploads, which I suspect is what this use case exercises.
I was wondering why there is always a check whether the file exists. Is this required for signed URLs, which were introduced in #2?
When using the standard host ("storage.googleapis.com"), URLs contain the name of the cloud bucket in the first path segment, for example: https://storage.googleapis.com/my-bucket/path/to/object.png
However, when a custom load balancer / domain is configured on GCP, the bucket name should not be there, since the load balancer handles the routing. URLs should therefore look like: https://my-domain.com/path/to/object.png
Right now the bucket name is left in the path, which results in incorrect URLs being generated. If this is a valid issue, I'm happy to put together a PR with a fix!
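For illustration, the fix could take roughly this shape (a sketch only; build_url/1 and bucket/0 are hypothetical names, not the library's actual internals): skip the bucket segment whenever an asset_host is configured.

```elixir
# Hypothetical URL builder: only prepend the bucket on the default
# Google host; a custom asset_host routes by path alone.
defp build_url(path) do
  case Application.get_env(:arc, :asset_host) do
    nil -> "https://storage.googleapis.com/#{bucket()}/#{path}"
    host -> "https://#{host}/#{path}"
  end
end
```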
We just updated arc_gcs from v0.1.2 to v0.2.3 and now get a FunctionClauseError when uploading an image.
** (FunctionClauseError) no function clause matching in URI.encode/2
    (elixir) lib/uri.ex:293: URI.encode(nil, &URI.char_unreserved?/1)
    (google_api_storage) lib/google_api/storage/v1/api/objects.ex:732: GoogleApi.Storage.V1.Api.Objects.storage_objects_insert_simple/7
    (elixir) lib/task/supervised.ex:90: Task.Supervised.invoke_mfa/2
    (elixir) lib/task/supervised.ex:35: Task.Supervised.reply/5
    (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
Function: #Function<1.111884395/0 in Arc.Actions.Store.async_put_version/3>
    Args: []
17:28:07.579 pid=<0.4923.0> [error] Ranch protocol #PID<0.4924.0> of listener Slab.Web.Endpoint.HTTPS (connection #PID<0.4923.0>, stream id 1) terminated
** (exit) :function_clause
For context, the bucket name is our custom domain for assets, so we've set asset_host: to the domain name and bucket: to nil to avoid an unnecessary prefix on all paths. This is our Arc config:
config :arc,
  storage: Arc.Storage.GCS,
  bucket: nil,
  asset_host: "static.myapp.com"
I also tried setting bucket: "" but that just gives a different error.
This seems to be the line responsible for the error: arc_gcs/lib/arc/storage/gcs.ex, line 133 (commit 0ef6308).
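One possible direction (a sketch, not the library's actual code; bucket!/0 is a hypothetical helper): fail with a descriptive error when no bucket is configured instead of letting nil reach URI.encode/2.

```elixir
# Hypothetical guard: raise a clear error rather than passing nil down
# into GoogleApi.Storage, where it crashes in URI.encode/2.
defp bucket! do
  case Application.get_env(:arc, :bucket) do
    nil -> raise ArgumentError, "arc_gcs requires :bucket even when :asset_host is set"
    {:system, var} -> System.get_env(var)
    name -> name
  end
end
```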
Files with spaces in the file name cannot be downloaded using the expected filename. For example, a file uploaded as:
kitty test +1.jpg
ends up stored as:
kitty+test++1.jpg
I noticed this recently after upgrading to Elixir 1.7.x and OTP 21, running on Fedora 29; I did not see this problem before that, but can't be sure that's what is causing the issue.
Doing a simple URI.encode() in the do_put() function seems to resolve this for me. See this commit in my fork for the change; I'm happy to submit it as a PR if you like.
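The change is roughly this shape (a sketch based on the issue's description; the variable names are illustrative, not copied from the library):

```elixir
# Percent-encode the object name before building the upload request, so
# "kitty test +1.jpg" becomes "kitty%20test%20+1.jpg" rather than having
# its spaces turned into "+" on the way to GCS.
encoded_path = URI.encode(gcs_key)
```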
I have an uploader with a custom filename defined along the lines of:
defmodule Radar.Uploaders.Flag do
  @moduledoc """
  Upload country flag images for use throughout the platform.
  """
  use Arc.Definition
  # Include ecto support (requires package arc_ecto installed):
  use Arc.Ecto.Definition

  # allow public read of all files
  # https://cloud.google.com/storage/docs/xml-api/reference-headers#xgoogacl
  @acl "public-read"

  @versions [:original, :main, :list]

  # Whitelist file extensions:
  def validate({file, _}) do
    ext_name = Path.extname(file.file_name) |> String.downcase()
    ~w(.jpg .jpeg .png) |> Enum.member?(ext_name)
  end

  # Define a thumbnail transformation:
  def transform(:main, _) do
    resize_to_limit("320x320")
  end

  def transform(:list, _) do
    resize_to_limit("80x80")
  end

  def resize_to_limit(size_str) do
    {:convert, "-strip -thumbnail #{size_str}\> -format jpg", :jpg}
  end

  # Override the persisted filenames:
  def filename(version, {file, scope}) do
    name = Path.basename(file.file_name, Path.extname(file.file_name))
    "#{scope.id}_#{version}_#{name}"
  end

  # Override the storage directory:
  def storage_dir(_version, {_file, scope}) do
    "uploads/countries/#{scope.country_code}"
  end

  # Provide a default URL if there hasn't been a file uploaded
  def default_url(version, _scope) do
    "/images/flag/missing_#{version}.jpg"
  end

  # Specify custom headers for s3 objects
  # Available options are [:cache_control, :content_disposition,
  #   :content_encoding, :content_length, :content_type,
  #   :expect, :expires, :storage_class, :website_redirect_location]
  #
  # def s3_object_headers(version, {file, scope}) do
  #   [content_type: Plug.MIME.path(file.file_name)]
  # end
end
The urls are returned as follows:
iex(10)> Radar.Uploaders.Flag.urls({c.flag, c})
%{
list: "https://storage.googleapis.com/company-bucket/uploads/countries/TH/93c31206-0159-41b3-a255-83417b792e22_list_TH.jpg?v=63685274697",
main: "https://storage.googleapis.com/company-bucket/uploads/countries/TH/93c31206-0159-41b3-a255-83417b792e22_main_TH.jpg?v=63685274697",
original: "https://storage.googleapis.com/company-bucket/uploads/countries/TH/93c31206-0159-41b3-a255-83417b792e22_original_TH.jpg?v=63685274697"
}
However, the filenames stored on the server are:
https://storage.googleapis.com/company_bucket/uploads/countries/TH/93c31206-0159-41b3-a255-83417b792e22_list_93c31206-0159-41b3-a255-83417b792e22_list_img.jpg
https://storage.googleapis.com/company_bucket/uploads/countries/TH/93c31206-0159-41b3-a255-83417b792e22_main_93c31206-0159-41b3-a255-83417b792e22_main_img.jpg
https://storage.googleapis.com/company_bucket/uploads/countries/TH/93c31206-0159-41b3-a255-83417b792e22_original_93c31206-0159-41b3-a255-83417b792e22_original_img.jpg
i.e. the file posted to the server has a filename of the form #{scope.id}_#{version}_#{scope.id}_#{version}_#{name}
when it should simply be #{scope.id}_#{version}_#{name}
Any ideas?
Your help is greatly appreciated.
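One hedged workaround, assuming the doubled prefix comes from filename/2 being applied again to a temp file that was already renamed during transformation: make filename/2 idempotent by deriving the base name from something that doesn't change between passes. The scope field used here (flag_base_name) is hypothetical:

```elixir
# Hypothetical variant: the base name comes from the scope, not from
# file.file_name, so applying filename/2 twice yields the same result.
def filename(version, {_file, scope}) do
  "#{scope.id}_#{version}_#{scope.flag_base_name}"
end
```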
Hi, I keep getting the following error message when trying to upload an image:
errors: [file: {"is invalid", [type: Bounce.ImageFile.Type, validation: :cast]}]
From what I understood on the arc repo, this could have something to do with the upload failing. Is there any way to turn on debugging for the Google Cloud upload?
The CHANGELOG.md file in this package hasn't been updated since v0.2.0. Since then, the package version has moved on to v0.2.3. Is this file still being actively used? If so, will it be backfilled with the changes made since?
I'm asking because there have been some breaking changes from v0.2.1 to v0.2.3 which I've encountered and @dependabot has no changelog to present.
Hello - thank you very much for your hard work on this library.
In the course of using this library, I've occasionally encountered an error case where we can end up with a "no matching right hand side value" error in the private get_token function:
defp get_token do
  {:ok, %{token: token}} = Token.for_scope(@full_control_scope)
  token
end
The problem is that Token.for_scope/1 can return an :error tuple, such as:
{:error, %HTTPoison.Error{id: nil, reason: :nxdomain}}
It seems to me that this function and its callers should be updated to handle that error case - perhaps ultimately just returning the error tuple in those cases.
What do you think? I'd be very happy to work on this and open a PR if you thought that was an agreeable way forward.
Thanks much!
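A sketch of what that could look like, assuming callers are updated to match on the tuple (the {:ok, token} return shape here is my suggestion, not the current API):

```elixir
# Propagate Goth failures instead of crashing with a MatchError.
defp get_token do
  case Token.for_scope(@full_control_scope) do
    {:ok, %{token: token}} -> {:ok, token}
    {:error, _reason} = error -> error
  end
end
```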
The published release 0.0.4 isn't visible on the repo's releases page, because the tag hasn't been created or pushed.
We recently bumped arc_gcs to 0.2. We have some code uploading data each night to Google Cloud Storage. It has been working flawlessly with 0.1.2 for a long time, but after our upgrade we get the following:
result = Segment.store(%{filename: filename, binary: some_data})
Results in:
{
:error,
[
%Tesla.Env{
__client__: %Tesla.Client{
adapter: nil,
fun: nil,
post: [],
pre: [
{Tesla.Middleware.Headers, :call, [[{"authorization", "Bearer XXX"}]]}
]
},
__module__: GoogleApi.Storage.V1.Connection,
body: "Missing end boundary in multipart body.",
headers: [
{"date", "Wed, 18 Dec 2019 08:36:20 GMT"},
{"server", "UploadServer"},
{"content-length", "39"},
{"content-type", "text/plain; charset=utf-8"},
{"x-guploader-uploadid", "XXX"},
{"alt-svc",
"quic=\":443\"; ma=2592000; v=\"46,43\",h3-Q050=\":443\"; ma=2592000,h3-Q049=\":443\"; ma=2592000,h3-Q048=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000"}
],
method: :post,
opts: [],
query: [uploadType: "multipart", predefinedAcl: "private"],
status: 400,
url: "https://www.googleapis.com/upload/storage/v1/b/XXX/o"
}
]
}
With the following appearing in the logs:
[warn] Received unexpected :ssl data on {:sslsocket, {:gen_tcp, #Port<0.2762>, :tls_connection, :undefined}, [#PID<0.2852.0>, #PID<0.2851.0>]}
Data: "HTTP/1.0 400 Bad Request\r\nContent-Type: text/html; charset=UTF-8\r\nReferrer-Policy: no-referrer\r\nContent-Length: 1555\r\nDate: Wed, 18 Dec 2019 08:36:20 GMT\r\n\r\n<!DOCTYPE html>\n<html lang=en>\n <meta charset=utf-8>\n <meta name=viewport content=\"initial-scale=1, minimum-scale=1, width=device-width\">\n <title>Error 400 (Bad Request)!!1</title>\n <style>\n *{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}\n </style>\n <a href=//www.google.com/><span id=logo aria-label=Google></span></a>\n <p><b>400.</b> <ins>That’s an error.</ins>\n <p>Your client has issued a malformed or illegal request. <ins>That’s all we know.</ins>\n"
MFA: :undefined
Request: :undefined
Session: {:session, {{'www.googleapis.com', 443}, #PID<0.2850.0>}, false, :https, {:sslsocket, {:gen_tcp, #Port<0.2762>, :tls_connection, :undefined}, [#PID<0.2852.0>, #PID<0.2851.0>]}, {:essl, []}, 1, :keep_alive, true}
Status: :keep_alive
StatusLine: :undefined
Profile: :httpc_manager
Hi, I saw that Google now also provides a library: https://hex.pm/packages/google_api_storage
Maybe this lib should rely on the official Google lib?
Master seems to support goth 1.0, but as far as I know no new release of arc_gcs has been published to hex.pm. Would a version bump be possible? Thanks!
Currently, the timeouts for send and receive are just the HTTPoison defaults, 8 and 5 seconds respectively.
I get timeouts from GCS on a small upload of 600 KB. Setting the :recv_timeout option remedies that, but there's no way to do that in this package.
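A sketch of what a configurable timeout could look like (the :request_opts key is hypothetical, not an existing arc_gcs option):

```elixir
# In config.exs (hypothetical configuration key):
config :arc_gcs, request_opts: [recv_timeout: 30_000]

# Inside the storage module, merged into each HTTPoison request:
opts = Application.get_env(:arc_gcs, :request_opts, [])
HTTPoison.put(url, body, headers, opts)
```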