shrine-google_cloud_storage's Introduction

Shrine::Storage::GoogleCloudStorage

Provides Google Cloud Storage (GCS) storage for Shrine.

Installation

gem "shrine-google_cloud_storage"

Authentication

The GCS plugin uses the google-cloud-storage gem. Please refer to its documentation for setting up authentication.
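
For example, google-cloud-storage can be configured once at boot with a service-account keyfile. This is only a sketch (the project ID and keyfile path are illustrative), assuming the storage falls back to the gem's configured defaults when no explicit credentials are passed:

require "google/cloud/storage"

# Configure default credentials before any Shrine::Storage::GoogleCloudStorage
# is created (project ID and keyfile path are placeholders).
Google::Cloud::Storage.configure do |config|
  config.project_id  = "my-project"
  config.credentials = "/path/to/service-account.json"
end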

Usage

require "shrine/storage/google_cloud_storage"

Shrine.storages = {
  cache: Shrine::Storage::GoogleCloudStorage.new(bucket: "cache"),
  store: Shrine::Storage::GoogleCloudStorage.new(bucket: "store"),
}

You can set a predefined ACL on created objects, as well as custom headers, using the object_options parameter:

Shrine::Storage::GoogleCloudStorage.new(
  bucket: "store",
  default_acl: 'publicRead',
  object_options: {
    cache_control: 'public, max-age=7200'
  },
)

Contributing

Test setup

Option 1 - use the script

Review the script test/create_test_environment.sh. It will:

  • create a Google Cloud project
  • associate it with your billing account
  • create a service account
  • add the roles/storage.admin IAM policy
  • download the JSON credentials
  • create a test bucket
  • add the needed variables to your .env file

The script assumes you have already run gcloud auth login. It also needs a .env file in the project root containing the project name and the billing account to use:

cp .env.sample .env
# Edit .env to fill in your project and billing accounts
./test/create_test_environment.sh

Option 2 - manual setup

Create your own bucket and provide variables that allow for project and credential lookup. For example:

GCS_BUCKET=shrine-gcs-test-my-project
GOOGLE_CLOUD_PROJECT=my-project
GOOGLE_CLOUD_KEYFILE=/Users/user/.gcp/my-project/shrine-gcs-test.json

Warning: all content of the bucket is cleared between tests, so create a new bucket dedicated to this usage!

Running tests

After setting up your bucket, run the tests:

$ bundle exec rake test

For additional debug output, add the following to your .env file:

GCS_DEBUG=true

License

MIT

shrine-google_cloud_storage's People

Contributors

herenow, hwo411, ianks, janko, joshuarose, majksner, renchap, rosskevin, sho918, taykangsheng

shrine-google_cloud_storage's Issues

Publish a new version?

I noticed that the published RubyGems version is pretty far behind, which breaks support for presigning URLs. Is there any chance you can publish an update?

File download URL encoding issue when filename contains a #

Brief Description

When url is called for an attachment stored in Google Cloud Storage with a filename that includes a hash symbol, the URL encoding does not match what Google Cloud expects. We realize that this is somewhat of an edge case. The # character should not be used in filenames, but with modern file systems allowing it, please consider adding support in Shrine.

Expected behavior

When the filename includes a # symbol, it should be encoded as part of the filename and not an anchor.

e.g. "my#file.pdf" -> "my%23file.pdf"

https://storage.googleapis.com/`my_bucket`/attachments/file/my%23file.pdf
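
For illustration only (not the gem's code), Ruby's standard library can produce this encoding, assuming the object name is escaped before the URL is assembled:

require "erb"

# "#" becomes "%23", so it is no longer parsed as a URL fragment.
ERB::Util.url_encode("my#file.pdf") # => "my%23file.pdf"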

Actual behavior

Shrine does not URL encode the file name portion of the download URL...

https://storage.googleapis.com/`my_bucket`/attachments/file/my#file.pdf

Simplest self-contained example code to demonstrate issue

  1. Upload a file with a hash symbol in the name.
  2. Attempt to download it.

System configuration

Ruby version: 2.5.0

Shrine version: 3.4.0
shrine-google_cloud_storage: 1.34.1

Thanks for your consideration - I will attempt a PR if I can figure it out.

Support for deleting versions

Hi, this is a great plugin and has worked well for us until recently when a scenario came up in which we need to delete files in a bucket which has object versioning enabled. Would you be open to supporting an option to delete a file as well as all of its versions?

Support for "requester-pays" buckets

GCS has an option on buckets to make them a "Requester Pays" type of bucket: https://cloud.google.com/storage/docs/requester-pays. In short, this allows tracing of who/what is actually requesting an upload/download.

To query a bucket where "requester pays" is enabled, the userProject needs to be added in the URL. The GCS Ruby library provides this as an optional user_project keyword argument to Project#bucket: https://github.com/googleapis/google-cloud-ruby/blob/7523214b3c64f88db5e96269b397b066abf4b92e/google-cloud-storage/lib/google/cloud/storage/project.rb#L208-L216

When this is not passed, an error is thrown when listing/uploading files in a requester-pays bucket: Bucket is a requester pays bucket but no user project provided. (Google::Cloud::InvalidArgumentError)
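
A sketch of the option the issue refers to, using google-cloud-storage directly (the bucket name is illustrative; user_project: true bills requests to the caller's default project, or a project ID string can be given instead):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket  = storage.bucket "requester-pays-bucket", user_project: true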

PR: #56

Error "SignatureDoesNotMatch". Google Cloud Storage Bucket PUT.

I'm losing my mind.

I'm using Shrine (https://github.com/janko-m/shrine) with Google Cloud Storage (https://github.com/renchap/shrine-google_cloud_storage), but when I start the PUT call I get this:

<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>
The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.
</Message>
<StringToSign>
PUT
image/jpeg
1518399402
/mybucket.appspot.com/7d5e4aad1e3a737fb8d2c59571fdb980.jpg
</StringToSign>
</Error>

I followed this info (http://shrinerb.com/rdoc/classes/Shrine/Plugins/PresignEndpoint.html) for presign_endpoint, but still nothing:

class FileUploader < Shrine
  plugin :presign_endpoint, presign_options: -> (request) do
    filename     = request.params["filename"]
    extension    = File.extname(filename)
    content_type = Rack::Mime.mime_type(extension)

    {
      content_type: content_type
    }
  end
end

I tried with and without this (restarting the Rails server every time).

Where am I wrong?

I also tried a PUT to that URL with Postman, without any content type, but still nothing.

I read here: googleapis/google-cloud-node#1976 and here: googleapis/google-cloud-node#1695

How can I try without Rails?

Is there a REPL (or similar) to try with my credentials and with a file?
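
Not from the original issue, but one way to exercise the signed URL outside Rails is plain irb with google-cloud-storage (the bucket and object names are taken from the StringToSign above; expires is in seconds):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket  = storage.bucket "mybucket.appspot.com", skip_lookup: true
file    = bucket.file "7d5e4aad1e3a737fb8d2c59571fdb980.jpg", skip_lookup: true
puts file.signed_url(method: "PUT", content_type: "image/jpeg", expires: 300)
# Then PUT a file to the printed URL with the same Content-Type header.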

Cannot remove a model with missing file

When trying to remove a model that is missing a file, the library crashes with

Caught error notFound: Not Found
Error - #<Google::Apis::ClientError: notFound: Not Found>
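
One possible mitigation, sketched here and not the gem's actual implementation (the error class may differ between versions of the underlying gems; get_file is a hypothetical lookup helper):

def delete(id)
  get_file(id).delete
rescue Google::Cloud::NotFoundError
  # the object is already gone; deleting a missing file becomes a no-op
end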

Handle moving files

We can't rename files in GCS, but we can copy a file and then delete the original.
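
A sketch of that copy-then-delete approach using google-cloud-storage directly (bucket and object names are illustrative):

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket  = storage.bucket "store", skip_lookup: true

source = bucket.file "old/key.jpg"
source.copy "new/key.jpg" # copy to the new name within the same bucket
source.delete             # then remove the original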

Change open to use Down.open

From Janko :

Could #open be implemented in terms of Down.open? E.g. Down.open(url(id))? That would be useful for things like the restore_cached_data plugin, which extracts metadata from cached files (which in this case might be remote, directly uploaded to Google Cloud Storage), so that it doesn't need to download the whole file just to extract metadata.
For the restore_cached_data plugin, when a cached file is sent in a hidden field (e.g. with direct uploads or on validation errors), an attacker could technically modify the "metadata" attributes, thus bypassing file validations. The best way I found to solve this generically was to give the ability to re-extract metadata on that assignment. This is also useful when doing direct uploads to services that don't extract metadata, to be able to extract metadata manually.
Extracting metadata uses #open, which I previously implemented in storages to download the whole file, which wasn't ideal since that part is synchronous. However, I managed to come up with Down.open, which creates an IO representing the remote file that downloads only as much as is read. Yeah, that would require a signed URL. That's the way I implemented it with S3, relying on the fact that signed URLs will always work. Internally Down.open uses a generic Down::ChunkedIO, which might be usable if you don't have a URL (I used it in GridFS).
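
A minimal sketch of the idea (not the gem's implementation; presigned_get_url is a hypothetical helper returning a signed GET URL for the object):

require "down"

def open(id)
  # Down.open returns an IO over the remote file that downloads lazily,
  # so metadata extraction only reads as much as it needs.
  Down.open(presigned_get_url(id))
end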

This will require #7 first

Could not load the default credentials Digital ocean

Getting RuntimeError (Could not load the default credentials. Browse to
https://developers.google.com/accounts/docs/application-default-credentials
for more information) on DigitalOcean with Puma, Nginx, Capistrano, and Rails.
I've also added env variables in the .env file:
GCS_BUCKET='photos.foobrstock.com'
GOOGLE_CLOUD_PROJECT="foobrsnaps"
GOOGLE_CLOUD_KEYFILE="Rails.root.join('config/foobrstock-service.json')"

But I'm still getting the same error after a few days.
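
For reference, a .env sketch with the keyfile path written out literally (values in a .env file are plain strings, so Ruby such as Rails.root.join is not evaluated there; the relative path below mirrors the config/ location in the original report):

GCS_BUCKET=photos.foobrstock.com
GOOGLE_CLOUD_PROJECT=foobrsnaps
GOOGLE_CLOUD_KEYFILE=./config/foobrstock-service.json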

Use random names for files generated in tests

It seems there is a rate-limit on the number of write-operations on one file in GCS (Google::Cloud::ResourceExhaustedError: rateLimitExceeded: The total number of changes to the object <bucket>/foo exceeds the rate limit. Please reduce the rate of create, update, and delete requests.)

As most tests are done using the /foo object, the test suite sometimes fails.

Each test should use its own object to avoid this.
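
A sketch of that suggestion (the storage instance, io, and key format are illustrative): each test builds its own object key, so no single object is written repeatedly.

require "securerandom"

object_key = "test/#{SecureRandom.hex(8)}"
storage.upload(io, object_key)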

Copy from Cache To Store Strips Content Type

The original image is successfully stored in the 'cache' storage, and subsequently, when it comes time to upload, it successfully reaches the copy method.

However, the original file that gets copied over to the main 'store' storage loses its Content-Type.

I tested using the copy functionality directly using the google-cloud-storage gem. The content type remains.

I have been unable to track down the issue.

(Screenshot: bucket details in the Google Cloud Platform console, 2019-08-03.)

Update 1
I was able to narrow it down by deleting this do loop; it appears those options are overwriting the content type.

Update 2
Setting any custom headers as described in the README causes the Content-Type to be emptied:

Shrine::Storage::GoogleCloudStorage.new(
  bucket: "store",
  default_acl: 'publicRead',
  object_options: {
    cache_control: 'public, max-age: 7200'
  },
)
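
A sketch of the behaviour the reporter expected, not the gem's actual code (source_file and id are illustrative): when applying custom options during the copy, the original content type would be carried over explicitly.

destination = source_file.copy "store/#{id}" do |f|
  f.cache_control = "public, max-age=7200"
  f.content_type  = source_file.content_type # keep the original type
end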

Copy example

Is there an example of the copy process for using with a rails model?

Does it still need to use attacher.set to avoid the problem of deleting the original?

Or is it as simple as...

duplicate = original.dup
duplicate.image = original.image
duplicate.save

Bump google-cloud-storage version

The version currently used by the latest release is 1.7.1, which depends on google-api-client 0.14.2, which is pretty far behind (causing issues integrating with BigQuery specifically).

ArgumentError: project_id is missing

Getting ArgumentError: project_id is missing in the Sidekiq syslog on submitting. Also, the images are not being processed in the background.
My .env:
GCS_BUCKET='Bucket Name'
GOOGLE_CLOUD_PROJECT="Project Name"
GOOGLE_APPLICATION_CREDENTIALS=./some-service.json

Url with expires

I see in #7 that signed urls are implemented with presign.

I'm thinking about accessing an expiring URL from an existing ActiveRecord model with a file, and the implementation here seems to follow a different pattern than S3#url.

I'm new to shrine, so please point out what I'm missing.

Assuming a model Agreement and a Shrine field document_data, the S3 docs suggest you should be able to call agreement.document.url(expires_in: 300).

Shouldn't the expected usage here be agreement.document.url(expires: 300), based on the above pattern, the UploadedFile API, and the GCS documentation on signed URLs?

  • Is that correct?
  • Should this gem match the usage (except making options align with GCS, in this case expires)?
  • Can I already exec presign from my model and I'm missing it?

Use skip_lookup option

Hi there,

I noticed an issue when uploading a file using a service account with the role storage object administrator (!= storage administrator).
The upload fails when accessing the bucket in #get_bucket.
The google-cloud-storage gem provides a solution in the form of the skip_lookup option:

storage = Google::Cloud::Storage.new

bucket = storage.bucket "sample-bucket"
# raises Google::Cloud::PermissionDeniedError: forbidden: ... iam.gserviceaccount.com
# does not have storage.buckets.get access to sample-bucket

bucket = storage.bucket "sample-bucket", skip_lookup: true
# => #<Google::Cloud::Storage::Bucket:0x00...>

Do you see any issues with using the skip_lookup option in shrine-google_cloud_storage?

More info can be found here: googleapis/google-cloud-ruby#1588
