
flask-s3's People

Contributors

andrewsnowden, bool-dev, cbenhagen, cuducos, e-dard, ebaratte, eriktaubeneck, fuyukai, grampajoe, gtalarico, hannseman, jaza, kageurufu, mazdermind, naudo, nq-ebaratte, pauloricardopr, perwagner, rehandalal, virtosubogdan, wassname


flask-s3's Issues

Custom root folder under S3 bucket

I have a situation where having the static tree replicated directly under the S3 bucket root folder is problematic. I have a few apps that share most of their static assets via a single Blueprint (named Baseframe). To use Flask-S3 with them, I'd need to do something like this:

App1:
+- app1-static (bucket)
   +- baseframe/
   +- static/

App2:
+- app2-static (bucket)
   +- baseframe/
   +- static/

... and so on, which is less than ideal since Baseframe's (non-trivial collection of) assets are (a) being duplicated and (b) being delivered from different URLs at each site. It would be nice if I could instead use a single bucket for all apps and have the assets stored like this (using app.name for the app static path):

App1, App2, ...:
+- all-app-static (bucket)
   +- baseframe/
   +- app1/
   +- app2/

While I could achieve this right now by naming the app static path "app1", "app2", etc. and having Flask-S3 simply replicate it, using anything other than "static" will interfere with the app's URLs when not using Flask-S3. It would be nice if Flask-S3 could instead accept a configuration parameter for the app static path and write to that folder name.
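A hypothetical sketch of what that could look like; S3_STATIC_PREFIX below is an assumed name for the proposed option, not an existing Flask-S3 setting:

from flask import Flask
from flask_s3 import FlaskS3

app = Flask(__name__)                             # local static folder stays "static"
app.config['S3_BUCKET_NAME'] = 'all-app-static'   # shared bucket for all apps
app.config['S3_STATIC_PREFIX'] = 'app1'           # hypothetical: upload under app1/ instead of static/
FlaskS3(app)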

ImportError: No module named boto3

$ pip install flask-s3
Collecting flask-s3
Downloading Flask-S3-0.2.7.post1.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 20, in <module>
File "/private/var/folders/2m/8277clyj4bxft9phn9znvhn8_j08wx/T/pip-build-_D5bzk/flask-s3/setup.py", line 8, in <module>
from flask_s3 import version
File "flask_s3.py", line 18, in <module>
import boto3
ImportError: No module named boto3
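A likely workaround (assuming, from the traceback, that setup.py imports flask_s3 and therefore boto3 before the dependency is installed) is to install boto3 first:

$ pip install boto3
$ pip install flask-s3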

Cache invalidation

Would it be possible to upload to S3 with a unique filename (e.g., hash of file content) to force cache invalidation?
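For illustration, something along these lines (not existing Flask-S3 behaviour): derive the uploaded name from a hash of the file's content, so a changed file gets a new key and stale caches are bypassed.

import hashlib
import os

def content_hashed_name(path):
    # e.g. 'static/css/site.css' -> 'static/css/site.d41d8cd9.css'
    with open(path, 'rb') as fh:
        digest = hashlib.md5(fh.read()).hexdigest()[:8]
    root, ext = os.path.splitext(path)
    return '{0}.{1}{2}'.format(root, digest, ext)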

Issue Handling Blueprint Static Files

Heya,

I found an issue while attempting to integrate Flask-S3 into my Flask application. My application makes use of two separate blueprints, neither of which has its own static files.

What happens is that this line of code (https://github.com/e-dard/flask-s3/blob/master/flask_s3.py#L56) raises an error: Flask-S3 finds my blueprints and extends the dirs list, but since both of their static folders are set to None (I have no static files for my blueprints), this becomes an invalid operation.

I'm going to submit a patch in a moment which fixes the issue.
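For reference, a minimal sketch of the kind of guard such a patch could add (assuming the gathering code builds a list of (static_folder, static_url_path) pairs, as the linked line suggests): skip blueprints that define no static folder.

def gather_static_dirs(app):
    # Collect (folder, url_path) pairs, ignoring blueprints without static files.
    dirs = [(app.static_folder, app.static_url_path)]
    for blueprint in app.blueprints.values():
        if blueprint.static_folder is not None:
            dirs.append((blueprint.static_folder, blueprint.static_url_path))
    return dirs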

Add support for setting custom headers for files uploaded to S3

Today there's only support for setting the Cache-Control header. I've removed the S3_CACHE_CONTROL setting and added S3_HEADERS, which supports setting whatever headers you want:

S3_HEADERS = {
    'Expires': 'Thu, 15 Apr 2010 20:00:00 GMT',
    'Cache-Control': 'max-age=86400',
}

url_for overwritten during app.create_jinja_environment

I have been following the setup instructions. I have an app.py in my project, and in there I create a new FlaskS3 instance and pass it the Flask app instance. I can step through the process and see app.jinja_env.globals being set for url_for correctly:

from flask import Flask
from flask_s3 import FlaskS3

app = Flask(__name__)
s3 = FlaskS3(app)

However, when I make the first HTTP request, the method create_jinja_environment is called on the Flask base app object, and this overwrites the url_for reference in the Jinja environment globals collection...

Does anyone else see this happening? Although my project has numerous other dependencies, looking at the stack trace it seems nothing else is triggering this behavior outside the normal Flask stack.

I am using Flask version 0.10.1 and Flask-S3 0.1.7
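A workaround I'm considering (an assumption on my part, not a documented fix): since app.jinja_env is created lazily and then cached, forcing its creation and re-pointing url_for after the app is fully set up should keep the Flask-S3 version in place.

from flask import Flask
from flask_s3 import FlaskS3, url_for as s3_url_for

app = Flask(__name__)
FlaskS3(app)

# Touching app.jinja_env triggers create_jinja_environment() once and caches the
# result, so overriding the global here should not be silently reverted later.
app.jinja_env.globals['url_for'] = s3_url_for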

thanks, donal

Latest release 0.2.13 does not define default values for configuration keys

I received an update from pip for Flask-S3 from 0.2.8 to 0.2.13, but there are some strange things going on and I don't think the release should be kept there.

For example, the documentation is missing the content of the Flask-S3 Options section, and the code doesn't have default values for the configuration keys (all of them; I kept receiving KeyError: 'FLASKS3_*' until I declared all of them in my config file).
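Until that's fixed, a sketch of the workaround (the key names below are the ones that appear in other reports here; whether this is the complete set for 0.2.13 is an assumption, and the values are only examples):

FLASKS3_BUCKET_NAME = 'my-bucket'
FLASKS3_BUCKET_DOMAIN = 's3.amazonaws.com'
FLASKS3_CDN_DOMAIN = ''
FLASKS3_REGION = None
FLASKS3_HEADERS = {}
FLASKS3_FILEPATH_HEADERS = {}
FLASKS3_GZIP = False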

I just downgraded to 0.2.8 but I think it's best to remove this faulty release, or to fix it.

Thanks!

Refactor app.config defaults setup

@e-dard Just wanted to get your thoughts on this.

The basic idea here is that if USE_S3 is unset, then in the case where app.debug is True (and USE_S3_DEBUG is False), we'd simply set USE_S3 to False. This offers a couple of advantages:

  • No need to check app.config on every call of url_for. We simply override it if USE_S3 is True at initialization. I don't think app.config could ever change without restarting the app (and even if it's possible, it's a serious anti-pattern, I think).
  • The application can reference app.config["USE_S3"] as a definitive test of whether files are being served locally or remotely. The use case I have for this is an app that allows users to upload images, which go directly to S3 (they never exist on the web server, except in memory between the user's upload and the upload to S3). I still use the Flask-S3 url_for function to generate those URLs, but locally and in testing I need to save the files to the filesystem. As of now, this requires:
if app.config['USE_S3'] and (not app.debug or app.config['USE_S3_DEBUG']):
    s3_upload()
else: # not app.config['USE_S3'] or (app.debug and not app.config['USE_S3_DEBUG'])
    local_upload()

which ideally would simplify to

if app.config['USE_S3']:
    s3_upload()
else: # not app.config['USE_S3']
    local_upload()

Documentation for filepath headers has too many backticks

Minor defect in documentation:
The example for FLASKS3_FILEPATH_HEADERS encloses the header dictionary in backticks, making it a string, when it should be a dictionary as seen in the comment above.

Wrong (from docs):
`{r'.txt$': {‘Texted-Up-By’: ‘Mister Foo’}}`

Right:
{r'.txt$': {'Texted-Up-By': 'Mister Foo'}}

Switch to best-effort for setting bucket ACL

The credentials used for static file collection may lack the rights to perform the operation s3.put_bucket_acl(Bucket=bucket_name, ACL='public-read').
This happens when credentials are created with the minimum required rights, and setting the bucket ACL does not seem to be necessary: for each static file uploaded, the public policy is also set, and that should be enough.

I have read #16 and #24.
This issue is related, but at least to me, seems different.

I propose that this operation be treated as best-effort (ignoring failures caused by insufficient rights), or that a setting be added for this purpose.
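A minimal sketch of the best-effort behaviour (illustrative only, not the library's actual code): ignore an AccessDenied error from the bucket-ACL call and carry on, since the per-object ACLs are still applied.

from botocore.exceptions import ClientError

def set_bucket_acl_best_effort(s3, bucket_name):
    # s3 is a boto3 S3 client, e.g. boto3.client('s3')
    try:
        s3.put_bucket_acl(Bucket=bucket_name, ACL='public-read')
    except ClientError as exc:
        # Credentials without s3:PutBucketAcl rights: skip instead of failing.
        if exc.response['Error']['Code'] != 'AccessDenied':
            raise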

Disable Flask-S3 when `app.testing` is True

Howdy,

While running unit tests for a Flask project in which I recently installed Flask-S3, all of my tests fail because S3_BUCKET_NAME is not set when I run unit tests (by design), but the Flask-S3 url_for() expects it to be set.

Do you think it makes sense to disable Flask-S3 when app.testing is True, or do you suspect most people run unit tests with app.debug set to True as well?
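One interim workaround I can think of (a sketch assuming an application-factory setup; not a documented pattern) is to only attach the extension outside the test suite:

from flask import Flask
from flask_s3 import FlaskS3

def create_app(testing=False):
    app = Flask(__name__)
    app.testing = testing
    if not app.testing:                      # skip Flask-S3 entirely under tests
        app.config['S3_BUCKET_NAME'] = 'my-bucket'
        FlaskS3(app)
    return app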

create_all() failing.

Hi
I recently pushed all my static assets to S3 and set up a CloudFront CDN successfully. Afterwards I realized that I forgot to set a Cache-Control header. I also added some additional assets. I'm receiving the following traceback when running the create_all() function:

Traceback (most recent call last):
  File "app/upload_assets.py", line 6, in <module>
    flask_s3.create_all(app)
  File "C:\projects\adminout\venv\lib\site-packages\flask_s3.py", line 453, in create_all
    _upload_files(s3, app, all_files, bucket_name)
  File "C:\projects\adminout\venv\lib\site-packages\flask_s3.py", line 318, in _upload_files
    names, bucket, hashes=hashes))
  File "C:\projects\adminout\venv\lib\site-packages\flask_s3.py", line 289, in _write_files
    merged_dicts = merge_two_dicts(get_setting('FLASKS3_HEADERS', app), h)
  File "C:\projects\adminout\venv\lib\site-packages\flask_s3.py", line 82, in merge_two_dicts
    z = x.copy()
AttributeError: 'tuple' object has no attribute 'copy'

My relevant app config:

...
    FLASKS3_BUCKET_NAME = 'mybucket'
    FLASKS3_GZIP = True
    FLASK_ASSETS_USE_S3 = True
    FLASK_ASSETS_USE_CDN = True
    FLASKS3_REGION = 'ap-south-1'
    FLASKS3_BUCKET_DOMAIN = u's3.ap-south-1.amazonaws.com'
    FLASKS3_CDN_DOMAIN = u'mycdn.cloudfront.net'
    FLASKS3_HEADERS = {
        'Cache-Control': 'max-age=84600'
    }
...

I am running the following Python script to update my assets:

"""
Script for uploading latest assets to S3 bucket
"""
import flask_s3
from main import app
flask_s3.create_all(app)

My environment: Windows 10, Python 3.6.1 virtualenv.
Is this an issue or have I done something stupid? Please help!

Exclude some files

When I deploy to production I'd like to exclude the sourcemap files in my static directory. I'd like a config option to exclude any "*.js.map" files so they aren't ever part of my production deploy to S3.

Until then, I've put my sourcemaps in a different directory, outside of static, but that needs extra config for webpack and for serving them in Flask.
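One possible workaround with the existing API, assuming filepath_filter_regex (mentioned in another issue on this page) is applied per file path: a negative lookahead acts as an exclude filter.

import flask_s3
from main import app   # the Flask app, as in the upload scripts shown elsewhere here

# Upload everything except source maps; the regex only inspects the path suffix,
# so it should work whether the path is absolute or relative to static/.
flask_s3.create_all(app, filepath_filter_regex=r'^(?!.*\.js\.map$)')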

GZipping Fails

Python 3.5.2
Only fails when GZIP is enabled.
Traceback:

Maybe related?
http://stackoverflow.com/questions/32075135/python-3-in-memory-zipfile-error-string-argument-expected-got-bytes

[DEBUG] Uploading C:\Users\gtalarico\Dropbox\Shared\dev-ubuntu\repos\revitapidocs\app\static\favicon.ico to revitapidocs as static/favicon.ico [flask_s3.py](246)[07:17:07]
Traceback (most recent call last):
  File "upload-s3-assets.py", line 9, in <module>
    flask_s3.create_all(app)
  File "C:\Users\gtalarico\Dropbox\Shared\dev-ubuntu\.virtualenvs-win8\revitapidocs\lib\site-packages\flask_s3.py", line 453, in create_all
    _upload_files(s3, app, all_files, bucket_name)
  File "C:\Users\gtalarico\Dropbox\Shared\dev-ubuntu\.virtualenvs-win8\revitapidocs\lib\site-packages\flask_s3.py", line 319, in _upload_files
    names, bucket, hashes=hashes))
  File "C:\Users\gtalarico\Dropbox\Shared\dev-ubuntu\.virtualenvs-win8\revitapidocs\lib\site-packages\flask_s3.py", line 295, in _write_files
    compressed)
  File "c:\python\Lib\gzip.py", line 192, in __init__
    self._write_gzip_header()
  File "c:\python\Lib\gzip.py", line 220, in _write_gzip_header
    self.fileobj.write(b'\037\213')             # magic header
TypeError: string argument expected, got 'bytes'
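For comparison, a minimal sketch of gzipping data in memory that works on Python 3 (the traceback suggests a str/bytes mix-up around the in-memory file object):

import gzip
import io

data = b'contents of the static file'
buffer = io.BytesIO()                      # must be a bytes buffer, not io.StringIO
with gzip.GzipFile(fileobj=buffer, mode='wb') as gz:
    gz.write(data)
compressed = buffer.getvalue()             # gzipped bytes, ready to upload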

Incorrect Content-Type

When I run create_all() it uploads things to S3 correctly, but it likes to set the Content-Type (the S3 metadata for the object) to binary/octet-stream. This causes my CSS and other assets to not load. The only solution I know of is to go into the AWS console and change the content type to the correct value.

Should the Content-Type be detected automatically? Do I need to specify it, and if so, how?
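For what it's worth, a sketch of guessing the type and setting it explicitly when uploading with boto3 (an illustration, not necessarily what Flask-S3 does internally):

import mimetypes
import boto3

s3 = boto3.client('s3')
key = 'static/css/site.css'                                  # example object key
content_type = mimetypes.guess_type(key)[0] or 'application/octet-stream'
with open(key, 'rb') as fh:
    s3.put_object(Bucket='my-bucket', Key=key, Body=fh, ContentType=content_type)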

Update PyPI Version

The latest changes to the repo aren't available from a pip install. The documentation on readthedocs.org also doesn't reflect the most recent changes.

Handle the special "_xyz" arguments for flask's url_for

I think the url_for function should seamlessly handle the special arguments of Flask's url_for.
This way, whoever adds Flask-S3 at a later point in their development cycle does not have to go back and change all their static url_for calls.

So, for the special arguments:
_method, _external and _anchor can simply be ignored.
And _scheme can be used to override the config setting on a per-URL basis. This is an unlikely occurrence but useful nonetheless, imho.
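A minimal, hypothetical sketch of that handling (names are illustrative; the actual pull request may differ):

def split_special_args(values):
    # Drop Flask-only url_for() arguments and pull out the _scheme override.
    for arg in ('_method', '_external', '_anchor'):
        values.pop(arg, None)                 # meaningless for a static S3 URL
    scheme = values.pop('_scheme', None)      # optional per-URL scheme override
    return scheme, values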

Will be adding a pull request shortly.

Minification? Use in larger projects?

First, let me start by saying this project looks very interesting to me!

I am interested to understand how you use this in production, specifically with versioning and minification of assets. I would love to understand how other people using Flask manage their assets in different environments, and how people have found patterns for transforming assets for production.

create_all incorrectly handles paths in Windows

(Platform: Windows 10 running Python 2.7)

The offending method is _gather_files, line 177:
dirs = [(six.u(app.static_folder), app.static_url_path)]

The six.u method escapes strings which causes Windows path backslashes to break:

  • Original path: "C:\\Code\\test"
  • Result of six.u: "C:\\Code\test" (with a tab character, \t)

This causes create_all to warn that the path does not exist, even though it does (due to the malformed path).
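A minimal sketch of a safer conversion (assuming the intent of six.u() here is simply "make sure the path is text", not "interpret escape sequences"):

import six

def to_text(path):
    # Return the path as text without passing it through unicode_escape.
    if isinstance(path, six.binary_type):
        return path.decode('utf-8')
    return path

# e.g. in _gather_files:
#     dirs = [(to_text(app.static_folder), app.static_url_path)]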

Change deepcopy in deprecation testing to avoid "TypeError"

While trying out the latest 0.2.7 version, I ran into this error:

TypeError: object.__new__(NotImplementedType) is not safe, use NotImplementedType.__new__()

It results from the deepcopy() within _test_deprecation, probably because I have imported some modules in my config and the standard deepcopy doesn't know what to do with them.

Maybe this could be changed?
It isn't really necessary to copy the config, since the code changes app.config directly. Moreover, we don't really need to loop through all the config keys, since we are only interested in the Flask-S3-specific ones.
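A hypothetical alternative along those lines: snapshot only the Flask-S3 related keys with a shallow copy, sidestepping deepcopy of arbitrary objects living in the config.

def s3_config_snapshot(config):
    # Shallow-copy just the keys the deprecation check cares about.
    return {key: value for key, value in config.items()
            if key.startswith(('S3_', 'FLASKS3_', 'USE_S3'))}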

Naming of config variables

Hey folks, I just added Flask-S3 to a project, and I think the naming of the config variables ought to be changed to be specific to this module; perhaps with a FLASKS3_ prefix (like how the Flask-Security module does it).

I already have a config var called S3_BUCKET in my project, and now I have S3_BUCKET_NAME, which doesn't provide the clearest distinction in the code base.

Just a thought. Interested to see what you think.

Can't upload only a single folder in static to S3 - anyone?

Hi,

When I upload to S3 by using flask_s3.create_all(app), I sometimes want to limit it to only the Tailwind CSS. According to the docs, the filepath_filter_regex argument is supposed to accommodate that. In my static folder I have a gen_tailwind folder, so I would think that using flask_s3.create_all(app, filepath_filter_regex=r'^gen_tailwind') would upload only this folder. However, when I run it, nothing happens, and it doesn't give any return message.

Anyone know what I am doing wrong?

Default settings: use `.get(…)` instead of brackets?

Hi all,

Thanks for creating and maintaining Flask-S3.

Maybe I'm doing something wrong (probably), but I had to manually set the value of some settings to the default value in order to get version 0.3.0 to work.

I was getting a KeyError exception when Flask-S3 tried to read some values from app.config (e.g. lines 96, 103 and 109).

I was considering sending a PR that uses app.config.get(…) instead of app.config[…] to avoid these exceptions. What do you think? May I, or is there a reason to avoid that?
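For illustration, the kind of change such a PR would make (the key name below is only an example):

from flask import Flask

app = Flask(__name__)

# current behaviour: raises KeyError if the key was never set
# value = app.config['FLASKS3_USE_HTTPS']

# proposed behaviour: missing keys fall back to a default instead of raising
value = app.config.get('FLASKS3_USE_HTTPS', True)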

Many thanks,

socket.error when calling create_all()

I was trying to set up Flask-S3 for my app, but I kept getting a socket.error (either broken pipe or connection reset by peer) when running flask_s3.create_all(app) in a Python shell as suggested in the docs. I had already made the bucket and was just trying to upload my static assets.

The fix appears to be to set the (undocumented) config variable S3_REGION to the region in which you created your bucket. However, this caused an error for me when the create_all() function tried to call conn.create_bucket(bucket_name). By changing this call to a get_bucket() call, I was able to execute create_all() without error.
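For reference, a sketch of that workaround using boto (the create_bucket/get_bucket calls above are boto's API; region and bucket name are placeholders):

import boto.s3

conn = boto.s3.connect_to_region('eu-west-1')   # the region set via S3_REGION
bucket = conn.get_bucket('my-bucket')           # instead of conn.create_bucket(...)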

Support for versioning and prefixing

There should be an option to augment the prefix for static files.
This should be possible via both a static prefix and a dynamic option (for versioning).

The use case for the static prefix is that a bucket contains a lot of resources for many applications/environments.
It seems similar to #22, but because the static file location does not depend at all on the application logic, this is completely different.
The static files should be collected to /path_to_allowed_prefix/environment/static_files on the S3 bucket, where /path_to_allowed_prefix/environment/ should be specified by a setting.

The use case for versioning is the standard one. Appending a hash to the filename is an alternative, but the choice between hash appending and another prefix should be up to the flask-s3 user.
The proposal is to do this with another setting containing the path to a function in which the user implements the versioning prefix.

Neither option should add any overhead when collecting files or calling url_for for users who do not want to use them.
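A hypothetical shape for the two settings (all names below are illustrative, not existing options):

# static part of the prefix
FLASKS3_PREFIX = 'path_to_allowed_prefix/production'

# dotted path to a user-supplied callable returning the versioning prefix
FLASKS3_VERSION_PREFIX_FUNC = 'myapp.assets.release_tag'

# where the callable might simply be:
def release_tag():
    return 'v42'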

Only possible to use root user

I would like to make a user with restricted permissions, but the permissions are reset to 'public-read' on each use. This removes authorised users and restricts writing to the account owner.

To see this behaviour:

  • Create an IAM user with permissions to access the BUCKETNAME
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::BUCKETNAME"
        }
    ]
}
  • Set the bucket so that any authorised user has all permissions.
  • Run flask-s3
  • Observe that authorised users no longer have any permissions on the bucket; it has been reset to root write and public read.

Removal of S3_ config keys

I am going to push 0.3.0 on the 20th July 2016.
This means the old S3_ config keys will not work any more, and you must move to using the FLASKS3_ config keys.

Six.u UnicodeDecodeError

Using Windows 7 Professional

...\flask_s3.py", line 177, in _gather_files
dirs = [(six.u(app.static_folder), app.static_url_path)]

UnicodeDecodeError: 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape

It seems that the way Flask stores Windows 7 paths isn't automatically compatible with how six converts them to unicode.

PyPI Release?

Hi there!

I'm attempting to use flask-s3 (really awesome app, thank you for making this), but I can't find the release on PyPI. Would you mind pushing it?

Thanks!
