e-dard / flask-s3
Seamlessly serve the static assets of your Flask app from Amazon S3
Home Page: http://flask-s3.readthedocs.org/en/latest/
License: Do What The F*ck You Want To Public License
I have a situation where having the static tree replicated directly under the S3 bucket root folder is problematic. I have a few apps that share most of their static assets via a single Blueprint (named Baseframe). To use Flask-S3 with them, I'd need to do something like this:
App1:
+- app1-static (bucket)
+- baseframe/
+- static/
App2:
+- app2-static (bucket)
+- baseframe/
+- static/
... and so on, which is less than ideal, since Baseframe's (non-trivial collection of) assets are (a) duplicated and (b) delivered from a different URL at each site. It would be nice if I could instead use a single bucket for all apps and have the assets stored like this (using app.name for each app's static path):
App1, App2, ...:
+- all-app-static (bucket)
+- baseframe/
+- app1/
+- app2/
While I could achieve this right now by naming the app static path "app1", "app2", etc., and having Flask-S3 simply replicate that, using anything other than "static" interferes with the app's URLs when Flask-S3 is not in use. It would be nice if Flask-S3 could instead accept a configuration parameter for the app static path and write to that folder name.
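A sketch of what such a configuration parameter could do when computing object keys. The function name, the static_prefix parameter, and the key layout below are assumptions for illustration, not part of Flask-S3's API:

```python
def s3_key(relative_path, static_prefix=None):
    """Build the S3 object key for a static asset.

    static_prefix is the hypothetical per-app folder name proposed above
    (e.g. 'app1'); when None, assets land under the conventional 'static/'
    root, matching the current behaviour.
    """
    folder = static_prefix if static_prefix is not None else "static"
    return "{}/{}".format(folder, relative_path.lstrip("/"))
```

With this, App1 and App2 could share a single bucket while Baseframe's assets live once under a common prefix.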
I used create_all to upload all static files to S3, but the browser shows me the message above and the CSS is not interpreted.
It looks like a Content-Type needs to be set:
http://blog.viktorkelemen.com/2012/05/amazon-s3-and-resource-interpreted-as.html
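A minimal sketch of guessing the header value from the filename with the standard library; the helper name is mine, and the result would be supplied as the object's Content-Type when uploading to S3:

```python
import mimetypes

def guess_content_type(filename, default="application/octet-stream"):
    # mimetypes.guess_type returns (type, encoding); fall back to a
    # generic default when the extension is unknown
    ctype, _ = mimetypes.guess_type(filename)
    return ctype or default
```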
Setup fails on fresh installs because it can't find boto3 (or some other package, depending on the environment), because of line 8 in setup.py:
from flask_s3 import __version__
$ pip install flask-s3
Collecting flask-s3
Downloading Flask-S3-0.2.7.post1.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "", line 20, in
File "/private/var/folders/2m/8277clyj4bxft9phn9znvhn8_j08wx/T/pip-build-_D5bzk/flask-s3/setup.py", line 8, in
from flask_s3 import __version__
File "flask_s3.py", line 18, in
import boto3
ImportError: No module named boto3
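A common workaround, sketched here under the assumption that flask_s3.py assigns __version__ as a plain string literal, is for setup.py to parse the version out of the source instead of importing the module (which pulls in boto3 at setup time):

```python
import re

def read_version(source_text):
    # Find __version__ = '...' without importing the module (and thus
    # without importing boto3 before dependencies are installed)
    match = re.search(r"__version__\s*=\s*['\"]([^'\"]+)['\"]", source_text)
    if match is None:
        raise RuntimeError("version string not found")
    return match.group(1)
```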
Would it be possible to upload to S3 with a unique filename (e.g., hash of file content) to force cache invalidation?
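One way to sketch that idea (function name and digest length are my choices, not Flask-S3 API): derive the name from a short content hash, so any change to the file yields a new key and bypasses stale caches:

```python
import hashlib

def hashed_name(filename, content):
    # e.g. 'main.css' + file bytes -> 'main.<8 hex chars of sha256>.css'
    digest = hashlib.sha256(content).hexdigest()[:8]
    if "." in filename:
        stem, ext = filename.rsplit(".", 1)
        return "{}.{}.{}".format(stem, digest, ext)
    return "{}.{}".format(filename, digest)
```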
It would be handy to use .gitignore-style configuration to ignore files/directories (e.g. node_modules, venv, __pycache__, etc.) for create_all() uploads.
http://flask-s3.readthedocs.org/en/latest/ shows documentation for 0.1.2 for some reason, so S3_USE_CACHE_CONTROL and other minor updates are missing.
Would you consider a fork of this plugin for other cloud providers such as Alicloud?
Their S3 equivalent is called OSS and has very similar API endpoints.
Heya,
I found an issue while attempting to integrate Flask-S3 into my Flask application. My application makes use of two separate blueprints, neither of which has its own static files.
What happens is that this line of code (https://github.com/e-dard/flask-s3/blob/master/flask_s3.py#L56) fires off an error: as above, Flask-S3 finds my blueprints and extends the dirs list, but since both static folders are set to None (I have no static files for my blueprints), this becomes an invalid operation.
I'm going to submit a patch in a moment which fixes the issue.
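The fix amounts to skipping blueprints whose static_folder is None. A standalone sketch of that filtering (not the actual patch):

```python
def gather_blueprint_dirs(app):
    # Only blueprints that actually define a static folder contribute
    # directories; blueprint.static_folder is None when none was configured.
    return [
        (bp.static_folder, bp.static_url_path)
        for bp in app.blueprints.values()
        if bp.static_folder is not None
    ]
```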
Please update the pip package and the code with the pending pull requests and fixes!
Today there's only support for setting the Cache-Control header. I've removed the S3_CACHE_CONTROL setting and added S3_HEADERS, which supports setting whatever headers you want:
S3_HEADERS = {
'Expires': 'Thu, 15 Apr 2010 20:00:00 GMT',
'Cache-Control': 'max-age=86400',
}
When FLASK_ASSETS_USE_S3 is on, URLs get replaced even if FLASKS3_ACTIVE is False.
I have been following the setup instructions. I have an app.py in my project, and in there I create a new FlaskS3 instance and pass it the Flask app instance. I can step through the process and I see app.jinja_env.globals being set for url_for correctly:
from flask_s3 import FlaskS3
app = Flask(__name__)
s3 = FlaskS3(app)
However, when I make the first HTTP request, the create_jinja_environment method is called on the Flask base app object, and this overwrites the url_for reference in the Jinja environment globals collection...
Does anyone else see this happening? Although my project has numerous other dependencies, looking at the stack trace it seems nothing else is triggering this behavior outside the normal Flask stack.
I am using Flask version 0.10.1 and Flask-S3 0.1.7
thanks, donal
If the bucket already exists and contains thousands of static files, then
https://github.com/e-dard/flask-s3/blob/master/flask_s3.py#L188
bucket.make_public(recursive=True)
is a very bad idea...
I received an update from pip for Flask-S3 from 0.2.8 to 0.2.13, but there are some strange things going on and I don't think the release should be kept there.
For example, the documentation is missing the content of the Flask-S3 Options section, and the code doesn't have default values for the configuration keys (all of them; I kept receiving KeyError: 'FLASKS3_*' until I declared all of them in my config file).
I just downgraded to 0.2.8, but I think it's best to remove this faulty release, or to fix it.
Thanks!
Basically, every variable should start with a FLASK_S3_ prefix, or something like that?
@e-dard Just wanted to get your thoughts on this.
The basic idea here is that if USE_S3 is unset, then in the case where app.debug is True (and USE_S3_DEBUG is False), we'd simply turn USE_S3 to False. This offers a couple of advantages:
- We no longer need to consult app.config on every call of url_for. We simply override it if USE_S3 is True at initialization. I don't think app.config could ever change without restarting the app (and even if it's possible, it's a serious anti-pattern, I think).
- We can use app.config["USE_S3"] as a definitive test of whether files are being served locally or remotely. The use case I have for this is an app that allows users to upload images, which go directly to S3 (they never exist on the web server, except in memory between the upload and the upload to S3). I still use the url_for function in Flask-S3 to generate those URLs, but then locally/for testing I need to upload them to the filesystem. As of now, this requires:
if app.config['USE_S3'] and (not app.debug or app.config['USE_S3_DEBUG']):
    s3_upload()
else:  # not app.config['USE_S3'] or (app.debug and not app.config['USE_S3_DEBUG'])
    local_upload()
which ideally would simplify to
if app.config['USE_S3']:
    s3_upload()
else:  # not app.config['USE_S3']
    local_upload()
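The proposal can be sketched as an initialization-time resolver. Setting names are taken from the issue; treating USE_S3 as default-on outside debug mode is my assumption:

```python
def resolve_use_s3(config, debug):
    # Resolve USE_S3 once at init so that app.config['USE_S3'] is a
    # definitive local-vs-remote test everywhere else in the app.
    if "USE_S3" not in config:
        config["USE_S3"] = (not debug) or config.get("USE_S3_DEBUG", False)
    return config["USE_S3"]
```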
Minor defect in documentation:
The example for FLASKS3_FILEPATH_HEADERS encloses the header list in back-ticks and makes it a string, when it should be a dictionary, as seen in the comment above it.
Wrong (from docs):
{r'.txt$': {'Texted-Up-By': 'Mister Foo'}
}
Right:
{r'.txt$': {'Texted-Up-By': 'Mister Foo'}}
The credentials used for static file collection may lack the rights to perform this operation: s3.put_bucket_acl(Bucket=bucket_name, ACL='public-read')
This happens when credentials are created with the minimum required rights; setting the bucket ACL does not seem to be required. For each static file uploaded, the public ACL is also set, and that should be enough.
I have read #16 and #24.
This issue is related, but at least to me, seems different.
I propose that this operation be treated as best-effort (ignoring failures caused by insufficient rights), or that a setting be added for this purpose.
Howdy,
While running unit tests for a Flask project in which I recently installed Flask-S3, all of my tests fail because S3_BUCKET_NAME is not set when I run unit tests (by design), but the Flask-S3 url_for() expects it to be set.
Do you think it makes sense to disable Flask-S3 when app.testing is True, or do you suspect most people run unit tests with app.debug set to True as well?
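One possible shape for that behaviour, sketched as a pure function. USE_S3_DEBUG mirrors the existing debug override; the testing check is the proposal here, not current Flask-S3 logic:

```python
def s3_active(config, testing, debug):
    # Proposed: never rewrite URLs under app.testing, regardless of config
    if testing:
        return False
    # In debug mode, only serve from S3 when explicitly opted in
    if debug and not config.get("USE_S3_DEBUG", False):
        return False
    return True
```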
Would it be possible to show progress when uploading files to S3, please?
According to some credible sources, certain files should not be gzipped, as there is hardly any gain, and sometimes it could in fact bloat the file.
Some sources:
So IMHO there should be a config variable to limit gzipping to certain files only. I am ready with a solution, will be adding a pull-request soon.
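The config variable could take a pattern along these lines (names and pattern are illustrative, not the pull request's actual API); already-compressed formats such as PNG or WOFF are left alone because gzipping them gains little and can grow the file:

```python
import re

# Illustrative whitelist of text-like extensions worth gzipping
GZIP_INCLUDE = re.compile(r"\.(css|js|json|svg|txt|html?)$", re.IGNORECASE)

def should_gzip(filename):
    return GZIP_INCLUDE.search(filename) is not None
```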
Convert tests to use statements like
assert condition, 'message'
instead of
self.assertTrue(condition, 'message')
Hi
I recently pushed all my static assets to S3 and set up a CloudFront CDN successfully. Afterwards I realized that I forgot to set a Cache-Control header. I also added some additional assets. I'm receiving the following traceback when running the create_all() function:
Traceback (most recent call last):
File "app/upload_assets.py", line 6, in <module>
flask_s3.create_all(app)
File "C:\projects\adminout\venv\lib\site-packages\flask_s3.py", line 453, in create_all
_upload_files(s3, app, all_files, bucket_name)
File "C:\projects\adminout\venv\lib\site-packages\flask_s3.py", line 318, in _upload_files
names, bucket, hashes=hashes))
File "C:\projects\adminout\venv\lib\site-packages\flask_s3.py", line 289, in _write_files
merged_dicts = merge_two_dicts(get_setting('FLASKS3_HEADERS', app), h)
File "C:\projects\adminout\venv\lib\site-packages\flask_s3.py", line 82, in merge_two_dicts
z = x.copy()
AttributeError: 'tuple' object has no attribute 'copy'
My relevant app config:
...
FLASKS3_BUCKET_NAME = 'mybucket'
FLASKS3_GZIP = True
FLASK_ASSETS_USE_S3 = True
FLASK_ASSETS_USE_CDN = True
FLASKS3_REGION = 'ap-south-1'
FLASKS3_BUCKET_DOMAIN = u's3.ap-south-1.amazonaws.com'
FLASKS3_CDN_DOMAIN = u'mycdn.cloudfront.net'
FLASKS3_HEADERS = {
'Cache-Control': 'max-age=84600'
}
...
I am running the following python script to update my assets:
"""
Script for uploading latest assets to S3 bucket
"""
import flask_s3
from main import app
flask_s3.create_all(app)
My environment: Windows 10, Python 3.6.1 virtualenv.
Is this an issue or have I done something stupid? Please help!
When I deploy to production I'd like to exclude the sourcemap files in my static directory. I'd like a config option to exclude any "*.js.map" files so they aren't ever part of my production deploy to S3.
Until then, I've put my sourcemaps in a different directory, outside of static, but that needs extra config for webpack and for serving them in Flask.
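Until an exclude option exists, a negative lookahead in filepath_filter_regex can act as an exclude filter. This assumes the regex is applied with re.search against each file's relative path, which is my reading of create_all and worth verifying:

```python
import re

# Matches every path except those ending in .js.map
EXCLUDE_SOURCEMAPS = r"^(?!.*\.js\.map$)"

def keep(path):
    # The lookahead can only succeed at position 0, so search behaves
    # like a whole-path exclusion test
    return re.search(EXCLUDE_SOURCEMAPS, path) is not None
```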
Python 3.5.2
Only fails when GZIP is enabled.
Stack trace:
Maybe related?
http://stackoverflow.com/questions/32075135/python-3-in-memory-zipfile-error-string-argument-expected-got-bytes
[DEBUG] Uploading C:\Users\gtalarico\Dropbox\Shared\dev-ubuntu\repos\revitapidocs\app\static\favicon.ico to revitapidocs as static/favicon.ico [flask_s3.py](246)[07:17:07]
Traceback (most recent call last):
File "upload-s3-assets.py", line 9, in <module>
flask_s3.create_all(app)
File "C:\Users\gtalarico\Dropbox\Shared\dev-ubuntu\.virtualenvs-win8\revitapidocs\lib\site-packages\flask_s3.py", line 453, in create_all
_upload_files(s3, app, all_files, bucket_name)
File "C:\Users\gtalarico\Dropbox\Shared\dev-ubuntu\.virtualenvs-win8\revitapidocs\lib\site-packages\flask_s3.py", line 319, in _upload_files
names, bucket, hashes=hashes))
File "C:\Users\gtalarico\Dropbox\Shared\dev-ubuntu\.virtualenvs-win8\revitapidocs\lib\site-packages\flask_s3.py", line 295, in _write_files
compressed)
File "c:\python\Lib\gzip.py", line 192, in __init__
self._write_gzip_header()
File "c:\python\Lib\gzip.py", line 220, in _write_gzip_header
self.fileobj.write(b'\037\213') # magic header
TypeError: string argument expected, got 'bytes'
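The usual Python 3 fix is to gzip into an io.BytesIO buffer rather than a str-based StringIO one. A standalone sketch of the pattern (not the exact flask_s3 code):

```python
import gzip
import io

def gzip_bytes(data):
    # GzipFile writes bytes, so the backing buffer must be BytesIO on
    # Python 3; a str-based buffer raises exactly the TypeError above.
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
        gz.write(data)
    return buf.getvalue()
```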
When I run create_all() it uploads things to S3 correctly, but likes to set the Content-Type (the S3 metadata for the item) to binary/octet-stream. This causes my CSS and other assets not to load. The only solution I know of is to go into the AWS console and change the content type to the correct value.
Should the Content-Type be detected? Do I need to specify, if so how?
The current GitHub code doesn't match the content on PyPI. For example, in the PyPI version (I downloaded the latest package: 0.3.3) the FLASKS3_ENDPOINT_URL feature is not implemented, but on GitHub it works well:
Line 409 in b8c72b4
The latest changes to the repo aren't available from a pip install. The documentation on readthedocs.org also doesn't reflect the most recent changes.
I think the url_for function should seamlessly handle the special arguments of Flask's url_for.
This way, whoever adds flask-s3 at a later point in their development cycle does not have to go back and change all their static url_for calls.
So the special arguments _method, _external, and _anchor can simply be ignored. And _scheme can be used to override the config setting on a per-URL basis. This is an unlikely occurrence but useful nonetheless, imho.
Will be adding a pull request shortly.
First, let me start by saying this project looks very interesting to me!
I am interested to understand how you use this in production, specifically with versioning and minification of assets. I would love to understand how other people using Flask manage their assets in different environments, and how people have found patterns for transforming assets for production.
I think this is a nice feature to have. It always helps to add correct mimetypes to avoid browser warnings/errors. I just ran into this problem, so I thought I would add the issue.
A reference doc to prove the requirement:
https://developer.mozilla.org/en-US/docs/Incorrect_MIME_Type_for_CSS_Files
S3_FORCE_MIMETYPE = True # defaults to False?
(Platform: Windows 10 running Python 2.7)
The offending method is _gather_files
, line 177:
dirs = [(six.u(app.static_folder), app.static_url_path)]
The six.u method escapes strings, which causes Windows path backslashes to break: a path like "C:\\Code\test" comes out of six.u with the \t decoded as a tab character. This causes create_all to warn that the path does not exist, even though it does (due to the malformed path).
Could we push the latest version to Pypi please :)
Thanks!
While trying out the latest 0.2.7 version, I ran into this error:
TypeError: object.__new__(NotImplementedType) is not safe, use NotImplementedType.__new__()
This results from the deepcopy() within _test_deprecation, probably because I have imported some modules in my config, and the standard deepcopy doesn't know what to do with them.
Maybe this could be changed? Copying the config isn't really needed, since the code directly changes app.config. Moreover, we don't really need to loop through all the config keys, since we are only interested in the flask-s3-specific ones.
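A sketch of that narrower approach (the helper name is mine): inspect only the old-prefix key names instead of deep-copying the entire config, which may hold modules or other non-copyable objects:

```python
def find_deprecated_keys(config, prefix="S3_"):
    # No deepcopy needed: the deprecation check only cares about which
    # keys carry the old prefix, not about their values
    return sorted(k for k in config if k.startswith(prefix))
```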
Hey folks, I just added Flask-S3 to a project, and I think the naming of the config variables ought to be changed to be specific to this module; perhaps with a FLASKS3_ prefix (like how the Flask-Security module does it).
I already have a config var called S3_BUCKET in my project, and now I have S3_BUCKET_NAME as well, which doesn't provide the clearest distinction in the code base.
Just a thought. Interested to see what you think.
Hi,
When I upload to S3 using flask_s3.create_all(app), I sometimes want to limit it to only the Tailwind CSS. According to the docs, the filepath_filter_regex variable is supposed to accommodate that. In my static folder I have a gen_tailwind folder, so I would think that using flask_s3.create_all(app, filepath_filter_regex=r'^gen_tailwind') would upload only this folder. However, when I run it, nothing happens, and it doesn't give any return message.
Anyone know what I am doing wrong?
Would it be possible to export the machinery used to construct the bucket path as it would be resolved by url_for?
Hi all,
Thanks for creating and maintaining Flask-S3.
Maybe I'm doing something wrong (probably), but I had to manually set some settings to their default values in order to get version 0.3.0 to work.
I was getting a KeyError exception when Flask-S3 tried to reach some values in app.config (e.g. lines 96, 103 and 109).
I was considering sending a PR that uses app.config.get(…) instead of app.config[…] to avoid these exceptions. What do you think about it? May I, or is there a reason to avoid that?
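For illustration, the difference between the two access styles (the setting names below are just examples):

```python
config = {"FLASKS3_BUCKET_NAME": "my-bucket"}

# config["FLASKS3_USE_HTTPS"] would raise KeyError here, since the key
# was never declared; .get() falls back to a caller-supplied default.
use_https = config.get("FLASKS3_USE_HTTPS", True)
```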
Many thanks,
I was trying to set up flask-s3 for my app, but I kept getting a socket.error (either broken pipe or connection reset by peer) when running flask_s3.create_all(app) in a Python shell, as suggested in the docs. I had already made the bucket and was just trying to upload my static assets.
The fix appears to be to set the (undocumented) config variable S3_REGION to the region in which you created your bucket. However, this caused an error for me when the create_all() function tried to call conn.create_bucket(bucket_name). By changing this call to a get_bucket() call, I was able to execute create_all() without error.
There should be an option to augment the prefix for static files.
This should be done by a static prefix and a dynamic option (for versioning).
The use case for the static prefix is that a bucket contains a lot of resources for many applications/environments.
It seems similar to #22, but because the static file location does not depend at all on the application logic, this is completely different.
The static files should be collected to /path_to_allowed_prefix/environment/static_files
on the S3 bucket, where /path_to_allowed_prefix/environment/
should be specified by a setting.
The use case for versioning is the standard one. Appending a hash to the filename is an alternative, but the choice between hash appending and another prefix should be up to the flask-s3 user.
The proposition is to do this by another setting which will contain the path to the function where the user implements the versioning prefix.
These features should add no overhead to collection or to url_for for users that do not want to use them.
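A sketch of how the two prefixes could compose into an object key. The function and parameter names are hypothetical, per the proposal above; version_fn stands in for the user-implemented versioning function:

```python
def build_key(filename, static_prefix="", version_fn=None):
    # static_prefix: the fixed '/path_to_allowed_prefix/environment/' part
    # version_fn: optional user-supplied callable returning a dynamic prefix
    parts = []
    if static_prefix:
        parts.append(static_prefix.strip("/"))
    if version_fn is not None:
        parts.append(version_fn().strip("/"))
    parts.append(filename.lstrip("/"))
    return "/".join(parts)
```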
I would like to create a user with restricted permissions, but the permissions are reset to 'public-read' on each use. This removes authorised users and restricts writing to the account owner.
To see this behaviour:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::BUCKETNAME"
        }
    ]
}
I am going to push 0.3.0 on the 20th July 2016.
This means the old S3_ config keys will no longer work, and you must move to using the FLASKS3_ config keys.
Using Windows 7 Professional
...\flask_s3.py", line 177, in _gather_files
dirs = [(six.u(app.static_folder), app.static_url_path)]
UnicodeDecodeError: 'unicodeescape' codec can't decode bytes in position
2-3: truncated \UXXXXXXXX escape
Seems like how Flask handles storing Windows 7 paths isn't automatically compatible with how Six wants to convert to unicode.
Set CORS rules from a config var when creating the bucket.
Hi there!
I'm attempting to use flask-s3 (really awesome app, thank you for making this), but I can't find the release on PyPI. Would you mind pushing it?
Thanks!