mishudark / s3-parallel-put
Parallel uploads to Amazon AWS S3
License: MIT License
Hello, I'm using s3-parallel-put and running into trouble:
python s3-parallel-put --bucket=reocar-test --host=192.168.0.191:7480 --log-filename=/tmp/s3pp.log --dry-run --limit=1 .
Traceback (most recent call last):
File "s3-parallel-put", line 459, in
sys.exit(main(sys.argv))
File "s3-parallel-put", line 430, in main
bucket = connection.get_bucket(options.bucket)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 509, in get_bucket
return self.head_bucket(bucket_name, headers=headers)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 542, in head_bucket
raise err
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
But the bucket exists, and I can upload and download files to it. I have set the access key and secret key in the environment.
root@ceph1:~/s3-parallel-put-master# s3cmd ls
2018-08-02 05:46 s3://reocar-test
When I used:
python s3-parallel-put --bucket=s3://reocar-test --host=192.168.0.191:7480 --log-filename=/tmp/s3pp.log --dry-run --limit=1 .
Traceback (most recent call last):
File "s3-parallel-put", line 459, in
sys.exit(main(sys.argv))
File "s3-parallel-put", line 430, in main
bucket = connection.get_bucket(options.bucket)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 509, in get_bucket
return self.head_bucket(bucket_name, headers=headers)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 556, in head_bucket
response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request
My host is Ceph RGW S3, not Amazon AWS S3. Did I miss something? Thank you!
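For reference, the 400 Bad Request in the second run is probably because --bucket expects a bare bucket name rather than an s3:// URL. As for the 403, a minimal boto sketch for connecting to a non-AWS endpoint such as Ceph RGW looks like the following; this is not the script's actual code, and only the host/port values are taken from the command above:

```python
# Sketch: point boto at a custom S3-compatible endpoint. Credentials are
# read from the environment as usual.
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

connection = S3Connection(
    host='192.168.0.191',
    port=7480,
    is_secure=False,                        # the RGW here is plain HTTP
    calling_format=OrdinaryCallingFormat()  # path-style, safer for custom hosts
)
bucket = connection.get_bucket('reocar-test')  # bare name, no s3:// prefix
```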
Executing the command with an S3 bucket located in Sydney, Australia (ap-southeast-2) throws an error.
root@vmd001 [/path/to/folder/test]# /path/to/folder/s3-parallel-put --bucket=vmd001 --put=add --insecure --dry-run --limit=1 .
Traceback (most recent call last):
File "/path/to/folder/s3-parallel-put", line 420, in
sys.exit(main(sys.argv))
File "/path/to/folder/s3-parallel-put", line 391, in main
bucket = connection.get_bucket(options.bucket)
File "/usr/lib/python2.6/site-packages/boto/s3/connection.py", line 502, in get_bucket
return self.head_bucket(bucket_name, headers=headers)
File "/usr/lib/python2.6/site-packages/boto/s3/connection.py", line 549, in head_bucket
response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 301 Moved Permanently
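A 301 from boto usually means the request hit the wrong regional endpoint. A minimal sketch that connects straight to the bucket's region; this is not the tool's own code, and only the bucket name is taken from the command above:

```python
# Sketch: connect directly to the bucket's regional endpoint so boto is
# never redirected. Uses standard boto credentials from the environment.
import boto.s3

connection = boto.s3.connect_to_region('ap-southeast-2')  # Sydney
bucket = connection.get_bucket('vmd001')
```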
Hey, I'm trying to run it on our CI and I receive:
...
tools/s3-parallel-put: 1: tools/s3-parallel-put: --2019-07-24: not found
tools/s3-parallel-put: 2: tools/s3-parallel-put: Syntax error: "(" unexpected
Command exited with non-zero status 2
Any ideas?
The command:
sudo time tools/s3-parallel-put --quiet --processes=64 --put=stupid \
--bucket_region=s3-eu-central-1 --bucket=circleci-mim-results --prefix=test test.txt --verbose
Python 2.7.12
It would be useful to support python-magic. The builtin mimetypes library determines the type solely from the filename extension, and may fail if the extension is somehow abnormal.
In our case, we have a fairly large static website generated from a dynamic site, and a number of the files have trailing GET query strings in their names.
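A minimal sketch of what such a fallback could look like; the function name is hypothetical and python-magic is treated as an optional dependency:

```python
# Sketch: try extension-based guessing first, then fall back to sniffing
# the file contents with python-magic when the extension gives nothing.
import mimetypes

try:
    import magic  # python-magic, optional
except ImportError:
    magic = None

def guess_content_type(path):
    content_type, _ = mimetypes.guess_type(path)
    if content_type is None and magic is not None:
        # Inspect the bytes instead of trusting an abnormal extension.
        content_type = magic.from_file(path, mime=True)
    return content_type or 'application/octet-stream'
```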
If I run:
s3-parallel-put --bucket=mybucket --prefix=myfiles data.txt
then s3://mybucket/myfiles/data.txt is present (as expected).
But if I run this (with an absolute path):
s3-parallel-put --bucket=mybucket --prefix=myfiles /data.txt
then I expect
s3://mybucket/myfiles/data.txt
but I get
s3://mybucket/data.txt
As a workaround, I have to use relative paths if I want to use a prefix.
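A likely explanation, assuming the key is built with os.path.join: Python drops every component that precedes an absolute path, so the prefix is silently discarded.

```python
# Demonstration of the suspected cause: os.path.join discards components
# that come before an absolute path component.
import os

print(os.path.join('myfiles', 'data.txt'))   # myfiles/data.txt
print(os.path.join('myfiles', '/data.txt'))  # /data.txt -- prefix lost
```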
I'm getting this error
error: [Errno 104] Connection reset by peer
when trying to upload a file larger than 5 GB; for files smaller than 5 GB, it works fine.
Does the --processes option upload different files in parallel, or is the same file broken into chunks and uploaded in parallel?
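For reference, S3 caps a single PUT at 5 GB, so anything larger has to go through the multipart API regardless of parallelism. A minimal boto sketch of such an upload; this is not s3-parallel-put's own logic, and the bucket and file names are placeholders:

```python
# Sketch: split a large file into parts and upload them via the S3
# multipart API (single PUTs are limited to 5 GB).
import math
import os
import boto

CHUNK = 100 * 1024 * 1024  # 100 MB parts (S3's minimum part size is 5 MB)

connection = boto.connect_s3()
bucket = connection.get_bucket('my-bucket')       # placeholder
mp = bucket.initiate_multipart_upload('big-file.bin')

size = os.path.getsize('big-file.bin')
with open('big-file.bin', 'rb') as fp:
    for part in range(int(math.ceil(size / float(CHUNK)))):
        # Each call reads the next `size` bytes from the file position.
        mp.upload_part_from_file(fp, part_num=part + 1,
                                 size=min(CHUNK, size - part * CHUNK))
mp.complete_upload()
```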
Hi,
is it possible to add an option to change the target host? For example, to any S3 provider.
There seems to be a common issue of broken pipes when uploading to S3. Although people mention it for large files, I've experienced this consistently with a 91 kB file (perhaps it's the filename, who knows).
The fix seems to be to pass the host parameter in the S3 connection. Changing

if connection is None:
    connection = S3Connection(is_secure=options.secure)

to

HOST = 's3-us-west-2.amazonaws.com'
<snip>
if connection is None:
    connection = S3Connection(is_secure=options.secure, host=HOST)

has resolved the problem for me, albeit obviously not the proper fix.
This may not be an issue for your project, I'm just posting here so you can determine what action, if any, to take.
References:
fog/fog#824
boto/boto#621
http://reterwebber.wordpress.com/2013/08/22/broken-pipe-error-when-using-boto-s3/
boto/boto@75d5c7b#L0R340
I'm getting this exception when I run:
./s3-parallel-put --bucket-region=us-west-2 --bucket=my.bucket.name localfolder
Exception:
File "/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 1212, in connect
server_hostname=server_hostname)
File "/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 350, in wrap_socket
_context=self)
File "/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 566, in __init__
self.do_handshake()
File "/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 796, in do_handshake
match_hostname(self.getpeercert(), self.server_hostname)
File "/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 269, in match_hostname
% (hostname, ', '.join(map(repr, dnsnames))))
CertificateError: hostname 'my.bucket.name.s3.amazonaws.com' doesn't match either of '*.s3.amazonaws.com', 's3.amazonaws.com'
INFO:s3-parallel-put[statter-8316]:put 0 bytes in 0 files in 0.8 seconds (0 bytes/s, 0.0 files/s)
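The mismatch comes from the dots in the bucket name: with virtual-hosted-style addressing, my.bucket.name.s3.amazonaws.com cannot match the single-label *.s3.amazonaws.com wildcard. A common boto workaround is path-style addressing; a sketch, not the tool's own code:

```python
# Sketch: path-style addressing keeps the TLS hostname at s3.amazonaws.com,
# which the certificate does cover, so dotted bucket names work.
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

connection = S3Connection(calling_format=OrdinaryCallingFormat())
bucket = connection.get_bucket('my.bucket.name')
```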
I'm getting the following error when I execute this command:
/home/user/s3-parallel-put-master/s3-parallel-put --bucket=photos --put=stupid --insecure --dry-run --limit=1
ERROR:s3-parallel-put:missing source operand
What am I missing?
Hey @mishudark ,
I'm getting this error when I'm trying to upload files from a mounted directory
`Thu, 03 Sep 2020 18:12:34 GMT
/s3-bucket-name/s3-bucket-sub/1-12750/Basel%20Images/JPC/1968Simplicity_005_BoxVI%2000054.jpg
DEBUG:boto:Signature:
AWS ********
DEBUG:boto:Final headers: {'Content-Length': '12009858', 'Content-MD5': 'nhtriFT3wzWRS9lpDxtRAQ==', 'Expect': '100-Continue', 'Date': 'Thu, 03 Sep 2020 18:12:34 GMT', 'User-Agent': 'Boto/2.49.0 Python/2.7.5 Linux/3.10.0-1127.19.1.el7.x86_64', 'Content-Type': 'image/jpeg', 'Authorization': u'AWS *******}
DEBUG:boto:encountered error exception, reconnecting
DEBUG:boto:establishing HTTPS connection: host=*****.s3.amazonaws.com, kwargs={'port': 443, 'timeout': 70}
DEBUG:boto:Token: None
DEBUG:boto:StringToSign:
PUT
image/jpeg`
However, I can upload a test folder on the mounted directory, so I know I am able to use the tool to push files up.
The script didn't work with a bucket I have in the eu-west-1 region and gave the error: [Errno 104] Connection reset by peer.
Adding --host=s3-eu-west-1.amazonaws.com
seemed to make it work again.
By now, I've accidentally uploaded multiple days' worth of data to S3, only to find it unusable because the content type defaulted to application/octet-stream. This renders images, HTML, CSS, etc. unusable by default.
The default logic should be safe to use; therefore, the default content-type option should be guess.
This will need to take the gzip options logic into account, to ensure it doesn't break gzip when no content type is specified.
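A sketch of what the proposed default could look like; the function is an assumption about shape, not the project's actual code. mimetypes reports the gzip encoding separately from the type, which keeps the gzip path intact:

```python
# Sketch of a gzip-safe 'guess' default: Content-Type describes the payload,
# while gzip compression is signalled via Content-Encoding.
import mimetypes

def default_headers(filename, tool_gzips_content):
    content_type, encoding = mimetypes.guess_type(filename)
    headers = {'Content-Type': content_type or 'application/octet-stream'}
    if tool_gzips_content or encoding == 'gzip':
        # Either the uploader compressed the body itself, or the file on
        # disk is already a .gz artifact.
        headers['Content-Encoding'] = 'gzip'
    return headers

print(default_headers('style.css', False))  # {'Content-Type': 'text/css'}
```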
Tried to push one of my large-ish repositories as a test (2 million-ish files, about 8.5 TB). It made it about 2 TB in before the server (a Sun Fire X4140 with 12 CPU cores and 16 GB of RAM) ran out of memory and the push was killed by the kernel. Nothing else was running on the system, and system logs show that all physical memory and swap was eaten up by python.
This was using the "add" mode.
I'm getting a:
TypeError: 'NoneType' object is not iterable
on line 202 when I called:
./s3-parallel-putter --bucket=bucketname --prefix=foo somefolder/*
Apparently, the options.headers variable is None when I call it this way. I can still make it work if I call it with a phony header:
./s3-parallel-putter --bucket=bucketname --prefix=foo --header=a:a somefolder/*
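A likely one-line guard, assuming line 202 iterates options.headers directly; the parsing shown here is hypothetical:

```python
# Hypothetical fix sketch: treat a missing --header option as an empty
# list instead of None before iterating.
def parse_headers(raw_headers):
    headers = {}
    for header in raw_headers or []:
        name, value = header.split(':', 1)
        headers[name.strip()] = value.strip()
    return headers

print(parse_headers(None))     # {}
print(parse_headers(['a:a']))  # {'a': 'a'}
```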
Python 2 is becoming more and more obsolete; services such as AWS CodeBuild and others are starting to drop Python 2 support. Will this get adapted to Python 3?
The LICENSE file says MIT, but the header of the script itself says GPL3.
Installed: Python 2.7.6, boto.
I am getting this error:
Traceback (most recent call last):
File "/s3-parallel-put", line 410, in
sys.exit(main(sys.argv))
File "/s3-parallel-put", line 381, in main
bucket = connection.get_bucket(options.bucket)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 503, in get_bucket
return self.head_bucket(bucket_name, headers=headers)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 522, in head_bucket
response = self.make_request('HEAD', bucket_name, headers=headers)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 665, in make_request
retry_handler=retry_handler
File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1071, in make_request
retry_handler=retry_handler)
File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1030, in _mexe
raise ex
socket.gaierror: [Errno -2] Name or service not known
May I know how to resolve it?
Thanks.
I have not personally used s3-parallel-put for several years. Although I'm happy to merge pull requests, I do not have the time or resources to test them or do further development.
If you would like to take over maintenance of the project, please comment here and I'll give you write access to the repository.
Can we add a feature for the KMS SSE header?
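The header names below are the real S3 ones; how the tool would expose the option, and the key alias shown, are assumptions. A minimal boto sketch:

```python
# Sketch: request SSE-KMS by sending the server-side-encryption headers
# with each PUT. Bucket, key, and KMS alias are placeholders.
import boto

connection = boto.connect_s3()
bucket = connection.get_bucket('my-bucket')
key = bucket.new_key('path/to/object')
key.set_contents_from_filename('local-file', headers={
    'x-amz-server-side-encryption': 'aws:kms',
    'x-amz-server-side-encryption-aws-kms-key-id': 'alias/my-key',  # placeholder
})
```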
Hi @mishudark ,
I've added an if/else clause to support connecting with tokens; this allows s3-parallel-put to work with AWS setups using Okta IdP, which supports authentication only via ephemeral tokens. If no session-token variable is present, it creates the connection with the ID/secret credentials as usual.
PR here: #55
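For reference, a minimal sketch of the pattern described, assuming the standard AWS_SESSION_TOKEN variable; the PR's actual code may differ:

```python
# Sketch: boto's S3Connection accepts a security_token for STS/ephemeral
# credentials; fall back to plain id/secret auth when no token is set.
import os
from boto.s3.connection import S3Connection

token = os.environ.get('AWS_SESSION_TOKEN')
if token:
    connection = S3Connection(security_token=token)
else:
    connection = S3Connection()
```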
Reported by @duchy in fef435b#commitcomment-13084920:
The current version has been broken by this change; the bucket_region option should first be added to the previous definition. Besides that, several connect() calls should also be updated.
How do I set up the S3 access key, secret key, and bucket? Also, is there a command to copy from one folder to another? Can anyone please help me?
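boto reads the standard credential environment variables, so exporting AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the shell before running the tool should be enough. A quick connectivity check; the bucket name is a placeholder:

```python
# boto picks up AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the
# environment automatically; this just verifies the bucket is reachable.
import boto

connection = boto.connect_s3()
print(connection.get_bucket('my-bucket'))
```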
It would be useful if s3-parallel-put supported --include and --exclude patterns, à la rsync.
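A sketch of how such filtering could work; the function and its semantics are assumptions, not an existing feature:

```python
# Sketch: rsync-style include/exclude filtering with shell glob patterns.
import fnmatch

def should_upload(path, includes, excludes):
    if includes and not any(fnmatch.fnmatch(path, p) for p in includes):
        return False
    return not any(fnmatch.fnmatch(path, p) for p in excludes)

print(should_upload('img/logo.png', ['*.png'], []))   # True
print(should_upload('tmp/cache.dat', [], ['tmp/*']))  # False
```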
Hi. I am using s3-parallel-put and it's working very well, thank you. I have a question about the --put=add parameter, which avoids re-uploading a file if it has already been uploaded to S3. How sensitive is this check? If I try to upload a file and a file with the same name already exists, will it upload anyway, or does it also check attributes such as size?
I'm attempting to copy files from various subdirectories to the base of my S3 bucket. Here's an example:
~/s3-parallel-put --bucket=mybackups --prefix /backups/12413412/mysql-backup/backups/myappdb-02/xtrabackups /mydb-server-01-backup-20160926-091734.tar.gz
In this example I want the file mydb-server-01-backup-20160926-091734.tar.gz copied to the base of my bucket.
When I run s3-parallel-put I'm getting this:
INFO:s3-parallel-put[statter-46529]:put 0 bytes in 0 files in 0.0 seconds (0 bytes/s, 0.0 files/s)
Am I doing something wrong here?
This Time piece on doctor payments took a giant dataset and turned it into flat files and predictable URLs. To do it, @wilson428 and pratheekrebala used s3-parallel-put to get 350,000+ files into S3 in a reasonable timeframe, as documented here:
https://source.opennews.org/en-US/articles/case-flat-files-big-data-projects/
You can close this, there's nothing to discuss - just wanted to make sure you knew it was helping people. 😸
I get the following error when I try to run s3-parallel-put on CentOS release 5.6 (Final). (It works well for me on CentOS 6.3.)
File "/bin/s3-parallel-put", line 81
    with self.file_object_cache.open(self.filename) as file_object:
Presumably a SyntaxError: CentOS 5 ships Python 2.4, which predates the with statement (introduced in Python 2.5 behind a __future__ import, standard from 2.6).