

grunt-aws

A Grunt interface to the Amazon Web Services Node.js SDK (aws-sdk)


Getting Started

This plugin requires Grunt 0.4.x

If you haven't used Grunt before, be sure to check out the Getting Started guide, as it explains how to create a Gruntfile as well as install and use Grunt plugins. Once you're familiar with that process, you may install this plugin with this command:

npm install --save-dev grunt-aws

Once the plugin has been installed, it may be enabled inside your Gruntfile with this line of JavaScript:

grunt.loadNpmTasks('grunt-aws');

Supported Services

This plugin aims to provide a task for each service on AWS. Currently, however, it supports only the following:


The "s3" task

Features

  • Fast
  • Simple
  • Auto Gzip
  • Smart Local Caching

Usage

To upload all files inside build/ into my-awesome-bucket:

  grunt.initConfig({
    aws: grunt.file.readJSON("credentials.json"),
    s3: {
      options: {
        accessKeyId: "<%= aws.accessKeyId %>",
        secretAccessKey: "<%= aws.secretAccessKey %>",
        bucket: "my-awesome-bucket"
      },
      build: {
        cwd: "build/",
        src: "**"
      }
    }
  });

See the complete example here

Options

accessKeyId required (String)

Amazon access key id

secretAccessKey required (String)

Amazon secret access key

bucket required (String)

Bucket name

sessionToken (String)

Amazon session token, required if you're using temporary access keys

region (String)

Default US Standard

For all possible values, see Location constraints.

sslEnabled (Boolean)

Default true

Whether SSL is enabled for requests

maxRetries (Number)

Default 3

Number of retries for a request

access (String)

Default "public-read"

File permissions, must be one of:

  • "private"
  • "public-read"
  • "public-read-write"
  • "authenticated-read"
  • "bucket-owner-read"
  • "bucket-owner-full-control"

gzip (Boolean)

Default true

Gzips the file before uploading and sets the appropriate headers

Note: The default is true because this task assumes you're uploading content to be consumed by browsers developed after 1999. On the terminal, you can retrieve a file using curl --compressed <url>.

dryRun (Boolean)

Default false

Performs a preview run displaying what would be modified

concurrency (Number)

Default 20

Number of S3 operations that may be performed concurrently

overwrite (Boolean)

Default true

Upload files, whether or not they already exist (set to false if you never update existing files).

copyFile (String)

Default None

Destination path within S3 to copy the file to, e.g. my-bucket2/output/d.txt

copyFrom (String)

Default None

Source path within S3 to copy all files from, e.g. my-bucket2/output/

cache (Boolean)

Default true

Skip uploading files which have already been uploaded (same ETag). Each target has its own options cache, so if you change the options object, files will be forced to re-upload.

cacheTTL (Number)

Default 60*60*1000 (1hr)

Number of milliseconds to wait before retrieving the object list from S3. If you only modify this bucket from grunt-aws on one machine then it can be Infinity if you like. To disable cache, set it to 0.

headers (Object)

Set HTTP headers; see the putObject docs

The following are allowed:

  • ContentLength
  • ContentType (will override mime type lookups)
  • ContentDisposition
  • ContentEncoding
  • CacheControl (accepts a string or converts numbers into header as max-age=<num>, public)
  • Expires (converts dates to strings with toUTCString())
  • GrantFullControl
  • GrantRead
  • GrantReadACP
  • GrantWriteACP
  • ServerSideEncryption ("AES256")
  • StorageClass ("STANDARD" or "REDUCED_REDUNDANCY")
  • WebsiteRedirectLocation

The properties not listed are still available as:

  • ACL - access option above
  • Body - the file to be uploaded
  • Key - the calculated file path
  • Bucket - bucket option above
  • Metadata - meta option below
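The CacheControl and Expires conversions noted in the list above can be sketched in plain JavaScript (an illustration of the documented behavior, not the plugin's actual source; normalizeHeaders is a hypothetical helper):

```javascript
// Sketch of the header conversions described above (illustrative only;
// the plugin's internal implementation may differ).
function normalizeHeaders(headers) {
  const out = Object.assign({}, headers);
  // Numbers become a max-age directive, per the CacheControl note above.
  if (typeof out.CacheControl === "number") {
    out.CacheControl = "max-age=" + out.CacheControl + ", public";
  }
  // Dates are serialized with toUTCString(), per the Expires note above.
  if (out.Expires instanceof Date) {
    out.Expires = out.Expires.toUTCString();
  }
  return out;
}

console.log(normalizeHeaders({ CacheControl: 900 }).CacheControl);
// "max-age=900, public"
```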

meta (Object)

Set custom HTTP headers

All custom headers will be prefixed with x-amz-meta-. For example {Foo:"42"} becomes x-amz-meta-foo:42.
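That prefixing can be sketched as follows (toAmzMetaHeaders is a hypothetical helper illustrating the documented mapping, not the plugin's own code):

```javascript
// Illustrative sketch of the x-amz-meta- prefixing described above.
function toAmzMetaHeaders(meta) {
  const headers = {};
  for (const key of Object.keys(meta)) {
    headers["x-amz-meta-" + key.toLowerCase()] = String(meta[key]);
  }
  return headers;
}

console.log(toAmzMetaHeaders({ Foo: "42" }));
// { "x-amz-meta-foo": "42" }
```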

charset (String)

Add a charset to every Content-Type header, for example: utf-8. If this is not set, all text files get a charset of UTF-8 by default.

mime (Object)

Define your own mime types

This object will be passed into mime.define()

mimeDefault (String)

Default "application/octet-stream"

The default mime type for when mime.lookup() fails
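The lookup-with-fallback behavior can be sketched like this (a simplified stand-in: the real plugin uses the mime module's full database, not the tiny table below):

```javascript
// Hypothetical sketch of falling back to mimeDefault when a lookup fails.
// "lookupTable" is a stand-in for the real mime module's database.
function contentTypeFor(filename, mimeDefault) {
  const lookupTable = { ".html": "text/html", ".css": "text/css" };
  const dot = filename.lastIndexOf(".");
  const ext = dot === -1 ? "" : filename.slice(dot);
  return lookupTable[ext] || mimeDefault;
}

console.log(contentTypeFor("sha", "application/octet-stream"));
// "application/octet-stream" — extensionless files hit the default
```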

createBucket (Boolean)

Default false

Create the bucket if it does not exist. Use the bucket option to name the bucket. Use the access and region as parameters when creating the bucket.

enableWeb (Boolean or Object)

Default false

Configure static web hosting for the bucket. Set to true to enable the default hosting with the IndexDocument set to index.html. Otherwise, set the value to be an object that matches the parameters required for WebsiteConfiguration in putBucketWebsite docs.

Caching

First run will deploy like:

Running "s3:uat" (s3) task
Retrieving list of existing objects...
>> Put 'public/vendor/jquery.rest.js'
>> Put 'index.html'
>> Put 'scripts/app.js'
>> Put 'styles/app.css'
>> Put 'public/img/loader.gif'
>> Put 'public/vendor/verify.notify.js'
>> Put 6 files

Subsequent runs should look like:

Running "s3:uat" (s3) task
>> No change 'index.html'
>> No change 'public/vendor/jquery.rest.js'
>> No change 'styles/app.css'
>> No change 'scripts/app.js'
>> No change 'public/img/loader.gif'
>> No change 'public/vendor/verify.notify.js'
>> Put 0 files

Explained Examples

s3: {
  //provide your options...

  options: {
    accessKeyId: "<%= aws.accessKeyId %>",
    secretAccessKey: "<%= aws.secretAccessKey %>",
    bucket: "my-bucket"
  },

  //then create some targets...

  //upload all files within build/ to root
  build: {
    cwd: "build/",
    src: "**"
  },

  //upload all files within build/ to output/
  move: {
    cwd: "build/",
    src: "**",
    dest: "output/"
  },

  //upload and rename an individual file
  specificFile: {
    src: "build/a.txt",
    dest: "output/b.txt"
  },

  //upload and rename many individual files
  specificFiles: {
    files: [{
      src: "build/a.txt",
      dest: "output/b.txt"
    },{
      src: "build/c.txt",
      dest: "output/d.txt"
    }]
  },

  //upload and rename many individual files (shorter syntax)
  specificFilesShort: {
    "output/b.txt": "build/a.txt",
    "output/d.txt": "build/c.txt"
  },

  //upload the img/ folder and all its files
  images: {
    src: "img/**"
  },

  //upload the docs/ folder and its pdf and txt files
  documents: {
    src: "docs/**/*.{pdf,txt}"
  },

  //upload the secrets/ folder and all its files to a different bucket
  secrets: {
    //override options
    options: {
      bucket: "my-secret-bucket"
    },
    src: "secrets/**"
  },

  //upload the public/ folder with a custom Cache-control header
  longTym: {
    options: {
      headers: {
        CacheControl: 'max-age=900, public, must-revalidate'
      }
    },
    src: "public/**"
  },

  //upload the public/ folder with a 2 year cache time
  longerTym: {
    options: {
      headers: {
        CacheControl: 630720000 //max-age=630720000, public
      }
    },
    src: "public/**"
  },

  //upload the public/ folder with a specific expiry date
  beryLongTym: {
    options: {
      headers: {
        Expires: new Date('2050') //Sat, 01 Jan 2050 00:00:00 GMT
      }
    },
    src: "public/**"
  },

  //Copy file directly from s3 bucket to a different bucket
  copyFile: {
    src: "build/c.txt",
    dest: "output/d.txt",
    options: {
      copyFile: "my-bucket2/output/d.txt"
    }
  },

  //Copy all files in directory
  copyFiles: {
    src: "public/**",
    options: {
      copyFrom: 'my-bucket2/public'
    }
  }

}

References

Todo

  • Download operation
  • Delete unmatched files

The "route53" task

Features

  • Create DNS records using simple configuration
  • Smart Local Caching

Usage

To create two new records - the first resolving to an IP address and the second resolving to the domain name of a bucket:

  grunt.initConfig({
    aws: grunt.file.readJSON("credentials.json"),
    route53: {
      options: {
        accessKeyId: "<%= aws.accessKeyId %>",
        secretAccessKey: "<%= aws.secretAccessKey %>",
        zones: {
          'mydomain.org': [{
            name: 'record1.mydomain.org',
            type: 'A',
            value: ['1.1.1.1']
          },{
            name: 'record2.mydomain.org',
            type: 'CNAME',
            value: ['record2.mydomain.org.s3-website-ap-southeast-2.amazonaws.com']
          }]
        }
      }
    }
  });

Options

accessKeyId required (String)

Amazon access key id

secretAccessKey required (String)

Amazon secret access key

assumeRole (Boolean)

Use AWS IAM Role instead of credentials

zones required (Object)

An object containing names of zones and a list of DNS records to be created for this zone in Route 53.

Each record requires name, type and value to be set. The name property is the new domain to be created. The type is the DNS record type, e.g. A, CNAME, etc. The value is a list of domain names or IP addresses that the DNS entry will resolve to.

It is also possible to specify any of the additional options described in the ResourceRecordSet section of the changeResourceRecordSets method. For example, AliasTarget could be used to set up an alias record.
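For illustration, records from the zones config above map onto a change batch roughly like this (toChangeBatch is a hypothetical helper; the plugin's internal mapping may differ):

```javascript
// Sketch of mapping config records onto the ChangeBatch shape used by
// Route 53's changeResourceRecordSets call (illustrative helper only).
function toChangeBatch(records, ttl) {
  return {
    Changes: records.map(function (record) {
      return {
        Action: "UPSERT",
        ResourceRecordSet: {
          Name: record.name,
          Type: record.type,
          TTL: ttl,
          ResourceRecords: record.value.map(function (v) {
            return { Value: v };
          })
        }
      };
    })
  };
}

const batch = toChangeBatch(
  [{ name: 'record1.mydomain.org', type: 'A', value: ['1.1.1.1'] }],
  300
);
console.log(batch.Changes[0].ResourceRecordSet.ResourceRecords);
// [ { Value: '1.1.1.1' } ]
```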

TTL (Number)

Default 300

Default TTL of any new Route 53 records.

dryRun (Boolean)

Default false

Performs a preview run displaying what would be modified

concurrency (Number)

Default 20

Number of Route53 operations that may be performed concurrently

cache (Boolean)

Default true

Cache data returned from Route 53.

References

Todo

  • Better support for alias records
  • Create zones?

The "cloudfront" task

Features

  • Invalidate a list of files, up to the maximum allowed by CloudFront, like /index.html and /pages/whatever.html
  • Update CustomErrorResponses
  • Update OriginPath on the first origin in the distribution, other origins will stay the same
  • Update DefaultRootObject

Usage

A sample configuration is below. Each property must follow the requirements from the CloudFront updateDistribution Docs.

  grunt.initConfig({
    aws: grunt.file.readJSON("credentials.json"),
    cloudfront: {
      options: {
        accessKeyId: "<%= aws.accessKeyId %>",
        secretAccessKey: "<%= aws.secretAccessKey %>",
        distributionId: '...',
      },
      html: {
        options: {
          invalidations: [
            '/index.html',
            '/pages/whatever.html'
          ],
          customErrorResponses: [ {
            ErrorCode: 0,
            ErrorCachingMinTTL: 0,
            ResponseCode: 'STRING_VALUE',
            ResponsePagePath: 'STRING_VALUE'
          } ],
          originPath: 'STRING_VALUE',
          defaultRootObject: 'STRING_VALUE'
        }
      }
    }
  });

Options

accessKeyId required (String)

Amazon access key id

secretAccessKey required (String)

Amazon secret access key

distributionId required (String)

The CloudFront Distribution ID to be acted on

invalidations optional (Array)

An array of strings that are each a root relative path to a file to be invalidated

customErrorResponses optional (Array)

An array of objects with the properties shown above

originPath optional (String)

A string to set the origin path for the first origin in the distribution

defaultRootObject optional (String)

A string to set the default root object for the distribution
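Since CloudFront invalidation paths must begin with a slash, a normalization step can be sketched like this (normalizeInvalidations is a hypothetical helper, not a plugin option):

```javascript
// Sketch: CloudFront invalidation paths must begin with "/", so a helper
// like this (hypothetical; not part of the plugin) could normalize them.
function normalizeInvalidations(paths) {
  return paths.map(function (p) {
    return p.charAt(0) === "/" ? p : "/" + p;
  });
}

console.log(normalizeInvalidations(["index.html", "/pages/whatever.html"]));
// [ "/index.html", "/pages/whatever.html" ]
```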

The "sns" task

Features

  • Publish to a SNS topic

Usage

To publish a message:

  grunt.initConfig({
    aws: grunt.file.readJSON("credentials.json"),
    sns: {
      options: {
        accessKeyId: "<%= aws.accessKeyId %>",
        secretAccessKey: "<%= aws.secretAccessKey %>",
        region: '<%= aws.region %>',
        target: 'AWS:ARN:XXXX:XXXX:XXXX',
        message: 'You got it',
        subject: 'A Notification'
      }
    }
  });

Options

accessKeyId required (String)

Amazon access key id

secretAccessKey required (String)

Amazon secret access key

region required (String)

The region that the Topic is hosted under

target required (String)

The AWS ARN for the topic

message required (String)

The message content for the notification

subject required (String)

The subject to use for the notification

References

Todo

  • Add other SNS functionality

MIT License

Copyright © 2013 Jaime Pillora <[email protected]>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

grunt-aws's People

Contributors

bemjb, bxjx, ccorcos, cheeaun, chrisui, chug2k, dcrec1, iisrail, jmalonzo, jpillora, jrit, lukebussey, miguel250, mmeyeralitmetrik, nephiw, rchl, solcre-gr, vepasto, wdalmut, woodcoder


grunt-aws's Issues

Bug: s3: uploading to dest with slash in beginning should fail but doesn't

When you specify a "dest" option that begins with a slash, e.g.:

{
  src: "**",
  dest: "/myfolder/"
}

The task output suggests it has succeeded, e.g.:

Running "s3:assets" (s3) task
Retrieving list of existing objects prefixed with '/myfolder/'...
Put '/myfolder/myfile.html'

When actually nothing is uploaded at all.

The task should probably fail in these cases, as a slash is not valid at the beginning of an object identifier in S3.

Howto: Dump local S3 cache?

I recently started using your "grunt-aws" plugin with deploying my project and I absolutely LOVE it! However, I'm running into an issue. I used it to deploy files to S3. In an attempt to update some files in my S3 bucket online, I accidentally deleted some files from my bucket. I thought I could just run my Grunt deploy task again (the task which runs my grunt-aws s3 configuration) to re-upload the missing file, but for some reason it will not upload.

I noticed that the S3 portion of this plugin supports "Smart Local Caching". Is there some way to clear/dump that local cache so I can just re-deploy all project files fresh to my S3 bucket? Not seeing any documentation in the plugin's README, or may have missed it. Is the "overwrite" option what I need? Documentation says it's set to "true" by default, so theoretically it should be overwriting (re-deploying) all files, right?

Any help would be much appreciated. Thank you!

Buckets in Different Regions

Not sure this is a big issue for most people and I only discovered it by mistake (set the wrong region). Overriding the default bucket with one from a different region results in the following:

NetworkingError: Hostname/IP doesn't match certificate's altnames

_.detect is not a function - detect is deprecated in lodash 4.0.0+

When you try to create a new bucket using the s3 service the program is unable to detect if it already exists because _.detect method is deprecated in lodash 4.0.0 and on, resulting in the following error:

Running "s3:build" (s3) task
##vso[task.debug]load strings from: F:\Desarrollo\vewd-app-base\node_modules\vsts-task-lib\lib.json
##vso[task.debug]load loc strings from: F:\Desarrollo\vewd-app-base\node_modules\vsts-task-lib\Strings\resources.resjson\en-US\resources.resjson
##vso[task.debug]task result: Failed
##vso[task.complete result=Failed;]Unhandled: _.detect is not a function
Unhandled: _.detect is not a function

It needs to be replaced with _.find method for it to work.

More info here https://github.com/lodash/lodash/wiki/Deprecations

Cheers.

grunt-aws Not Processing Any Files

I am trying to automate uploading files to an S3 bucket by using: https://github.com/jpillora/grunt-aws#the-s3-task

My Gruntfile.js "compiles" correctly, but when executed it simply hangs when it gets to the S3 portion -- without errors.

The following is my Gruntfile.js:

module.exports = function(grunt) {
    grunt.initConfig({
        pkg: grunt.file.readJSON('package.json'),
        aws: grunt.file.readJSON('aws.json'),
        concat: {
            options: {
                separator: '\n',
                stripBanners: {
                    block: true
                }
            },
            scripts: {
                src: [
                    'scripts/bootstrap.js', 
                    'scripts/bootstrap-select.js', 
                    'scripts/bootbox.js',
                    'scripts/app.js',
                    'scripts/jquery.validate.js', 
                    'scripts/additional-methods.js',
                    'scripts/captcha.js',
                    'scripts/mail.js',
                    'scripts/render.js'
                ],
                dest: 'scripts/bundle.js'
            },
            style: {
                src: [
                    'style/jquery-ui.css', 
                    'style/bootstrap.css', 
                    'style/bootstrap-select.css',
                    'style/en-us.css'
                ],
                dest: 'style/bundle.css'
            }
        },
        uglify: {
            options: {
                banner: '/*! <%= grunt.template.today("dd-mm-yyyy") %> */\n',
                mangle: {
                    except: ['jQuery']
                }
            },
            scripts: {
                files: {
                    'scripts/bundle.min.js': 'scripts/bundle.js'    
                }
            }
        },
        cssmin: {
            target: {
                files: [{
                    expand: true,
                        cwd: 'style',
                        src: ['bundle.css'],
                        dest: 'style',
                        ext: '.min.css'
                    }]
            }   
        },
        s3: {
            options: {
                accessKeyId: '<%= aws.key %>',
                secretAccessKey: '<%= aws.secret %>',
                bucket: '<%= aws.bucket %>',
                region: '<%= aws.region %>',
                access: 'public-read'
            },
            upload: {
                headers: {
                    CacheControl: 604800,
                    Expires: new Date(Date.now() + 604800000).toUTCString()
                },
                cwd: "/",
                src: "**"
            }
        }
    });

    grunt.loadNpmTasks('grunt-contrib-uglify');
    grunt.loadNpmTasks('grunt-contrib-watch');
    grunt.loadNpmTasks('grunt-contrib-concat');
    grunt.loadNpmTasks('grunt-contrib-cssmin');
    grunt.loadNpmTasks('grunt-aws');

    grunt.registerTask('default', ['concat', 'uglify', 'cssmin', 's3']);
};

The associated output of sudo grunt s3 -v --force is as follows:

Loading "cloudfront.js" tasks...OK
+ cloudfront
Loading "route53.js" tasks...OK
+ route53
Loading "s3.js" tasks...OK
+ s3
Loading "aws.js" tasks...OK
>> No tasks were registered or unregistered.
Loading "cache-mgr.js" tasks...OK
>> No tasks were registered or unregistered.
Loading "Gruntfile.js" tasks...OK
+ default

Running tasks: s3

Running "s3" task

Running "s3:upload" (s3) task
Verifying property s3.upload exists in config...OK

At that point, it hangs...

Any ideas?

Options for different environments

Would it be possible to extend the config in a way that different environments can be supported?

I'm thinking about sth like the following:

/*..*/
  dev: {
    s3: {
        options: {
            accessKeyId: "<%= aws.accessKeyId %>",
            secretAccessKey: "<%= aws.secretAccessKey %>",
            bucket: "my-dev-bucket"
        }
      }
  },
  staging: {
    s3: {
        options: {
            accessKeyId: "<%= aws.accessKeyId %>",
            secretAccessKey: "<%= aws.secretAccessKey %>",
            bucket: "my-staging-bucket"
        }
      }
  },
  prod: {
    s3: {
        options: {
            accessKeyId: "<%= aws.accessKeyId %>",
            secretAccessKey: "<%= aws.secretAccessKey %>",
            bucket: "my-prod-bucket"
        }
      }
  }
/* .. */
}

Then the s3 task can be run as follows

grunt s3 --target=dev

Feature: Delete files in bucket

It would be nice if there was a way to delete some or all of the files in the bucket on deploy. For example, it could be used to ensure that files removed from the project repository got removed from S3 on build.

With the S3 task, charset adds to every mime-type

I am adding dozens of files to different S3 buckets depending upon if I am releasing to staging, production, or beta. So I already have the 6 different S3 tasks (each one has 2 cache control configurations), but now I need to set the charset on html, css, and map files so they render properly in the browser.

Upon reading through the code, I see that the charset is added here: https://github.com/jpillora/grunt-aws/blob/master/tasks/services/s3.js#L325 and that the mime type is not consulted to determine if they should be added so I get Content-Types like this: image/gif; charset=utf-8 which is not ideal.

I take this to mean that I need to add more tasks to separate the configuration of utf-8 encoded files separately of my image files. Is there a better way of handling this? Can I define the mime type to include charset for example?

Access Denied when acessKeyId has some restrictions

Hi, I'm getting Access Denied when trying to upload files to my bucket.

The accessKey that I need to use has some restrictions in the S3 directory, it can only see 1 bucket, and can't list/access others.

The IT guys at my company are setting the S3 access with this kind of policy now. When the keys are set with "fullaccess" the API works fine, but now that they are changing it I'm getting this error.

Is there anyway to define that kind of access in options?

Here's how I'm using this api to upload to the bucket.

        aws: grunt.file.readJSON("credentials.json"),
        s3: {
            options: {
                accessKeyId: "<%= aws.accessKeyId %>",
                secretAccessKey: "<%= aws.secretAccessKey %>",
                bucket: "cdn-html5",
                cacheTTL: 0
            },
            build: {
                files: [
                    {
                        cwd: "<%= yeoman.dist %>",
                        src: ["scripts/**", "styles/**", "images/**", "doc/**", "swf/**"],
                        dest: "reader_api/<%= yeoman.version %>/"
                    },
                    {
                        cwd: "<%= yeoman.dist %>",
                        src: ["static/**"],
                        dest: "reader_api/"
                    }
                ]
            }
        }

I really don't know if its something with the access settings, or with the api. I'm trying to see with the IT department too.

NOTE: Opening in the 3Hub app (for Mac) I can login with the credentials and read/write the 'cdn-html5' bucket without any problem.

GZip encoding is breaking video media

When uploading video to S3 with this grunt task, the default GZip behavior uploads the content gzipped with the corresponding Content-Encoding header.

Problem is when playing back video or attempting to loop that video content, I'm met with 'net::ERR_CONTENT_DECODING_FAILED' as a response. This happens when viewing content directly from the S3 url or via Cloudfront.

ContentType of "text/plain" not honored for extensionless file

Great plugin—thanks for your work on it.

I have an extensionless text file containing a SHA that needs to be uploaded with a ContentType of "text/plain". I have noticed when it makes it to S3 its content type is "binary/octet-stream" instead.

Configuration:

s3: {
  options: {
    accessKeyId: '<%= aws.key %>',
    secretAccessKey: '<%= aws.secret %>',
    bucket: 'www.mysite.com'
  },
  deploy: {
    files: [
      {
        src: 'sha',
        cwd: '<%= compile_dir %>',
        options: {
          headers: { ContentType: 'text/plain' }
        }
      }
    ]
  }
}

task stalled

created a very simple example
Gruntfile.js

module.exports = function(grunt) {
  grunt.loadNpmTasks('grunt-aws');

  grunt.initConfig({
    aws: grunt.file.readJSON(".aws-credentials.json"),
    s3: {
      options: {
        accessKeyId: "<%= aws.accessKeyId %>",
        secretAccessKey: "<%= aws.secretAccessKey %>",
        bucket: "remoto-releases"
      },
      build: {
        cwd: "electron-out/make",
        src: "**"
      }
    }
  });
}

Then, I run:

$ grunt s3 -v
Initializing
Command-line options: --verbose

Reading "Gruntfile.js" Gruntfile...OK

Registering Gruntfile tasks.

Registering "grunt-aws" local Npm module tasks.
Reading /Users/fredguth/Code/remoto/remoto-app/node_modules/grunt-aws/package.json...OK
Parsing /Users/fredguth/Code/remoto/remoto-app/node_modules/grunt-aws/package.json...OK

Registering "/Users/fredguth/Code/remoto/remoto-app/node_modules/grunt-aws/tasks/services" tasks.
Loading "cloudfront.js" tasks...OK
+ cloudfront
Loading "route53.js" tasks...OK
+ route53
Loading "s3.js" tasks...OK
+ s3
Loading "sns.js" tasks...OK
+ sns
Loading "aws.js" tasks...OK
>> No tasks were registered or unregistered.
Loading "cache-mgr.js" tasks...OK
>> No tasks were registered or unregistered.
Reading .aws-credentials.json...OK
Parsing .aws-credentials.json...OK
Initializing config...OK
Loading "Gruntfile.js" tasks...OK
>> No tasks were registered or unregistered.

Running tasks: s3

Running "s3" task

Running "s3:build" (s3) task
Verifying property s3.build exists in config...OK
Files: , Remoto-1.1.53.dmg, Remoto-darwin-x64-1.1.53.zip
Options: access="public-read", concurrent=20, cacheTTL=3600000, dryRun=false, gzip, cache, overwrite, createBucket=false, enableWeb=false, signatureVersion="v4", accessKeyId="something", secretAccessKey="something", bucket="remoto-releases"
^C

(removed actual credentials values)

The task stopped at this point and didn't seem to progress.

Specifying Cache Control

I have the following, but it doesn't appear that the Cache-Control and Expires headers are actually being set on the files in S3.
s3: {
  options: {
    accessKeyId: '<%= aws.key %>',
    secretAccessKey: '<%= aws.secret %>',
    bucket: '<%= aws.bucket %>',
    region: '<%= aws.region %>',
    access: 'public-read'
  },
  upload: {
    headers: {
      CacheControl: 604800,
      Expires: new Date(Date.now() + 604800000).toUTCString()
    },
    cwd: ".",
    src: "**"
  }
}
When I inspect the file in Chrome (for example: https://s3-us-west-2.amazonaws.com/sitecorearizona.org/scripts/bundle.min.js ) I do not see either header included in the response?

HTML file being uploaded as binary

Hi,
Thanks for this script.
When I try running the s3 task, all of the files are uploaded fine except index.html, which is uploaded as a binary.
Here is my config:

      s3: {
      options: {
        accessKeyId: "<%= aws.key %>",
        secretAccessKey: "<%= aws.secret %>",
        bucket: "pushtest"
      },
      build: {
        cwd: "dist/",
        src: "**"
      }
    }

When it runs, it uploads all the files, and index.html has the proper content-type of text/html in the s3 dashboard, but it looks like gobbledygook in the browser and when I download it in cyberduck it looks like binary.

0.5.1 S3 fails with 'Cannot set property 'Timestamp' of undefined'

The previous version was working; after upgrading to 0.5.1 I receive the following error:

Running "s3:dist" (s3) task
Retrieving list of existing objects prefixed with 'favicon.ico'...
Warning: Cannot set property 'Timestamp' of undefined Use --force to continue.

I've deleted my node_modules folder and reinstalled everything, but to no avail.

Upload HTML files last

It would be awesome if the HTML files were uploaded last. Otherwise, there's a short window when you can refresh the page and the HTML file will load without the necessary assets being available.

CacheControl set as string

Currently the Cache-control header can only be set using a number or object, but the header is a string and can accept more than just the values allowed.

I propose that it accepts a number, an object, OR a string. Where string is the value of the header you wish to set. E.g.

options: {
  headers: {
    CacheControl: "public, must-revalidate, no-transform, max-age=900, s-maxage=60"
  }
}

The specs detail all of the header values here: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9

I'd be happy to open a pull request and make the change backwards compatible. Any thoughts?

need lodash update

  Moderate        Prototype Pollution

  Package         lodash

  Patched in      >=4.17.11

  Dependency of   grunt-aws

  Path            grunt-aws > lodash

  More info       https://npmjs.com/advisories/782

Won't push new versions of existing files

When I update local files and attempt to sync them to the s3 bucket they get marked 'No change' and don't get updated.

I tried forcing overwriting existing files by setting the 'overwrite' option to true with no success.

Can't use grunt-aws behind a proxy

I would like to do something like

s3: {
    options: {
        accessKeyId: "<%= aws.accessKeyId %>",
        secretAccessKey: "<%= aws.secretAccessKey %>",
        bucket: "elasticbeanstalk-eu-west-1-475982404834",
        httpOptions: {
          proxy: process.env.http_proxy
       }
    },
    build: {
        cwd: "public/",
        src: "**"
    }
}

But the httpOptions are not used

Copy from one S3 bucket to another

The documentation for the "copyFrom" option would imply that this is possible, however after trying the sample code (included below for convenience), I was not able to copy the contents of one bucket to another. I tried adding the "dest" parameter, adding the "bucket" parameter to options, and removing the "src" parameter. I got an error when I removed the "src" parameter, and all other attempts said "Put 0 files".

s3: {

  options: {
    accessKeyId: "<%= aws.accessKeyId %>",
    secretAccessKey: "<%= aws.secretAccessKey %>",
    bucket: "my-bucket"
  },

  //Copy all files in directory
  copyFiles: {
    src: "static/**",
    options: {
      copyFrom: 'my-bucket2/static'
    }
  }

}

A little more context. I'm copying my static assets to S3, to be served by CloudFront. I'm putting everything in a static folder in S3, however, that static folder doesn't exist locally. Could that be an issue? Is there something else I'm missing here?

Can cloudfront task invalidation array be dynamic?

I absolutely LOVE this grunt plugin, but I'm really having a hard time using the cloudfront task. From the documentation, it appears that the 'invalidations' array needs static full paths to files (i.e. '/index.html', '/assets/css/main.css', etc.). However, as I'm sure you can deduce, keeping a list of every file in the project (especially as the project grows over time), with each file as its own line-item/string, is exhausting. I was hoping that instead I could use some of Grunt's built-in globbing so that I can scan the entire folder tree for specific file types and invalidate those files.

Here's what I'm trying below in my Gruntfile. If anyone has any ideas, please let me know! This causes the Grunt build to fail. Not sure why I cannot use Grunt's globbing patterns. :(

cloudfront: {
    options: {
        accessKeyId: '###masked for security###',  // confirmed this is working
        secretAccessKey: '###masked for security###',  // confirmed this is working
        distributionId: '###masked for security###',  // confirmed this is working
        invalidations: [
            // Fonts
            '/assets/fonts/**/*.{eot,svg,ttf,woff,woff2}',
            // Text
            '/**/*.{txt,xml,pdf}',
            // Images
            '/assets/images/**/*.{jpg,webp,png,gif}',
            '/projects/images/**/*.{jpg,webp,png,gif}',
            // JS
            '/assets/js/**/*.{js,map}',
            // CSS
            '/assets/css/**/*.{css,map}',
            // HTML
            '/**/*.html'
        ]
    }
}
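CloudFront invalidations accept literal paths (plus a trailing `*` wildcard), not minimatch globs, which is likely why the build above fails. One way to make the list dynamic is to expand the globs against the local build output first with `grunt.file.expand`, then prefix each match with `/`. A sketch, where `build/` stands in for your output directory and the distribution ID is a placeholder:

```javascript
module.exports = function (grunt) {
    // Expand globs against the local build directory, then turn each
    // relative path into a CloudFront invalidation path ("/...").
    var invalidations = grunt.file
        .expand({ cwd: 'build' }, ['**/*.html', 'assets/**/*.{js,css}'])
        .map(function (p) { return '/' + p; });

    grunt.initConfig({
        cloudfront: {
            options: {
                distributionId: 'XXXXXXXXXXXX', // placeholder
                invalidations: invalidations
            }
        }
    });
};
```

Alternatively, a single `'/*'` entry invalidates the whole distribution, at the cost of busting every cached object.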

Fatal error: mime.lookup is not a function

Upgraded to 0.7.0 and now getting this error.

My config:

s3: {
    options: {
        accessKeyId: "<%= aws.access %>",
        secretAccessKey: "<%= aws.secret %>",
        bucket: "<%= aws.bucket %>",
        cache: false
    },
    assets: {
        options: {
            headers: {
                CacheControl: 'max-age=2628000, public, must-revalidate' // cache for a month
            }
        },
        cwd: "public/",
        src: "**",
        dest: ""
    }
}

any ideas?
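The `mime` package renamed `lookup()` to `getType()` in its 2.x release, so any dependency still calling the old name throws exactly this error; a plausible cause is that the 0.7.0 upgrade pulled in mime 2.x somewhere in the tree. Pinning mime to 1.x (or upgrading to a grunt-aws release that uses the new API) should resolve it. The rename, simulated here without the real package:

```javascript
// Simulated mime v2 surface: getType() exists, lookup() does not
// (the real mime package renamed lookup() to getType() in v2).
var mime = {
    getType: function (path) {
        return /\.css$/.test(path) ? 'text/css' : null;
    }
};

// A v1-compatibility shim some projects use while dependencies catch up:
if (typeof mime.lookup !== 'function') {
    mime.lookup = mime.getType;
}

console.log(mime.lookup('public/osis.css')); // → text/css
```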

Files not uploading (similar to issue #28)

Similar problem to issue #28; however, the suggested fix isn't resolving the problem. I'm referencing the 'prod' directory, which contains my build, as the cwd:

module.exports = function(grunt) {

    grunt.loadNpmTasks('grunt-aws');
    grunt.loadNpmTasks('grunt-contrib-uglify');
    grunt.loadNpmTasks('grunt-contrib-copy');

    grunt.initConfig({
        pkg: grunt.file.readJSON('package.json'),
        aws: grunt.file.readJSON("aws-credentials.json"),

        s3: {
            options: {
                accessKeyId: "<%= aws.accessKeyId %>",
                secretAccessKey: "<%= aws.secretAccessKey %>",
                bucket: "<%= aws.bucket %>",
                region: "<%= aws.region %>",
                access: "public-read"
            },
            upload: {
                headers: {
                    CacheControl: 604800,
                    Expires: new Date(Date.now() + 604800000).toUTCString()
                },
                cwd: "prod/",
                src: "**"
            }
        },

        cloudfront: {
            options: {
                accessKeyId: "<%= aws.accessKeyId %>",
                secretAccessKey: "<%= aws.secretAccessKey %>",
                distributionId: "EGPU172BPH6RM",
                invalidations: [
                    "/index.html",
                    "/css/osis.css",
                    "/js/osis.js",
                    "/js/osis-fullpage-controls.js"
                ]
            }
        },

        copy: {
            main: {
                expand: true,
                cwd: 'dev/',
                src: '**',
                dest: 'prod/',
            },
        },

        uglify: {
            dist: {
                files: {
                    'build/<%= pkg.name %>.min.js': ['<%= concat.dist.dest %>']
                }
            }
        }

    });

    grunt.registerTask('default', ['copy']);
    grunt.registerTask('s3', ['s3']);
    grunt.registerTask('uglify', ['uglify']);
};

And here's the output of grunt s3 -v:

user$ grunt s3 -v
Initializing
Command-line options: --verbose

Reading "Gruntfile.js" Gruntfile...OK

Registering Gruntfile tasks.

Registering "grunt-aws" local Npm module tasks.
Reading /Users/user/Sites/osis.dev/node_modules/grunt-aws/package.json...OK
Parsing /Users/user/Sites/osis.dev/node_modules/grunt-aws/package.json...OK

Registering "/Users/user/Sites/osis.dev/node_modules/grunt-aws/tasks/services" tasks.
Loading "cloudfront.js" tasks...OK
• cloudfront
Loading "route53.js" tasks...OK
• route53
Loading "s3.js" tasks...OK
• s3
Loading "aws.js" tasks...OK
No tasks were registered or unregistered.
Loading "cache-mgr.js" tasks...OK
No tasks were registered or unregistered.

Registering "grunt-contrib-uglify" local Npm module tasks.
Reading /Users/user/Sites/osis.dev/node_modules/grunt-contrib-uglify/package.json...OK
Parsing /Users/user/Sites/osis.dev/node_modules/grunt-contrib-uglify/package.json...OK
Loading "uglify.js" tasks...OK
• uglify

Registering "grunt-contrib-copy" local Npm module tasks.
Reading /Users/user/Sites/osis.dev/node_modules/grunt-contrib-copy/package.json...OK
Parsing /Users/user/Sites/osis.dev/node_modules/grunt-contrib-copy/package.json...OK
Loading "copy.js" tasks...OK
• copy

Reading package.json...OK
Parsing package.json...OK
Reading aws-credentials.json...OK
Parsing aws-credentials.json...OK
Initializing config...OK
Loading "Gruntfile.js" tasks...OK
• default, s3, uglify

Running tasks: s3

Running "s3" task

Running "s3" task

Running "s3" task
...

The line "Running "s3" task" repeats ad infinitum. Any help would be greatly appreciated!
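A likely cause of the looping output, based on the Gruntfile above: `grunt.registerTask('s3', ['s3'])` replaces the plugin's own `s3` task with an alias that calls itself, so Grunt recurses forever (hence the endless "Running "s3" task"). Tasks loaded via `loadNpmTasks` are already runnable by name, so those alias lines can simply be removed:

```javascript
// These aliases shadow the loaded tasks and recurse indefinitely:
//   grunt.registerTask('s3', ['s3']);         // s3 -> s3 -> s3 -> ...
//   grunt.registerTask('uglify', ['uglify']); // same problem
// Keep only aliases whose name differs from the tasks they run:
grunt.registerTask('default', ['copy']);
grunt.registerTask('deploy', ['copy', 's3', 'cloudfront']); // 'deploy' is a hypothetical alias name
```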

Does not support authentication for buckets in newer regions

I have set up a bucket in the new Frankfurt region. If I try to upload to it using grunt-aws, I get the following error message:

InvalidRequest: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.

According to http://stackoverflow.com/questions/26533245/the-authorization-mechanism-you-have-provided-is-not-supported-please-use-aws4, the newer regions only support AWS4-HMAC-SHA256, and the older authentication methods are deprecated.

I guess this can be fixed by upgrading to the latest version of aws-sdk.
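If the plugin passes unrecognized options straight through to the aws-sdk S3 client (an assumption; if it doesn't, an aws-sdk upgrade inside the plugin is the only fix), forcing Signature Version 4 might look like this sketch, with a placeholder bucket name:

```javascript
s3: {
    options: {
        accessKeyId: "<%= aws.accessKeyId %>",
        secretAccessKey: "<%= aws.secretAccessKey %>",
        bucket: "my-frankfurt-bucket",  // placeholder
        region: "eu-central-1",         // Frankfurt
        signatureVersion: "v4"          // AWS4-HMAC-SHA256, required in newer regions
    }
}
```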

Add an exclude option

Would it be possible to add an exclude option to the S3 sync tasks?

A simple array of files/folders to exclude would work.
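Assuming the task uses standard Grunt file expansion, Grunt's globbing already supports negation patterns (`!`), which may cover this without a new plugin option. A sketch with hypothetical exclusions:

```javascript
s3: {
    build: {
        cwd: "build/",
        // Upload everything under build/ except sourcemaps and private/
        src: ["**", "!**/*.map", "!private/**"]
    }
}
```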
