gulp-awspublish's Introduction

gulp-awspublish

awspublish plugin for gulp

Usage

First, install gulp-awspublish as a development dependency:

npm install --save-dev gulp-awspublish

Then, add it to your gulpfile.js:

var gulp = require('gulp');
var awspublish = require('gulp-awspublish');

gulp.task('publish', function () {
  // create a new publisher using S3 options
  // http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#constructor-property
  var publisher = awspublish.create(
    {
      region: 'your-region-id',
      params: {
        Bucket: '...',
      },
    },
    {
      cacheFileName: 'your-cache-location',
    }
  );

  // define custom headers
  var headers = {
    'Cache-Control': 'max-age=315360000, no-transform, public',
    // ...
  };

  return (
    gulp
      .src('./public/*.js')
      // gzip the files, set the Content-Encoding header and add a .gz extension
      .pipe(awspublish.gzip({ ext: '.gz' }))

      // publisher will add Content-Length, Content-Type and headers specified above
      .pipe(publisher.publish(headers))

      // create a cache file to speed up consecutive uploads
      .pipe(publisher.cache())

      // print upload updates to console
      .pipe(awspublish.reporter())
  );
});

// output
// [gulp] [create] file1.js.gz
// [gulp] [create] file2.js.gz
// [gulp] [update] file3.js.gz
// [gulp] [cache]  file3.js.gz
// ...
  • Note: If you follow the aws-sdk suggestions for providing your credentials, you don't need to pass them in when creating the publisher.

  • Note: In order for publish to work on S3, your policy has to allow the following S3 actions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::BUCKETNAME"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:DeleteObject",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload"
      ],
      "Resource": ["arn:aws:s3:::BUCKETNAME/*"]
    }
  ]
}

Bucket permissions

By default, if no x-amz-acl header is passed, the uploaded object will inherit from the bucket setting. If you have specific requirements for the uploaded object, make sure to pass a value for the x-amz-acl header:

publisher.publish({ 'x-amz-acl': 'something' });

See canned ACLs in the AWS documentation.
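
For example, to make uploaded objects publicly readable ('public-read' is one of the canned ACLs):

publisher.publish({ 'x-amz-acl': 'public-read' });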

Testing

  1. Create an S3 bucket which will be used for the tests. Optionally create an IAM user for running the tests.
  2. Set the bucket's permissions so it can be edited by the IAM user who will run the tests.
  3. Add an aws-credentials.json file to the project directory with the name of your testing bucket and the credentials of the user who will run the tests.
  4. Run npm test
{
  "params": {
    "Bucket": "<test-bucket-name>"
  },
  "credentials": {
    "accessKeyId": "<your-access-key-id>",
    "secretAccessKey": "<your-secret-access-key>",
    "signatureVersion": "v3"
  }
}

API

awspublish.gzip(options)

Create a through stream that gzips files and adds a Content-Encoding header.

  • Note: Node version 0.12.x or later is required in order to use awspublish.gzip. If you need an older node engine to work with gzipping, you can use v2.0.2.

Available options:

  • ext: file extension to add to the gzipped file (e.g. { ext: '.gz' })
  • smaller: only gzip a file when the gzipped version is smaller than the original
  • Any options that can be passed to zlib.gzip
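
For example, a minimal sketch combining these options (level is simply passed through to zlib.gzip; whether it helps depends on your files):

gulp
  .src('./public/*.js')
  // keep the gzipped version only when it is smaller than the original,
  // and pass a zlib compression level straight through to zlib.gzip
  .pipe(awspublish.gzip({ ext: '.gz', smaller: true, level: 9 }))
  .pipe(publisher.publish())
  .pipe(awspublish.reporter());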

awspublish.create(AWSConfig, cacheOptions)

Create a Publisher. The AWSConfig object is used to create an aws-sdk S3 client. At a minimum you must pass a Bucket key (inside params) to define the target bucket. You can find all available options in the AWS SDK documentation.

The cacheOptions object allows you to define the location of the cached hash digests. By default, they will be saved in your project's root folder in a hidden file called '.awspublish-' + 'name-of-your-bucket'.

Adjusting upload timeout

The AWS client has a default timeout which may be too low when pushing large files (> 50mb). To adjust timeout, add httpOptions: { timeout: 300000 } to the AWSConfig object.
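
For example (a minimal sketch; the value is in milliseconds, so 300000 gives a five-minute timeout):

var publisher = awspublish.create({
  region: 'your-region-id',
  params: {
    Bucket: '...',
  },
  // raise the socket timeout for large uploads
  httpOptions: { timeout: 300000 },
});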

Credentials

By default, gulp-awspublish uses the credential chain specified in the AWS docs.

Here are some example credential configurations:

Hardcoded credentials (Note: We recommend you not hard-code credentials inside an application. Use this method only for small personal scripts or for testing purposes.):

var publisher = awspublish.create({
  region: 'your-region-id',
  params: {
    Bucket: '...',
  },
  credentials: {
    accessKeyId: 'akid',
    secretAccessKey: 'secret',
  },
});

Using a profile by name from ~/.aws/credentials:

var AWS = require('aws-sdk');

var publisher = awspublish.create({
  region: 'your-region-id',
  params: {
    Bucket: '...',
  },
  credentials: new AWS.SharedIniFileCredentials({ profile: 'myprofile' }),
});

Instead of putting anything in the configuration object, you can also provide the following environment variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN, AWS_PROFILE. You can also define a [default] profile in ~/.aws/credentials which the SDK will use transparently without needing to set anything.
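
For example, with credentials supplied through the environment or a [default] profile, the configuration object needs no credentials key at all (a minimal sketch):

// AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY (or AWS_PROFILE) are picked up
// automatically by the aws-sdk credential chain
var publisher = awspublish.create({
  region: 'your-region-id',
  params: {
    Bucket: '...',
  },
});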

Publisher.publish([headers], [options])

Create a through stream that pushes files to S3.

  • headers: hash of headers to add to, or override on, the existing S3 headers.
  • options: optional additional publishing options (see the sketch after this list)
    • force: bypass the cache and skip detection
    • putOnly: bypass cache and head request (overrides force)
    • simulate: debugging option to simulate the S3 upload
    • createOnly: skip file updates
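
For example, a minimal sketch forcing a full re-upload regardless of the local cache:

gulp
  .src('./public/*.js')
  // force: true re-uploads every file even if the cache marks it unchanged
  .pipe(publisher.publish(headers, { force: true }))
  .pipe(awspublish.reporter());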

Files that go through the stream receive extra properties:

  • s3.path: s3 path
  • s3.etag: file etag
  • s3.date: file last modified date
  • s3.state: publication state (create, update, put, delete, cache or skip)
  • s3.headers: S3 headers for this file. Default headers are:
    • Content-Type
    • Content-Length

Note: publish will never delete files remotely. To clean up unused remote files use sync.

publisher.cache()

Create a through stream that creates or updates a cache file using the file's S3 path and etag. Consecutive runs of publish will use this file to avoid reuploading identical files.

The cache file is saved in the current working directory and is named .awspublish-<bucket>. It is flushed to disk every 10 files just to be safe.

Publisher.sync([prefix], [whitelistedFiles])

Create a transform stream that deletes old files from the bucket.

  • prefix: prefix to sync a specific directory
  • whitelistedFiles: array that can contain regular expressions or strings that match against filenames that should never be deleted from the bucket.

e.g.

// only directory bar will be synced
// files in folder /foo/bar and file baz.txt will not be removed from the bucket despite not being in your local folder
gulp
  .src('./public/*')
  .pipe(publisher.publish())
  .pipe(publisher.sync('bar', [/^foo\/bar/, 'baz.txt']))
  .pipe(awspublish.reporter());

Warning: sync will delete files in your bucket that are not in your local folder unless they're whitelisted.

// this will publish and sync bucket files with the one in your public directory
gulp
  .src('./public/*')
  .pipe(publisher.publish())
  .pipe(publisher.sync())
  .pipe(awspublish.reporter());

// output
// [gulp] [create] file1.js
// [gulp] [update] file2.js
// [gulp] [delete] file3.js
// ...

Publisher.client

The aws-sdk S3 client is exposed to let you perform other S3 operations.
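
For example, a minimal sketch listing objects with the exposed client (this assumes the Bucket was bound via params when the publisher was created, so it is reused automatically):

// publisher.client is a regular aws-sdk S3 instance
publisher.client.listObjects({ Prefix: 'js/' }, function (err, data) {
  if (err) throw err;
  console.log(data.Contents.length + ' objects under js/');
});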

awspublish.reporter([options])

Create a reporter that logs s3.path and s3.state (delete, create, update, put, cache, skip).

Available options:

  • states: list of states to log (defaults to all)

// this will publish and sync bucket files and print created, updated and deleted files
gulp
  .src('./public/*')
  .pipe(publisher.publish())
  .pipe(publisher.sync())
  .pipe(
    awspublish.reporter({
      states: ['create', 'update', 'delete'],
    })
  );

Examples

You can use gulp-rename to rename your files on S3:

// see examples/rename.js

gulp
  .src('examples/fixtures/*.js')
  .pipe(
    rename(function (path) {
      path.dirname += '/s3-examples';
      path.basename += '-s3';
    })
  )
  .pipe(publisher.publish())
  .pipe(awspublish.reporter());

// output
// [gulp] [create] s3-examples/bar-s3.js
// [gulp] [create] s3-examples/foo-s3.js

You can use concurrent-transform to upload files in parallel to your Amazon bucket:

var parallelize = require('concurrent-transform');

gulp
  .src('examples/fixtures/*.js')
  .pipe(parallelize(publisher.publish(), 10))
  .pipe(awspublish.reporter());

Upload both gzipped and plain files in one stream

You can use the merge-stream plugin to upload two streams in parallel, allowing sync to work with mixed file types:

var merge = require('merge-stream');
var gzip = gulp.src('public/**/*.js').pipe(awspublish.gzip());
var plain = gulp.src(['public/**/*', '!public/**/*.js']);

merge(gzip, plain)
  .pipe(publisher.publish())
  .pipe(publisher.sync())
  .pipe(awspublish.reporter());

Plugins

gulp-awspublish-router

A router for defining file-specific rules: https://www.npmjs.org/package/gulp-awspublish-router

gulp-cloudfront-invalidate-aws-publish

Invalidate the CloudFront cache based on output from awspublish: https://www.npmjs.com/package/gulp-cloudfront-invalidate-aws-publish

License

MIT License


gulp-awspublish's Issues

UnexpectedParameter: Unexpected key 'AccessControlAllowOrigin' found in params

Getting UnexpectedParameter error when publishing with Access-Control extra headers. If I take them out and leave only Cache-Control it publishes but doesn't add the headers.

It used to work with previous versions of this module. Currently using latest of this and aws-sdk.

var headers = {
   'Cache-Control': 'max-age=315340000, no-transform, public',
   'Access-Control-Allow-Origin': '*',
   'Access-Control-Allow-Methods': 'GET, HEAD',
   'Access-Control-Max-Age': '3000'
};


// error:

node_modules/gulp-awspublish/node_modules/aws-sdk/lib/request.js:32
          throw err;
                ^
MultipleValidationErrors: There were 3 validation errors:
* UnexpectedParameter: Unexpected key 'AccessControlAllowOrigin' found in params
* UnexpectedParameter: Unexpected key 'AccessControlAllowMethods' found in params
* UnexpectedParameter: Unexpected key 'AccessControlMaxAge' found in params
  at [object Object].validate (node_modules/gulp-awspublish/node_modules/aws-sdk/lib/param_validator.js:16:30)
  at Request.VALIDATE_PARAMETERS (node_modules/gulp-awspublish/node_modules/aws-sdk/lib/event_listeners.js:88:32)

Throws TypeError('Uncaught, unspecified "error" event.') on S3 Access Denied

I am setting up gulp-awspublish for the first time and I have been receiving the following error trying to publish.

events.js:74
        throw TypeError('Uncaught, unspecified "error" event.');
              ^
TypeError: Uncaught, unspecified "error" event.
    at TypeError (<anonymous>)
    at Transform.EventEmitter.emit (events.js:74:15)
    at onwriteError (/Users/bensudbury/Dropbox/JSProjects/zohoccprocessing/node_modules/gulp-awspublish/node_modules/through2/node_modules/readable-stream/lib/_stream_writable.js:251:10)
    at onwrite (/Users/bensudbury/Dropbox/JSProjects/zohoccprocessing/node_modules/gulp-awspublish/node_modules/through2/node_modules/readable-stream/lib/_stream_writable.js:269:5)
    at WritableState.onwrite (/Users/bensudbury/Dropbox/JSProjects/zohoccprocessing/node_modules/gulp-awspublish/node_modules/through2/node_modules/readable-stream/lib/_stream_writable.js:107:5)
    at afterTransform (/Users/bensudbury/Dropbox/JSProjects/zohoccprocessing/node_modules/gulp-awspublish/node_modules/through2/node_modules/readable-stream/lib/_stream_transform.js:104:5)
    at TransformState.afterTransform (/Users/bensudbury/Dropbox/JSProjects/zohoccprocessing/node_modules/gulp-awspublish/node_modules/through2/node_modules/readable-stream/lib/_stream_transform.js:79:12)
    at /Users/bensudbury/Dropbox/JSProjects/zohoccprocessing/node_modules/gulp-awspublish/lib/index.js:273:48
    at ClientRequest.onResponse (/Users/bensudbury/Dropbox/JSProjects/zohoccprocessing/node_modules/gulp-awspublish/node_modules/knox/lib/client.js:60:7)
    at ClientRequest.EventEmitter.emit (events.js:95:17)

When I looked into it further, it seems it is occurring because I am receiving a 403 Access Denied response code from AWS.

Error handling

At present there seems to be very little error handling. For example, specifying an incorrect bucket name doesn't expose any of the 404 errors, etc, that Amazon S3 returns.

I'm keen to add something.

Have you had any thoughts on the best way to handle this?

.cache() doesn't take header changes into account

When changing headers on objects, the files are not republished if passed through cache(). For example, changing the Cache-Control on an object results in the object being marked as "skipped" during the publish() call.

Might be doing something wrong but it seems like this is an omission with the current behavior. Thoughts?

Time out err

Hi, any idea about the following error? I was trying your first example.

events.js:72
throw er; // Unhandled 'error' event
^
Error: connect ETIMEDOUT
at errnoException (net.js:901:11)
at Object.afterConnect [as oncomplete]

FYI: region

I needed 'region' in options for publish.

301 Redirect error

Whenever I use this, I keep getting the following error:

$ gulp publish
[19:42:26] Requiring external module coffee-script/register
[19:42:27] Using gulpfile ~/Development/gifs/Gulpfile.coffee
[19:42:27] Starting 'publish'...

events.js:72
        throw er; // Unhandled 'error' event
              ^
Error: HTTP 301 Response returned from S3

Here's my task so far

gulp.task 'publish', ->
    publisher = awspublish.create { key: env.S3_KEY,  secret: env.S3_SECRET, bucket: env.S3_BUCKET }
    headers = {'Cache-Control': 'max-age=315360000, no-transform, public'}

    gulp.src "#{OUTPUT}/**/*"
        .pipe awspublish.gzip {'ext': '.gz'}
        .pipe publisher.publish headers
        .pipe publisher.cache()
        .pipe awspublish.reporter()

And my env variables:

S3_BUCKET=gifs.joshhunt.is
S3_KEY=...
S3_SECRET=...

What would be causing this? Should gulp-awspublish/knox follow the redirect, or have I made an error somewhere?

Edit: Digging in some more, if I make a get request, I get this back:

<?xml version="1.0" encoding="UTF-8"?>
<Error>
    <Code>PermanentRedirect</Code>
    <Message>The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.</Message>
    <RequestId>...</RequestId>
    <Bucket>gifs.joshhunt.is</Bucket>
    <HostId>...</HostId>
    <Endpoint>gifs.joshhunt.is.s3.amazonaws.com</Endpoint>
</Error>

Sync: entire bucket?

Seems odd to have sync call the entire bucket. Why can't we only sync a specific directory?

use aws-sdk instead of knox?

Thanks for this library, works great! I was having some trouble uploading to S3 when the bucket name contains hyphens. I get a 403, which seems like a common issue in knox.

Would you consider a PR if I work on using aws-sdk?

Access Denied

Thanks for putting this together.

I am new to automated s3 publishing, so please forgive my ignorance. I am getting Access Denied from the aws-sdk and it is probably due to my policy configuration.

Here is my config in the gulpfile (hard-coded for testing only, will transition to the suggested aws credentials scheme once I get it working):

var awsConfig = {
    params: {
        "Bucket": "bucket-1"
    },
    "accessKeyId": "...",
    "secretAccessKey": "...",
    "region": "", // needed for us-standard, failing if us-standard is used in the region
};

On AWS, I have an IAM policy for a group, and a user with the credentials provided in the awsConfig assigned to that group. Here is that policy: 

{
    "Version": "2015-6-8",
    "Statement": [
        {
            "Sid": "Stmt1396513442000",
            "Effect": "Allow",
            "Action": [
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::bucket-1/*",
                "arn:aws:s3:::bucket-1",
                "arn:aws:s3:::bucket-2/*",
                "arn:aws:s3:::bucket-2"
            ]
        }
    ]
}

Am I missing something?

Per-file configuration

First of all, this plugin rocks, thank you for making it!

Currently there's no mention of being able to specify headers, gzipping and the target key on a per-file basis. This is currently possible by writing to the file's s3 property, e.g.

gulp.src(files)
    .pipe(through.obj(function (file, encoding, callback) {
        file.s3 = {
            headers: {},
            path: file.relative
        };

        if ( path.extname(file.path) === ".jpg" ) {
            file.s3.headers["Cache-Control"] = "max-age=315360000, no-transform, public";
        }

        this.push(file);
        callback();
    }))
    .pipe(publisher.publish())
    .pipe(publisher.sync());

But that feels like messing with private properties, and could use some abstraction. If the initFile function was exposed as a gulp middleware of the publisher as well, it would be nicer to make plugins that offer a bit more abstraction, e.g. some kind of a router style middleware (I'd be happy to make one):

gulp.src(files)
    .pipe(publisher.prepare())
    .pipe(awspublishRouter({
        "^assets/(?:.+)\.(?:js|css|svg|ttf)$": {
            key: "$&",
            gzip: true,
            cacheTime: 630720000
        },
        "^assets/.+$": {
            key: "$&",
            cacheTime: 630720000
        },
        "^items/([^/]+)/([^/]+)/index\\.html": {
            key: "channels/$1/$2",
            cacheTime: 300
        }
    }))
    .pipe(publisher.publish())
    .pipe(publisher.sync());

Question: Is it possible to upload both gzipped and non-gzipped files?

Is it possible to upload both gzipped and non-gzipped files in a single task?

Currently I have the following:

var gulp = require("gulp");
var awspublish = require('gulp-awspublish');
var rename = require("gulp-rename");

var publisher = awspublish.create({
    params: {
        Bucket: 'test-bucket'
    }
});

function prefixFile(path) {
    path.dirname = '/foo/' + path.dirname;
}

gulp.task('s3:gzip', function() {
    return gulp.src('public/**/*')
        .pipe(rename(prefixFile))
        .pipe(awspublish.gzip({ ext: '.gz' }))
        .pipe(publisher.cache())
        .pipe(publisher.publish())
        .pipe(awspublish.reporter());
});

gulp.task('s3:plain', function() {
    return gulp.src('public/**/*')
        .pipe(rename(prefixFile))
        .pipe(publisher.cache())
        .pipe(publisher.publish())
        .pipe(awspublish.reporter()); 
});

But if I try to add sync to either of these tasks, they delete each other's files. Is it possible to do this in a single task so that sync respects that both versions of the file should be uploaded?

Unsure how to upload to specific folder

From reading the documentation I've only been able to upload to the root directory. I'm a bit nervous about experimenting with the particular aws server I'm publishing to. Could you perhaps include an example in the docs of uploading to a specific folder? thanks

EDIT: I must be blind:

// see examples/rename.js

gulp
  .src('examples/fixtures/*.js')
  .pipe(
    rename(function (path) {
      path.dirname += '/s3-examples';
      path.basename += '-s3';
    })
  )
  .pipe(publisher.publish())
  .pipe(awspublish.reporter());

// output
// [gulp] [create] s3-examples/bar-s3.js
// [gulp] [create] s3-examples/foo-s3.js

Crash when using sync()

Hi,

When I use sync it crashes when it tries to delete a file. Upload works fine:

My task:

var headers = {'Cache-Control': 'max-age=900, no-transform, public'};

var stream = merge(site, templates, build)
  .pipe(awspublish.gzip())
  .pipe(publisher.publish(headers))
  .pipe(publisher.sync())
  .pipe(awspublish.reporter());

The stack trace is quite long so I pasted it on pastebin:
http://pastebin.com/SKHKpzXV

This is in version 1.0.3. I just saw that you tagged 1.0.4, so I'll try that shortly.

Format of bucket name?

Hi,

I'm new to AWS and I tried to use gulp-awspublish to upload some files to an S3 bucket.
However, I always get an error:

C:***\node_modules\gulp-awspublish\node_modules\aws-sdk\lib\request.js:32
throw err;
^
NoSuchBucket: The specified bucket does not exist

    var publisher = awspublish.create({
        key: '...',
        secret: '...',
        bucket: 'arn:aws:s3:::mybucket'
    });

I've tried with:

  • mybucket (=> BadRequest: null)
  • arn:aws:s3:::mybucket (=> NoSuchBucket: The specified bucket does not exist)
  • mybucket.s3-website.eu-central-1.amazonaws.com (=> NoSuchBucket: The specified bucket does not exist)

Any idea?

Great Work

I think you guys just have to work a bit on the documentation but the product is great, works great in my project.

It would be nice to have the newest feature of storing the zip file in an S3 bucket instead of local hosting and upload.

Best Regards,

Sync to multiple buckets problem with cache

As I understand the cache, it is caching the ETag of the S3 bucket files. If I sync to multiple different buckets, doesn't that mean that the cache will fail to upload to a second bucket since the ETag locally will not have changed?

Storing the destination bucket in the cache file would solve it, I think. Or getting the ETag fresh each time from S3 to compare it to.

error: uncaughtException: CERT_UNTRUSTED

Any idea why I keep getting this? I got a new computer (5K Mac) and I get this error. On my old Mac laptop (same code) I do not get this error.

Here is the log:

error: uncaughtException: CERT_UNTRUSTED date=Mon Dec 22 2014 16:53:49 GMT-0600 (CST), pid=50917, uid=501, gid=20, cwd=/Users/ryan/projects/myapp-webclient, execPath=/usr/local/bin/node, version=v0.10.34, argv=[node, /usr/local/bin/gulp, deploy-staging-webclient], rss=123949056, heapTotal=97377280, heapUsed=52923208, loadavg=[2.28125, 2.2705078125, 2.2431640625], uptime=287088, trace=[column=32, file=tls.js, function=, line=1381, method=null, native=false, column=17, file=events.js, function=SecurePair.emit, line=92, method=emit, native=false, column=10, file=tls.js, function=SecurePair.maybeInitFinished, line=980, method=maybeInitFinished, native=false, column=13, file=tls.js, function=CleartextStream.read [as _read], line=472, method=read [as _read], native=false, column=10, file=_stream_readable.js, function=CleartextStream.Readable.read, line=341, method=Readable.read, native=false, column=25, file=tls.js, function=EncryptedStream.write [as _write], line=369, method=write [as _write], native=false, column=10, file=_stream_writable.js, function=doWrite, line=226, method=null, native=false, column=5, file=_stream_writable.js, function=writeOrBuffer, line=216, method=null, native=false, column=11, file=_stream_writable.js, function=EncryptedStream.Writable.write, line=183, method=Writable.write, native=false, column=24, file=_stream_readable.js, function=write, line=602, method=null, native=false, column=7, file=_stream_readable.js, function=flow, line=611, method=null, native=false, column=5, file=_stream_readable.js, function=Socket.pipeOnReadable, line=643, method=pipeOnReadable, native=false], stack=[Error: CERT_UNTRUSTED,     at SecurePair.<anonymous> (tls.js:1381:32),     at SecurePair.emit (events.js:92:17),     at SecurePair.maybeInitFinished (tls.js:980:10),     at CleartextStream.read [as _read] (tls.js:472:13),     at CleartextStream.Readable.read (_stream_readable.js:341:10),     at EncryptedStream.write [as _write] (tls.js:369:25),     at doWrite (_stream_writable.js:226:10),     at writeOrBuffer (_stream_writable.js:216:5),     at EncryptedStream.Writable.write (_stream_writable.js:183:11),     at write (_stream_readable.js:602:24),     at flow (_stream_readable.js:611:7),     at Socket.pipeOnReadable (_stream_readable.js:643:5)]

Here is part of my gulpfile where it is blowing up:

    var publisher = awspublish.create({ key: cfg.aws.accessKeyId,  secret: cfg.aws.secretAccessKey, bucket: 'assetcdn.myapp.com' });
    var headers = {
        'Cache-Control': 'max-age=315360000, no-transform, public'
    };

    gulp.src(['./app/public/images/*'])
        .pipe(rename(function (path) {
            path.dirname = '/images/' + path.dirname;
        }))
        .pipe(parallelize(publisher.publish(headers), concurrentUploaders))
        .pipe(awspublish.reporter())
        .on('error', gutil.log);

    gulp.src(['./app/public/font/*'])
        .pipe(rename(function (path) {
            path.dirname = '/font/' + path.dirname;
        }))
        .pipe(parallelize(publisher.publish(headers), concurrentUploaders))
        .pipe(awspublish.reporter())
        .on('error', gutil.log);

    return gulp.src(['./build/**/*.js','./build/**/*.css'])
        .pipe(awspublish.gzip())
        .pipe(parallelize(publisher.publish(headers), concurrentUploaders))
        .pipe(awspublish.reporter())
        .on('error', gutil.log);

thanks in advance

documentation of v2.0.0 config

I just tried updating my configuration to the v2.0.0 style. I didn't get very far however. I already learned that I need to use S3-SDK style params. But I couldn't find how to do what I did before.

This is my old code:

var publisher = awspublish.create({bucket: "my.bucket.name", profile: "my-profile", region: "eu-central-1"});

I know that the bucket now needs to be passed as {params: {Bucket: "my.bucket.name"}}. But how about the rest? Any pointer to the correct docs would be greatly appreciated. And I guess that an overview of the most common configs in the README would help a lot of other people as well.

Cheers,
Daniel

Retry?

Why is there no retry option?

Output log file of file.s3.state

One of the great features is seeing what files are created, skipped, updated, etc. For logging reasons, would it be of interest to write out the changed or updated cache items to a log file or to a service hook like Slack?

Better documentation

You really need some more helpful docs on how to set the S3 folder path.

I just used this plugin with the sync option enabled and it wiped out some work on our S3 bucket!!! Unfortunately we didn't have versioning turned on so we've now lost that work for good.

There should be a warning to users to be cautious when using sync.

"TypeError: Object.keys called on non-object" after gzip.

Hey there! I'm sorry to be a bother, but I'm having some trouble coaxing this package into working for me and was hoping you might have some insight. From what I can make out of the backtrace, it's crashing just after gzip, with an error from Knox that reads "TypeError: Object.keys called on non-object." I added the backtrace, the Gulp task (mostly just janked from the documentation), and the results of calling console.log with the url object it's trying to read from to a gist here: https://gist.github.com/phyllisstein/aaa1e9c67e2b35c48376. But it's mostly Greek to me, so I thought I'd ask what-all you could make of it. Thanks in advance for any insights you can offer!

[doc] add region to awspublish parameters

Just as a reminder that could be in the docs: if you use a non-US bucket (eu-west-1 for example), you have to add the region parameter to awspublish.create.

It's a common mistake, but as there are no errors on failures, it can be hard to diagnose.

Does not work correctly on EC2 instance with IAM Instance Profile

We have a project that uses gulp-awspublish to publish to S3, and we're trying to configure it on a build server that runs on an EC2 instance with an IAM Instance Profile assigned to it.

However, when executing our deploy task (which performs the upload), we get a Bad or invalid credentials error:

...
[23:18:20] Starting 'dist:clean'...
[23:18:20] Finished 'dist:clean' after 3.51 ms
[23:18:20] Starting 'dist:copy'...
[23:18:20] Finished 'dist:copy' after 28 ms
[23:18:20] Starting 'dist:urls'...
[23:18:20] Finished 'dist:urls' after 2.77 ms
[23:18:20] Finished 'dist' after 9.45 s
[23:18:20] Starting 'deploy:clean'...
[23:18:21] Finished 'deploy:clean' after 655 ms
[23:18:21] Starting 'deploy:nogz'...
[23:18:21] 'deploy:nogz' errored after 666 μs
[23:18:21] Error in plugin 'gulp-awspublish'
Bad or invalid credentials
[23:18:21] 'deploy' errored after 10 s
[23:18:21] Error: [object Object]
    at formatError (/usr/lib/node_modules/gulp/bin/gulp.js:169:10)
    at Gulp.<anonymous> (/usr/lib/node_modules/gulp/bin/gulp.js:195:15)
    at Gulp.emit (events.js:117:20)
    at Gulp.Orchestrator._emitTaskDone (/home/ec2-user/dashboard/node_modules/gulp/node_modules/orchestrator/index.js:264:8)
    at /home/ec2-user/dashboard/node_modules/gulp/node_modules/orchestrator/index.js:275:23
    at finish (/home/ec2-user/dashboard/node_modules/gulp/node_modules/orchestrator/lib/runTask.js:21:8)
    at cb (/home/ec2-user/dashboard/node_modules/gulp/node_modules/orchestrator/lib/runTask.js:29:3)
    at finish (/home/ec2-user/dashboard/node_modules/run-sequence/index.js:48:5)
    at Gulp.onError (/home/ec2-user/dashboard/node_modules/run-sequence/index.js:55:4)
    at Gulp.emit (events.js:117:20)

It appears you are checking to see if accessKeyId is a member of opts:

if (opts && opts instanceof AWS.SharedIniFileCredentials && !opts.accessKeyId) {
  return new gutil.PluginError({
    plugin: PLUGIN_NAME,
    message: 'Bad or invalid credentials'
  });
}

On an EC2 instance with an IAM Instance Profile, the AWS access key id and secret key will not be set in the environment, and the AWS SDK will search for credentials in the instance metadata. (It will also look in the environment as well.) Since you have switched to using aws-sdk (vs knox), I don't believe this check is necessary.

If the credentials are incorrectly set by the user, why not let the error bubble up from the AWS SDK? (i.e. remove lines 38 through 43 in function getCredentials())

Problems with file naming

Hi, I have two problems using the library.

I have a file structure like this

public/
  js/
    runner/
      file1.js
      file2.js

And using this code

gulp.src('public/js/runner/*.js', {base: 'public'})
        .pipe(publisher.publish(headers, {simulate:true}))
        .pipe(awspublish.reporter());

It resolves the file.s3.path to /hrajchert/project/public/js/runner/file1.js, which would be the absolute path to the file (clearly not what I want as s3 key)

If I add an empty gulp-rename like this

gulp.src('public/js/runner/*.js', {base: 'public'})
        .pipe(rename(function (path) {
            // This is weird, but is needed to make the file use the relative path...
        }))
        .pipe(publisher.publish(headers, {simulate:true}))
        .pipe(awspublish.reporter());

Now the path is /js/runner/file1.js, which is almost what I wanted, but the initial / causes a problem in S3: it creates an initial empty folder, so my resource has to have two slashes, like this: myawsdomain//js/runner/file1.js

So the final solution is to have a trailing slash in my base:

gulp.src('public/js/runner/*.js', {base: 'public/'})
        .pipe(rename(function (path) {
            // This is weird, but is needed to make the file use the relative path...
        }))
        .pipe(publisher.publish(headers, {simulate:true}))
        .pipe(awspublish.reporter());       

So there are basically two problems: I have to use gulp-rename to avoid the absolute path (don't know why), and you should probably add a check so a path never starts with an initial /; I doubt that would be useful in any case.

Thanks!

awspublish fails to sync files in Windows

It creates all the files correctly on S3 but with backslashes displayed on the console, e.g. app\index.js. It then deletes all the files it just created but with forward slashes, e.g. app/index.js, leaving the bucket empty except for the root directory.

  return gulp.src('./dist/**/*.*')
    .pipe(awspublish.gzip())
    .pipe(publisher.publish(headers))
    .pipe(publisher.cache())
    .pipe(publisher.sync())    
    .pipe(awspublish.reporter())

Use the bucket listing

Hi! I've been running into some performance issues that have to do with making a separate headObject request for every file. Why not use the GET Bucket (List Objects) command and get the ETags for 1000 files at a time?

An issue with relative paths / glob on Windows

gulp.src('build/**')
    .pipe(publisher.gzip())
    .pipe(publisher.publish())

After publishing from Windows, it creates files in the S3 bucket with full paths leading to C:, for example instead of uploading favicon.ico into the root, it creates a folder for it: C:/Projects/Site/build/

And in the log it writes something like:

[01:49:04] [create] C:/Projects/Site/build/error.5988ca8a.html
[01:49:06] [create] C:/Projects/Site/build/favicon.8f384bfb.ico

instead of:

[01:49:04] [create] error.5988ca8a.html
[01:49:06] [create] favicon.8f384bfb.ico

deploying to bucket outside the default region us-east-1 fails with cryptic 301: null error

This is basically user error on my part, but forgetting to pass the AWS region in the config {} for non-us-east-1 buckets causes gulp-awspublish to die with a cryptic 301: null error. It would be nice if this error was more helpful, e.g.:

BUCKET NOT FOUND: please check your bucket name and AWS region

gulp-awspublish version:
% npm list | grep aws
├─┬ [email protected]
│ ├─┬ [email protected]
│ │ │ ├── [email protected]
error:
% gulp deploy
[15:11:18] Starting 'deploy'...
[15:11:18] Finished 'deploy' after 4.1 μs
gulp-awspublish/node_modules/aws-sdk/lib/request.js:32
          throw err;
                ^
301: null
    at Request.extractError (gulp-awspublish/node_modules/aws-sdk/lib/services/s3.js:359:35)
    at Request.callListeners (gulp-awspublish/node_modules/aws-sdk/lib/sequential_executor.js:100:18)
    at Request.emit (gulp-awspublish/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
    at Request.emit (gulp-awspublish/node_modules/aws-sdk/lib/request.js:604:14)
    at Request.transition (gulp-awspublish/node_modules/aws-sdk/lib/request.js:21:12)
    at AcceptorStateMachine.runTo (gulp-awspublish/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at gulp-awspublish/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (gulp-awspublish/node_modules/aws-sdk/lib/request.js:22:9)
    at Request.<anonymous> (gulp-awspublish/node_modules/aws-sdk/lib/request.js:606:12)
  at Request.callListeners (gulp-awspublish/node_modules/aws-sdk/lib/sequential_executor.js:104:18)

Push (and overwrite) only

Hey there,

It seems as though the sync option is the only way to push the actual files across to S3. Is that correct?

I'm not fond of the deleting that sync does, and would prefer a push (and overwrite) only. Does this option exist?

cheers,
Scott.

[question] Shared cache file

I'm trying to wrap my head around how to use the cache on a project with multiple devs. Basically the question is, do you suggest keeping the cache file(s) in the repo so that everybody has the same cache file when running a deploy, or is there some other way of keeping things in sync?

Everybody having their own version of the cache file is bad, because it may cause some assets to not update when they should, while keeping it in the repo will add the need to create an extra commit after each deploy-build with the new asset cache.

Can we somehow keep the cache file in S3 as well and load it from there / write it there, or am I way off?

Thanks!

.on('end', function() {...}) not called

We're trying to upload our index.html file with one set of Cache-Control headers and then the rest of our resources with another, but after the first upload is complete, the second upload (via the 'end' callback) doesn't get called. I would assume that this would also mean task dependencies would fail.

Example:

gulp.task('deploy:production', function(callback){
    runSequence('clean', 'vendor', 'app', function(callback)
   {
       // create a new publisher
       var publisher = gulpAws.create({
           "key": "...",
           "secret": "...",
           "bucket": "...",
           "region": "..."
       });

    var resourceHeaders = {
        'Cache-Control': 'max-age=315360000, no-transform, public'
    };

    var indexHeaders = {
        'Cache-Control': 'max-age=0, no-transform, public'
    };

    gulp.src("./build/www/index.html")
        .pipe(publisher.publish(indexHeaders))
        .pipe(publisher.cache())
        .pipe(gulpAws.reporter())
        .on('end', function(callback){
            gulp.src(['./build/www/**/*', '!./build/www/index.html'], {read: false}) //Uses gulp-ignore to remove index.html from the pipe
                .pipe(publisher.publish(resourceHeaders))
                .pipe(publisher.cache())
                .pipe(gulpAws.reporter());
        });
   });
});

Usage of the prefix in sync()

Hello,

I wanted to scope the sync to a subdirectory of my bucket. For this I set the prefix like this:

gulp.src(DIST_DIR+'/**')
        .pipe(publisher.publish())
        .pipe(publisher.sync('v2'))
        .pipe(awspublish.reporter());

But the files are synced to the root of the bucket. Is this a bug or a misunderstanding on my part?

Thanks

Parallel uploads

I know this is probably not a concern of this particular module, but I'm curious if any one of you has a good recipe for uploading some files in parallel?

Not Deploying on West-2 since this morning

Hey guys,
Something changed since this morning: I'm not able to deploy through this tool and have to upload manually, and that's with any function on west-2.
Thanks for your time.
Best Regards,

Error: Cannot find module 'event-stream'

Not sure if this is a bug or incorrect setup on my part. When I run my task I get an error about event-stream.

Versions

Mac OSX 10.8.5
NPM 1.4.2

Code

// DEPLOY TO STAGING //
gulp.task('deploy-staging', function () {
  var es = require('event-stream'),
      awspublish = require('gulp-awspublish'),
      publisher = awspublish({ key: 'xxxx', secret: 'xxx', bucket: 'xxx'}),
      headers = { 'Cache-Control': 'max-age=315360000, no-transform, public' };

  // publish all js files
  // Set Content-Length, Content-Type and Cache-Control headers
  // Set x-amz-acl to public-read by default
  var js = gulp.src('/dist/*.js')
    .pipe(publisher.publish(headers));

  // gzip and publish all js files
  // Content-Encoding headers will be added on top of other headers
  // uploaded files will have a jsgz extension
  var jsgz = gulp.src('/dist/*.js')
    .pipe(awspublish.gzip())
    .pipe(publisher.publish(headers));

  // sync content of s3 bucket with files in the stream
  // cache s3 etags locally to avoid unnecessary request next time
  // print progress with reporter
  es.merge(js, jsgz)
    .pipe(publisher.sync())
    .pipe(publisher.cache())
    .pipe(publisher.reporter());
}); // end images

Error

/Users/xxxxx/xxx/xxx/xxxxx/node_modules/gulp/node_modules/orchestrator/index.js:153
            throw err;
                  ^
Error: Cannot find module 'event-stream'
    at Function.Module._resolveFilename (module.js:338:15)
    at Function.Module._load (module.js:280:25)
    at Module.require (module.js:364:17)
    at require (module.js:380:17)
    at Gulp.<anonymous> (/Users/xxxxx/xxx/xxxx/xxxxx/gulpfile.js:65:12)
    at module.exports (/Users/wrburgess/dev/jbrb/offbook-client/node_modules/gulp/node_modules/orchestrator/lib/runTask.js:31:7)
    at Gulp.Orchestrator._runTask (/Users/wrburgess/dev/jbrb/offbook-client/node_modules/gulp/node_modules/orchestrator/index.js:273:3)
    at Gulp.Orchestrator._runStep (/Users/wrburgess/dev/jbrb/offbook-client/node_modules/gulp/node_modules/orchestrator/index.js:214:10)
    at Gulp.Orchestrator.start (/Users/wrburgess/dev/jbrb/offbook-client/node_modules/gulp/node_modules/orchestrator/index.js:134:8)
    at startGulp (/usr/local/lib/node_modules/gulp/bin/gulp.js:150:26)

Allow to selectively specify HTTP headers

In a real-world scenario, you may want to set different 'max-age' header values for different file types. Also you may want to explicitly set a charset value for specific files or file types.

var headers = [
   { '*': { 'Cache-Control': 'max-age=315360000, no-transform, public' } },
   { 'favicon\\.ico': { 'Cache-Control': 'max-age=29030400, no-transform, public' } },
   { '\\.html$': { 'Content-Type': 'text/html; charset=utf-8' } }
];

publisher.publish(headers);

See: https://github.com/h5bp/server-configs-apache/blob/master/src/.htaccess#L651-L697

You can just add an additional check here:

if (!headers) headers = {};

...if headers is an array, and act accordingly. I guess this won't be a breaking change.

Call sync without calling publish first

In the docs it says:

Publisher.sync([prefix])
Create a transform stream that deletes old files from the bucket. You can specify a prefix to sync a specific directory.

In the code sample sync() is run after publish().

Is it possible to do the following?

gulp.task('deploy:clean', function () {
    var publisher = awspublish.create(awskeys);
    return gulp.src(path.join(args.d, '**'))
        .pipe(publisher.sync());
});

Using this task results in this error for me:

/PROJECT/node_modules/gulp-awspublish/lib/index.js:326
    newFiles[file.s3.path] = true;
                    ^
TypeError: Cannot read property 'path' of undefined
    at Transform.stream._transform (/PROJECT/node_modules/gulp-awspublish/lib/index.js:326:21)
    at Transform._read (_stream_transform.js:179:10)
    at Transform._write (_stream_transform.js:167:12)
    at doWrite (_stream_writable.js:225:10)
    at writeOrBuffer (_stream_writable.js:215:5)
    at Transform.Writable.write (_stream_writable.js:182:11)
    at write (/PROJECT/node_modules/gulp/node_modules/vinyl-fs/node_modules/through2/node_modules/readable-stream/lib/_stream_readable.js:623:24)
    at flow (/PROJECT/node_modules/gulp/node_modules/vinyl-fs/node_modules/through2/node_modules/readable-stream/lib/_stream_readable.js:632:7)
    at DestroyableTransform.pipeOnReadable (/PROJECT/node_modules/gulp/node_modules/vinyl-fs/node_modules/through2/node_modules/readable-stream/lib/_stream_readable.js:664:5)
    at DestroyableTransform.emit (events.js:92:17)

I don't call sync after publish because I publish multiple sets of my project with different headers in separate tasks.

Doesn't seem to work with buckets that have a "." in their name

I just ran a test with the aws-sdk and I'm able to upload to buckets with a period in the name, "example.com" for example. However, with gulp-awspublish, it fails. I remove the dot, it works; add it back, it fails and throws an error.

Please let me know if I'm doing something wrong, else this is a big issue since bucket names used for static hosting with Route 53 need to match the zone files, which are domain names.

Uploading with path prefix

Is there a way to add an extra prefix to the S3 uploads/sync? The documentation doesn't make it quite clear.

E.g. I have a publish task like this:

gulp.task('publish', function () {
  return gulp.src('public/assets/**')
    .pipe(publisher.publish())
    .pipe(publisher.sync())
    .pipe(awspublish.reporter());
});

I have a directory structure like this:

public/
  assets/
    js/
      something.js

Currently my uploads would only create objects with a path of /js/something.js, whereas I want it to be /public/assets/js/something.js

Doing a rename like
.pipe(rename(function (path) {
  path.dirname += './public/assets';
}))

ends up creating js./public/assets/something.js
