
pkgcloud's Introduction


Join the chat at https://gitter.im/pkgcloud/pkgcloud

pkgcloud is a standard library for node.js that abstracts away differences among multiple cloud providers.

Getting Started

You can install pkgcloud via npm or add it to the dependencies in your package.json file:

npm install pkgcloud

Currently there are nine service types handled by pkgcloud:

  • Compute
  • Storage
  • Databases
  • DNS (beta)
  • Block Storage (beta)
  • Load Balancers (beta)
  • Network (beta)
  • Orchestration (beta)
  • CDN (beta)

In our Roadmap, we plan to add support for more services, such as Queueing and Monitoring. Additionally, we plan to implement more providers for the beta services, thus moving them out of beta.

User Agent

By default, all pkgcloud HTTP requests will have a user agent with the library and version: nodejs-pkgcloud/x.y.z where x.y.z is the current version.

You can get this from a client at any time by calling client.getUserAgent(). Some providers may append an additional suffix, depending on the underlying HTTP stack.

You can also set a custom User Agent prefix:

client.setCustomUserAgent('my-app/1.2.3');

// returns "my-app/1.2.3 nodejs-pkgcloud/1.1.0"
client.getUserAgent();

Basic APIs for pkgcloud

Services provided by pkgcloud are exposed in two ways:

  • By service type: For example, if you wanted to create an API client to communicate with a compute service you could simply:
  var client = require('pkgcloud').compute.createClient({
    //
    // The name of the provider (e.g. "openstack")
    //
    provider: 'provider-name',

    //
    // ... Provider specific credentials
    //
  });
  • By provider name: For example, if you knew the name of the provider you wished to communicate with you could do so directly:
  var client = require('pkgcloud').providers.openstack.compute.createClient({
    //
    // ... Provider specific credentials
    //
  });

All API clients exposed by pkgcloud can be instantiated through pkgcloud[serviceType].createClient({ ... }) or pkgcloud.providers[provider][serviceType].createClient({ ... }).

Unified Vocabulary

Due to the differences between the vocabulary for each service provider, pkgcloud uses its own unified vocabulary.

Note: Unified vocabularies may not yet be defined for beta services.

Supported APIs

Supporting every API for every cloud service provider in Node.js is a huge undertaking, but that is the long-term goal of pkgcloud. Special attention has been paid to ensuring that each service type has enough providers for a critical mass of portability between providers (i.e. each implemented service has multiple providers).

If a service does not have at least two providers, it is considered a beta interface; we reserve the right to improve the API, since having multiple providers makes it easier to determine the right generalization.

Compute

The pkgcloud.compute service is designed to make it easy to provision and work with VMs. To get started with a pkgcloud.compute client just create one:

  var client = require('pkgcloud').compute.createClient({
    //
    // The name of the provider (e.g. "openstack")
    //
    provider: 'provider-name',

    //
    // ... Provider specific credentials
    //
  });

Each compute provider takes different credentials to authenticate; the details for each specific provider are documented separately.

Each instance of pkgcloud.compute.Client returned from pkgcloud.compute.createClient has a set of uniform APIs:

Server

  • client.getServers(function (err, servers) { })
  • client.createServer(options, function (err, server) { })
  • client.destroyServer(serverId, function (err, server) { })
  • client.getServer(serverId, function (err, server) { })
  • client.rebootServer(server, function (err, server) { })

Image

  • client.getImages(function (err, images) { })
  • client.getImage(imageId, function (err, image) { })
  • client.destroyImage(image, function (err, ok) { })
  • client.createImage(options, function (err, image) { })

Flavor

  • client.getFlavors(function (err, flavors) { })
  • client.getFlavor(flavorId, function (err, flavor) { })

Storage

The pkgcloud.storage service is designed to make it easy to upload and download files to various infrastructure providers. Special attention has been paid so that methods are stream- and pipe-capable.

To get started with a pkgcloud.storage client just create one:

  var client = require('pkgcloud').storage.createClient({
    //
    // The name of the provider (e.g. "openstack")
    //
    provider: 'provider-name',

    //
    // ... Provider specific credentials
    //
  });

Each storage provider takes different credentials to authenticate; the details for each specific provider are documented separately.

Each instance of pkgcloud.storage.Client returned from pkgcloud.storage.createClient has a set of uniform APIs:

Container

  • client.getContainers(function (err, containers) { })
  • client.createContainer(options, function (err, container) { })
  • client.destroyContainer(containerName, function (err) { })
  • client.getContainer(containerName, function (err, container) { })

File

  • client.upload(options)
  • client.download(options, function (err) { })
  • client.getFiles(container, function (err, files) { })
  • client.getFile(container, file, function (err, server) { })
  • client.removeFile(container, file, function (err) { })

Both the .upload(options) and .download(options) methods have had careful attention paid to make sure they are pipe- and stream-capable:

Upload a File

  var pkgcloud = require('pkgcloud'),
      fs = require('fs');

  var client = pkgcloud.storage.createClient({ /* ... */ });

  var readStream = fs.createReadStream('a-file.txt');
  var writeStream = client.upload({
    container: 'a-container',
    remote: 'remote-file-name.txt'
  });

  writeStream.on('error', function(err) {
    // handle your error case
  });

  writeStream.on('success', function(file) {
    // success, file will be a File model
  });

  readStream.pipe(writeStream);

Download a File

  var pkgcloud = require('pkgcloud'),
      fs = require('fs');

  var client = pkgcloud.storage.createClient({ /* ... */ });

  client.download({
    container: 'a-container',
    remote: 'remote-file-name.txt'
  }).pipe(fs.createWriteStream('a-file.txt'));

Databases

The pkgcloud.database service is designed to consistently work with a variety of Database-as-a-Service (DBaaS) providers.

To get started with a pkgcloud.database client just create one:

  var client = require('pkgcloud').database.createClient({
    //
    // The name of the provider (e.g. "openstack")
    //
    provider: 'provider-name',

    //
    // ... Provider specific credentials
    //
  });

Each database provider takes different credentials to authenticate; the details for each specific provider are documented separately.

Due to the various differences in how these DBaaS providers provision databases only a small surface area of the API for instances of pkgcloud.database.Client returned from pkgcloud.database.createClient is consistent across all providers:

  • client.create(options, callback)

All of the individual methods are documented for each DBaaS provider listed above.

DNS -- Beta

Note: DNS is considered Beta until there are multiple providers; presently only Rackspace is supported.

The pkgcloud.dns service is designed to make it easy to manage DNS zones and records on various infrastructure providers.

To get started with a pkgcloud.dns client just create one:

  var client = require('pkgcloud').dns.createClient({
    //
    // The name of the provider (e.g. "rackspace")
    //
    provider: 'provider-name',

    //
    // ... Provider specific credentials
    //
  });

Providers

Each instance of pkgcloud.dns.Client returned from pkgcloud.dns.createClient has a set of uniform APIs:

Zone

  • client.getZones(details, function (err, zones) { })
  • client.getZone(zone, function (err, zone) { })
  • client.createZone(details, function (err, zone) { })
  • client.updateZone(zone, function (err) { })
  • client.deleteZone(zone, function (err) { })

Record

  • client.getRecords(zone, function (err, records) { })
  • client.getRecord(zone, record, function (err, record) { })
  • client.createRecord(zone, record, function (err, record) { })
  • client.updateRecord(zone, record, function (err, record) { })
  • client.deleteRecord(zone, record, function (err) { })

Block Storage -- Beta

Note: Block Storage is considered Beta until there are multiple providers; presently only Openstack and Rackspace are supported.

The pkgcloud.blockstorage service is designed to make it easy to create and manage block storage volumes and snapshots.

To get started with a pkgcloud.blockstorage client just create one:

  var client = require('pkgcloud').blockstorage.createClient({
    //
    // The name of the provider (e.g. "rackspace")
    //
    provider: 'provider-name',

    //
    // ... Provider specific credentials
    //
  });

Providers

Each instance of pkgcloud.blockstorage.Client returned from pkgcloud.blockstorage.createClient has a set of uniform APIs:

Volume

  • client.getVolumes(options, function (err, volumes) { })
  • client.getVolume(volume, function (err, volume) { })
  • client.createVolume(details, function (err, volume) { })
  • client.updateVolume(volume, function (err, volume) { })
  • client.deleteVolume(volume, function (err) { })

Snapshot

  • client.getSnapshots(options, function (err, snapshots) { })
  • client.getSnapshot(snapshot, function (err, snapshot) { })
  • client.createSnapshot(details, function (err, snapshot) { })
  • client.updateSnapshot(snapshot, function (err, snapshot) { })
  • client.deleteSnapshot(snapshot, function (err) { })

Load Balancers -- Beta

Note: Load Balancers are considered Beta until there are multiple providers; presently only Rackspace is supported.

The pkgcloud.loadbalancer service is designed to make it easy to create and manage load balancers and their backend nodes.

To get started with a pkgcloud.loadbalancer client just create one:

  var client = require('pkgcloud').loadbalancer.createClient({
    //
    // The name of the provider (e.g. "rackspace")
    //
    provider: 'provider-name',

    //
    // ... Provider specific credentials
    //
  });

Providers

Each instance of pkgcloud.loadbalancer.Client returned from pkgcloud.loadbalancer.createClient has a set of uniform APIs:

LoadBalancers

  • client.getLoadBalancers(options, function (err, loadBalancers) { })
  • client.getLoadBalancer(loadBalancer, function (err, loadBalancer) { })
  • client.createLoadBalancer(details, function (err, loadBalancer) { })
  • client.updateLoadBalancer(loadBalancer, function (err) { })
  • client.deleteLoadBalancer(loadBalancer, function (err) { })

Nodes

  • client.getNodes(loadBalancer, function (err, nodes) { })
  • client.addNodes(loadBalancer, nodes, function (err, nodes) { })
  • client.updateNode(loadBalancer, node, function (err) { })
  • client.removeNode(loadBalancer, node, function (err) { })

Network -- Beta

Note: Network is considered Beta until there are multiple providers; presently only the HP and Openstack providers are supported.

The pkgcloud.network service is designed to make it easy to create and manage networks.

To get started with a pkgcloud.network client just create one:

  var client = require('pkgcloud').network.createClient({
    //
    // The name of the provider (e.g. "openstack")
    //
    provider: 'provider-name',

    //
    // ... Provider specific credentials
    //
  });

Providers

Each instance of pkgcloud.network.Client returned from pkgcloud.network.createClient has a set of uniform APIs:

Networks

  • client.getNetworks(options, function (err, networks) { })
  • client.getNetwork(network, function (err, network) { })
  • client.createNetwork(options, function (err, network) { })
  • client.updateNetwork(network, function (err, network) { })
  • client.deleteNetwork(network, function (err, networkId) { })

Subnets

  • client.getSubnets(options, function (err, subnets) { })
  • client.getSubnet(subnet, function (err, subnet) { })
  • client.createSubnet(options, function (err, subnet) { })
  • client.updateSubnet(subnet, function (err, subnet) { })
  • client.deleteSubnet(subnet, function (err, subnetId) { })

Ports

  • client.getPorts(options, function (err, ports) { })
  • client.getPort(port, function (err, port) { })
  • client.createPort(options, function (err, port) { })
  • client.updatePort(port, function (err, port) { })
  • client.deletePort(port, function (err, portId) { })

Orchestration -- Beta

Note: Orchestration is considered Beta until there are multiple providers; presently only Openstack is supported.

The pkgcloud.orchestration service is designed to allow you to access Openstack Heat via node.js. You can manage stacks and resources from within any node.js application.

To get started with a pkgcloud.orchestration client just create one:

  var client = require('pkgcloud').orchestration.createClient({
    //
    // The name of the provider (e.g. "openstack")
    //
    provider: 'provider-name',

    //
    // ... Provider specific credentials
    //
  });

Providers

Each instance of pkgcloud.orchestration.Client returned from pkgcloud.orchestration.createClient has a set of uniform APIs:

Stack

  • client.getStack(stack, function (err, stack) { })
  • client.getStacks(options, function (err, stacks) { })
  • client.createStack(details, function (err, stack) { })
  • client.previewStack(details, function (err, stack) { })
  • client.adoptStack(details, function (err, stack) { })
  • client.updateStack(stack, function (err, stack) { })
  • client.deleteStack(stack, function (err) { })
  • client.abandonStack(stack, function (err, abandonedStack) { })
  • client.getTemplate(stack, function (err, template) { })

Resources

  • client.getResource(stack, resource, function (err, resource) { })
  • client.getResources(stack, function (err, resources) { })
  • client.getResourceTypes(function (err, resourceTypes) { })
  • client.getResourceSchema(resourceType, function (err, resourceSchema) { })
  • client.getResourceTemplate(resourceType, function (err, resourceTemplate) { })

Events

  • client.getEvent(stack, resource, eventId, function (err, event) { })
  • client.getEvents(stack, function (err, events) { })
  • client.getResourceEvents(stack, resource, function (err, events) { })

Templates

  • client.validateTemplate(template, function (err, template) { })

CDN -- Beta

Note: CDN is considered Beta until there are multiple providers; presently only Openstack and Rackspace are supported.

The pkgcloud.cdn service is designed to allow you to access Openstack Poppy via node.js. You can manage services and flavors from within any node.js application.

To get started with a pkgcloud.cdn client just create one:

  var client = require('pkgcloud').cdn.createClient({
    //
    // The name of the provider (e.g. "openstack")
    //
    provider: 'provider-name',

    //
    // ... Provider specific credentials
    //
  });

Providers

Each instance of pkgcloud.cdn.Client returned from pkgcloud.cdn.createClient has a set of uniform APIs:

Base

  • client.getHomeDocument(function (err, homeDocument) { })
  • client.getPing(function (err) { })

Service

  • client.getService(service, function (err, service) { })
  • client.getServices(options, function (err, services) { })
  • client.createService(details, function (err, service) { })
  • client.updateService(service, function (err, service) { })
  • client.deleteService(service, function (err) { })

Service Assets

  • client.deleteServiceCachedAssets(service, assetUrl, function(err) { })

Flavors

  • client.getFlavor(flavor, function (err, flavor) { })
  • client.getFlavors(options, function (err, flavors) { })

Installation

  $ npm install pkgcloud

Tests

To run the tests you will need [email protected] or higher. You may install all the requirements with:

 $ npm install

Then run the tests:

 $ npm test

The tests use the hock library to mock provider responses, so they run without making any connection to the providers. This has a notable speed advantage, lets you run the tests without an Internet connection, and can also highlight an API change simply by disabling hock.

Running tests without mocks

By default, the npm test command runs the tests with hock enabled. Sometimes you will want to test against a live provider; to test without mocks, follow these steps:

  1. Copy a provider config file from test/configs/mock to test/configs.
  2. Fill it in with your own credentials for the provider.
  3. (Optional) The compute test suite runs the common tests for all providers listed in test/configs/providers.json, where you can enable or disable providers.
  4. Run the tests using mocha.
Mocha installed globally:
 $ mocha -R spec test/*/*/*-test.js test/*/*/*/*-test.js

Linux/Mac - Mocha installed locally:
 $ ./node_modules/.bin/mocha -R spec test/*/*/*-test.js test/*/*/*/*-test.js

Windows - Mocha installed locally:
 $ node_modules\.bin\mocha.cmd -R spec test/*/*/*-test.js test/*/*/*/*-test.js

Other ways to run the tests

You can also run the tests directly using mocha with hock enabled:

Linux/Mac - Mocha installed globally:
 $ MOCK=on mocha -R spec test/*/*/*-test.js test/*/*/*/*-test.js

Linux/Mac - Mocha installed locally:
 $ MOCK=on node_modules/.bin/mocha -R spec test/*/*/*-test.js test/*/*/*/*-test.js

Windows - Mocha installed globally:
 $ set MOCK=on&mocha -R spec test/*/*/*-test.js test/*/*/*/*-test.js

Windows - Mocha installed locally:
 $ set MOCK=on&node_modules\.bin\mocha.cmd -R spec test/*/*/*-test.js test/*/*/*/*-test.js

Even better, you can run the tests for a specific provider:

Linux/Mac - Mocha installed globally:
 $ MOCK=on mocha -R spec test/openstack/*/*-test.js

Linux/Mac - Mocha installed locally:
 $ MOCK=on ./node_modules/.bin/mocha -R spec test/openstack/*/*-test.js

Windows - Mocha installed globally:
 $ set MOCK=on&mocha -R spec test/openstack/*/*-test.js

Windows - Mocha installed locally:
 $ set MOCK=on&node_modules\.bin\mocha.cmd -R spec test/openstack/*/*-test.js

Logging

Any client you create with createClient can emit logging events. If you're interested in more detail from the internals of pkgcloud, you can wire up an event handler for log events.

var client = pkgcloud.compute.createClient(options);

client.on('log::*', function (message, object) {
  var level = this.event.split('::')[1];

  if (object) {
    console.log(level + ' ' + message);
    console.dir(object);
  }
  else {
    console.log(level + ' ' + message);
  }
});

The valid log events raised are log::debug, log::verbose, log::info, log::warn, and log::error. There is also a more detailed logging example using pkgcloud with Winston.

Code Coverage

Run Coverage locally and send to coveralls.io

Travis takes care of coveralls, so this shouldn't be necessary unless you're troubleshooting a problem with Travis / Coveralls. You'll need to have access to the coveralls repo_token, which should only be visible to pkgcloud/pkgcloud admins.

  1. Create a .coveralls.yml containing the repo_token from https://coveralls.io/r/pkgcloud/pkgcloud
  2. Run the following:
npm test
npm run coverage

Contribute!

We welcome contribution to pkgcloud by any and all individuals or organizations. Before contributing please take a look at the Contribution Guidelines in CONTRIBUTING.md.

We are pretty flexible about these guidelines, but the closer you follow them the more likely we are to merge your pull-request.

License: MIT

pkgcloud's People

Contributors

alibazlamit, bmeck, bobdickinson, cronopio, dscape, evanlucas, fearphage, ghemingway, indexzero, indutny, ivancevich, jadchami, jcrugzz, jholthusen, jkorkalainen, kenperkins, manassornt, maxlinc, meteormatt, mmalecki, phanatic, rdodev, rgbkrk, robinqu, rossj, rosskukulinski, stammen, yawetse, ycombinator, zbal


pkgcloud's Issues

[rackspace] Add the support for pagination

Recently Rackspace added support for pagination to all list operations, such as instances, databases, and users, so I need to add a pagination mechanism to pkgcloud for Rackspace databases.

Rackspace uses a limit parameter to cap the result set; if limit is not specified, the default value is 20. If a result set has more than 20 results, a next link is provided; otherwise no link is provided.

Any suggestions on how you want the interface to be called?

Support more than 1000 servers in Joyent

Currently the Joyent API limits the number of returned servers to 1000. It also informs us whether it returned all servers. We should decide what to do here - waiting for, say, 8 requests to finish might take too long.

Build `node-cloudfiles` from `pkgcloud.rackspace.compute`

The Azure team at Microsoft is willing to make their node.js client for azure storage compliant with the pkgcloud API. There are also a lot of fixes in pkgcloud.rackspace.compute that are not in node-cloudfiles because it was a significant rewrite.

Errors while destroying Azure servers

{ [Error: azure Error (400): Bad Request]
  stack: 'Error: azure Error (400): Bad Request\n    at Request.handleRequest [as _callback] (/Users/maciej/dev/js/pkgcloud/lib/pkgcloud/core/base/client.js:94:27)\n    at Request.self.callback (/Users/maciej/dev/js/pkgcloud/node_modules/request/main.js:122:22)\n    at Request.EventEmitter.emit (events.js:97:17)\n    at Request.<anonymous> (/Users/maciej/dev/js/pkgcloud/node_modules/request/main.js:661:16)\n    at Request.EventEmitter.emit (events.js:124:20)\n    at IncomingMessage.<anonymous> (/Users/maciej/dev/js/pkgcloud/node_modules/request/main.js:623:14)\n    at IncomingMessage.EventEmitter.emit (events.js:124:20)\n    at _stream_readable.js:806:12\n    at process._tickCallback (node.js:427:13)',
  name: 'Error',
  provider: 'azure',
  failCode: 'Bad Request',
  result: { err: '<Error xmlns="http://schemas.microsoft.com/windowsazure" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"><Code>BadRequest</Code><Message>A disk with name azure-east-us-nodejitsu0-azure-east-us-nodejitsu0-0-20130311141848 is currently in use by virtual machine azure-east-us-nodejitsu0 running within hosted service azure-east-us-nodejitsu0, deployment azure-east-us-nodejitsu0.</Message></Error>' } }

We should probably wait before trying to delete the disk/delete the service first.

/cc @stammen

Mock the databases tests

This was missed from the PR about MongoLab. I opened this issue so we don't forget to add the mocks.

RENAMED

All other providers, like MongoHQ, RedisToGo, and IrisCouch, need to be mocked too.

Upload file when offline crashes

Hi:

I am trying to upload files to Rackspace, this works fine, but if the internet connection is down my server crashes with the following error:

/Users/digitaljohn/Documents/projects/rehabstudio/OtomoServer/node_modules/pkgcloud/node_modules/request/main.js:519
  self.req = self.httpModule.request(self, function (response) {
                             ^
TypeError: Cannot call method 'request' of undefined
    at Request.start (/Users/digitaljohn/Documents/projects/rehabstudio/OtomoServer/node_modules/pkgcloud/node_modules/request/main.js:519:30)
    at Request.write (/Users/digitaljohn/Documents/projects/rehabstudio/OtomoServer/node_modules/pkgcloud/node_modules/request/main.js:949:28)
    at ondata (stream.js:38:26)
    at EventEmitter.emit (events.js:96:17)
    at Client.request.buffer.emit (/Users/digitaljohn/Documents/projects/rehabstudio/OtomoServer/node_modules/pkgcloud/lib/pkgcloud/core/base/client.js:181:49)
    at BufferedStream.pipe.process.nextTick.self.size (/Users/digitaljohn/Documents/projects/rehabstudio/OtomoServer/node_modules/pkgcloud/node_modules/morestreams/main.js:28:44)
    at Array.forEach (native)
    at BufferedStream.pipe.piped (/Users/digitaljohn/Documents/projects/rehabstudio/OtomoServer/node_modules/pkgcloud/node_modules/morestreams/main.js:28:17)
    at process.startup.processNextTick.process._tickCallback (node.js:244:9)
BeanBook:OtomoServer digitaljohn$ 

I am having trouble finding a way to catch this error, abort the upload, and try again later. One would expect the callback to be called with an error rather than a total crash.

Regards:

John

uploading stream with multiparty

I'm using version 0.6.12 with node 0.8.22 to attempt to upload files to rackspace through my server without storing them on the server disk first. I'm using https://github.com/superjoe30/node-multiparty to parse the request form data which produces parts as ReadableStreams. My upload handler (coffeescript) code looks like this:

app.post "/images", (req, res, next) ->
  uris = []

  form = new multiparty.Form()
  form.on 'part', (part) ->
    return unless part.filename?
    path = 'somerandompath'
    logger.debug "received part: #{part.filename}, uploading to rackspace at: #{path}"

    rackspace.upload({container: rackspaceImageContainer, remote: path, stream: part}, (err) ->
      return next err if err?
      uri = rackspaceCdnBaseUrl + "/#{path}"
      uris.push uri)

  form.on 'error', (err) ->
    next new Error err

  form.on 'close', ->
    res.send uris

  form.parse req

The code hits the debug "received part" output, but the file never gets created on Rackspace and the request is eventually aborted from the client side. I have also tried removing the stream from the options and piping directly to the upload function as shown in the examples:

 part.pipe rackspace.upload({....

To no avail. What am I missing?

Multiple Uploads cause Error Message

I am uploading multiple files using the async library, e.g.:

async.series(
    [
        function(callback){
            self._storage.getContainers(function(err, res){
                callback(err);
            });
        },
        function(callback){
            self.uploadFile(mp4hdSource, mp4hdDest, callback);
        },
        function(callback){
            self.uploadFile(mp4sdSource, mp4sdDest, callback);
        },
        function(callback){
            self.uploadFile(webmhdSource, webmhdDest, callback);
        },
        function(callback){
            self.uploadFile(webmsdSource, webmsdDest, callback);
        }
    ],
    // optional callback
    function(err, results){
        if(err)
        {
            self.onUploadError();
        }
        else
        {
            console.log(clc.magenta('Queue') + ' - S3 uploadComplete');
            //self.uploadComplete();
        }

    }
);

Also note that the first task in the 'series' ensures that an internet connection is available; for some reason, any type of stream-related operation crashes my server if there is no internet connection.

and the uploadFile function is here:

Queue.prototype.uploadFile = function (from, to, callback) {
    var self = this;

    this._storage.upload({
        container: 'otomo',
        local: from,
        remote: to
    }, function(err, res){
        callback(err, res);
    });

}

For some reason the file was failing to upload when using the pipe method, which is why I am using the 'local' option.

The error I get now is this:

You have already piped to this stream. Pipeing twice is likely to break the request.

Bootstrapper should optionally accept keys with `public` and `private` mappings

This will probably be inconsistent and breaking for Azure once #70 is merged due to how they treat .pem files.

We should take the advice from @stammen in this comment and implement it as accepting both an Array and an Object. e.g.:

New Behavior

  var bootstrapper = new Bootstrapper({
    keys: {
      private: '/path/to/key.pem',
      public: '/path/to/key.pub'
    }
  });

Existing Behavior

  var bootstrapper = new Bootstrapper({
    keys: ['/path/to/key.pem', '/path/to/key.pub']
  });

Rackspace CloudFiles Metadata empty

Don't know if this is an issue with Rackspace or pkgcloud.

With getFiles I get a list of files without additional data (null):

    "name": "f5/f53af8f0df3bd48a586d69554c6691.jpg",
    "etag": null,
    "contentType": null,
    "lastModified": null,
    "bytes": null,
    "size": null

Calling getFile for a single file, I receive:

  "name": "f5/f53af8f0df3bd48a586d69554c6691.jpg",
  "etag": "5e3cbffb5b99c5cce2a49a293470b26d",
  "contentType": "image/jpeg",
  "lastModified": "2013-04-08T15:46:59.000Z",
  "bytes": 109213,
  "size": 109213

[api] Add support for MongoLab as mongo database provider

According to the needs under nodejitsu/nodejitsu#193 I open this ticket.

It is necessary to add support for MongoLab to pkgcloud. The docs suggest the following features:

  • Create database
  • Create database in different datacenters (not for this milestone, maybe later)
  • List databases
  • View database
  • Update database (to change the password or plan)
  • Delete database

I started the work under the mongolab branch; check it for review or comments.

Support IrisCouch as a Redis Provider

Hi @cronopio !

Please also add IrisCouch as a Redis provider as part of this refactor.

Thank you,
Nuno

Iris Stack provisioning API.

You have a "partner" username, such as "somepartner" with a password. I would
email those to you. You can log in to Futon and change the password if you like.

https://central.iriscouch.net/_utils/

Anyway you would POST with basic auth, Content-Type:application/json,
to https://hosting.iriscouch.com/hosting_public. For example:

{ "_id":"Server/SERVERNAME",
, "partner": "somepartner"
, "creation": { "first_name": "John"
, "last_name": "Doe"
, "email": "[email protected]"
, "subdomain": "SERVERNAME"
}
}

That's it! The server will be up and we have everything we need to
identify partner servers.

If you query for this document again, it will have all of its data blanked out.
That confirms that the provisioning request is received.

Redis

For a Redis server, change the document id to "Redis/SERVERNAME", and include
an initial password in the creation data.

{ "_id":"Redis/SERVERNAME",
, "partner": "somepartner"
, "creation": { "first_name": "John"
, "last_name": "Doe"
, "email": "[email protected]"
, "subdomain": "SERVERNAME"
, "password": "secret"
}
}

Connect to SERVERNAME.redis.irstack.com on the standard port. Authenticate with
the string "SERVERNAME.redis.irstack.com:" + your password.

For example,

$ redis-cli -h example.redis.irstack.com \
            -a example.redis.irstack.com:secret \
            PING
PONG

Faster connection

We support Redis through a reverse proxy. Many probably don't notice, but for
the best performance, connect via the internal IP within the data center.

The [iris-redis][ir] project shows the best way to do this.

$ redis-cli -h example.redis.irstack.com \
            -a example.redis.irstack.com:secret
> keys _config:*
1) "_config:ip"
2) "_config:datacenter"
3) "_config:port"
4) "_config:max_memory"
5) "_config:server"

> get _config:datacenter
"dal01.sl"

> get _config:ip
"10.8.55.66"

> get _config:port
"19788"

Thus if you are in this data center (SoftLayer Dallas 1), then you can connect
instead to 10.8.55.66:19788 and circumvent our proxy. Authentication is the
same; it's just faster.

Redis Password

To change your password, you must make another API call, which is not currently
documented; contact me for details. (It goes through the same system as
resetting CouchDB passwords.)

Data center selection

In the creation data, you may also select a data center. Data centers are only
whitelisted for certain situations (partners).

  • "east1.joyent"
  • "dal01.sl"
  • "us-east-1.aws"

I am on Freenode IRC as JasonSmith, usually in the #couchdb and #iriscouch
channels if you need to contact me.

OpenStack KeyPair API Support

(See: http://api.openstack.org/api-ref.html)

Generates, imports, and deletes SSH keys.

GET v2/{tenant_id}/os-keypairs
View a list of keypairs associated with the account.

POST v2/{tenant_id}/os-keypairs
Generate or import a keypair.

DELETE v2/{tenant_id}/os-keypairs/{keypair_name}
Delete a keypair.

GET v2/{tenant_id}/os-keypairs/{keypair_name}
Show a keypair associated with the account.

We want this for upstream.
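
For reference, the request bodies involved are small; per the OpenStack Compute API reference, generating a keypair versus importing one differs only in whether a public_key is supplied (the names below are illustrative):

```
POST v2/{tenant_id}/os-keypairs          (generate a new keypair)
{ "keypair": { "name": "my-keypair" } }

POST v2/{tenant_id}/os-keypairs          (import an existing public key)
{ "keypair": { "name": "my-keypair"
             , "public_key": "ssh-rsa AAAA... user@host" } }
```

The generate variant returns the private key in the response, so it must be captured at creation time.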

Improve testing - check for failures as well ?

I've noticed that most of the tests focus on successes and almost never on failures. Shouldn't we be checking failure cases as well? I'm pretty sure that's a huge job...

Finalize Documentation

Make sure we have proper documentation, examples, code snippets, etc.

Docs for install, use, test, and bug filing, plus a contributing policy.

[rackspace] CDN/Storage discussion

Over the past couple of days I have been looking through Rackspace's Storage/CDN API, trying to figure out a good way to set up the CDN API in pkgcloud. From what I have found, Rackspace set up their API in an odd manner. Both Storage and CDN use separate URLs, yet some of the functions are coupled. To better explain what I am talking about, let's go through a couple of examples.

First Example

Here I am emulating the current storage API except using the CDN URL.

var pkgcloud = require('pkgcloud');

client = pkgcloud.cdn.createClient({
  provider: 'rackspace',
  username: 'nodejitsu',
  apiKey: 'xxxxxxxxxxxxxxxxxxxxxxx'
});

client.getContainers(function (err, containers) {
  if (err) {
    console.error(err);
  } else {
    console.log(containers);
  }
});

So what happens in this example is as expected: it outputs an array of CDN-enabled storage containers with all of the needed metadata. Note: this outputs only CDN-enabled containers.

var pkgcloud = require('pkgcloud');

client = pkgcloud.storage.createClient({
  provider: 'rackspace',
  username: 'nodejitsu',
  apiKey: 'xxxxxxxxxxxxxxxxxxxxxxx'
});

client.getContainers(function (err, containers) {
  if (err) {
    console.error(err);
  } else {
    console.log(containers);
  }
});

Now if you take this same example with the storage API, it tells a bit of a different story. This example outputs an array of all containers (both CDN-enabled and not) with very little metadata for each.

Second Example

Now let's look at the case of creating a container. This time we will start with the storage API.

var pkgcloud = require('pkgcloud');

client = pkgcloud.storage.createClient({
  provider: 'rackspace',
  username: 'nodejitsu',
  apiKey: 'xxxxxxxxxxxxxxxxxxxxxxx'
});

client.createContainer('foo', function (err, container) {
  if (err) {
    console.error(err);
  } else {
    console.log(container);
  }
});

This function works as expected. It creates a new container named 'foo' and returns it in the callback.

var pkgcloud = require('pkgcloud');

client = pkgcloud.cdn.createClient({
  provider: 'rackspace',
  username: 'nodejitsu',
  apiKey: 'xxxxxxxxxxxxxxxxxxxxxxx'
});

client.createContainer('bar', function (err, container) {
  if (err) {
    console.error(err);
  } else {
    console.log(container);
  }
});

Now if this same API method were implemented using the CDN URL, it would have some interesting behavior: it would return a container 'bar' with all of the relevant CDN metadata. What it doesn't tell you is that, if you look on the Rackspace website, the CDN container does not actually exist. In order to create a CDN-enabled container, you must first create a storage container via the storage URL and then issue a 'PUT' request with the 'x-cdn-enabled' header set to true.
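
To make that two-step flow concrete, here is a rough sketch of the raw requests (the URLs and token are placeholders; the header Rackspace documents for this is X-CDN-Enabled):

```
# 1. Create the container against the storage URL
PUT {storageUrl}/{container}
X-Auth-Token: {token}

# 2. Then CDN-enable it against the CDN management URL
PUT {cdnUrl}/{container}
X-Auth-Token: {token}
X-CDN-Enabled: True
```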

When attempting to follow the model of the storage API to implement CDN interaction, I ran into these oddities. Some can be worked around with multi-request methods and some cannot (like the first example). Before Rackspace's odd setup drove me crazy, I decided to start this issue to see what you all think. I am relatively new to the project, so I am sure you have already put in some forethought.

Note: my observations come from testing with a live Cloud Files account.

Storage API using Buffer instead of Stream

Hey guys,

I'm trying to use your storage lib with a buffer instead of a stream. I don't believe this is supported, but I think it can be done if the buffer has a wrapper that makes it look like a stream.

Any thoughts on how I can accomplish this? Here's what I started with:

var util = require('util');
var EventEmitter = require('events').EventEmitter;

function BufferStream (buffer) {
    EventEmitter.call(this); // needed so this.emit() below works
    this.data = new Buffer(buffer);
}
util.inherits(BufferStream, EventEmitter);

BufferStream.prototype.pipe = function (destination) {
    destination.write(this.data); // write to the target rather than emitting on it
    destination.end();
    this.emit('close');
    this.emit('end');
    return destination;
};

Cheers!

Normalize File access API across all providers and storage APIs

When dealing with files in the cloud, pkgcloud should provide a unified API to consume.

The best remote storage API to build is one that matches Node's core Filesystem API as much as possible.

Here is an example using the new pkgcloud Dropbox API versus node's core File-system:

var dropbox = pkgcloud.storage.createClient(options);
//
// Read a remote file from the drop-box
//
dropbox.readFile('/public/test.txt', function(err, file){
  console.log(err, file.toString())
});

Reading the same file from the local file-system:

var fs = require('fs');
//
// Read a local file
//
fs.readFile('/public/test.txt', function(err, file){
  console.log(err, file.toString())
});

Note: You could also use dropbox.createReadStream or fs.createReadStream if you require a stream.

first upload takes 60 seconds (streaming)

I'm using node 0.8.22 and pkgcloud 0.6.12.

After my server has been started the first upload to rackspace takes exactly one minute. Subsequent uploads are sub 1 second. I have created a test server and mocha test that demonstrate the issue: https://gist.github.com/2fours/5427194.

Simply start the server, then run the test (bump the mocha timeout up to 100 seconds). You'll need a file named "testImage" in the same folder as the test. You will see that the first time the test is run after the server is started, it takes one minute; subsequent runs are very fast. This does not seem to be an issue when using the default express multipart middleware, which saves the upload to a file, and then passing this file's path as "local" in the options.

Invalid URI on Rackspace Storage?

I am attempting to list the available buckets on my Rackspace account. As per the documentation I am doing something like this:

this._storage = pkgcloud.storage.createClient(settings.rackspace.server);

this._storage.getContainers(function (err, containers) {
    console.log( util.inspect( err ) );
})

My settings are similar to:

exports.rackspace = 
{
    // Server Settings
    server: {
        provider: 'rackspace',
        username: 'someusername',
        apiKey: 'somekey'
    }
}

But I appear to consistently be getting this error:

[Error: Invalid URI "?format=json"]

I have tried this command and also an upload. All seem to fail with the same or similar error. Is it missing the full URI?

Regards:

John

Not best practice: using for-in construct to iterate over array

This is a tricky bug, because I'm using another great library, Docpad, which sets Array.prototype.remove. I originally reported this issue here: docpad/docpad#441

This causes a problem in pkgcloud in at least one place:

pkgcloud/lib/pkgcloud/core/base/client.js

  function sendRequest () {
    //
    // Setup any specific request options before 
    // making the request
    //
    if (self.before) {
      var errors = false;
      for (var i in self.before) {
        var fn = self.before[i];
        try {
          options = fn.call(self, options) || options;  // <----- fn ends up being Array::remove, causes exception
        // on errors do error handling, break.
        } catch (exc) { errs.handle(exc, errback); errors = true; break; }
      }
      if (errors) { return; }
    }

...

I think it is best practice to use "for (var i = 0; i < self.before.length; ++i) { ... }" rather than "for (var i in self.before)" for exactly this reason. I also know that modifying built-in prototypes isn't always a good idea (for reasons exactly like this), but the "remove" prototype that Docpad adds is a handy one.

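A self-contained demonstration of the failure mode (the "remove" helper here is a stand-in for the one Docpad adds):

```javascript
// An enumerable property added to Array.prototype leaks into for..in
// iteration, but not into an indexed loop.
Array.prototype.remove = function (item) { /* stand-in for Docpad's helper */ };

var before = [function (options) { return options; }];

var forInKeys = [];
for (var key in before) {
  forInKeys.push(key);
}

var indexedKeys = [];
for (var i = 0; i < before.length; i++) {
  indexedKeys.push(String(i));
}

console.log(forInKeys);   // forInKeys is ['0', 'remove'] -- the prototype method shows up
console.log(indexedKeys); // indexedKeys is ['0']

delete Array.prototype.remove; // clean up the global prototype
```

Inside sendRequest, the extra "remove" key means fn ends up being Array.prototype.remove, which then throws when called with the request options.
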
Create server with OpenStack returns inconsistent response

This is actually somewhat expected as OpenStack requires an additional step after server creation (see: http://api.openstack.org/api-ref.html POST v2/{tenant_id}/servers)

Response JSON

{
    "server": {
        "adminPass": "MVk5HPrazHcG",
        "id": "5bbcc3c4-1da2-4437-a48a-66f15b1b13f9",
        "links": [
            {
                "href": "http://openstack.example.com/v2/openstack/servers/5bbcc3c4-1da2-4437-a48a-66f15b1b13f9",
                "rel": "self"
            },
            {
                "href": "http://openstack.example.com/openstack/servers/5bbcc3c4-1da2-4437-a48a-66f15b1b13f9",
                "rel": "bookmark"
            }
        ]
    }
}

Since there is no .addresses field in the response, pkgcloud can't populate one for the server. It would be strange to have one provider always respond with addresses and another not. Maybe we should make this change for consistency?

[storage] Adding base storage tests

I'm currently using and mostly interested in Rackspace storage tests. I want to add additional tests to fill out the suite. However when I add a test and subsequently add/fix the Rackspace code to pass the tests, this leaves the other storage clients with failing tests. How should I best address/overcome this? I don't want to push code where I made tests fail.

.gitignore test/fixtures/testkey* breaks npm installed pkgcloud

In order to have a cross platform npm test command, we no longer generate ssh keys as part of the npm test script. This removes the dependency on openssh being installed on the user's computer (windows for example). Sample test keys have been checked into test/fixtures/testkey and test/fixtures/testkey.pub. We need to remove test/fixtures/testkey* in .gitignore so npm install will install the test/fixtures/testkey and test/fixtures/testkey.pub files.

To reproduce:

mkdir test
cd test
npm init test
npm install pkgcloud
cd node_modules/pkgcloud
npm install -d
npm test
Some tests will fail due to missing keys.

However if you git clone the pkgcloud repo the tests will pass as the testkey files are included in the repo.

Please delete test/fixtures/testkey* from .gitignore after reviewing this issue.

[Auth] Fix rackspace client to return err when unauthorized

from: https://github.com/nodejitsu/pkgcloud/blob/master/lib/pkgcloud/rackspace/client.js#L70-L82

We don't actually check that you've had a valid authentication:

  request(authOptions, function (err, res, body) {
    if (err) {
      return callback(err);
    }

    self.authorized = true;
    self.config.serverUrl = res.headers['x-server-management-url'];
    self.config.storageUrl = res.headers['x-storage-url'];
    self.config.cdnUrl = res.headers['x-cdn-management-url'];
    self.config.authToken = res.headers['x-auth-token'];

    callback(null, res);
  });
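
A minimal sketch of a fix: check the status code before marking the client authorized. The 204-on-success detail matches the legacy Rackspace auth endpoint; the handler factory and error shape here are illustrative assumptions, not pkgcloud's current code.

```javascript
// Sketch: treat non-2xx auth responses as failures instead of
// unconditionally setting authorized = true.
function makeAuthHandler(self, callback) {
  return function (err, res) {
    if (err) {
      return callback(err);
    }

    // Rackspace auth returns 204 on success; 401/403 mean bad credentials.
    if (res.statusCode < 200 || res.statusCode >= 300) {
      return callback(new Error('Unauthorized: status ' + res.statusCode));
    }

    self.authorized = true;
    self.config.serverUrl  = res.headers['x-server-management-url'];
    self.config.storageUrl = res.headers['x-storage-url'];
    self.config.cdnUrl     = res.headers['x-cdn-management-url'];
    self.config.authToken  = res.headers['x-auth-token'];

    callback(null, res);
  };
}

// Exercise the handler with mock responses (no network needed):
var client = { config: {} };
var result;
makeAuthHandler(client, function (e, r) { result = { err: e, res: r }; })(
  null, { statusCode: 401, headers: {} });
console.log(result.err.message); // "Unauthorized: status 401"

makeAuthHandler(client, function (e, r) { result = { err: e, res: r }; })(
  null, { statusCode: 204, headers: { 'x-auth-token': 'token123' } });
console.log(client.authorized, client.config.authToken); // true "token123"
```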

Make tests deterministic

Currently, the tests have significantly high variability.

I routinely get different results on my local box, and it's even worse when using Travis to validate a commit or PR.

I propose we rework the tests to try and address the non-deterministic nature of them.

Use ssh2

We should be using ssh2 by @mscdex instead of spawning ssh processes in our Bootstrapper.

I'm unsure about the Bootstrapper's use of scp; I've opened an issue on ssh2 to clarify. Worst case, I suppose we could use sftp instead of scp, but I'm unsure of the implications.

Mock server for tests

The nock-based test fixtures for pkgcloud are becoming large enough that it's worthwhile to refactor them into a stand-alone module that accepts HTTP requests.
