
autocannon



An HTTP/1.1 benchmarking tool written in Node.js, greatly inspired by wrk and wrk2, with support for HTTP pipelining and HTTPS. On my box, autocannon can produce more load than wrk and wrk2; see limitations for more details.

Install

npm i autocannon -g

or, if you want to use the API or install it as a dependency:

npm i autocannon --save

Usage

Command Line

Usage: autocannon [opts] URL

URL is any valid HTTP or HTTPS URL.
If the PORT environment variable is set, the URL can be a path. In that case 'http://localhost:$PORT/path' will be used as the URL.

Available options:

  -c/--connections NUM
        The number of concurrent connections to use. default: 10.
  -p/--pipelining NUM
        The number of pipelined requests to use. default: 1.
  -d/--duration SEC
        The number of seconds to run the autocannon. default: 10.
  -a/--amount NUM
        The number of requests to make before exiting the benchmark. If set, duration is ignored.
  -L NUM
        The number of milliseconds to elapse between taking samples. This controls the sample interval, and therefore the total number of samples, which affects statistical analyses. default: 1.
  -S/--socketPath
        A path to a Unix Domain Socket or a Windows Named Pipe. A URL is still required to send the correct Host header and path.
  -w/--workers
        Number of worker threads to use to fire requests.
  -W/--warmup
       Use a warm up interval before starting sampling.
       This enables startup processes to finish and traffic to normalize before sampling begins
       use -c and -d sub args e.g. `--warmup [ -c 1 -d 3 ]`
  --on-port
        Start the command listed after -- on the command line. When it starts listening on a port,
        start sending requests to that port. A URL is still required to send requests to
        the correct path. The hostname can be omitted, `localhost` will be used by default.
  -m/--method METHOD
        The HTTP method to use. default: 'GET'.
  -t/--timeout NUM
        The number of seconds before timing out and resetting a connection. default: 10
  -T/--title TITLE
        The title to place in the results for identification.
  -b/--body BODY
        The body of the request.
        NOTE: This option needs to be used with the '-H/--headers' option in some frameworks
  -F/--form FORM
        Upload a form (multipart/form-data). The form options can be a JSON string like
        '{ "field 1": { "type": "text", "value": "a text value"}, "field 2": { "type": "file", "path": "path to the file" } }'
        or a path to a JSON file containing the form options.
        When uploading a file the default filename value can be overridden by using the corresponding option:
        '{ "field name": { "type": "file", "path": "path to the file", "options": { "filename": "myfilename" } } }'
        Passing the filepath to the form can be done by using the corresponding option:
        '{ "field name": { "type": "file", "path": "path to the file", "options": { "filepath": "/some/path/myfilename" } } }'
  -i/--input FILE
        The body of the request. See '-b/--body' for more details.
  -H/--headers K=V
        The request headers.
  --har FILE
        When provided, Autocannon will use requests from the HAR file.
        CAUTION: you have to specify one or more domains using the URL option: only the HAR requests to the same domains will be considered.
        NOTE: you can still add extra headers with -H/--headers, but -m/--method, -F/--form, -i/--input and -b/--body will be ignored.
  -B/--bailout NUM
        The number of failures before initiating a bailout.
  -M/--maxConnectionRequests NUM
        The max number of requests to make per connection to the server.
  -O/--maxOverallRequests NUM
        The max number of requests to make overall to the server.
  -r/--connectionRate NUM
        The max number of requests to make per second from an individual connection.
  -R/--overallRate NUM
        The max number of requests to make per second from all connections.
        connection rate will take precedence if both are set.
        NOTE: if using rate limiting and a very large rate is entered which cannot be met, Autocannon will do as many requests as possible per second.
        Also, latency data will be corrected to compensate for the effects of the coordinated omission issue.
        If you are not familiar with the coordinated omission issue, you should probably read [this article](http://highscalability.com/blog/2015/10/5/your-load-generator-is-probably-lying-to-you-take-the-red-pi.html) or watch this [Gil Tene's talk](https://www.youtube.com/watch?v=lJ8ydIuPFeU) on the topic.
  -C/--ignoreCoordinatedOmission
        Ignore the coordinated omission issue when requests should be sent at a fixed rate using 'connectionRate' or 'overallRate'.
        NOTE: it is not recommended to enable this option.
        When the request rate cannot be met because the server is too slow, many request latencies might be missing and Autocannon might report a misleading latency distribution.
  -D/--reconnectRate NUM
        The number of requests to make before resetting a connection's connection to the server.
  -n/--no-progress
        Don't render the progress bar. default: false.
  -l/--latency
        Print all the latency data. default: false.
  -I/--idReplacement
        Enable replacement of `[<id>]` with a randomly generated ID within the request body. e.g. `/items/[<id>]`. default: false.
  -j/--json
        Print the output as newline delimited JSON. This will cause the progress bar and results not to be rendered. default: false.
  -f/--forever
        Run the benchmark forever. Efficiently restarts the benchmark on completion. default: false.
  -s/--servername
        Server name for the SNI (Server Name Indication) TLS extension. Defaults to the hostname of the URL when it is not an IP address.
  -x/--excludeErrorStats
        Exclude error statistics (non-2xx HTTP responses) from the final latency and bytes per second averages. default: false.
  -E/--expectBody EXPECTED
        Ensure the body matches this value. If enabled, mismatches count towards bailout.
        Enabling this option will slow down the load testing.
  --renderStatusCodes
        Print status codes and their respective statistics.
  --cert
        Path to cert chain in pem format
  --key
        Path to private key for specified cert in pem format
  --ca
        Path to trusted ca certificates for the test. This argument accepts both a single file as well as a list of files
  --debug
        Print connection errors to stderr.
  -v/--version
        Print the version number.
  -V/--verbose
        Print the table with results. default: true.
  -h/--help
        Print this menu.
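
As a quick illustration of the flags above (a hypothetical run against a local server; the flag values are arbitrary examples):

```shell
# 100 connections, 10 pipelined requests each, for 30 seconds,
# printing per-status-code counts and the full latency table
autocannon -c 100 -p 10 -d 30 --renderStatusCodes -l http://localhost:3000

# when PORT is set, a bare path is expanded to http://localhost:$PORT/path
PORT=3000 autocannon -c 10 /api/health
```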

autocannon outputs data in tables like this:

Running 10s test @ http://localhost:3000
10 connections

┌─────────┬──────┬──────┬───────┬──────┬─────────┬─────────┬──────────┐
│ Stat    │ 2.5% │ 50%  │ 97.5% │ 99%  │ Avg     │ Stdev   │ Max      │
├─────────┼──────┼──────┼───────┼──────┼─────────┼─────────┼──────────┤
│ Latency │ 0 ms │ 0 ms │ 0 ms  │ 1 ms │ 0.02 ms │ 0.16 ms │ 16.45 ms │
└─────────┴──────┴──────┴───────┴──────┴─────────┴─────────┴──────────┘
┌───────────┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────┐
│ Stat      │ 1%      │ 2.5%    │ 50%     │ 97.5%   │ Avg     │ Stdev   │ Min     │
├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┤
│ Req/Sec   │ 20623   │ 20623   │ 25583   │ 26271   │ 25131.2 │ 1540.94 │ 20615   │
├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┤
│ Bytes/Sec │ 2.29 MB │ 2.29 MB │ 2.84 MB │ 2.92 MB │ 2.79 MB │ 171 kB  │ 2.29 MB │
└───────────┴─────────┴─────────┴─────────┴─────────┴─────────┴─────────┴─────────┘

Req/Bytes counts sampled once per second.

251k requests in 10.05s, 27.9 MB read

There are two tables: one for the request latency, and one for the request volume.

The latency table lists the request times at the 2.5% percentile, the fast outliers; at 50%, the median; at 97.5%, the slow outliers; at 99%, the very slowest outliers. Here, lower means faster.

The request volume table lists the number of requests sent and the number of bytes downloaded. These values are sampled once per second. Higher values mean more requests were processed. In the above example, 2.29 MB was downloaded in 1 second in the worst case (slowest 1%). Since we only ran for 10 seconds, there are just 10 samples; the Min value and the 1% and 2.5% percentiles therefore all come from the same sample. With longer durations these numbers will differ more.

When passing the -l flag, a third table lists all the latency percentiles recorded by autocannon:

┌────────────┬──────────────┐
│ Percentile │ Latency (ms) │
├────────────┼──────────────┤
│ 0.001      │ 0            │
├────────────┼──────────────┤
│ 0.01       │ 0            │
├────────────┼──────────────┤
│ 0.1        │ 0            │
├────────────┼──────────────┤
│ 1          │ 0            │
├────────────┼──────────────┤
│ 2.5        │ 0            │
├────────────┼──────────────┤
│ 10         │ 0            │
├────────────┼──────────────┤
│ 25         │ 0            │
├────────────┼──────────────┤
│ 50         │ 0            │
├────────────┼──────────────┤
│ 75         │ 0            │
├────────────┼──────────────┤
│ 90         │ 0            │
├────────────┼──────────────┤
│ 97.5       │ 0            │
├────────────┼──────────────┤
│ 99         │ 1            │
├────────────┼──────────────┤
│ 99.9       │ 1            │
├────────────┼──────────────┤
│ 99.99      │ 3            │
├────────────┼──────────────┤
│ 99.999     │ 15           │
└────────────┴──────────────┘

This can give some more insight if a lot (millions) of requests were sent.

Programmatically

'use strict'

const autocannon = require('autocannon')

autocannon({
  url: 'http://localhost:3000',
  connections: 10, //default
  pipelining: 1, // default
  duration: 10 // default
}, console.log)

// async/await
async function foo () {
  const result = await autocannon({
    url: 'http://localhost:3000',
    connections: 10, //default
    pipelining: 1, // default
    duration: 10 // default
  })
  console.log(result)
}

Workers

In workers mode, autocannon uses instances of Node's Worker class to execute the load tests in multiple threads.

The amount and connections parameters are divided amongst the workers. If either parameter is not evenly divisible by the number of workers, the per-worker value is rounded down to the nearest integer, or set to 1, whichever is higher. All other parameters are applied per worker as if the test were single-threaded.

NOTE: Unlike amount and connections, the "overall" parameters, maxOverallRequests and overallRate, are applied per worker. For example, if you set connections to 4, workers to 2 and maxOverallRequests to 10, each worker will receive 2 connections and a maxOverallRequests of 10, resulting in 20 requests being sent.
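
The division rule above can be sketched as follows (my own illustration, not autocannon's internal code):

```javascript
// How `amount` and `connections` are split across workers:
// rounded down to the nearest integer, but never below 1 per worker.
function perWorker (value, workers) {
  return Math.max(1, Math.floor(value / workers))
}

// e.g. 10 connections across 4 workers -> 2 connections each
console.log(perWorker(10, 4)) // 2
// a value smaller than the worker count still gives 1 per worker
console.log(perWorker(2, 4)) // 1
```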

'use strict'

const autocannon = require('autocannon')

autocannon({
  url: 'http://localhost:3000',
  connections: 10, //default
  pipelining: 1, // default
  duration: 10, // default
  workers: 4
}, console.log)

NOTE: When in workers mode, you need to pass an absolute file path to all the options that accept a function, because a function passed into the main process cannot be cloned and passed to the worker; instead, the worker needs a file it can require. The options with this behaviour are shown in the example below.

'use strict'

const autocannon = require('autocannon')

autocannon({
  // ...
  workers: 4,
  setupClient: '/full/path/to/setup-client.js',
  verifyBody: '/full/path/to/verify-body.js',
  requests: [
    {
      // ...
      onResponse: '/full/path/to/on-response.js'
    },
    {
      // ...
      setupRequest: '/full/path/to/setup-request.js'
    }
  ]
}, console.log)

API

autocannon(opts[, cb])

Start autocannon against the given target.

  • opts: Configuration options for the autocannon instance. This can have the following attributes. REQUIRED.
    • url: The given target. Can be HTTP or HTTPS. More than one URL is allowed, but it is recommended that the number of connections be an integer multiple of the number of URLs. REQUIRED.
    • socketPath: A path to a Unix Domain Socket or a Windows Named Pipe. A url is still required to send the correct Host header and path. OPTIONAL.
    • workers: Number of worker threads to use to fire requests.
    • connections: The number of concurrent connections. OPTIONAL default: 10.
    • duration: The number of seconds to run the autocannon. Can be a timestring. OPTIONAL default: 10.
    • amount: A Number stating the number of requests to make before ending the test. This overrides duration and takes precedence, so the test won't end until the given number of requests has been completed. OPTIONAL.
    • sampleInt: The number of milliseconds to elapse between taking samples. This controls the sample interval, and therefore the total number of samples, which affects statistical analyses. default: 1.
    • timeout: The number of seconds to wait for a response before timing out and resetting the connection. OPTIONAL default: 10.
    • pipelining: The number of pipelined requests for each connection. Will cause the Client API to throw when greater than 1. OPTIONAL default: 1.
    • bailout: The threshold for the number of errors while making requests to the server before this instance bails out. The instance will aggregate all results gathered so far into the final results. If not set, the instance will ignore errors and never bail out. OPTIONAL default: undefined.
    • method: The HTTP method to use. OPTIONAL default: 'GET'.
    • title: A String to be added to the results for identification. OPTIONAL default: undefined.
    • body: A String or a Buffer containing the body of the request. Insert one or more randomly generated IDs into the body by including [<id>] where the randomly generated ID should be inserted (Must also set idReplacement to true). This can be useful in soak testing POST endpoints where one or more fields must be unique. Leave undefined for an empty body. OPTIONAL default: undefined.
    • form: A String or an Object containing the multipart/form-data options or a path to the JSON file containing them
    • headers: An Object containing the headers of the request. OPTIONAL default: {}.
    • initialContext: An object that you'd like to initialize your context with. Check out an example of initializing context. OPTIONAL
    • setupClient: A Function which will be passed the Client object for each connection to be made. This can be used to customise each individual connection headers and body using the API shown below. The changes you make to the client in this function will take precedence over the default body and headers you pass in here. There is an example of this in the samples folder. OPTIONAL default: function noop () {}. When using workers, you need to supply a file path that default exports a function instead (Check out the workers section for more details).
    • verifyBody: A Function which will be passed the response body for each completed request. Each request, whose verifyBody function does not return a truthy value, is counted in mismatches. This function will take precedence over the expectBody. There is an example of this in the samples folder. When using workers, you need to supply a file path that default exports a function (Check out the workers section for more details).
    • maxConnectionRequests: A Number stating the max requests to make per connection. amount takes precedence if both are set. OPTIONAL
    • maxOverallRequests: A Number stating the max requests to make overall. Can't be less than connections. maxConnectionRequests takes precedence if both are set. OPTIONAL
    • connectionRate: A Number stating the rate of requests to make per second from each individual connection. No rate limiting by default. OPTIONAL
    • overallRate: A Number stating the rate of requests to make per second from all connections. connectionRate takes precedence if both are set. No rate limiting by default. OPTIONAL
    • ignoreCoordinatedOmission: A Boolean which disables the correction of latencies to compensate for the coordinated omission issue. Does not make sense when no rate of requests has been specified (connectionRate or overallRate). OPTIONAL default: false.
    • reconnectRate: A Number that makes the individual connections disconnect and reconnect to the server whenever it has sent that number of requests. OPTIONAL
    • requests: An Array of Objects which represents the sequence of requests to make while benchmarking. Can be used in conjunction with the body, headers and method params above. Check the samples folder for an example of how this might be used. OPTIONAL. Contained objects can have these attributes:
      • body: When present, will override opts.body. OPTIONAL
      • headers: When present, will override opts.headers. OPTIONAL
      • method: When present, will override opts.method. OPTIONAL
      • path: When present, will override opts.path. OPTIONAL
      • setupRequest: A Function you may provide to mutate the raw request object, e.g. request.method = 'GET'. It takes request (Object) and context (Object) parameters, and must return the modified request. When it returns a falsey value, autocannon will restart from the first request. When using workers, you need to supply a file path that default exports a function instead (Check out the workers section for more details) OPTIONAL
      • onResponse: A Function you may provide to process the received response. It takes status (Number), body (String), context (Object) and headers (Object) parameters. When using workers, you need to supply a file path that default exports a function instead (Check out the workers section for more details) OPTIONAL
    • har: an Object of parsed HAR content. Autocannon will extract and use entries.request: the requests, method, form and body options will be ignored. NOTE: you must ensure that the entries target the same domain as the url option. OPTIONAL
    • idReplacement: A Boolean which enables the replacement of [<id>] tags within the request body with a randomly generated ID, allowing for unique fields to be sent with requests. Check out an example of programmatic usage that can be found in the samples. OPTIONAL default: false
    • forever: A Boolean which allows you to setup an instance of autocannon that restarts indefinitely after emitting results with the done event. Useful for efficiently restarting your instance. To stop running forever, you must cause a SIGINT or call the .stop() function on your instance. OPTIONAL default: false
    • servername: A String identifying the server name for the SNI (Server Name Indication) TLS extension. OPTIONAL default: Defaults to the hostname of the URL when it is not an IP address.
    • excludeErrorStats: A Boolean which allows you to disable tracking non-2xx code responses in latency and bytes per second calculations. OPTIONAL default: false.
    • expectBody: A String representing the expected response body. Each request whose response body is not equal to expectBody is counted in mismatches. If enabled, mismatches count towards bailout. OPTIONAL
    • tlsOptions: An Object that is passed into tls.connect call (Full list of options). Note: this only applies if your URL is secure.
    • skipAggregateResult: A Boolean which allows you to disable the aggregate result phase of an instance run. See autocannon.aggregateResult
  • cb: The callback which is called on completion of a benchmark. Takes the following params. OPTIONAL.
    • err: If there was an error encountered with the run.
    • results: The results of the run.

Returns an instance/event emitter for tracking progress, etc. If cb is omitted, the return value can also be used as a Promise.

Customizing sent requests

When running, autocannon will create as many Client objects as desired connections. They run in parallel until the benchmark is over (duration or total number of requests). Each client loops over the requests array, whether it contains one request or several.

While going through available requests, the client will maintain a context: an object you can use in onResponse and setupRequest functions, to store and read some contextual data. Please check the request-context.js file in samples.

Note that the context object will be reset to initialContext (or {} if it is not provided) when restarting from the first available request, ensuring similar runs.
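
A minimal sketch of two request entries sharing data through the context (the endpoint paths and the token field are made up for illustration):

```javascript
'use strict'

// first request: log in, save a token from the response into the context
const login = {
  method: 'POST',
  path: '/login',
  onResponse: (status, body, context) => {
    if (status === 200) context.token = JSON.parse(body).token
  }
}

// second request: read the token back out of the context
const fetchProfile = {
  setupRequest: (request, context) => ({
    ...request,
    method: 'GET',
    path: '/profile',
    headers: { authorization: `Bearer ${context.token}` }
  })
}

// passed to autocannon as: requests: [login, fetchProfile]
```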

Combining connections, overallRate and amount

When combining a fixed amount of requests with concurrent connections and an overallRate limit, autocannon will distribute the requests and the intended rate over all connections. If the overallRate is not integer divisible, autocannon will configure some connection clients with a higher and some with a lower requests/second rate. If the amount is integer divisible, all connection clients get the same number of requests, which means the clients with a higher request rate will finish earlier than the others, leading to a drop in the perceived request rate.

Example: connections = 10, overallRate = 17, amount = 5000
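
Working through that example, one plausible distribution (my own illustration of the split described above, not autocannon's internal code) looks like this:

```javascript
// Distribute an overall req/s rate and a total amount over N connections.
function distribute (connections, overallRate, amount) {
  const baseRate = Math.floor(overallRate / connections)
  const extra = overallRate % connections // connections getting one extra req/s
  const perConnAmount = Math.floor(amount / connections)
  return Array.from({ length: connections }, (_, i) => ({
    rate: i < extra ? baseRate + 1 : baseRate,
    amount: perConnAmount
  }))
}

// connections = 10, overallRate = 17, amount = 5000:
// 7 clients at 2 req/s and 3 at 1 req/s, each sending 500 requests,
// so the 2 req/s clients finish after ~250 s while the rest run ~500 s —
// the perceived request rate drops once the faster clients are done.
const plan = distribute(10, 17, 5000)
```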

autocannon.track(instance[, opts])

Track the progress of your autocannon, programmatically.

  • instance: The instance of autocannon. REQUIRED.
  • opts: Configuration options for tracking. This can have the following attributes. OPTIONAL.
    • outputStream: The stream to output to. default: process.stderr.
    • renderProgressBar: A truthy value to enable the rendering of the progress bar. default: true.
    • renderResultsTable: A truthy value to enable the rendering of the results table. default: true.
    • renderLatencyTable: A truthy value to enable the rendering of the advanced latency table. default: false.
    • progressBarString: A string defining the format of the progress display output. Must be valid input for the progress bar module. default: 'running [:bar] :percent'.

Example that just prints the table of results on completion:

'use strict'

const autocannon = require('autocannon')

const instance = autocannon({
  url: 'http://localhost:3000'
}, console.log)

// this is used to kill the instance on CTRL-C
process.once('SIGINT', () => {
  instance.stop()
})

// just render results
autocannon.track(instance, {renderProgressBar: false})

Check out this example to see it in use, as well.

autocannon.printResult(resultObject[, opts])

Returns a text string containing the result tables.

  • resultObject: The result object of autocannon. REQUIRED.
  • opts: Configuration options for generating the tables. These may include the following attributes. OPTIONAL.
    • outputStream: The stream to which output is directed. It is primarily used to check if the terminal supports color. default: process.stderr.
    • renderResultsTable: A truthy value to enable the creation of the results table. default: true.
    • renderLatencyTable: A truthy value to enable the creation of the latency table. default: false.

Example:

"use strict";

const { stdout } = require("node:process");
const autocannon = require("autocannon");

function print(result) {
  stdout.write(autocannon.printResult(result));
}

autocannon({ url: "http://localhost:3000" }, (err, result) => print(result));

autocannon.aggregateResult(results[, opts])

Aggregate the results of one or more autocannon instance runs, where the instances of autocannon have been run with the skipAggregateResult option.

This is an advanced use case, where you might be running a load test using autocannon across multiple machines and therefore need to defer aggregating the results to a later time.

  • results: An array of autocannon instance results, where the instances have been run with the skipAggregateResult option set to true. REQUIRED.
  • opts: This is a subset of the options you would pass to the main autocannon API, so you could use the same options object as the one used to run the instances. See autocannon for full descriptions of the options. REQUIRED.
    • url: REQUIRED
    • title: OPTIONAL default: undefined
    • socketPath: OPTIONAL
    • connections: OPTIONAL default: 10.
    • sampleInt: OPTIONAL default: 1
    • pipelining: OPTIONAL default: 1
    • workers: OPTIONAL default: undefined
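
A sketch of this deferred flow (assuming raw results have already been collected from several machines; the function names here are illustrative):

```javascript
'use strict'

const autocannon = require('autocannon')

// shared options: the same object is used for the runs and the aggregation
const opts = {
  url: 'http://localhost:3000',
  connections: 10,
  skipAggregateResult: true // each instance skips its own aggregation phase
}

// on each machine: run an instance and ship the raw result somewhere central
async function runOnThisMachine () {
  return autocannon(opts)
}

// later, in one place: combine the raw results into a single report
function combine (rawResults) {
  return autocannon.aggregateResult(rawResults, opts)
}
```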

Autocannon events

Because an autocannon instance is an EventEmitter, it emits several events, listed below:

  • start: Emitted once everything has been set up in your autocannon instance and it has started. Useful when running the instance forever.
  • tick: Emitted every second this autocannon is running a benchmark. Useful for displaying stats, etc. Used by the track function. The tick event propagates an object containing the counter and bytes values, which can be used for extended reports.
  • done: Emitted when the autocannon finishes a benchmark. Passes the results as an argument to the callback.
  • response: Emitted when autocannon's http-client gets an HTTP response from the server. This passes the following arguments to the callback:
    • client: The http-client itself. Can be used to modify the headers and body the client will send to the server. API below.
    • statusCode: The HTTP status code of the response.
    • resBytes: The response byte length.
    • responseTime: The time taken to get a response after initiating the request.
  • reqError: Emitted in the case of a request error e.g. a timeout.
  • error: Emitted if there is an error during the setup phase of autocannon.

Results

The results object emitted by done and passed to the autocannon() callback has these properties:

  • title: Value of the title option passed to autocannon().
  • url: The URL that was targeted.
  • socketPath: The UNIX Domain Socket or Windows Named Pipe that was targeted, or undefined.
  • requests: A histogram object containing statistics about the number of requests that were sent per second.
  • latency: A histogram object containing statistics about response latency.
  • throughput: A histogram object containing statistics about the response data throughput per second.
  • duration: The amount of time the test took, in seconds.
  • errors: The number of connection errors (including timeouts) that occurred.
  • timeouts: The number of connection timeouts that occurred.
  • mismatches: The number of requests with a mismatched body.
  • start: A Date object representing when the test started.
  • finish: A Date object representing when the test ended.
  • connections: The amount of connections used (value of opts.connections).
  • pipelining: The number of pipelined requests used per connection (value of opts.pipelining).
  • non2xx: The number of non-2xx response status codes received.
  • resets: How many times the requests pipeline was reset due to setupRequest returning a falsey value.
  • statusCodeStats: Requests counter per status code (e.g. { "200": { "count": "500" } })

The histogram objects for requests, latency and throughput are hdr-histogram-percentiles-obj objects and have this shape:

  • min: The lowest value for this statistic.
  • max: The highest value for this statistic.
  • average: The average (mean) value.
  • stddev: The standard deviation.
  • p*: The XXth percentile value for this statistic. The percentile properties are: p2_5, p50, p75, p90, p97_5, p99, p99_9, p99_99, p99_999.
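
For example, reading percentiles off a result object might look like this (the result fragment and its numbers are made up for illustration):

```javascript
// a fragment of a results object, with made-up numbers
const result = {
  latency: { min: 0, max: 16, average: 0.02, stddev: 0.16, p50: 0, p99: 1, p99_9: 1 },
  requests: { average: 25131.2, p2_5: 20623 }
}

// percentile properties use underscores in place of decimal points,
// e.g. the 2.5th percentile is p2_5 and the 99.9th is p99_9
console.log(`p99 latency: ${result.latency.p99} ms`) // "p99 latency: 1 ms"
console.log(`median latency: ${result.latency.p50} ms`)
```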

Client API

This object is passed as the first parameter of both the setupClient function and the response event from an autocannon instance. You can use this to modify the requests you are sending while benchmarking. This is also an EventEmitter, with the events and their params listed below.

  • client.setHeaders(headers): Used to modify the headers of the request this client iterator is currently on. headers should be an Object, or undefined if you want to remove your headers.
  • client.setBody(body): Used to modify the body of the request this client iterator is currently on. body should be a String or Buffer, or undefined if you want to remove the body.
  • client.setHeadersAndBody(headers, body): Used to modify both the headers and body this client iterator is currently on. headers and body should take the same form as above.
  • client.setRequest(request): Used to modify the entire request that this client iterator is currently on. Can have headers, body, method, or path as attributes. Defaults to the values passed into the autocannon instance when it was created. Note: call this when modifying multiple request values for faster encoding
  • client.setRequests(newRequests): Used to overwrite the entire requests array that was passed into the instance on initiation. Note: call this when modifying multiple requests for faster encoding

Client events

The events a Client can emit are listed here:

  • headers: Emitted when a request sent from this client has received the headers of its reply. This receives an Object as the parameter.
  • body: Emitted when a request sent from this client has received the body of a reply. This receives a Buffer as the parameter.
  • response: Emitted when the client has received a completed response for a request it made. This is passed the following arguments:
    • statusCode: The HTTP status code of the response.
    • resBytes: The response byte length.
    • responseTime: The time taken to get a response after initiating the request.
  • reset: Emitted when the requests pipeline was reset due to setupRequest returning a falsey value.

Example using the autocannon events and the client API and events:

'use strict'

const autocannon = require('autocannon')

const instance = autocannon({
  url: 'http://localhost:3000',
  setupClient: setupClient
}, (err, result) => handleResults(result))
// results passed to the callback are the same as those emitted from the done events
instance.on('done', handleResults)

instance.on('tick', () => console.log('ticking'))

instance.on('response', handleResponse)

function setupClient (client) {
  client.on('body', console.log) // console.log a response body when it's received
}

function handleResponse (client, statusCode, resBytes, responseTime) {
  console.log(`Got response with code ${statusCode} in ${responseTime} milliseconds`)
  console.log(`response: ${resBytes.toString()}`)

  //update the body or headers
  client.setHeaders({new: 'header'})
  client.setBody('new body')
  client.setHeadersAndBody({new: 'header'}, 'new body')
}

function handleResults(result) {
  // ...
}

Limitations

Autocannon is written in JavaScript for the Node.js runtime and it is CPU-bound. We have verified that it yields comparable results to wrk when benchmarking Node.js applications using the http module. Nevertheless, it uses significantly more CPU than tools that compile to a binary, such as wrk. Autocannon can saturate its CPU, i.e. the autocannon process can reach 100% usage; in those cases, we recommend using wrk2.

As an example, let's consider a run with 1000 connections on a server with 4 cores with hyperthreading:

  • wrk uses 2 threads (by default) plus an auxiliary one to collect the metrics, with a total CPU load of 20% + 20% + 40%.
  • autocannon uses a single thread at 80% CPU load.

Both saturate a Node.js process at around 41k req/sec; however, autocannon can saturate sooner because it is single-threaded.

Note that wrk does not support HTTP/1.1 pipelining. As a result, autocannon can create more load on the server than wrk for each open connection.

Acknowledgements

This project was kindly sponsored by nearForm.

Logo and identity designed by Cosmic Fox Design: https://www.behance.net/cosmicfox.

wrk and wrk2 provided great inspiration.

Chat on Gitter

If you are using autocannon or you have any questions, let us know: Gitter

License

Copyright Matteo Collina and other contributors, Licensed under MIT.


autocannon's Issues

disconnect and reconnect clients after a while

We should be able to force a DNS lookup every X requests to support AWS ELBs:

Single Client Tests
In single client tests, a load generator will issue requests to the server using the same client. In most cases, the client does not re-resolve the DNS after the test is initiated. A good example of this type of test is Apache Bench (ab). If you are using this type of testing tool, there are two options:

  • Write discrete tests at a given sample size and concurrency then start a new set of tests and invoke these tests separately so that the client will re-resolve the DNS (possibly even forcing a re-resolution depending on the operating system and other factors). It is important to save detailed request logs so you can aggregate the logs for a single view of the request details.
  • Launch multiple client instances and initiate the tests on the different instances at approximately the same time. AWS CloudFormation can help you do this by creating your load test script and launching multiple clients, then simply issuing remote SSH commands to initiate the tests. You may want to bring up more load generating clients based on Auto Scaling rules (for example, when the average CPU of other load clients reaches a certain level, add more load generators).

However, depending on the load you are trying to generate, your test may be constrained by the client's ability to issue requests. In this case, you need to consider distributed tests.

Failing test on Node v6

Issue seems to stem from hdr histogram. Testing as appropriate, will report findings after.

Output:

Glens-MacBook-Pro-2:autocannon glen$ npm run test

> [email protected] test /Users/glen/work/nearform/os/autocannon
> standard && tap test/*.test.js

test/format.test.js ................................... 4/4
test/myhttp.test.js ................................. 29/29
test/run.test.js .................................... 47/48
  run
  not ok throughput.stddev exists
    at:
      line: 34
      column: 7
      file: test/run.test.js
    stack: |
      test/run.test.js:34:7
      Timeout.setInterval (lib/run.js:100:7)
    source: |
      t.ok(result.throughput.stddev, 'throughput.stddev exists')

total ............................................... 80/81


  80 passing (5s)
  1 failing


npm ERR! Darwin 15.5.0
npm ERR! argv "/Users/glen/.nvm/versions/node/v6.2.2/bin/node" "/Users/glen/.nvm/versions/node/v6.2.2/bin/npm" "run" "test"
npm ERR! node v6.2.2
npm ERR! npm  v3.9.5
npm ERR! code ELIFECYCLE
npm ERR! [email protected] test: `standard && tap test/*.test.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] test script 'standard && tap test/*.test.js'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the autocannon package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     standard && tap test/*.test.js
npm ERR! You can get information on how to open an issue for this project with:
npm ERR!     npm bugs autocannon
npm ERR! Or if that isn't available, you can get their info via:
npm ERR!     npm owner ls autocannon
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR!     /Users/glen/work/nearform/os/autocannon/npm-debug.log

error when redirecting output

Under Linux, with node 6.3.0. Also happens on master:

autocannon -d 1 example.com > out 2>&1

Result - error in node_modules/progress/lib/node-progress.js:177:

TypeError: this.stream.clearLine is not a function

Happens only if both stdout and stderr are redirected.

requests per connection option

overrides duration; sets the number of requests to make per connection, and finishes when all requests are complete

for instance, "I want to see how long it takes for 100 connections to concurrently make one request"

Extra stderr output when json is piped to file

Under Linux:
autocannon -d -j example.com > stdout 2> stderr

Result: JSON is stored in stdout, but the default output (i.e. the one with progress, results table etc.) is stored in stderr. This happens neither when output is not piped to a file nor when -j is not used.

A newline is added when writing to a file

Wrong spacing when writing to a file:

$ autocannon -c 100 http://localhost:3000
Running 10s test @ http://localhost:3000
100 connections

Stat         Avg      Stdev    Max
Latency (ms) 5.11     3.04     96
Req/Sec      17832.41 2170.06  20671
Bytes/Sec    1.99 MB  241.6 kB 2.36 MB

178k requests in 10s, 19.79 MB read

vs

$ autocannon -c 100 -j http://localhost:3000 > out.json

Running 10s test @ http://localhost:3000
100 connections

Stat         Avg     Stdev     Max
Latency (ms) 4.53    2.34      96
Req/Sec      19916.8 1427.08   21599
Bytes/Sec    2.22 MB 160.66 kB 2.49 MB

199k requests in 10s, 22.11 MB read

env: node\r: No such file or directory

Hi!
I've installed autocannon but during the installation I get this:

delvedor ~ $ sudo npm i autocannon -g
/usr/local/bin/autocannon -> /usr/local/lib/node_modules/autocannon/autocannon.js

> [email protected] install /usr/local/lib/node_modules/autocannon/node_modules/native-hdr-histogram
> node-gyp rebuild

  CC(target) Release/obj.target/zlib/zlib/adler32.o
  CC(target) Release/obj.target/zlib/zlib/compress.o
  CC(target) Release/obj.target/zlib/zlib/crc32.o
  CC(target) Release/obj.target/zlib/zlib/deflate.o
  CC(target) Release/obj.target/zlib/zlib/gzclose.o
  CC(target) Release/obj.target/zlib/zlib/gzlib.o
  CC(target) Release/obj.target/zlib/zlib/gzread.o
  CC(target) Release/obj.target/zlib/zlib/gzwrite.o
  CC(target) Release/obj.target/zlib/zlib/infback.o
  CC(target) Release/obj.target/zlib/zlib/inffast.o
  CC(target) Release/obj.target/zlib/zlib/inflate.o
../zlib/inflate.c:1507:61: warning: shifting a negative signed value is undefined [-Wshift-negative-value]
    if (strm == Z_NULL || strm->state == Z_NULL) return -1L << 16;
                                                        ~~~ ^
1 warning generated.
  CC(target) Release/obj.target/zlib/zlib/inftrees.o
  CC(target) Release/obj.target/zlib/zlib/trees.o
  CC(target) Release/obj.target/zlib/zlib/uncompr.o
  CC(target) Release/obj.target/zlib/zlib/zutil.o
  LIBTOOL-STATIC Release/zlib.a
  CC(target) Release/obj.target/histogram/src/hdr_encoding.o
  CC(target) Release/obj.target/histogram/src/hdr_histogram.o
  CC(target) Release/obj.target/histogram/src/hdr_histogram_log.o
  CXX(target) Release/obj.target/histogram/hdr_histogram_wrap.o
  CXX(target) Release/obj.target/histogram/histogram.o
  SOLINK_MODULE(target) Release/histogram.node
/usr/local/lib
└─┬ [email protected] 
  ├─┬ [email protected] 
  │ ├── [email protected] 
  │ ├── [email protected] 
  │ ├─┬ [email protected] 
  │ │ └── [email protected] 
  │ ├── [email protected] 
  │ └── [email protected] 
  ├── [email protected] 
  ├── [email protected] 
  ├─┬ [email protected] 
  │ ├── [email protected] 
  │ └── [email protected] 
  ├─┬ [email protected] 
  │ └── [email protected] 
  ├── [email protected] 
  ├── [email protected] 
  └─┬ [email protected] 
    ├── [email protected] 
    ├── [email protected] 
    ├── [email protected] 
    ├─┬ [email protected] 
    │ ├── [email protected] 
    │ └── [email protected] 
    ├── [email protected] 
    └── [email protected] 

As you can see the installation terminates without errors.

If I use autocannon by requiring it I have no problem, but if I use it via command line I get:
env: node\r: No such file or directory.

I've tried both via local and global installation and I'm using autocannon 0.3.0

add a --name (or --title) switch

The name (or title) switch should be emitted in the results. We may even add the time at which it finished. This makes the JSON results easier to debug.

Add an example using track

We should remember to add the following:

process.once('SIGINT', () => {
  instance.stop()
})

to correctly handle CTRL-C

Should we ensure strings start with `http`?

I ask because if you do something like autocannon google.com it doesn't recognise the actual address. Simple enough to fix: just check url.indexOf('http') === 0 and, if not, prepend 'http://' before it's parsed by node's internal url module.

This check would also pass for https, which is why I don't check for 'http://'.
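The fix described above could be sketched like this (normalizeUrl is a hypothetical name, not an autocannon API):

```javascript
// Hypothetical helper: prepend 'http://' only when the string doesn't
// already start with 'http' -- which also lets 'https://...' through.
function normalizeUrl (url) {
  return url.indexOf('http') === 0 ? url : 'http://' + url
}

console.log(normalizeUrl('google.com'))         // http://google.com
console.log(normalizeUrl('https://google.com')) // https://google.com (unchanged)
```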

Can't install on Node 5 NVM 3.3.12

Just hangs,

  • Hangs when installing via npm (any version)
  • Hangs when installing locally to link (master branch)

I'm trying to get it working with Node 4 now via local, I'll ping when updated. (Although it looks to be hanging too)

Module parse failed importing autocannon on a electron app with webpack

I'm building an electron app, I'm trying to use this library but I got the following error when trying to import it.

ERROR in ./app/~/autocannon/autocannon.js
         Module parse failed: /Users/ndelvalle/workspaces/muzen/app/node_modules/autocannon/autocannon.js Unexpected character '#' (1:0)
         You may need an appropriate loader to handle this file type.
         SyntaxError: Unexpected character '#' (1:0)
             at Parser.pp$4.raise (/Users/ndelvalle/workspaces/muzen/node_modules/acorn/dist/acorn.js:2221:15)
             at Parser.pp$7.getTokenFromCode (/Users/ndelvalle/workspaces/muzen/node_modules/acorn/dist/acorn.js:2756:10)
             at Parser.pp$7.readToken (/Users/ndelvalle/workspaces/muzen/node_modules/acorn/dist/acorn.js:2477:17)
             at Parser.pp$7.nextToken (/Users/ndelvalle/workspaces/muzen/node_modules/acorn/dist/acorn.js:2468:15)
             at Parser.parse (/Users/ndelvalle/workspaces/muzen/node_modules/acorn/dist/acorn.js:515:10)
             at Object.parse (/Users/ndelvalle/workspaces/muzen/node_modules/acorn/dist/acorn.js:3098:39)
             at Parser.parse (/Users/ndelvalle/workspaces/muzen/node_modules/webpack/lib/Parser.js:902:15)
             at DependenciesBlock.<anonymous> (/Users/ndelvalle/workspaces/muzen/node_modules/webpack/lib/NormalModule.js:104:16)
             at DependenciesBlock.onModuleBuild (/Users/ndelvalle/workspaces/muzen/node_modules/webpack-core/lib/NormalModuleMixin.js:310:10)
             at nextLoader (/Users/ndelvalle/workspaces/muzen/node_modules/webpack-core/lib/NormalModuleMixin.js:275:25)
             at /Users/ndelvalle/workspaces/muzen/node_modules/webpack-core/lib/NormalModuleMixin.js:259:5
             at Storage.finished (/Users/ndelvalle/workspaces/muzen/node_modules/enhanced-resolve/lib/CachedInputFileSystem.js:38:16)
             at /Users/ndelvalle/workspaces/muzen/node_modules/graceful-fs/graceful-fs.js:78:16
             at FSReqWrap.readFileAfterClose [as oncomplete] (fs.js:445:3)
          @ ./~/babel-loader!./~/vue-loader/lib/selector.js?type=script&index=0!./app/src/components/main/harass/Harass.vue 8:17-38

Removing the shebang line didn't solve it, It creates lots of warnings like:

WARNING in ./app/~/native-hdr-histogram/histogram.js
         Critical dependencies:
         6:16-37 the request of a dependency is an expression
          @ ./app/~/native-hdr-histogram/histogram.js 6:16-37

         WARNING in ./app/~/native-hdr-histogram/LICENSE
         Module parse failed: /Users/ndelvalle/workspaces/muzen/app/node_modules/native-hdr-histogram/LICENSE Unexpected token (1:21)
         You may need an appropriate loader to handle this file type.
         SyntaxError: Unexpected token (1:21)
             at Parser.pp$4.raise (/Users/ndelvalle/workspaces/muzen/node_modules/acorn/dist/acorn.js:2221:15)
             at Parser.pp.unexpected (/Users/ndelvalle/workspaces/muzen/node_modules/acorn/dist/acorn.js:603:10)
             at Parser.pp.semicolon (/Users/ndelvalle/workspaces/muzen/node_modules/acorn/dist/acorn.js:581:61)
             at Parser.pp$1.parseExpressionStatement (/Users/ndelvalle/workspaces/muzen/node_modules/acorn/dist/acorn.js:966:10)
             at Parser.pp$1.parseStatement (/Users/ndelvalle/workspaces/muzen/node_modules/acorn/dist/acorn.js:730:24)
             at Parser.pp$1.parseTopLevel (/Users/ndelvalle/workspaces/muzen/node_modules/acorn/dist/acorn.js:638:25)
             at Parser.parse (/Users/ndelvalle/workspaces/muzen/node_modules/acorn/dist/acorn.js:516:17)
             at Object.parse (/Users/ndelvalle/workspaces/muzen/node_modules/acorn/dist/acorn.js:3098:39)
             at Parser.parse (/Users/ndelvalle/workspaces/muzen/node_modules/webpack/lib/Parser.js:902:15)
             at DependenciesBlock.<anonymous> (/Users/ndelvalle/workspaces/muzen/node_modules/webpack/lib/NormalModule.js:104:16)
             at DependenciesBlock.onModuleBuild (/Users/ndelvalle/workspaces/muzen/node_modules/webpack-core/lib/NormalModuleMixin.js:310:10)
             at nextLoader (/Users/ndelvalle/workspaces/muzen/node_modules/webpack-core/lib/NormalModuleMixin.js:275:25)
             at /Users/ndelvalle/workspaces/muzen/node_modules/webpack-core/lib/NormalModuleMixin.js:259:5
             at Storage.finished (/Users/ndelvalle/workspaces/muzen/node_modules/enhanced-resolve/lib/CachedInputFileSystem.js:38:16)
             at /Users/ndelvalle/workspaces/muzen/node_modules/graceful-fs/graceful-fs.js:78:16
             at FSReqWrap.readFileAfterClose [as oncomplete] (fs.js:445:3)
          @ ./app/~/native-hdr-histogram ^\.\/.*$

         WARNING in ./app/~/native-hdr-histogram/Makefile
         Module parse failed: /Users/ndelvalle/workspaces/muzen/app/node_modules/native-hdr-histogram/Makefile Unexpected character '#' (1:0)
         You may need an appropriate loader to handle this file type.
         SyntaxError: Unexpected character '#' (1:0)
             at Parser.pp$4.raise (/Users/ndelvalle/workspaces/muzen/node_modules/acorn/dist/acorn.js:2221:15)
             at Parser.pp$7.getTokenFromCode (/Users/ndelvalle/workspaces/muzen/node_modules/acorn/dist/acorn.js:2756:10)
             at Parser.pp$7.readToken (/Users/ndelvalle/workspaces/muzen/node_modules/acorn/dist/acorn.js:2477:17)
             at Parser.pp$7.nextToken (/Users/ndelvalle/workspaces/muzen/node_modules/acorn/dist/acorn.js:2468:15)
             at Parser.parse (/Users/ndelvalle/workspaces/muzen/node_modules/acorn/dist/acorn.js:515:10)
             at Object.parse (/Users/ndelvalle/workspaces/muzen/node_modules/acorn/dist/acorn.js:3098:39)
             at Parser.parse (/Users/ndelvalle/workspaces/muzen/node_modules/webpack/lib/Parser.js:902:15)
             at DependenciesBlock.<anonymous> (/Users/ndelvalle/workspaces/muzen/node_modules/webpack/lib/NormalModule.js:104:16)
             at DependenciesBlock.onModuleBuild (/Users/ndelvalle/workspaces/muzen/node_modules/webpack-core/lib/NormalModuleMixin.js:310:10)
             at nextLoader (/Users/ndelvalle/workspaces/muzen/node_modules/webpack-core/lib/NormalModuleMixin.js:275:25)
             at /Users/ndelvalle/workspaces/muzen/node_modules/webpack-core/lib/NormalModuleMixin.js:259:5
             at Storage.finished (/Users/ndelvalle/workspaces/muzen/node_modules/enhanced-resolve/lib/CachedInputFileSystem.js:38:16)
             at /Users/ndelvalle/workspaces/muzen/node_modules/graceful-fs/graceful-fs.js:78:16
             at FSReqWrap.readFileAfterClose [as oncomplete] (fs.js:445:3)
          @ ./app/~/native-hdr-histogram ^\.\/.*$

Any ideas that could help me out using the lib in this environment?

Allow users to specify a rate of requests per seconds

Many benchmarking tools allow users to specify the rate of requests per second. It would be nice to support this. If the rate cannot be met, we should just do as many requests as we can. If no rate is specified, we should do as many requests as possible.

@mcollina We'll need to chat about this, as I'm not sure how the code for this would look or work. Rate limiting on a huge rate that cannot be met will add an unnecessary overhead.
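One way to sketch the pacing logic (an illustration, not autocannon's implementation): derive the delay between sends from the target rate, and skip pacing entirely when no rate is given.

```javascript
// Sketch only: with no rate, fire as fast as possible (zero delay);
// otherwise space requests evenly across each second.
function sendIntervalMs (ratePerSecond) {
  return ratePerSecond > 0 ? 1000 / ratePerSecond : 0
}

console.log(sendIntervalMs(500)) // 2 (ms between requests)
console.log(sendIntervalMs())    // 0 (no rate: go flat out)
```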

Add a bailout option

If running a test against a server which dies, maybe you want your test to stop?

In this case, it might be useful to allow a user to specify a threshold of errors within a specific timeframe to cause a bailout. This would surely damage perf, though. A better way of managing this is to just exit after the error threshold is hit (not needing an error timeframe).

Examples:

Say you have a server you are benchmarking for half an hour, and the server dies after 10 minutes. To detect this, we could either detect 1000 errors from the server within the last second, or detect 10000 errors overall.

cli: --data

as well as loading the body from a file, allow a --data flag for the body to be set inline
dependent on #1

Docs

We're nearing a stage where this is close to a 1.0.0 release; we should do some in-depth docs for new users, along with more samples.

https support?

Needed or not? This tool is obviously meant to be used for systems in development, so not supporting https makes perfect sense, but we need to document it.

provide html generator

I think we can build another tool that takes autocannon JSON and produces an HTML page, possibly with comparisons with a previous run. We may want to add some metadata to the CLI to add a name to the given bench.

Also, we can print the latency using a histogram and so on.. :D.

Change the headers/payload

There should be a synchronous API to change the payload if we want to, something like

const autocannon = require('autocannon')

const instance = autocannon({
    url: 'http://localhost:3000',
    connections: 1000,
    duration: 20
}, console.log)

instance.on('response', function (client, statusCode, returnBytes, responseTime) {
  client.setHeaders({ .. })
  client.body = ...
}) 

Documentation and validation work

Currently, the documentation doesn't show how to customise the headers on the request.

For the duration option: browsing the code, it is required but never checked. @mcollina, should we put a check in for the duration and test it, or default it to some value, e.g. 10? Obviously, this will need a unit test and documentation work, too.

Supported HTTP methods should be documented.

Show the default value for the http pipelining value.

Update the .track doc; I left outputStream listed as a param name when I should have listed opts.

TypeError: Expected a number, got number

When benchmarking this node.js script with:
autocannon -d 1 localhost:1234

Result:

Running 1s test @ http://localhost:1234
10 connections

/usr/local/lib/node_modules/autocannon/node_modules/pretty-bytes/index.js:6
        throw new TypeError('Expected a number, got ' + typeof num);
        ^

TypeError: Expected a number, got number
    at module.exports (/usr/local/lib/node_modules/autocannon/node_modules/pretty-bytes/index.js:6:9)
    at asBytes (/usr/local/lib/node_modules/autocannon/lib/progressTracker.js:170:20)
    at EventEmitter.instance.on (/usr/local/lib/node_modules/autocannon/lib/progressTracker.js:59:38)
    at emitOne (events.js:101:20)
    at EventEmitter.emit (events.js:188:7)
    at Timeout.setInterval (/usr/local/lib/node_modules/autocannon/lib/run.js:181:15)
    at Timeout.wrapper (timers.js:425:11)
    at tryOnTimeout (timers.js:232:11)
    at Timer.listOnTimeout (timers.js:202:5)

This does not happen, when -j is used, which produces:

{"url":"http://localhost:1234","requests":{"average":1680,"mean":1680,"stddev":0,"min":1680,"max":1680,"total":1680,"sent":1690},"latency":{"average":5.54,"mean":5.54,"stddev":2.36,"min":3,"max":19,"p50":4,"p75":7,"p90":9,"p99":14,"p999":17,"p9999":19,"p99999":19},"throughput":{"average":null,"mean":null,"stddev":null,"min":-1,"max":0,"total":1761814320},"errors":0,"timeouts":0,"duration":1,"start":"2016-08-08T10:58:21.518Z","finish":"2016-08-08T10:58:22.540Z","connections":10,"pipelining":1,"non2xx":0,"1xx":0,"2xx":1680,"3xx":0,"4xx":0,"5xx":0}

only count 2xx's

currently all response types are counted; this throws the averages off (e.g. a 502 response could be much faster than a 200)

just count 2xx responses

Max throughput is reported as negative

This is the server:

const http = require('http');
var len = 1024 * 1024 * 1024;
var chunk = Buffer.alloc(len, 'x');

var server = http.createServer(function(req, res) {
  res.write(chunk);
  res.end()
});

server.listen(1234, function() {
});

And I am using this line: ./autocannon.js -d 10 -j localhost:1234

Running 10s test @ http://localhost:1234
10 connections

Stat         Avg    Stdev   Max
Latency (ms) 7344   1129.22 9497
Req/Sec      1      1.85    6
Bytes/Sec    1.1 GB 2.02 GB -1.88 GB

10 requests in 10s, 10.74 GB read

I think the problem lies in the int32 conversion:

https://github.com/mcollina/native-hdr-histogram/blob/master/hdr_histogram_wrap.cc#L91-L107

Damn, I remember thinking I will never hit that bug when I wrote those lines.....
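The wrap-around is easy to reproduce in plain JavaScript: any byte count above 2^31 - 1 goes negative under a signed 32-bit conversion, which is consistent with the -1.88 GB shown above.

```javascript
// ~2.02 GB expressed in bytes, which is above INT32_MAX (2147483647)
const bytes = Math.round(2.02 * 1024 ** 3)

console.log(bytes)     // 2168958484
console.log(bytes | 0) // wraps to a negative number under ToInt32
```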

Allow users to supply an array of `requests` to use when benching

Currently, you can supply the request headers and body when creating an autocannon instance. In future, I would like to see the ability to supply an array of objects with headers, body, method and path attributes. This might look like so:

autocannon({
  url: ...,
  requests: [
    {
      method: 'POST',
      headers: {Login form data}
      path: '/api/login'
    },
    {
      method: 'GET',
      path: '/api/user/me/sensitiveData',
      headers: {auth headers}
    },
    ......
  ]
}, cb)

This would allow autocannon to pre-generate the http request data, and then create connections, pass in the requests, and each connection can run through the requests array repeatedly until completion.

Because of the nature of this being a load testing tool, you cannot use data returned from one request in a subsequent request, since requests are prebuilt and cached/passed around. This means you must pregenerate any needed auth tokens or whatever ahead of time and embed them in the testing tool.
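The run-through-the-array behaviour could be sketched in plain JS (an illustration of the proposal, not autocannon internals):

```javascript
// Each connection pulls the next prebuilt request, cycling through the
// array repeatedly until the benchmark completes.
function requestCycler (requests) {
  let i = 0
  return () => requests[i++ % requests.length]
}

const next = requestCycler([{ path: '/api/login' }, { path: '/api/user/me' }])
console.log(next().path) // /api/login
console.log(next().path) // /api/user/me
console.log(next().path) // /api/login again
```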

README.md typo

instance.on('response', handleResonse)

Missing a "p" in handleResponse

API review

As per @davidmarkclements comment in #54, we should review the API (and do a general code cleanup).

The number of options available to users has grown greatly over the last week or so, and with that growth came a lot more code and less API thought/consideration.

broken progress

I installed autocannon as a local module (it had some issues with global installation),
and I'm getting a JS error while trying to run a simple test:

$ autocannon -c 1 -d 1s http://n...
/home/alexi/autocannon/node_modules/autocannon/node_modules/progress/lib/node-progress.js:51
    if ('number' != typeof options.total) throw new Error('total required');
                                          ^

Error: total required
    at new ProgressBar (/home/alexi/autocannon/node_modules/autocannon/node_modules/progress/lib/node-progress.js:51:49)
    at track (/home/alexi/autocannon/node_modules/autocannon/lib/progressTracker.js:28:25)
    at start (/home/alexi/autocannon/node_modules/autocannon/autocannon.js:92:5)
    at Object.<anonymous> (/home/alexi/autocannon/node_modules/autocannon/autocannon.js:101:3)
    at Module._compile (module.js:409:26)
    at Object.Module._extensions..js (module.js:416:10)
    at Module.load (module.js:343:32)
    at Function.Module._load (module.js:300:12)
    at Function.Module.runMain (module.js:441:10)
    at startup (node.js:139:18)

Thank you.

PS. Linux, [email protected], [email protected]
