
bench-rest's People

Contributors

bruce17, ffoysal, jeffbski, kevinjalbert, kintel, marcelh89, orthographic-pedant, saqib-ahmed


bench-rest's Issues

Socket hang up

I always get the error 'Failed in main with err: { [Error: socket hang up] code: 'ECONNRESET' }' any time I run my bench-rest script, and I don't really understand why.

Basically, the endpoint I am load-testing with bench-rest carries out some asynchronous tasks: I loop through an array and create users in my database, so no response is sent until that loop has completed. Do you think this is the reason why it is hanging up?
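If the endpoint legitimately takes a long time to respond, the underlying HTTP request may simply be timing out. Since bench-rest hands `all.requestOptions` to the request module, one thing to try is raising the per-request timeout in a beforeHook. A minimal sketch, exercised against a mock context (the 120000 ms value is an arbitrary assumption — pick something above your worst-case response time):

```javascript
// Sketch: raise the per-request timeout in a beforeHook so a slow endpoint
// is not cut off before it responds.
function raiseTimeout(all) {
  all.requestOptions = all.requestOptions || {};
  all.requestOptions.timeout = 120000; // ms; assumption, tune to your endpoint
  return all; // always return all so the flow continues
}

// Exercising the hook with a mock context object:
var demoAll = { requestOptions: {} };
raiseTimeout(demoAll);
console.log(demoAll.requestOptions.timeout); // 120000
```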

Worse performance results than ab

Hi @jeffbski ,

I need to run really heavy performance tests against a Node.js app with Express services. In an initial test against a hello-world service, I see very significant differences between bench-rest and the traditional ab.

(By the way, I am trying bench-rest after testing loadtest, which also gave different results than ab, the Apache Benchmark.)

Testing a "Hello World" get service with ab:

(screenshot, 2015-03-09 16:13:38)

The same using bench-rest:

(screenshot, 2015-03-09 16:17:24)

With bench-rest I get 4x fewer requests per second than with ab. Do you know why this is happening?

best regards,

How to use runOptions

I have set up a script with runOptions, but I am not sure how to execute the script so that it uses runOptions rather than running bench-rest -n 1000 -c 50 perf_scripts.js. I'm quite new to JavaScript, so apologies if this is a very silly question.
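One way to have runOptions take effect is to run bench-rest programmatically from a plain Node script instead of through the CLI. A sketch of such a script, assuming bench-rest is installed locally; the URL is a placeholder, and the actual invocation is shown in comments:

```javascript
// Sketch of a script that embeds runOptions instead of passing -n/-c flags.
var flow = {
  main: [
    { get: 'http://localhost:8000/' } // placeholder URL
  ]
};

var runOptions = {
  limit: 50,        // concurrent connections (replaces -c 50)
  iterations: 1000, // total iterations (replaces -n 1000)
  progress: 1000    // emit a progress event roughly every second
};

// With bench-rest installed, you would then run the flow directly
// (node perf_script.js) rather than via the CLI:
//   var benchrest = require('bench-rest');
//   benchrest(flow, runOptions)
//     .on('error', function (err, ctxName) { console.error(ctxName, err); })
//     .on('end', function (stats, errorCount) { console.log(stats, errorCount); });
```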

Questions about upload a file

Trying to upload a file:

var formData = new FormData();
formData.append('field', '13');
formData.append('file', fs.createReadStream(__dirname + '/test.jpg'));

post: 'http://server.com',
beforeHooks: [
  function (all) {
    var Authorization = all.iterCtx.Authorization;
    all.requestOptions.headers.Authorization = Authorization;
    all.requestOptions.preambleCRLF = true;
    all.requestOptions.postambleCRLF = true;
    all.requestOptions.headers['Content-type'] = 'multipart/form-data';
    all.requestOptions.formData = formData;
    return all; // always return all if you want it to continue
  }
],

The error is:

Error: Multipart: Boundary not found
    at new Multipart (/node_modules/connect-busboy/node_modules/busboy/lib/types/multipart.js:58:11)
    at Multipart (


Can you please help?
Thanks.
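A likely cause of "Boundary not found" is the manually set Content-Type header: when the request module (which bench-rest uses) is given formData, it generates the `multipart/form-data; boundary=...` header itself, and a hand-written header without a boundary overrides it. A sketch of the hook with that line dropped, exercised against a mock context (the field values follow the question; the fix is an assumption worth testing):

```javascript
// Sketch: let the request module build the multipart Content-Type
// (including the boundary) instead of setting the header by hand.
function attachFormData(all) {
  all.requestOptions.headers = all.requestOptions.headers || {};
  all.requestOptions.headers.Authorization = all.iterCtx.Authorization;
  // NOTE: no headers['Content-type'] here; request derives
  // "multipart/form-data; boundary=..." from formData on its own.
  all.requestOptions.formData = {
    field: '13'
    // file: fs.createReadStream(__dirname + '/test.jpg') // stream as in the question
  };
  return all;
}

// Exercising the hook with a mock context:
var uploadAll = { iterCtx: { Authorization: 'token' }, requestOptions: {} };
attachFormData(uploadAll);
```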

On progress event not logging output

Hi,
From the code below, I noticed that the progress event never outputs anything, and I was wondering why. Any time I run my bench-rest script in the terminal, I don't see anything until the whole run has completed.

 benchrest(flow, runOptions)
    .on('error', function (err, ctxName) { 
      console.error('Failed in %s with err: ', ctxName, err); 
    })
    .on('progress', function (stats, percent, concurrent, ips) {
      console.log('Progress: %s complete', percent);
    });

It would have been nice to see some output as it's progressing so I have a sense of what is going on.

Thanks
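One thing worth checking: bench-rest only emits progress events when runOptions asks for them via a reporting interval in milliseconds. A sketch of run options with progress enabled (the other values are arbitrary):

```javascript
// Sketch: 'progress' events fire only when runOptions.progress is set
// to a non-zero reporting interval in milliseconds.
var runOptions = {
  limit: 10,       // concurrent connections
  iterations: 100, // total iterations
  progress: 1000   // emit 'progress' roughly every 1000 ms
};
```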

bench-rest not able to run a second benchmark in the same Node.js process

I'm running bench-rest as a web app together with Express, and I found a problem: when I try to run bench-rest more than once, it never kicks in again in the same Node.js process, so probably something is hanging after the first run finishes? Is there a way to ensure GC or release the threads?

Thanks

This is the code I'm running:

app.get('/benchtest', function(req, res) {
  console.log('running benchmark test for site:');
  console.log('site: ' + req.query.site);
  console.log('limit: ' + req.query.limit);
  console.log('iterations: ' + req.query.iterations);
  console.log('prealloc: ' + req.query.prealloc);
  var benchrest = require('bench-rest');

  var flow = {
    before: [],      // operations to do before anything
    beforeMain: [],  // operations to do before each iteration
    main: [          // the main flow for each iteration, #{INDEX} is unique iteration counter token
      // { put: 'http://localhost:8000/foo_#{INDEX}', json: 'mydata_#{INDEX}' },
      { get: req.query.site }
    ],
    afterMain: [],   // operations to do after each iteration
    after: []        // operations to do after everything is done
  };
  var runOptions = {
    progress: 1000,
    limit: req.query.limit,           // concurrent connections
    iterations: req.query.iterations  // number of iterations to perform
    // prealloc: req.query.prealloc   // only preallocate up to 100 before starting
  };
  var errors = [];
  benchrest(flow, runOptions)
    .on('error', function (err, ctxName) {
      console.error('Failed in %s with err: ', ctxName, err);
    })
    .on('progress', function (stats, percent, concurrent, ips) {
      console.log('Progress: %s complete', percent);
      if (percent == 0) {
        errors.push(percent);
      }
      if (errors.length > 10) {
        res.send({result: {stats: null, errorCount: 1}});
      }
    })
    .on('end', function (stats, errorCount) {
      console.log('error count: ', errorCount);
      console.log('stats', stats);
      res.send({result: {stats: stats, errorCount: errorCount}});
    });
});

These are the logs:

Progress: 0 complete
Progress: 60 complete
error count: 0
stats { totalElapsed: 238068,
main:
{ meter:
{ mean: 3.3783783783783785,
count: 5,
currentRate: 2.785515320334262,
'1MinuteRate': 0,
'5MinuteRate': 0,
'15MinuteRate': 0 },
histogram:
{ min: 236587,
max: 238068,
sum: 1185539,
variance: 473156.20000000537,
mean: 237107.8,
stddev: 687.8635039017591,
count: 5,
median: 236636,
p75: 237840,
p95: 238068,
p99: 238068,
p999: 238068 } } }
running benchmark test for site:
site: http://google.com
limit: 1
iterations: 1
prealloc: 1
Progress: 0 complete
Progress: 0 complete
error count: 0
stats { totalElapsed: 2274,
main:
{ meter:
{ mean: 1000,
count: 1,
currentRate: 1000,
'1MinuteRate': 0,
'5MinuteRate': 0,
'15MinuteRate': 0 },
histogram:
{ min: 2274,
max: 2274,
sum: 2274,
variance: null,
mean: 2274,
stddev: 0,
count: 1,
median: 2274,
p75: 2274,
p95: 2274,
p99: 2274,
p999: 2274 } } }
running benchmark test for site:
site: http://google.com
limit: 1
iterations: 1
prealloc: 1
Progress: 0 complete
Progress: 0 complete
Progress: 0 complete
Progress: 0 complete
Progress: 0 complete
Progress: 0 complete
Progress: 0 complete
Progress: 0 complete
Progress: 0 complete
Progress: 0 complete
Progress: 0 complete
Progress: 0 complete
Progress: 0 complete
Progress: 0 complete
Progress: 0 complete

Get with headers

How do I make an API call with headers? When I try, I get this type error:

Object.keys(obj).forEach(function (i) {
         ^
TypeError: Object.keys called on non-object
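Since bench-rest passes each flow operation through to the request module, request options such as headers should be able to sit directly on the operation object. A sketch (the URL and token are placeholder assumptions):

```javascript
// Sketch: a GET operation carrying custom headers. The operation object
// is handed to the request module, so request options (headers, timeout,
// etc.) can appear alongside the get/post/put/del key.
var flow = {
  main: [
    {
      get: 'http://localhost:8000/api/items', // placeholder URL
      headers: {
        Authorization: 'Bearer PLACEHOLDER_TOKEN', // hypothetical token
        Accept: 'application/json'
      }
    }
  ]
};
```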

Dependent flow actions

Is it possible to make the flow actions dependent on each other?
I'm trying to write a test where the output of one request is used as the input to the next request. For example:

  1. POST /user
  2. Get the returned user id from step 1 and use that to POST /foo
    etc.
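bench-rest's per-iteration context (all.iterCtx) is shared across the operations of a single iteration, so an afterHook on the first request can stash the created user id and a beforeHook on the next request can read it back. A sketch exercised with mock data; the JSON body shape and URL are assumptions:

```javascript
// Sketch: pass data between flow operations through all.iterCtx.
// afterHook on POST /user saves the returned id ...
function saveUserId(all) {
  var body = JSON.parse(all.response.body); // assumes a JSON body like {"id":"u1"}
  all.iterCtx.userId = body.id;
  return all;
}

// ... and a beforeHook on the next operation injects it into the URL.
function useUserId(all) {
  all.requestOptions.uri = 'http://localhost:8000/foo/' + all.iterCtx.userId;
  return all;
}

// Exercising the hooks with a mock iteration context:
var ctx = { iterCtx: {}, requestOptions: {}, response: { body: '{"id":"u1"}' } };
saveUserId(ctx);
useUserId(ctx);
console.log(ctx.requestOptions.uri); // http://localhost:8000/foo/u1
```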

Https examples please?

Hello,

This project claims to be a benchmarking solution for both http and https. Would you be so kind as to provide an example including authorization? I'm trying to use Hawk authorization and I can't see how or where I can inject the required headers, which require recalculation on each request.

Regards
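Since a beforeHook runs for each request just before it is sent, the per-request Authorization header can be recomputed there. A sketch with a stub signer standing in for the real Hawk header calculation (computeAuthHeader is hypothetical; the hawk package's own API is not shown here):

```javascript
// Sketch: recompute an Authorization header on every request in a beforeHook.
// computeAuthHeader is a hypothetical stand-in for real Hawk signing.
function computeAuthHeader(uri, method) {
  return 'Hawk stub-for-' + method + '-' + uri; // replace with hawk's header function
}

function signRequest(all) {
  all.requestOptions.headers = all.requestOptions.headers || {};
  all.requestOptions.headers.Authorization =
    computeAuthHeader(all.requestOptions.uri, all.requestOptions.method || 'GET');
  return all; // always return all so the flow continues
}

// Exercising the hook with a mock request context:
var signed = { requestOptions: { uri: 'https://localhost:8443/api', method: 'GET' } };
signRequest(signed);
```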

Question about using for a smoketest

Hi,

I was reading about your library and it seems like it might fit the bill for a smoke-test I'm implementing.

I'm looking to make ~50-100 requests to the server to check the basic health of it after a deployment - I was wondering if you could let me know if this is feasible?

  • Test 12 servers in parallel - each has a unique test url for internal use
  • 50 tests per server - can be parallel
  • Retry of 5x of test suite looking for a full set of 200s

At the moment I have the test running, but it takes an age because it runs against server1, then server2, and so on, and with the warmup of a new application deployment it usually takes around 1 minute per server. I feel like I should be able to run these all in parallel, though.

The benchmarking seems great too and would definitely be good to track over time

Thanks for any help

How different is this from artillery?

Hello,
Thank you for such a nice benchmarking tool. I tried artillery before, but I found it a bit complex; your approach seems simpler and easier to use. Are you familiar with artillery (https://artillery.io/)? How does your project differ from it? Could you point out some features your project has over it?

Maybe mentioning them on the docs could be a good idea.
Thanks in advance

number of requests per second

Hi,
Just want to ask a question: does stats.main.meter.mean refer to the number of requests per second the server was able to handle? The term used on the page is iterations per second, so I don't really know if they are the same thing.

Thanks

Rate limiting to X requests per interval

Hi @jeffbski , great work on this--it's really useful!

Any best practices for rate-limiting the request flow by some wall-time measure, e.g. 100 requests per minute? For my application this would provide a more accurate simulation of real traffic--especially if for each minute the requests were scattered throughout the interval randomly rather than 100 async workers all spawning at once.

Currently I'm approximating this by putting a synchronous delay in a lambda in the afterHooks array, but this is pretty hacky and also messes up the stats output (since the delay time is counted toward the response time for each iteration).

If you have any suggestions for how to accomplish this with bench-rest as it stands, I'm all ears. Or if you have thoughts on how best to architect this as a new feature in the module, I'm happy to try my hand at it and send you a PR. Thanks!
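Until such a feature exists in the module, one way to keep the delay out of the measured flow is to precompute a random start offset within each interval for every request and schedule the iterations with setTimeout, instead of sleeping inside afterHooks. A sketch of the offset calculation (the numbers are arbitrary, e.g. 100 requests per 60-second window):

```javascript
// Sketch: scatter n request start times uniformly across one interval,
// e.g. 100 requests per 60000 ms window, returned sorted for scheduling.
function scatterOffsets(n, intervalMs) {
  var offsets = [];
  for (var i = 0; i < n; i++) {
    offsets.push(Math.random() * intervalMs); // uniform within the window
  }
  return offsets.sort(function (a, b) { return a - b; });
}

var offsets = scatterOffsets(100, 60000);
// Each request i would then be launched with setTimeout(fireRequest, offsets[i]),
// keeping the pacing delay outside the per-request timing stats.
```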

Problems using bench-rest with Dreamfactory

I am attempting to use bench-rest to benchmark the performance of Dreamfactory (https://github.com/dreamfactorysoftware/dreamfactory). When I use other tools such as curl, Advanced Rest Client, or our built-in test tool, I am able to connect to the REST API and perform any action. However, even the simplest actions, such as getting a list of database tables, fail with a 500 error when using bench-rest. Using other tools, I get the expected response:

{
  "resource": [
    { "name": "test_table" }
  ]
}

Using the simple get request capabilities of bench-rest, this is the result (username and password obfuscated)

bench-rest -u XXX -p XXX http://52.20.104.223/api/v2/DBTest/_table
Benchmarking 1 iteration(s) using up to 1 concurrent connections using basic auth user [email protected]

flow: http://52.20.104.223/api/v2/DBTest/_table

Failed in main, err: [Error: error statusCode: 500]
Progress [=======================================] 100% 0.0s conc:0 4/s

errors: 1
stats: { totalElapsed: 282.82533200085163,
main:
{ meter:
{ mean: 3.5350537770750914,
count: 1,
currentRate: 3.5350037788768995,
'1MinuteRate': 0,
'5MinuteRate': 0,
'15MinuteRate': 0 },
histogram:
{ min: 279.80505799874663,
max: 279.80505799874663,
sum: 279.80505799874663,
variance: null,
mean: 279.80505799874663,
stddev: 0,
count: 1,
median: 279.80505799874663,
p75: 279.80505799874663,
p95: 279.80505799874663,
p99: 279.80505799874663,
p999: 279.80505799874663 } } }

Using the exact same credentials and URL, cURL, Advanced Rest Client, and our own REST test application all work and return the list of tables. I've used Wireshark to sniff the connection, and I can't see any significant differences between it and the cURL request.

I would be happy to provide you with the proper credentials to troubleshoot this. I'm fresh out of ideas and would appreciate any insight you could provide.

Client Polling

Hi Jeff,

Great node module you made here! Very impressive work.

I have a rest work flow that involves a client starting with a post request to submit a job, getting redirected to a job status page, and then polling that page (get) until a certain json element on it has the value "Complete".

I see everything I need in your API except the client-polling part.

I don't see a way to tell bench-rest to loop, performing the same GET request over and over in the after hook until the verification step passes (the job is complete). Maybe I'm missing something.

Any advice you have would be most appreciated.
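bench-rest has no built-in poll-until operation, so one workaround is a small polling helper that re-issues the status GET until the JSON status field reads "Complete". The sketch below runs against a stubbed request function (getStatus is a hypothetical stand-in for the real GET) rather than bench-rest's hooks:

```javascript
// Sketch: poll a status endpoint until it reports "Complete".
// getStatus(cb) is a stand-in for the real GET request.
function pollUntilComplete(getStatus, delayMs, done) {
  getStatus(function (err, statusJson) {
    if (err) return done(err);
    if (statusJson.status === 'Complete') return done(null, statusJson);
    setTimeout(function () {
      pollUntilComplete(getStatus, delayMs, done); // try again after a delay
    }, delayMs);
  });
}

// Stubbed endpoint that completes on the third poll:
var calls = 0;
function fakeStatus(cb) {
  calls++;
  cb(null, { status: calls < 3 ? 'Running' : 'Complete' });
}

pollUntilComplete(fakeStatus, 1, function (err, result) {
  console.log('polled ' + calls + ' times, status: ' + result.status);
  // polled 3 times, status: Complete
});
```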

Thanks again for creating a great node module!

Cheers,
Scott Ellis

Question on Histogram with Respect to Concurrent Connections

I expect the mean histogram value to be close to the meter mean. However, if we double the concurrent connections, the histogram mean doubles, and if we keep doubling concurrent connections, the histogram mean keeps doubling as if the values are being added up. What am I missing?

Global beforeHooks

Hello,

Sometimes there are hooks that I would like to run for every request in the stream (for example, to include a certain common header).

Currently there is no way to do this. At first I thought that beforeMain was for that, but that was a mistake on my side, probably because of the total absence of a clear usage example of beforeMain. beforeMain should be a list of REST requests, probably for getting a cookie or something like that.

Would it be possible to implement beforeHooks configurable at the flow level instead of just per request?

Thanks and regards

Ignore errors

I think it would be useful to have a feature whereby errors are ignored. The use case: I want to measure my server's overall performance, in the presence of errors, because the code path in that case is different and it might change the results.
