benchpress's People

Contributors

jbdeboer, jeffbcross, jsayol, thompsnm


benchpress's Issues

before and after sample mode

For purposes of calculating more accurate deltas between CLs, and accounting for different test environments, benchpress should support running tests for current revision and previous revision to calculate relative results as well as absolute results.

I'm opening this issue as a reminder that this should be supported, though it may not make sense as part of the existing benchpress stack (e.g. maybe it makes more sense as a third-party script).

Where are the Docs?

Hi. It seems like the docs were moved, but the links don't point to anything.

How do I install benchpress? How do I run it?

Aren't those important things to know that should be in the README somewhere?

improve UI for comparing steps

Often it is useful to see how a specific pattern performs for a given amount of data: in many situations there is a cliff, and one approach may be much worse than another.

I have found myself doing:

[100, 1000, 10000, 100000].forEach(function(n) {
  window.benchmarkSteps.push({
    name: 'do something (' + n + ')',
    description: 'test the cost of ' + n + ' somethings',
    fn: function() {
      // something based on n
    }
  });
});

Unfortunately the output becomes pretty gnarly, and it is hard to see the trends across the varying sizes of n.

Potential solutions:

  • allow toggling the visibility of '.row.scrollable'
  • adjust the styles to reduce the step row height

Show browser warnings at top of UI

An alert should be shown at the top of the UI warning the user if the browser has not been started with the flags needed to enable high-resolution memory measurement and manual garbage collection.
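As a sketch, the warning logic could be simple feature detection. The function name and messages below are illustrative, not benchpress's actual API; the flags are Chrome's --js-flags=--expose-gc and --enable-precise-memory-info:

```javascript
// Hypothetical sketch: detect whether the browser was launched with the
// flags benchpress's measurements rely on.
function getMissingCapabilities(globalObj) {
  var missing = [];
  // window.gc only exists when V8 is started with --expose-gc
  if (typeof globalObj.gc !== 'function') {
    missing.push('Manual GC unavailable: start Chrome with --js-flags=--expose-gc');
  }
  // performance.memory is only precise with --enable-precise-memory-info
  if (!globalObj.performance || !globalObj.performance.memory) {
    missing.push('High-resolution memory unavailable: start Chrome with --enable-precise-memory-info');
  }
  return missing;
}

// In the UI, render one alert banner per missing capability, e.g.:
// getMissingCapabilities(window).forEach(showAlertBanner);
```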

Understanding the Output?

I finally was able to get a clean run from benchpress! yay!!

But now the problem is that I have no idea how to interpret this output:

(screenshot of terminal output)

Is there an explanation of these numbers somewhere, and how should I interpret the output?
Thanks! :)

Support ES6 modules

Use case: benchmarks are ES6 modules that are loaded asynchronously and therefore cannot add entries to the global window.benchmarkSteps.

Solution idea:
Add a manualBootstrap flag to the config object. This would expose a global function startBenchpress(steps) that starts the benchpress UI with the given benchmark steps.

Alternate solution idea:

  • Support ES6 modules natively, i.e. support a modules array in the config object as well, then call System.load (the ES6 module loader) for those modules and wait until they have been loaded.
  • This still needs to support scripts, as the ES6 module loader needs a polyfill right now, which should be provided by the user.
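A sketch of how bp.js could honor the proposed flag — bootstrap, globalObj, and startUi are illustrative names, not benchpress's actual internals:

```javascript
// Hypothetical sketch of the proposed manualBootstrap behavior.
function bootstrap(config, globalObj, startUi) {
  if (config.manualBootstrap) {
    // Defer startup: an ES6 module calls this once its steps are loaded.
    globalObj.startBenchpress = function (steps) {
      startUi(steps);
    };
    return;
  }
  // Default behavior: read steps from the global array.
  startUi(globalObj.benchmarkSteps || []);
}
```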

Bug in deployed npm version "0.1.2"

A few minutes ago I ran
npm install -g angular-benchpress
and this is the result I got.

As you can see, the first thing to notice is that package.json pulls in these dependencies

"dependencies": {
    "bootstrap": "^3.2.0",
    "http-server": "^0.6.1",
    "minimist": "^1.1.0",
    "mkdirp": "^0.5.0",
    "rimraf": "^2.2.8",
    "underscore": "^1.6.0"
 }

instead of

"dependencies": {
    "bootstrap": "^3.2.0",
    "express": "^4.8.6",
    "minimist": "^1.1.0",
    "mkdirp": "^0.5.0",
    "rimraf": "^2.2.8",
    "underscore": "^1.6.0"
}

The second weird thing is that lib/cli.js, around lines 87–90, contains these lines:

if (!benchmarks || !benchmarks.length) {
  throw new Error('No benchmark directories found in ' benchmarksPath);
}

As you can see, this JS syntax error (a missing + before benchmarksPath) breaks the benchpress build:

$ benchpress build

[My machine]/node_modules/angular-benchpress/lib/cli.js:88
          throw new Error('No benchmark directories found in ' benchmarksPath)
                                                               ^^^^^^^^^^^^^^

I double-checked your latest version of cli.js (https://github.com/angular/benchpress/blob/master/lib/cli.js#L89) and it's totally different.

In the end, I think the last npm publish didn't pick up the latest version correctly.
Maybe one cause is the missing GitHub tag: https://github.com/angular/benchpress/releases

I hope I explained my issue clearly :)
Ciao

RAIL performance goals

Is there any way to track the RAIL performance goals using benchpress?

(screenshot of the RAIL performance goals slide)

How Users Perceive the Speed of The Web - Paul Irish (Google) keynote
RAIL performance goals

export reports

Currently, the only way to view reports is the HTML table included in the compiled benchpress runner page. Benchpress should provide a pluggable way to export JSON reports and act on them (e.g. save or compare them).
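As a sketch, a pluggable exporter could be as simple as a function that serializes step results — the function name and JSON shape below are assumptions, not an existing benchpress API:

```javascript
// Hypothetical exporter hook: turn collected step stats into a JSON report.
function exportReport(steps, stats) {
  return JSON.stringify({
    timestamp: new Date().toISOString(),
    // navigator is only defined in a browser context
    userAgent: typeof navigator !== 'undefined' ? navigator.userAgent : 'unknown',
    steps: steps.map(function (step, i) {
      return { name: step.name, stats: stats[i] };
    })
  }, null, 2);
}

// A plugin could then POST this string to a server, write it to disk, etc.
```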

Runner should be more variable-aware

Most benchmarks in https://github.com/angular/angular.js add a list of radio buttons to the UI to allow testing the benchmark with any number of variables. Benchpress should provide an API for these variables, and should allow options to be selected by adding params to the query string, e.g.:

http://localhost/benchmarks/some-benchmark?variant=baseline
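A minimal sketch of reading such params from the query string — parseVariants is a hypothetical helper, not an existing benchpress API:

```javascript
// Parse '?variant=baseline&size=100' into {variant: 'baseline', size: '100'}.
function parseVariants(search) {
  var params = {};
  search.replace(/^\?/, '').split('&').forEach(function (pair) {
    if (!pair) return;
    var kv = pair.split('=');
    params[decodeURIComponent(kv[0])] = decodeURIComponent(kv[1] || '');
  });
  return params;
}

// In the runner: var variants = parseVariants(window.location.search);
```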

remove traceur runtime requirement

As discussed in #21, it's problematic for benchpress runtime libraries to add $traceurRuntime and System to the window, as code under test may be using a conflicting version of Traceur. In the case of benchpress, the dependency injection library requires the traceur runtime to be added.

Proposed fix: write a lightweight DI implementation with a subset of the angular/di.js API that ships with the bp.js runtime, while continuing to use the real di.js for testing, since it has nice mock utilities.
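A rough sketch of what such a lightweight DI subset might look like — the Injector name and provider shape here are illustrative; the real implementation would mirror di.js's annotation-based API:

```javascript
// Minimal registry-based injector: providers map a token to a factory and
// its dependency tokens; instances are cached as singletons.
function Injector(providers) {
  this._providers = providers; // token -> { deps: [tokens], factory: fn }
  this._cache = {};
}

Injector.prototype.get = function (token) {
  if (token in this._cache) return this._cache[token];
  var provider = this._providers[token];
  if (!provider) throw new Error('No provider for ' + token);
  // Resolve dependencies recursively, then invoke the factory.
  var deps = (provider.deps || []).map(this.get, this);
  return (this._cache[token] = provider.factory.apply(null, deps));
};
```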

Get rid of build step

Here's why there currently is a build step (where benchmarks get compiled into an executable app):

  • This project was originally part of AngularDart, and the code needed to be built before it could be tested.
  • The reporting app and code under test are currently run in the same document, but that will go away with #5.
  • The build step looks at what scripts are specified in bp.conf.js, and adds them to the document (with the ability to override scripts via query params).

1 is no longer a concern, 2 will go away soon, and for 3 users can simply use the script-loading API provided by Benchpress (i.e. bp._scripts.addMany([{"id":"jquery","src":"jquery-noop.js"},{"id":"angular","src":"/build/angular.js"},{"src":"app.js"}])) in their benchmark's index.html.

Since users shouldn't have to include the reporting/running UI in their benchmark code, the benchpress server (benchpress run) should load the reporting UI when navigating to benchmarkhost/someBenchmark/. The reporting application can then open the benchmark in an iframe or a new window (the default) when the user wants to collect samples.

Since some users may prefer to serve the benchmark from their own server, the reporting & control app should allow specifying an arbitrary host + path from which to load and execute the benchmark.

So the new directory structure for defining a benchmark would be:

Project
 |
 +-- benchmarks/
 |   | 
 |   +-- largetable/
 |   |   |
 |   |   +-- main.html
 |   |   +-- some-script.js

Tasks:

  • Make bp.js available at root of server when using benchpress run (and document that this is the path to use in benchmarks).
  • Make a pre-built dist version of bp.js available for auto-run benchmarks, i.e. something that could be included in a karma config's files[].
  • Add an input in the control&report app that allows specifying a custom URL at which the benchmark can be executed, so that users can use the benchpress server to run the reporting app, but use any arbitrary server to run the benchmark itself.
  • Update documentation to reflect deprecation of bp.conf.js in favor of creating fully-executable main.html, which uses bp.scripts API to add scripts
  • Remove _ prefix from bp._scripts since it will now be public
  • Convert benchpress run to open the reporting app when navigating

does not work when using with new `traceur-runtime.js`

Benchpress includes di.js, which includes traceur-runtime.js at version 0.0.25. However, Angular2 is using traceur 0.0.74, and the two traceur-runtime.js versions are not compatible in either direction (code generated for one does not run with the other).

I think we should either get rid of the traceur runtime within bp.js, or implement the frame approach, i.e. there would be no bp.js in the application under test, only in the parent frame.

looser coupling between running and reporting UIs

Currently, the build process compiles the markup from main.html with the benchpress harness HTML in template.html, outputting a single index.html in the build folder. Because the code under test and the reporting UI share the same context, certain constraints are placed on the reporting UI that make it difficult to improve. For example:

  • The reporting app can't be an Angular app, because it can't rely on a specific version of Angular being loaded; this adds up to a lot of imperative JavaScript to make the UI interactive, and it also makes the app difficult to test.
  • The same goes for other libraries like jQuery, Bootstrap, etc.
  • CSS used for the reporting dashboard is also applied to the markup under test, potentially impacting performance.

It would improve the development experience of benchpress if the benchmark code could be executed in an iframe with only the code under test plus a more limited version of the bp.js lib. This would require a more intelligent server component to run the benchmarks, which could manage messaging between the runner and the reporting app.
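A sketch of what the messaging layer between the reporting app and the benchmark iframe could look like — the message names and channel shape are hypothetical; in a browser, postToFrame would wrap iframe.contentWindow.postMessage and handleMessage would be wired to a 'message' event listener:

```javascript
// Hypothetical message protocol between the reporting app (parent) and the
// benchmark frame (child).
function createRunnerChannel(postToFrame, onResult) {
  return {
    // reporting app -> iframe: run the named step N times
    requestSample: function (stepName, iterations) {
      postToFrame({ type: 'bp:run', step: stepName, iterations: iterations });
    },
    // iframe -> reporting app: collect timing results, ignore other messages
    handleMessage: function (msg) {
      if (msg && msg.type === 'bp:result') {
        onResult(msg.step, msg.timings);
      }
    }
  };
}
```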

run benchmarks from CLI/node

The only way to actually run and sample a benchmark right now is to click buttons in a web UI. For the sake of continuous sampling, there should be a way to run a benchmark in a real browser from the command line or a Node.js script.
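For illustration only, the sampling loop itself could look like the following in Node — real benchmarks still need a browser for DOM work, and the sample helper is hypothetical, not part of benchpress:

```javascript
// Run a benchmark step's fn repeatedly and record wall-clock timings in ms,
// using Node's high-resolution process.hrtime clock.
function sample(step, iterations) {
  var timings = [];
  for (var i = 0; i < iterations; i++) {
    var start = process.hrtime();
    step.fn();
    var diff = process.hrtime(start);
    timings.push(diff[0] * 1e3 + diff[1] / 1e6); // [seconds, nanos] -> ms
  }
  return timings;
}
```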
