
jsperf.com's Introduction

jsPerf.com source code

jsPerf.com runs on a server with Apache or Lighttpd, MySQL and PHP installed.

How to run a local copy of jsPerf for testing/debugging

  1. Download the source code located in this repository.

  2. The code expects to be hosted at / rather than in a subdirectory. You might want to create a new virtual host, e.g. dev.jsperf.com.

  3. Use _tmp/database.sql to create the jsPerf tables in a database of choice.

  4. Rename _inc/config.sample.php to _inc/config.php and enter your database credentials and other info.

  5. For the Browserscope integration to work, you’ll need a Browserscope API key. To get one, sign in at Browserscope.org and then browse to the settings page.

  6. If you are using Apache, edit .htaccess (especially the first few lines) so it matches your current setup. If you are using Lighttpd, set up the dev.jsperf.com virtual host using the sample in _inc/lighttpd.conf.

  7. If you plan on using the update script for Benchmark.js/Platform.js (_tmp/build.php), you’ll need to make some files writable.

    chmod 666 _js/benchmark.js _js/benchmark.src.js _js/platform.src.js\
    _js/ui.browserscope.src.js _js/ui.src.js _inc/version.txt

Note that both include_path and open_basedir must be set to ./:./_tpl/:./_tpl/tpl-inc/:./_inc/.

With PHP 5.4 or later you can start the built-in web server with these settings for quick testing:

php -d include_path="./:./_tpl/:./_tpl/tpl-inc/:./_inc/" -d open_basedir="./:./_tpl/:./_tpl/tpl-inc/:./_inc/" -d session.save_path="_session" -S localhost:8000

License

The source code for jsPerf is copyright © Mathias Bynens and dual-licensed under the MIT and GPL licenses.

You don’t have to do anything special to choose one license or the other and you don’t have to notify anyone which license you are using. You are free to re-use parts of this code in commercial projects as long as the copyright header (as mentioned in GPL-LICENSE.txt and MIT-LICENSE.txt) is left intact.

jsperf.com's People

Contributors

andydavies, arkahnx, arthurvr, bnjmnt4n, corburn, janmoesen, jdalton, mathiasbynens, mikesherov


jsperf.com's Issues

Asynchronous preparation

The scenario is simple: you need to run a test case where the preparation code is async. A real-life example: I want to test how the size of an image affects the speed of drawImage. In the preparation code I need to load a few images, and I can't start the tests before they are loaded.
If I'm not missing something, it would be nice to add a feature similar to deferred.resolve(). If there's already a way to do this, it would be good to add it to the FAQ.
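For reference, a minimal sketch of a workaround: do the async preparation outside the timed region, then start the benchmark once the image is ready. The image URL and canvas setup here are illustrative assumptions, not jsPerf's actual preparation-code API.

// Preload the image before benchmarking; only drawImage itself is timed.
var canvas = document.createElement('canvas');
var ctx = canvas.getContext('2d');
var img = new Image();

img.onload = function () {
  new Benchmark('drawImage 512x512', function () {
    ctx.drawImage(img, 0, 0);
  }).run();
};
img.src = '/images/512x512.png'; // hypothetical test asset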

Auto-repeat tests

Could you consider adding a checkbox or something which lets you keep re-running the tests once the suite has completed? So I can start up a lot of browsers and leave them running, collecting data.

At the moment I time how long my suite takes to run, then add the following to the preparation code:

// Re-run the whole suite every 40 seconds (a bit longer than one run takes).
window.repeatTest = setInterval(function () {
    $('#run').trigger('click'); // programmatically click jsPerf's Run button
}, 40 * 1000);

Thanks

Facebook "Like"

I think a lot of people would like to see a feature where you can "Like" certain tests on Facebook.

Thank you.

UX feedback by Hammad Tariq

Nice work, but there are some UX issues.

Say I am adding a test: the two buttons at the end, Add code snippet and Save test case, are confusing. First of all, they are very close to each other. The first time, instead of clicking Save, I clicked Add code snippet, and then there seemed to be no way to get rid of the extra snippet: it became mandatory, and since I had no third test code, I ended up refreshing the page and entering all my input again.

Add could be a button off to the side somewhere, and there could be a button to remove an extra code snippet if you don't want it. Save test case is itself a bit confusing, as a newbie will not know what happens on save; I honestly wanted to test my loops straight away, so maybe it could be renamed Save & Test. The other issue is with different browsers: I tried to work out (not that hard, admittedly) how to add different browsers to my test case, but I'm still not sure. At the moment it works only for Chrome; what if I want to add IE to the test? How do I do that? I have seen people doing it in their tests, so maybe there is a degree of difficulty for a newbie in understanding how it works. The workflow could be made a bit smoother.

Ability to group test case results

The only real way to explain what I mean here is an example: http://jsperf.com/cssprops

This perf tests jQuery vs. a patched version of jQuery in 3 different scenarios. When the test is run, all of the perf numbers are scored and reported relative to each other. However, what I'd really like to know is how scenario A performs with jQuery vs. the patched version, how scenario B performs with jQuery vs. the patched version, and so on.

I know I could write 3 different perfs for this, but I'd like to be able to post one url and say: "hey, look how my patch performs against the current library".

Filtering by tests and relative graphs

I'd love to be able to filter by individual tests (e.g. show only the graph for tests 2, 4 and 5), and to have the ability to normalize the graphs so that each browser's results are "stretched" all the way, for when you're only interested in relative differences.

Preview without saving?

It would be nice if it were possible to preview without saving so you don't end up saving tests that are just completely broken.

Details about machines

It would be nice to know a little more about the machines that ran the tests, in addition to just the browser version. If I'm running IE9 on a 5-year-old laptop, it's not exactly apples-to-apples with Chrome 14 on a brand-new desktop. Also, with new browsers starting to hardware-accelerate canvas operations, what GPU you have also matters, and whether GPU acceleration is actually enabled. You can't tell these things from just a browser version.

At the very least it would be nice to know something about the basic performance of a machine's CPU. You could perhaps do something more with a Java plugin to figure some of that out? At least the performance of Java execution should be a little more uniform than JS across a wide variety of machines. Then you could attach that benchmark number to every result, and maybe have a graphing option to normalize results based on that standard benchmark.

I realize it would be tough to get right, but it would definitely improve the ability to use jsPerf as a crowd-sourced benchmarking tool.

Integrate with Google/Twitter accounts

Let users log in/authenticate using external services. This has several benefits:

  • jsPerf can load/prefill data using them (it doesn't have to be on the same computer, etc.)
  • Contact information can be attached automatically, so users can talk to/contact the author
  • A voting system for revisions can be implemented, which could (or hopefully, would) decrease the number of erroneous test cases by showing the "best" revision by default
  • Statistics! How many test cases did X or Y create (list all by user, etc.)? Are the tests this user created reliable (probability % by votes, etc.)?

TestSwarm integration / add a way to run many benchmarks

Would be awesome if jsperf could somehow use TestSwarm to gather performance results. I could just leave the browser open and automagically provide people with performance results just like I do with the jQuery test suite. This would make jsperf even more useful, since you would get a lot more performance results.

Add a button to run many benchmarks

I see that most tests on jsperf don't have a whole lot of results.
One of the awesome things about jsperf is that anyone can contribute results. But I don't see a lot of that happening because you have to dig down into each one.
How about tapping into people's willingness to help each other out by putting a big button on the front page that runs a selected set of benchmarks that could use input from the browser the visitor is running? Perhaps test writers could opt in to having their tests included in this crowd-sourcing effort. Some people may prefer to keep their tests mostly under their own control, so that they know the machine specs behind the results.

Browse test cases RSS feed is broken

The Title and Summary fields are empty:

http://jsperf.com/browse.atom

 <entry>
  <title></title>
  <author>
   <name>jsPerf</name>
  </author>
  <link rel="alternate" type="text/html" href="http://jsperf.com/single-closure-vs-this" />
  <summary></summary>
  <id>tag:jsperf.com,2010:/single-closure-vs-this</id>
  <published>2011-10-13T05:21:27+02:00</published>
  <updated>2011-10-13T05:21:27+02:00</updated>
 </entry>

There should be a "Remove Snippet" button

Our keyboards have Backspace and Delete keys because we all suck at getting things right the first time.

Can we have a "Remove Snippet" button for each snippet please?

define.amd breaks jsperf

If jsPerf.com is used with an AMD module loader (RequireJS, Dojo 1.7, bdLoad, curl, or any of the increasing number of others out there), then there is a define.amd global, and when define.amd is truthy, Benchmark.js registers itself as a module instead of as a global. The quick fix is to make sure define.amd is falsy before running Benchmark.js. There may be a more robust solution.
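A minimal sketch of that quick fix, assuming Benchmark.js is loaded synchronously right after the first two lines run:

// Temporarily hide define.amd so Benchmark.js exports a global
// instead of registering as an anonymous AMD module.
var savedAmd = define.amd;
define.amd = undefined;
// ... load benchmark.js here, e.g. via a plain <script> tag ...
define.amd = savedAmd;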

Tests do not run in Opera Mini

It's not possible to run tests in Opera Mini 6.1 on the iPad. I tried to get more info for you with Firebug Lite, but that didn't work either, I'm afraid.

JSHint integration

Might be cool to add the option to run JSHint on code snippets in the edit/add interface.
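A rough sketch of what that could look like client-side, assuming jshint.js is loaded on the page; the textarea id is hypothetical:

// Run JSHint over a snippet and report any problems it finds.
var source = document.getElementById('code-snippet-1').value;
if (!JSHINT(source, { browser: true, devel: true })) {
  JSHINT.errors.forEach(function (error) {
    if (error) { // entries can be null when JSHint bails out early
      console.warn('Line ' + error.line + ': ' + error.reason);
    }
  });
}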

colors and bars

I have three suggestions.

  1. The bars are too short when a test has results from many browsers.
  2. Blue and baby blue are too close to each other; they're hard to distinguish.
  3. Add a color legend to the left column of the test table as well.

Thank you.

Setup & lexical scope

Since setup and teardown snippets are placed in separate functions (distinct from each other and from the test snippets), data shared with the test snippet must use a global variable. In many VMs access to global variables can be slower than access to lexically scoped variables, distorting measurements for snippets that use variables initialized during setup.

Workarounds are possible, but they are ugly and decrease the quality of the measurement:

http://jsperf.com/current-time

One solution would be to define the iteration functions (and teardown) within the lexical scope of the setup function.
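A sketch of the pattern this issue describes (buildTestData and consume are hypothetical helpers):

// Setup runs in a separate function, so shared state must be global:
var bench = new Benchmark('lookup via global', {
  setup: function () {
    window.sharedData = buildTestData(); // stored as a global property
  },
  fn: function () {
    consume(window.sharedData); // each access pays the global-lookup cost
  }
});

Defining the test function inside the setup function's lexical scope would let sharedData be a cheap local variable instead.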

Firefox and IE don't show up in Browserscope results by default

The Browserscope charts seem to be filtering out Firefox and IE results (possibly others, these are just the ones I noticed) by default. You have to click the "all" link next to "Filter" in order to see these browsers.

This seems like a very strange default to me. Bug or intentional?

Add FAQ item

https://twitter.com/igorminar/status/51182149277196288
Suggestion for jsPerf: ability to execute code (e.g. prepare DOM) before each cycle outside of the timed code region.

This is already possible using Benchmark#setup. Add an FAQ item about it, and maybe link to an example test case.

Add ui.benchmarks[0].setup = function(){ … }; to your Preparation Code for a specific test, or Benchmark.prototype.setup = …; for all tests.
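Spelled out, a sketch of what such an FAQ entry could show; the fixture markup and element id are illustrative:

// In the Preparation Code, for the first test only:
ui.benchmarks[0].setup = function () {
  document.getElementById('fixture').innerHTML = '<ul><li>item</li></ul>';
};

// Or for every test on the page:
Benchmark.prototype.setup = function () {
  document.getElementById('fixture').innerHTML = '<ul><li>item</li></ul>';
};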

/embed (for charts)

Allow embedding test results iframe-style (useful for blog posts etc.). Fancy charts!

Add #nopost to FAQ

Add #nopost to jsPerf.com URLs (e.g. http://jsperf.com/my-test#nopost) to disable posting results to Browserscope. Nice for experiments with Firefox's about:config setting javascript.options.methodjit.content.

Protection of revisions

Please close if I'm wrong about this, but I'm pretty sure I've had this happen to me.

I create a revision of a test and spend quite some time running it in all kinds of browsers. Then someone (rightly) wants to contribute, so they go to edit it, but a new revision isn't made; the same one is updated, and all the test results are cleared out.

Could it be made so that if a different person edits a test suite, it always results in a new revision?

Thanks.

Readme updates for setup on own host

I've just installed jsPerf on one of my machines and would suggest adding the following information to the README:

  • The README.md should state that _inc/config.sample.php should be renamed to _inc/config.php, to avoid confusion
  • How to obtain a Browserscope API key

Highlight diffs between revisions

I absolutely love the site; however, I find it difficult to determine what changed between one revision and the next. Either I'm not looking in the right place, or this is not a feature.

Thanks again for your work on the tool.
