
jmh-visualizer's Issues

Custom Reports

You can select data:

  • from different runs
  • from different benchmarks (across classes)
  • from different metrics (score, gc, …)
  • from different types (score, error, min/max, iterations)

And display them as one or more reports, where each report is a mix of text and selected data pieces.
Data can be displayed:

  • in table format
  • as charts of different types and arrangements

You should be able to pin those reports by exporting their definitions and including them:

  • in the Jenkins configuration
  • as part of a gradle setup
  • via parameter ?
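To make the idea concrete, here is a purely hypothetical sketch of what an exported report definition could look like. None of these field names exist in jmh-visualizer today; they only mirror the selection dimensions listed above (runs, benchmarks, metrics, display type).

```javascript
// Hypothetical report definition; every field name here is an assumption.
const reportDefinition = {
  title: 'Serialization throughput',
  text: 'Comparison of run 1 vs run 2',
  runs: ['run1', 'run2'],
  benchmarks: ['com.example.SerBench.*'],
  metrics: ['score', 'gc'],
  display: { type: 'barChart', arrangement: 'grouped' },
};

// A definition like this could be serialized and checked into the build setup.
console.log(JSON.stringify(reportDefinition, null, 2));
```

Such a JSON blob could then be referenced from a Jenkins job or a gradle task, or passed via a request parameter.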

(Optional) Export formats; not sure about these:

  • maybe markdown (no charts, just tables)
  • pdf (if not too complex)
  • single webpage ?

SingleRun charts should be sortable for best-first

For single-run class charts there should be an option to order the bars by their score instead of their natural occurrence. That is helpful for comparing almost-equal methods, especially in bigger charts.

Might be more difficult for parameterized benchmarks...
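The requested ordering boils down to a sort over the methods of a class. A minimal sketch, assuming a flat list of benchmarks with a plain `score` field (the internal data model may differ); since "best" means highest for throughput but lowest for time-per-op, the direction is a parameter:

```javascript
// Sort benchmark methods best-first; direction depends on the score unit.
function sortBestFirst(benchmarks, higherIsBetter = true) {
  return [...benchmarks].sort((a, b) =>
    higherIsBetter ? b.score - a.score : a.score - b.score);
}

const methods = [{ name: 'slow', score: 2 }, { name: 'fast', score: 9 }];
console.log(sortBestFirst(methods).map(m => m.name)); // ['fast', 'slow']
```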

Accept jmh files by URL

Hi,

I would like to know if the UI could accept JMH files by URL.

What do you think?

Juan Antonio

Automate local file visualizing flow

I've run into a use case: I have automated the whole benchmarking process up to creating jmh-result.json, and the only thing missing is that I still need to manually drag-and-drop the file onto the page.

It would be nice to have a URL that points the page to a source JSON on another resource, like http://jmh.morethan.io/?results=https%3A%2F%2Fsomesite%2Fjmh-result.json. A local file could be uploaded to a third-party resource and then the link could be shared.
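Building such a link is just a matter of percent-encoding the result URL before appending it as a query parameter. A small sketch (the `?results=` parameter name is taken from the example above; the actual parameter the visualizer expects may differ):

```javascript
// Build a jmh.morethan.io link pointing at a hosted result file.
function buildVisualizerUrl(resultUrl) {
  return 'http://jmh.morethan.io/?results=' + encodeURIComponent(resultUrl);
}

console.log(buildVisualizerUrl('https://somesite/jmh-result.json'));
// → http://jmh.morethan.io/?results=https%3A%2F%2Fsomesite%2Fjmh-result.json
```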

Cannot use URL from openjdk code review server

I work on OpenJDK, and I'm trying to publish a JMH report using your great tool. I've uploaded my JMH json files here:

https://cr.openjdk.org/~mcimadamore/jdk/8331865/

Ideally, I'd like to pass the two URLs to the visualizer, so that I can then share the resulting page. But I'm having no luck with such URLs. Using a gist works (as does uploading locally), but I wonder whether there's a reason the URL function doesn't seem to work here, and whether it might be an issue on our end, e.g. our code review server.

Doesn't work when source URL is a URL shortener

http://jmh.morethan.io/?sources=https://raw.githubusercontent.com/tlaplus/tlaplus/master/tlatools/test-benchmark/tlc2/tool/ModuleOverwrites-1531220029-80dc6de2b.json is too long, but the shortened http://jmh.morethan.io/?sources=https://git.io/fjjn2 won't load.

I don't want to shorten the full visualizer URL with an external shortener, to avoid an external dependency.

Support for log-scale

Especially when two or more results get compared, an optional log-scale for the x-axis of the bar charts would be useful.
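At its core, a log-scaled axis maps scores through a logarithm before sizing the bars, which keeps bars from runs of very different magnitudes comparable. A minimal sketch of that transform (the +1 shift and the normalization to a 0..1 bar length are illustrative choices, not the visualizer's actual scaling code):

```javascript
// Map a score to a relative bar length on a log10 scale.
// Guards against non-positive values, where a log scale is undefined.
function logScaleLength(score, maxScore) {
  if (score <= 0 || maxScore <= 0) return 0;
  return Math.log10(score + 1) / Math.log10(maxScore + 1); // 0..1
}

console.log(logScaleLength(9, 99)); // log10(10)/log10(100) = 0.5
```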

Issues building on Windows?

Hi, thanks for creating this! It's great to use to show people the results of various benchmarks without making them look at a bunch of JSON or CLI output :)

I was hoping to play around with this (our team would like to be able to display score percentile information), but I've been having issues building from a fresh clone. Hopefully I'm just doing something dumb; I'm new to the whole npm/modern-day web development thing.

I'm on Windows 10 with Node.js 10.16.3.

My build steps were

npm install
npm run build

I'm getting this message:

$ npm run build

> [email protected] build C:\dev\jmh-visualizer
> webpack --mode development

Hash: 069b980a70045607657a
Version: webpack 4.35.3
Time: 384ms
Built at: 10/07/2019 11:00:49 PM
               Asset       Size  Chunks             Chunk Names
           bundle.js   4.13 KiB     app  [emitted]  app
favicons/favicon.ico  318 bytes          [emitted]
          index.html   1.13 KiB          [emitted]
         provided.js   2.21 KiB          [emitted]
         settings.js  146 bytes          [emitted]
Entrypoint app = bundle.js
[./javascript/entry.jsx] 308 bytes {app} [built] [failed] [1 error]

ERROR in ./javascript/entry.jsx 12:4
Module parse failed: Unexpected token (12:4)
You may need an appropriate loader to handle this file type, currently no loaders are configured to process this file. See https://webpack.js.org/concepts#loaders
|
| ReactDOM.render(
>     <Provider>
|         <App />
|     </Provider>
Child html-webpack-plugin for "index.html":
     1 asset
    Entrypoint undefined = index.html
    [../node_modules/html-webpack-plugin/lib/loader.js!./index.html] 1.31 KiB {0} [built]
        + 3 hidden modules
npm ERR! code ELIFECYCLE
npm ERR! errno 2
npm ERR! [email protected] build: `webpack --mode development`
npm ERR! Exit status 2

I haven't made any code changes. Any idea what's going on? I did some quick googling, but all of the common fixes (changing webpack.config.js to include babel for jsx, making sure .babelrc has eslint) already seem to be in place.

Make showing of menu optional

Showing the 'JMH Visualizer menu' should be optional for use with external URLs and Gists.

From @skjolber (#9):

Like aleksandr-vin, I would be very happy with a simple page displaying just benchmarks - no drag'n drop, menu or sidebar alternatives.

I have managed to slim down the default template to what I'd like to see:

https://skjolber.github.io/xml-log-filter/docs/benchmark/jmh/index.html

I guess linking the file via a URL parameter is useful; I'd also like to be able to link JSON files containing links to the benchmark results, and perhaps some crude coloring scheme.

The order of two input gists is not as specified via URL

When requesting the comparison of two json gists such as this:
https://jmh.morethan.io/?gists=902f4b43519c4f96c7abcd14cdc2d27d,ac490481e3001c710d75d6071c10b23a
...then the order of the comparison displayed is non-deterministic. Sometimes it honors the order in the URL, sometimes it is reversed. If I reload the page it usually gets ordered as specified in the URL, but not always. This happens on Mozilla Firefox 92.0 on Linux.
It would be nice if the order specified via the URL were always honored.
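Non-determinism like this is typically a race: if each fetched gist is appended to the result list as its response arrives, the display order depends on network timing. One way to avoid that is to resolve all fetches together, since `Promise.all` yields results in input order regardless of completion order. A sketch, where `fetchJson` stands in for whatever loader the visualizer actually uses:

```javascript
// Fetch several result URLs while preserving the order given in the URL.
function loadInOrder(urls, fetchJson) {
  return Promise.all(urls.map(url => fetchJson(url)));
}

// Fake loader: the first request deliberately finishes last.
const fake = url => new Promise(resolve =>
  setTimeout(() => resolve(url), url.includes('a.json') ? 50 : 5));

loadInOrder(['https://example/a.json', 'https://example/b.json'], fake)
  .then(results => console.log(results));
// Input order is kept: ['https://example/a.json', 'https://example/b.json']
```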

dark mode

🖥️👁️🔥

Error bars don't make sense.

Hi, it seems that there is something wrong with the CI (confidence interval) that is displayed:

  1. it's hugely inflated
  2. it is not symmetric around the avg. score

While the inflated size could be justified if it were documented somewhere what kind of CI it is (in case it isn't the standard 95% CI), the asymmetric shape doesn't make sense to me at all.

example-data in jmh-visualizer.

I'm attaching my own visualization (which is much uglier, but with correct error bars): example-data with correct error-bars.

As JMH already provides the standard error in an aggregated field, I think this is what should be visualized (if anything at all). I think 99% of the people used to interpreting CIs at all are used to seeing plain SEs (like JMH provides) and can intuitively derive their own CIs from them, at whatever confidence level they would like to apply (probably the 95% CI for most people, i.e. roughly ±1.96 SE, assuming a normal distribution).

Show nano fractions

My measurements have very small nano results, e.g. 4.123456ns. The visualizer omits the fractional part and renders this as 4, thus making it impossible to view small benchmark variations.

You can see this behaviour in the GetterBenchmark at https://jmh.morethan.io/?source=https://raw.githubusercontent.com/chrisgleissner/benchmarks/master/jmh-result.json

The rendered measurements for the direct and lambdaMetaFactoryForGetter benchmark tests both appear as 4ns when their raw values from the JSON files differ.

Would it be possible to show at least one fractional digit for nano measurements? For example, a raw value of 4.123456ns from the JSON could be rendered as 4.1.

If you are concerned about the visual overhead this feature would add, I see two possible workarounds:

  • The number of fractional digits for nanos could be configurable via a request parameter, defaulting to 0.
  • Alternatively, the fraction could only be displayed if the value is below a threshold, e.g. 100. Thus, 100.1234 would be rendered as 100, but 99.1234 would be displayed as 99.1.
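The threshold idea from the second bullet can be sketched in a few lines; both the single fractional digit and the threshold of 100 are the values suggested above, and the issue proposes making them configurable:

```javascript
// Show one fractional digit only for values below the threshold.
function formatScore(value, threshold = 100, digits = 1) {
  return value < threshold
    ? value.toFixed(digits)
    : Math.round(value).toString();
}

console.log(formatScore(4.123456)); // "4.1"
console.log(formatScore(99.1234));  // "99.1"
console.log(formatScore(100.1234)); // "100"
```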

Thanks

Support for Docker Image

We could add support for a Docker image so that this project is easily deployable and usable for the end user in any environment.

Cannot read property 'forEach' of undefined

Hi and thanks for your awesome project!

I think I have encountered a bug when analyzing some of my benchmark results:

http://jmh.morethan.io/?sources=https://gist.githubusercontent.com/felixbarny/34dbb685a63ca56c55ec3d2dc5177151/raw/4e57b0c59505c5bd84410213af90039adbfd6350/synchronized-map.json

When clicking on the magnifying glass, I get a blank page and this error in the console in the latest version of Chrome:

bundle.js:formatted:19990 TypeError: Cannot read property 'forEach' of undefined
    at t.value (bundle.js:formatted:65932)
    at bundle.js:formatted:64463
    at Array.forEach (<anonymous>)
    at bundle.js:formatted:64459
    at Array.map (<anonymous>)
    at Ex (bundle.js:formatted:64454)
    at bundle.js:formatted:64805
    at t.value (bundle.js:formatted:64810)
    at Co (bundle.js:formatted:19527)
    at Bo (bundle.js:formatted:19724)
Uo @ bundle.js:formatted:19990
Vo.n.callback @ bundle.js:formatted:20206
Wa @ bundle.js:formatted:18998
Ha @ bundle.js:formatted:19014
$i @ bundle.js:formatted:20907
Xi @ bundle.js:formatted:20656
Vi @ bundle.js:formatted:20615
Ki @ bundle.js:formatted:20603
ns @ bundle.js:formatted:21007
In @ bundle.js:formatted:17753
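The stack trace suggests that some code iterates over a field that is simply absent from this particular result file. A minimal sketch of the kind of guard that avoids such a crash; the field name `secondaryMetrics` is purely illustrative, and the actually-undefined property would need to be located in the visualizer source:

```javascript
// Only iterate when the (possibly missing) field is actually an array.
function collectMetrics(benchmark) {
  const metrics = [];
  (benchmark.secondaryMetrics || []).forEach(m => metrics.push(m));
  return metrics;
}

console.log(collectMetrics({}));                        // []
console.log(collectMetrics({ secondaryMetrics: [1] })); // [1]
```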

Add an always-accessible drop zone, instead of Reset & Upload New

First of all, thank you for making such an awesome tool. I use this every day, and it has motivated me to use JMH more, leading to better understanding of the performance of my code.

My current workflow:

  1. View benchmark results.
  2. Update the code (or the benchmark) and run it again.
  3. Menu > Reset & Upload New
  4. Drag new JSON files into the drop zone.
  5. Repeat.

I think step 3 could be eliminated if there was an always-accessible drop zone. In other words, allow JSON files to be dropped onto the JMH visualizer while it is displaying results.

Comparison chart with bars

Thanks for such a nice project! I've been using JMH for quite a while and only just now found it :)

I'm trying to use the visualizer to compare values for different flavors of the same code, not several runs in the optimization process. For example, an http server (ktor.io) using different engines such as Netty, Jetty or Coroutines. Another example is multiplatform benchmarks for Kotlin JS, Native & JVM.

It would be nice to have a different comparison rendering that shows differently colored bars with a legend for the same test (with a vague, maybe configurable, definition of "same"). The bar graph as it is now makes no sense for such comparisons.
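One possible "same" definition is to key results by the method name with the class and package stripped, so that Netty/Jetty/... flavors of the same test land in one group. A sketch under that assumption (the stripping rule is a guess at one configurable option, and the flat `benchmark` field mirrors JMH's JSON output):

```javascript
// Group results by a normalized method name so flavors sit side by side.
function groupByMethod(results, keyOf = r => r.benchmark.split('.').pop()) {
  const groups = new Map();
  for (const r of results) {
    const key = keyOf(r);
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(r);
  }
  return groups;
}

const g = groupByMethod([
  { benchmark: 'io.ktor.NettyBench.get', score: 1 },
  { benchmark: 'io.ktor.JettyBench.get', score: 2 },
]);
console.log([...g.keys()]); // ['get']
```

Each group could then be rendered as one cluster of colored bars with a legend entry per flavor.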

sort order of multi run broken

Use case: release 0.7.3 downloaded and the docs folder unzipped locally. I'm using a nanohttpd server with two wwwroots, one for the jmh-visualizer docs and the other for some JMH JSON reports with a simplified run1-runX naming convention. The display order sometimes (5%-10% of all forced page reloads) differs from the sort order in the URL, turning declined benchmark results into improved ones and vice versa.

any idea?

thanks
me

Auto-scroll to benchmark class heading by anchor in URL

I tried adding a heading anchor reference in the URL of a Gist-based benchmark result, like so:

https://jmh.morethan.io/?gist=a1c976a7a3fedd8f0314ed295f5209a0#org.udtopia.recycle.JavaAllocBenchmark

But it doesn't work. (Well, it kinda works, but not consistently.)
It would be great to automatically scroll down to the benchmark class referenced by the anchor in the URL.
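The parsing half of this is straightforward; the scrolling half has to wait until the report has rendered, which is likely why it only works intermittently today. A sketch, assuming the class headings carry their fully-qualified name as an element id:

```javascript
// Extract the benchmark class name from a URL hash like
// "#org.udtopia.recycle.JavaAllocBenchmark".
function anchorIdFromHash(hash) {
  if (!hash || hash.length < 2) return null;
  return decodeURIComponent(hash.slice(1));
}

// Browser-side usage, to run after rendering completes:
//   const id = anchorIdFromHash(window.location.hash);
//   const el = id && document.getElementById(id);
//   if (el) el.scrollIntoView({ behavior: 'smooth' });

console.log(anchorIdFromHash('#org.udtopia.recycle.JavaAllocBenchmark'));
```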

Allow usage of gists

It would be an insanely cool feature if this tool could also work with gists. Like when the URL is https://gist.github.com/xyz/abc, then http://jmh.morethan.io/xyz/abc or http://jmh.morethan.io/gist/xyz/abc would use the JSON in that gist.

This would be useful for sharing results and automatically creating graphs.
