jzillmann / jmh-visualizer
Visually explore your JMH Benchmarks
Home Page: https://jmh.morethan.io
License: GNU Affero General Public License v3.0
You can select data:
And display them as one or more reports, where one report is a mix of text and selected data pieces.
Data can be displayed:
You should be able to pin those reports by exporting their definition and including them:
(Optionally) Export.
Not sure:
For single-run class charts there should be an option to order the bars by their score instead of their natural order. That is helpful for comparing almost equal methods, especially in bigger charts.
Might be more difficult for parameterized benchmarks...
Hi,
I would like to know if the UI could accept JMH files by URL.
What do you think?
Juan Antonio
I've run into a use case where I automated the whole benchmarking process up to creating jmh-result.json;
the only thing missing is that I need to manually drag and drop the file onto the page.
It would be nice to have a URL that points the page to a source JSON on another resource, like http://jmh.morethan.io/?results=https%3A%2F%2Fsomesite%2Fjmh-result.json
A local file can be uploaded to a third-party resource and then the link can be used.
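A link like the one above can be assembled with standard URL encoding. A minimal sketch (the `results` parameter name is taken from the example URL above, and `buildVisualizerLink` is an illustrative name, not part of the visualizer):

```javascript
// Build a jmh.morethan.io link that points at an externally hosted result file.
// The "results" query parameter name follows the example URL above.
function buildVisualizerLink(resultUrl) {
  return 'https://jmh.morethan.io/?results=' + encodeURIComponent(resultUrl);
}

console.log(buildVisualizerLink('https://somesite/jmh-result.json'));
// "https://jmh.morethan.io/?results=https%3A%2F%2Fsomesite%2Fjmh-result.json"
```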
I work on OpenJDK, and I'm trying to publish a JMH report using your great tool. I've uploaded my JMH json files here:
https://cr.openjdk.org/~mcimadamore/jdk/8331865/
Ideally, I'd like to pass the two URLs to the visualizer, so that I can then share the resulting page. But I'm not having luck with using such URLs. Using gist works (or uploading locally) - but I wonder if there's a reason as to why the URL function doesn't seem to work? (and, maybe also making sure it's not an issue on our end, e.g. our code review server).
http://jmh.morethan.io/?sources=https://raw.githubusercontent.com/tlaplus/tlaplus/master/tlatools/test-benchmark/tlc2/tool/ModuleOverwrites-1531220029-80dc6de2b.json
is too long but the shortened http://jmh.morethan.io/?sources=https://git.io/fjjn2
won't load.
I'd rather not use an external shortener for the full URL, to avoid an external dependency.
This is coming from jzillmann/gradle-jmh-report#2.
If the scores are very large numbers, then we could shorten the x-axis ticks with a magnitude label, e.g. 35M, 70M, ...
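A tick formatter along those lines could look like this sketch (thresholds and the function name are my own, not the visualizer's):

```javascript
// Hypothetical axis-tick formatter: shorten large values with a magnitude suffix.
function formatTick(value) {
  if (value >= 1e9) return (value / 1e9) + 'G';
  if (value >= 1e6) return (value / 1e6) + 'M';
  if (value >= 1e3) return (value / 1e3) + 'K';
  return String(value);
}

console.log(formatTick(35000000)); // "35M"
console.log(formatTick(70000000)); // "70M"
```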
Especially when two or more results get compared, an optional log-scale for the x-axis of the bar charts would be useful.
Feedback from @plokhotnyuk:
Also, the green/red colors for scoreDiff and errorDiff are a little misleading. I would prefer to see green/red for positive/negative scores, and some other colors for error diffs...
Hi, thanks for creating this! It's great to use to show people the results of various benchmarks without making them look at a bunch of JSON or CLI output :)
I was hoping to play around with this (our team would like to be able to display score percentile information), but I've been having issues building from a fresh clone. Hopefully I'm just doing something dumb; I'm new to the whole npm/modern web development thing.
I'm on Windows 10 with Nodejs 10.16.3.
My build steps were
npm install
npm run build
I'm getting this message:
$ npm run build
> [email protected] build C:\dev\jmh-visualizer
> webpack --mode development
Hash: 069b980a70045607657a
Version: webpack 4.35.3
Time: 384ms
Built at: 10/07/2019 11:00:49 PM
Asset Size Chunks Chunk Names
bundle.js 4.13 KiB app [emitted] app
favicons/favicon.ico 318 bytes [emitted]
index.html 1.13 KiB [emitted]
provided.js 2.21 KiB [emitted]
settings.js 146 bytes [emitted]
Entrypoint app = bundle.js
[./javascript/entry.jsx] 308 bytes {app} [built] [failed] [1 error]
ERROR in ./javascript/entry.jsx 12:4
Module parse failed: Unexpected token (12:4)
You may need an appropriate loader to handle this file type, currently no loaders are configured to process this file. See https://webpack.js.org/concepts#loaders
|
| ReactDOM.render(
> <Provider>
| <App />
| </Provider>
Child html-webpack-plugin for "index.html":
1 asset
Entrypoint undefined = index.html
[../node_modules/html-webpack-plugin/lib/loader.js!./index.html] 1.31 KiB {0} [built]
+ 3 hidden modules
npm ERR! code ELIFECYCLE
npm ERR! errno 2
npm ERR! [email protected] build: `webpack --mode development`
npm ERR! Exit status 2
I haven't made any code changes. Any idea what's going on? I did some quick googling, but all of the common fixes (changing webpack.config.js to include babel for jsx, making sure .babelrc has eslint) already seem to be in place.
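For reference, a webpack rule along these lines is what hands .jsx files to Babel; without a matching rule webpack fails with exactly the "Module parse failed: Unexpected token" error shown above. This is a generic sketch, not the project's actual config, but comparing it against the local webpack.config.js may help narrow the failure down:

```javascript
// webpack.config.js (sketch) — a rule like this must match .jsx files,
// otherwise webpack cannot parse JSX syntax such as <Provider>.
module.exports = {
  module: {
    rules: [
      {
        test: /\.jsx?$/,          // match both .js and .jsx
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader',
          options: { presets: ['@babel/preset-env', '@babel/preset-react'] }
        }
      }
    ]
  }
};
```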
I typically prefer the sample benchmark mode over the throughput one. However, results are usually below 1 ms/op, and the graph then shows all values as zero. Decimal values should be supported in graphing!
Example json: https://gist.github.com/BrainStone/1ec997aaf3cbd07ac925bd6e66d9bcfb
Showing the 'JMH Visualizer' menu should be optional for use with external URLs and Gists.
Like aleksandr-vin, I would be very happy with a simple page displaying just benchmarks: no drag and drop, menu, or sidebar alternatives.
I have managed to slim down the default template to what I'd like to see:
https://skjolber.github.io/xml-log-filter/docs/benchmark/jmh/index.html
I guess linking the file via a URL parameter is useful; I'd also like to be able to link JSON files containing links to the benchmark results, and perhaps some crude coloring scheme.
When the type is sample, data for a histogram is collected. Showing that data would be cool!
Even though the parameter names are included in the json input, the bar charts only show the parameter values but omit the label (name) (e.g. see http://jmh.morethan.io/?gist=https://gist.githubusercontent.com/lemmy/30cc904814b563d9df33b7aa8640ad07/raw/bf4cd9b48b0bda91a07d7d59d754dd7d78d19464/Randomization-1530197117-39ace79bf.json which should include "numOfElements" and "size"). This makes it unnecessarily hard to decipher a chart.
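Since JMH's JSON output carries the parameters as a name-to-value map, a chart label that keeps the names could be built with a sketch like this (the function name is illustrative):

```javascript
// Turn JMH's params map into a label that keeps the parameter names.
// Input shape follows JMH's JSON output, e.g. { numOfElements: "100", size: "32" }.
function paramLabel(params) {
  return Object.entries(params)
    .map(([name, value]) => name + '=' + value)
    .join(', ');
}

console.log(paramLabel({ numOfElements: '100', size: '32' }));
// "numOfElements=100, size=32"
```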
When requesting the comparison of two json gists such as this:
https://jmh.morethan.io/?gists=902f4b43519c4f96c7abcd14cdc2d27d,ac490481e3001c710d75d6071c10b23a
...then the order of comparison displayed is non-deterministic. Sometimes it honors the order in the URL, sometimes it is reversed. If I reload the page it usually gets ordered as specified in the URL, but not always. This happens on Mozilla Firefox 92.0 on Linux.
It would be nice if the order specified via the URL were always honored.
There should be a help-page where JMH elements or behaviors can be explained.
E.g. the benchmark-modes.
Hi, it seems that there is something wrong with the CI (confidence interval) that is displayed:
While the inflated size could be justified if it were defined somewhere what kind of CI it is (in case it isn't the standard 95% CI), the asymmetric shape doesn't make sense to me at all.
example-data in jmh-visualizer.
I'm attaching my own visualization (which is much more ugly, but with correct error-bars): example-data with correct error-bars.
As JMH already provides the standard error in an aggregated field, I think that is what should be visualized (if anything at all). I think 99% of the people used to interpreting CIs at all are used to seeing plain SEs (as JMH provides) and can intuitively derive their own CIs from them at whatever confidence level they would like to see applied (probably the 95% CI for most people, i.e. roughly ±1.96 SE, assuming a normal distribution).
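The derivation mentioned above is a one-liner: assuming a normal distribution, a CI at a chosen confidence level is the score plus/minus a z-multiple of the standard error. A sketch (the function name is illustrative):

```javascript
// Derive a symmetric confidence interval from JMH's aggregated score error.
// z = 1.96 corresponds to a 95% CI under a normal distribution.
function confidenceInterval(score, stdError, z = 1.96) {
  return [score - z * stdError, score + z * stdError];
}

console.log(confidenceInterval(100, 5, 2)); // [ 90, 110 ]
```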
This json file gets incorrectly visualized when uploading to https://jmh.morethan.io/:
My measurements have very small nano results, e.g. 4.123456ns. The visualizer omits the fractional part and renders this as 4, thus making it impossible to view small benchmark variations.
You can see this behaviour in the GetterBenchmark at https://jmh.morethan.io/?source=https://raw.githubusercontent.com/chrisgleissner/benchmarks/master/jmh-result.json
The rendered measurements for the direct and lambdaMetaFactoryForGetter benchmark tests both appear as 4ns, even though their raw values in the JSON files differ.
Would it be possible to show at least one fractional digit for nano measurements? For example, a raw value of 4.123456ns from the JSON could be rendered as 4.1.
If you are concerned about the visual overhead this feature would add, I see two possible workarounds:
Thanks
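A sketch of the requested rendering rule (the threshold and function name are my own, not the visualizer's): keep one fractional digit for small values and round to a whole number otherwise.

```javascript
// Keep one fractional digit for small scores so nano-scale differences stay visible.
// The cutoff of 10 is an arbitrary choice for illustration.
function formatScore(value) {
  return value < 10 ? value.toFixed(1) : Math.round(value).toString();
}

console.log(formatScore(4.123456)); // "4.1"
console.log(formatScore(1234.5));   // "1235"
```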
We could add support for a Docker image so that this project is easily deployable and usable by the end user in any environment.
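A minimal sketch of what such an image could look like, building the static bundle and serving it with nginx (the base-image versions and the `build` output directory are assumptions, not the project's actual setup):

```dockerfile
# Stage 1: build the static bundle (assumes "npm run build" emits to /app/build).
FROM node:18 AS build
WORKDIR /app
COPY . .
RUN npm install && npm run build

# Stage 2: serve the result with nginx.
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
```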
It seems that JMH reports the error range too broadly, and on the plotted chart we can see a black line that crosses the whole bar from the left to the right margin: https://screenshots.firefox.com/yttrsDp8HUoPP289/plokhotnyuk.github.io
IMHO, instead of that provided value we could use the min/max values of the data series.
Hi and thanks for your awesome project!
I think I have encountered a bug when analyzing some of my benchmark results:
When clicking on the magnifying glass, I get a blank page and this error in the console in the latest version of Chrome:
bundle.js:formatted:19990 TypeError: Cannot read property 'forEach' of undefined
at t.value (bundle.js:formatted:65932)
at bundle.js:formatted:64463
at Array.forEach (<anonymous>)
at bundle.js:formatted:64459
at Array.map (<anonymous>)
at Ex (bundle.js:formatted:64454)
at bundle.js:formatted:64805
at t.value (bundle.js:formatted:64810)
at Co (bundle.js:formatted:19527)
at Bo (bundle.js:formatted:19724)
Uo @ bundle.js:formatted:19990
Vo.n.callback @ bundle.js:formatted:20206
Wa @ bundle.js:formatted:18998
Ha @ bundle.js:formatted:19014
$i @ bundle.js:formatted:20907
Xi @ bundle.js:formatted:20656
Vi @ bundle.js:formatted:20615
Ki @ bundle.js:formatted:20603
ns @ bundle.js:formatted:21007
In @ bundle.js:formatted:17753
If a benchmark runs in all modes (avg, thrpt, etc.), then the benchmark names are all the same; only the mode differs. See jzillmann/gradle-jmh-report#7.
Cope with this!
If the benchmarks contain data from the GC profiler or other profilers, visualize that as well!
First of all, thank you for making such an awesome tool. I use this every day, and it has motivated me to use JMH more, leading to better understanding of the performance of my code.
My current workflow:
I think step 3 could be eliminated if there was an always-accessible drop zone. In other words, allow JSON files to be dropped onto the JMH visualizer while it is displaying results.
Tries to get the value of params[0][1] when params is null
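A defensive guard along these lines would avoid the crash. The function and data shape are illustrative (params as an array of [name, value] pairs, matching the `params[0][1]` access above), not the actual code:

```javascript
// Guard against benchmarks that carry no params at all (params is null/undefined).
function firstParamValue(benchmark) {
  const params = benchmark.params;
  if (!params || params.length === 0) return undefined;
  return params[0][1]; // value of the first [name, value] pair
}

console.log(firstParamValue({ params: null }));             // undefined
console.log(firstParamValue({ params: [['size', '32']] })); // "32"
```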
Thanks for such a nice project! I'm using JMH for quite a while and only just now found it :)
I'm trying to use the visualizer to compare values for different flavors of the same code, not several runs in an optimization process. For example, an HTTP server (ktor.io) using different engines such as Netty, Jetty or Coroutines. Another example is multiplatform benchmarks for Kotlin JS, Native & JVM.
It would be nice to have a different comparison rendering which would show differently colored bars with a legend for the same test (with a vague, maybe configurable, definition of "same"). The bar graph as it is now makes no sense for such comparisons.
It would be nice if the visualizer supported multi-file gists and treated them as if you had added separate gists.
For that you'd have to query the API to discover whether a gist has multiple files (https://docs.github.com/en/rest/reference/gists#get-a-gist) and also to get the individual file links.
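Per the documented response shape of that endpoint, `files` is a map keyed by filename and each entry carries a `raw_url`, so collecting the individual file links could look like this sketch:

```javascript
// Given a parsed response from GET https://api.github.com/gists/{id},
// collect the raw download URL of every file in the gist.
function rawUrlsFromGist(gistResponse) {
  return Object.values(gistResponse.files).map(file => file.raw_url);
}

// Illustrative response fragment (URLs are made up):
const example = {
  files: {
    'run1.json': { raw_url: 'https://gist.githubusercontent.com/x/raw/run1.json' },
    'run2.json': { raw_url: 'https://gist.githubusercontent.com/x/raw/run2.json' }
  }
};
console.log(rawUrlsFromGist(example));
```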
Use case: release 0.7.3 downloaded and the docs folder unzipped locally, using a nanohttpd server with two wwwroots, one for the jmh-visualizer docs and the other for some JMH JSON reports with a simplified run1-runX naming convention. The display order sometimes differs (5%-10% of all forced page reloads) from the URL sort order, so benchmark results show as declined or improved depending on the order.
any idea?
thanks
me
I tried adding a heading anchor reference in the URL of a Gist-based benchmark result, like so:
https://jmh.morethan.io/?gist=a1c976a7a3fedd8f0314ed295f5209a0#org.udtopia.recycle.JavaAllocBenchmark
But it doesn't work. (Well, it kinda works, but not consistently.)
It would be great to automatically scroll down to the benchmark class referenced by the anchor in the URL.
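Extracting the class name from the URL fragment is straightforward; a sketch (the per-class element ids in the second step are an assumption about how the app would render):

```javascript
// Pull the benchmark class name out of the URL fragment.
function anchorFromUrl(url) {
  return new URL(url).hash.slice(1); // drop the leading '#'
}

console.log(anchorFromUrl(
  'https://jmh.morethan.io/?gist=a1c976a7a3fedd8f0314ed295f5209a0#org.udtopia.recycle.JavaAllocBenchmark'
)); // "org.udtopia.recycle.JavaAllocBenchmark"

// In the app, once results are rendered (assumed per-class element ids):
// document.getElementById(anchorFromUrl(location.href))?.scrollIntoView();
```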
It would be an insanely cool feature if this tool could also work with gists. Like, when the URL is https://gist.github.com/xyz/abc
then http://jmh.morethan.io/xyz/abc
or http://jmh.morethan.io/gist/xyz/abc
will use the JSON in that gist.
This would be useful for sharing results and automatically creating graphs.