alco / benchfella

Microbenchmarking tool for Elixir

License: MIT License
It just keeps on running
I have a test that works on a large data set, to ensure the program is efficient on both big and small data. This test should run for more than one second, but I'd rather not increase the duration for all the other tests.
Maybe something like a module attribute would work for this:
@duration 5
bench "some long test", do: ...
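Fleshed out, the proposal might look like the sketch below. The per-bench @duration attribute is hypothetical; today the duration can only be set globally (the snapshots record a single "duration" setting for the whole run):

```elixir
defmodule LongBench do
  use Benchfella

  # Hypothetical: would apply only to the bench that follows.
  @duration 5
  bench "large data set" do
    Enum.sort(Enum.to_list(1..100_000))
  end

  # Other benches would keep the default duration.
  bench "small data set" do
    Enum.sort([3, 1, 2])
  end
end
```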
Output from each invocation of mix bench
will be stored in a new file, possibly tagged with a number, a date, and an indicator of whether the whole suite was run.
I don't know if memory stats are actually fully implemented yet, but:
$ mix bench --mem-stats
** (exit) bad cast: {:remote_dispatch, Binary}
$ mix bench --sys-mem-stats
** (exit) bad cast: {:remote_dispatch, Binary}
I think this has to do with the fact that one of the benchmarks in the comparison has a test that the other does not. The mix bench.cmp command handled this just fine, but the mix bench.graph command results in an HTML page that doesn't display a graph.
The full error from the JS console:
Uncaught TypeError: Cannot read property 'elapsed' of null
at file:///home/awynter/projects/personal/breaker/bench/graphs/index.html:168:42
at Function.m.map.m.collect (http://underscorejs.org/underscore-min.js:5:2566)
at m.(anonymous function) [as map] (http://underscorejs.org/underscore-min.js:5:15545)
at make_comparison_chart (file:///home/awynter/projects/personal/breaker/bench/graphs/index.html:168:10)
at add_comparison_chart (file:///home/awynter/projects/personal/breaker/bench/graphs/index.html:157:5)
at redrawCharts (file:///home/awynter/projects/personal/breaker/bench/graphs/index.html:320:5)
at HTMLDocument.<anonymous> (file:///home/awynter/projects/personal/breaker/bench/graphs/index.html:329:5)
at j (http://code.jquery.com/jquery-2.1.1.min.js:2:26860)
at Object.fireWith [as resolveWith] (http://code.jquery.com/jquery-2.1.1.min.js:2:27673)
at Function.ready (http://code.jquery.com/jquery-2.1.1.min.js:2:29467)
The snapshots are put into a gist: https://gist.github.com/awochna/9e010affe57ad9df6ffbdaf144f6c9fa
An "autobench" would intelligently choose input sizes and measure code execution time (over some period of time) in an attempt to provide enough visual cues about the algorithm's time complexity.
mix.exs needs to be updated to work with Elixir ~> 0.15.0.
IO.ANSI usage needs to be updated.
lib/benchfella.ex:207: warning: erlang:now/0: Deprecated BIF. See the "Time and Time Correction in Erlang" chapter of the ERTS User's Guide for more information.
This is in v0.2.1.
I think this has been fixed in master, could we get a new hex release that fixes this please? :)
This needs direct support in the API so that generating input data doesn't count into the running time of the code under test.
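The bench macro does accept a keyword list of bindings (the `[string: gen_string]` form seen elsewhere in these issues) whose expressions are evaluated outside the timed loop. A sketch of using it for input generation, assuming that behaviour:

```elixir
defmodule InputBench do
  use Benchfella

  # gen_list/0 is evaluated outside the measured iterations, so building
  # the input list should not count towards the reported µs/op.
  defp gen_list, do: Enum.to_list(1..10_000)

  bench "sum", [list: gen_list()] do
    Enum.sum(list)
  end
end
```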
Location: c:/Users/Charl/.mix/archives/benchfella-0.2.1.ez/benchfella-0.2.1/ebin
D:\Elixir>mix bench "d:\elixir\bench\basic_bench.exs"
** (Mix) Could not find a Mix.Project, please ensure a mix.exs file is available
D:\Elixir>
The following gist does the same benchmark twice. The first calls -- on two lists directly inside the bench; the second calls a function which calls -- on the same lists.
defmodule ProblemBench do
  use Benchfella

  @list1 1..10_000 |> Enum.to_list
  @list2 1..10_000 |> Enum.to_list |> Enum.reverse

  bench "-- directly" do
    @list1 -- @list2
  end

  bench "-- indirectly" do
    subtract(@list1, @list2)
  end

  def subtract(list1, list2) do
    list1 -- list2
  end
end
Results:
## ProblemBench
-- directly      1000000000        0.01 µs/op
-- indirectly            10   213363.50 µs/op
The "-- indirectly" bench is returning the correct result: -- on two long lists is indeed a very slow operation. I cannot fathom why using -- directly makes any difference. I've tried doing
bench "-- directly" do
  foo = @list1
  bar = @list2
  foo -- bar
end
but that had no effect. This is unfortunately a rather serious issue, as I think it has misled people about the performance characteristics of -- in some cases. I haven't yet had time to see if it extends beyond that particular operator; my guess is it does.
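One plausible explanation (my reading, not confirmed in this thread) is that module attributes are inlined at compile time, so `@list1 -- @list2` is an expression over two literals that the compiler can constant-fold, leaving nothing to measure at runtime. Passing the lists through function arguments forces the work to happen at runtime. A way to probe this outside Benchfella is to time both forms with :timer.tc:

```elixir
defmodule Probe do
  @list1 Enum.to_list(1..10_000)
  @list2 Enum.to_list(1..10_000) |> Enum.reverse()

  # Attributes are expanded where they are used, so this expression is a
  # candidate for compile-time constant folding.
  def direct, do: @list1 -- @list2

  # Function arguments are only known at runtime, so the subtraction
  # cannot be folded away.
  def indirect, do: subtract(@list1, @list2)
  defp subtract(a, b), do: a -- b
end

{t_direct, _} = :timer.tc(&Probe.direct/0)
{t_indirect, _} = :timer.tc(&Probe.indirect/0)
IO.puts("direct: #{t_direct} µs, indirect: #{t_indirect} µs")
```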
I tried to run the example as described in the README, but got an error:
== Compilation error on file lib/benchmark.ex ==
** (RuntimeError) Benchfella is not started
lib/benchfella.ex:359: Benchfella.add_bench/2
lib/benchmark.ex:7: (module)
(stdlib) erl_eval.erl:669: :erl_eval.do_apply/6
Hello and thank you for your work!
It's very useful to me that I can distinguish test runs and normal runs using Mix.env. Is it possible to run benchmarks in a different environment? Or distinguish benchmark runs another way?
What I'd use a lot is a setup macro run before each bench. The simplest use case is setting a common random seed.
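A sketch of what that could look like (the per-bench setup macro is hypothetical; Benchfella does not currently expose one):

```elixir
defmodule SeededBench do
  use Benchfella

  # Hypothetical hook: would run before each iteration batch, so every
  # bench starts from the same random seed.
  setup do
    :rand.seed(:exsplus, {1, 2, 3})
  end

  bench "shuffle" do
    Enum.shuffle(1..100)
  end
end
```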
The graph is empty in both Chromium and Firefox. I've installed the latest version, 0.3.3.
$ mix bench.cmp
bench/snapshots/2016-12-19_03-30-53.snapshot
## LifeGame.WorldBench
benchmark name iterations average time
next_step/1 for Glider figure 50000 34.29 µs/op
$ mix bench.graph
Wrote bench/graphs/index.html
<script id="json-data" type="application/json" charset="utf-8">{"bench/snapshots/2016-12-19_03-30-53.snapshot": {
"options": {"duration":1.0,"mem stats":false,"sys mem stats":false},
"tests": [{"elapsed":1714685,"iter":50000,"module":"LifeGame.WorldBench","tags":[],"test":"next_step/1 for Glider figure"}]
}}</script>
Full page code: https://gist.github.com/artemrizhov/ff882e54aba9cc6e34d65b177d4ca1fa
Similar to mix test file:line, e.g.:
mix bench bench/template_bench.exs:37
Similar to how mix test runs all tests in each of the subprojects in my apps folder, I would like mix bench to run all benchmarks across subprojects when run in the root folder of my umbrella project. It works if run from within an individual project.
It's a little awkward seeing EncoderBench.Poison (strings)
What is the purpose of the dot?
E.g.
group "Strings" do
bench "Poison" do
# ...
end
end
EncoderBench.Strings.Poison: 5000 384.85 µs/op
Hi!
This is really cool piece of code :)
Now the thing with mem stats: even when using the -f machine option, they come out in human-readable format, breaking bench.cmp.
Can they be ignored in machine format, or put into a form that bench.cmp can then parse?
0.1.0 is a little dated at this point, and I'd love to use the new features.
raise "Got different result between iterations"
It would be great if this would output the last result and the differing one.
This topic needs some more research into the tooling available in Erlang/OTP. At the very least we should show the number of allocations and GC cycles performed during a test to give some indication of how two given tests compare memory-wise.
Related resources:
Hi! :)
I recently discovered the parameterless bench.cmp, which compares the last two snapshots.
This is very cool and intuitive, but! I believe the order of these two snapshots is wrong:
% mix bench; mix bench; mix bench.cmp -d percent
Settings:
duration: 1.0 s
## Extt.Bench
[20:23:25] 1/1: auth
Finished in 2.46 seconds
## Extt.Bench
auth 10000 214.48 µs/op
Settings:
duration: 1.0 s
## Extt.Bench
[20:23:28] 1/1: auth
Finished in 1.99 seconds
## Extt.Bench
auth 10000 173.41 µs/op
bench/snapshots/2016-02-04_20-23-30.snapshot vs
bench/snapshots/2016-02-04_20-23-27.snapshot
## Extt.Bench
auth +23.69%
You see: my 1st result was worse and the 2nd was better, but the report (red/bad in this case) reads as if it were the other way around :)
The comparison should go earlier vs. later ...
cheers!
Wojtek
** (FunctionClauseError) no function clause matching in anonymous fn/1 in Benchfella.Snapshot.parse/1
lib/benchfella/snapshot.ex:22: anonymous fn(["EncoderBench", "string escaping (JSEX", " unsupported)", "", "1", "1001019"]) in Benchfella.Snapshot.parse/1
(elixir) lib/enum.ex:977: Enum."-map/2-lc$^0/1-0-"/2
(elixir) lib/enum.ex:977: Enum."-map/2-lc$^0/1-0-"/2
lib/benchfella/snapshot.ex:22: Benchfella.Snapshot.parse/1
lib/benchfella.ex:114: Benchfella.print_formatted_data/3
(elixir) lib/kernel/cli.ex:70: anonymous fn/3 in Kernel.CLI.exec_fun/2
When using {:benchfella, "~> 0.2.0"} I get an error when running a benchmark on Windows, at the point where the snapshot is written.
The error message is:
Finished in 4.81 seconds
** (File.Error) could not write to file bench/snapshots/2015-07-30T10:40:52.snapshot: I/O error
(elixir) lib/file.ex:635: File.write!/3
lib/benchfella.ex:184: Benchfella.print_formatted_data/3
(elixir) lib/kernel/cli.ex:70: anonymous fn/3 in Kernel.CLI.exec_fun/2
Erlang 19. Elixir 1.3.2
** (CompileError) lib/benchfella/snapshot.ex:46: Benchfella.Snapshot.__struct__/1 is undefined, cannot expand struct Benchfella.Snapshot
E.g. mix bench -g string
It should be possible to have a setup phase, running phase, and teardown phase, all measured separately.
This will allow comparing sequential algorithms to parallel ones without adding the overhead of spawning processes into every iteration.
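What such an API might look like (entirely hypothetical; the phase names and the helpers spawn_workers/1, parallel_map/2 and stop/1 are invented for illustration):

```elixir
defmodule PhasedBench do
  use Benchfella

  # Hypothetical phases: only the run block would count towards µs/op,
  # so spawning processes adds no per-iteration overhead.
  bench "parallel map" do
    setup do
      spawn_workers(4)
    end

    run workers do
      parallel_map(workers, fn x -> x * x end)
    end

    teardown workers do
      Enum.each(workers, &stop/1)
    end
  end
end
```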
Benchfella writes intermediate files to bench/snapshots on the file system. However, Benchfella cannot run on Nerves, because the Nerves file system is read-only, that is, the files cannot be written.
Thus, I propose that Benchfella should not generate intermediate files but instead keep the data in a local store such as Mnesia or ETS.
How about it?
I noticed that memory benchmarking is disabled in the library.
if mem_stats or sys_mem_stats do
  log ">> 'mem stats' flag is currently ignored"
end
Are there any problems with the current logic? I am ready to work on them and fix them, but will need some help.
Also, is this calculation correct?
mem_used = mem_after - mem_before
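That difference is only meaningful if the process heap is in a comparable state at both sample points; garbage created and collected during the run is invisible to it. A defensive sketch (my own assumption about how to sample, not the library's code) garbage-collects before the baseline sample and keeps the result referenced so it isn't collected before the second sample:

```elixir
# Measure per-process memory growth around a function call.
measure = fn fun ->
  # Collect first so leftover garbage doesn't inflate the baseline.
  :erlang.garbage_collect()
  {:memory, mem_before} = :erlang.process_info(self(), :memory)
  result = fun.()
  {:memory, mem_after} = :erlang.process_info(self(), :memory)
  # The delta includes the live result plus any uncollected garbage.
  {result, mem_after - mem_before}
end

IO.inspect(measure.(fn -> Enum.to_list(1..1000) end))
```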
More of a n00b question actually, perhaps resulting in an addition to README 😉
I created a bench in my Phoenix app to see how a particular query was doing.
At first, the bench just did import Ecto.Query and went for it.
To make it work, I did:
setup_all do: Application.ensure_all_started(:my_app)
teardown_all _, do: Application.stop(:my_app)
Is this how you would do as well?
Perhaps this would be a question from others, valuable to the README, which for now uses only plain Elixir examples that don't interact with an app.
I'd be glad to contribute to the docs if this is the expected approach.
Thanks!
EDIT: this ended up being a more elegant way:
mix do app.start, bench
lib/mix/tasks/bench_graph.ex:54: warning: variable no_js is unused
lib/benchfella.ex:148: warning: function b2kib/1 is unused
lib/benchfella.ex:112: warning: function print_mem_stats/3 is unused
Hi,
currently bench.graph generates neither ui.css nor ui.js in any way, neither as separate files nor embedded into the HTML page, regardless of whether the --no-js option is set.
Provide a tool for generating HTML/SVG charts from bench snapshots.
It should also be able to visualize difference between multiple snapshots (as a speedup or slowdown) and build a performance chart over a period of time (to track the history of a particular algorithm's regressions or improvements).
@alco, thanks for creating this great project.
This isn't a real issue but just a request for a new release archive. I know this project is still in prerelease but would it be possible to get an updated release archive? One that includes 1bbe6b2 would be helpful.
I started out with benchfella by following the readme and installing benchfella-0.0.2.ez. But I couldn't run any of my code in the benchmark. I did a little digging around until I finally realized that the support to load the application modules was added after the release archive was built.
I'm up and running now after building and installing from ToT, but a new release might help others.
Thanks,
Joseph Kain
bench "String Escaping (JSEX)", [string: gen_string] do
  # JSEX doesn't support escaping unicode, it's unsupported
  :timer.sleep(1000)
end
Maybe color it red in the results?
The package details say anything above Elixir v1.0, but there are a number of stdlib functions used which were introduced in v1.3 (String.trim*).
Either the details should be updated, or the incompatible functions removed.
Right now it's very cumbersome to run benchfiles:
$ elixir -pa _build/bench/consolidated -pa _build/bench/lib/poison/ebin -pa _build/bench/lib/jiffy/ebin -pa _build/bench/lib/jsx/ebin -pa _build/bench/lib/jsex/ebin -pa _build/bench/lib/jazz/ebin -S mix bench bench/encoder_bench.exs