
benchfella's People

Contributors

alco, bbense, duff, ggpasqualino, jdemaris, kianmeng, lau, lexmag, pragtob, rossjones, tsubery

benchfella's Issues

Allow specifying a duration per test

I have a test that works on a large data set, to ensure the program is efficient on both big and small data. This test should run for more than one second, but I'd rather not increase the duration for all the other tests.
Maybe something like a module attribute would work for this:

@duration 5
bench "some long test", do: ...

--mem-stats breaks

I don't know if memory stats are actually fully implemented yet, but:

$ mix bench --mem-stats
** (exit) bad cast: {:remote_dispatch, Binary}
$ mix bench --sys-mem-stats
** (exit) bad cast: {:remote_dispatch, Binary}

No graph with `TypeError: cannot read property 'elapsed' of null`

I think this has to do with the fact that one of the benchmarks in the comparison has a test that the other does not. The mix bench.cmp command handled this just fine, but the mix bench.graph command results in an HTML page that doesn't display a graph.

The full error from the JS console:

Uncaught TypeError: Cannot read property 'elapsed' of null
    at file:///home/awynter/projects/personal/breaker/bench/graphs/index.html:168:42
    at Function.m.map.m.collect (http://underscorejs.org/underscore-min.js:5:2566)
    at m.(anonymous function) [as map] (http://underscorejs.org/underscore-min.js:5:15545)
    at make_comparison_chart (file:///home/awynter/projects/personal/breaker/bench/graphs/index.html:168:10)
    at add_comparison_chart (file:///home/awynter/projects/personal/breaker/bench/graphs/index.html:157:5)
    at redrawCharts (file:///home/awynter/projects/personal/breaker/bench/graphs/index.html:320:5)
    at HTMLDocument.<anonymous> (file:///home/awynter/projects/personal/breaker/bench/graphs/index.html:329:5)
    at j (http://code.jquery.com/jquery-2.1.1.min.js:2:26860)
    at Object.fireWith [as resolveWith] (http://code.jquery.com/jquery-2.1.1.min.js:2:27673)
    at Function.ready (http://code.jquery.com/jquery-2.1.1.min.js:2:29467)

I've put the snapshots in a gist: https://gist.github.com/awochna/9e010affe57ad9df6ffbdaf144f6c9fa

Compile warning in v0.2.1: erlang:now/0 is a deprecated BIF

lib/benchfella.ex:207: warning: erlang:now/0: Deprecated BIF. See the "Time and Time Correction in Erlang" chapter of the ERTS User's Guide for more information.

This is in v0.2.1.

I think this has been fixed in master, could we get a new hex release that fixes this please? :)

What am I doing wrong here?

Location: c:/Users/Charl/.mix/archives/benchfella-0.2.1.ez/benchfella-0.2.1/ebin

D:\Elixir>mix bench "d:\elixir\bench\basic_bench.exs"
** (Mix) Could not find a Mix.Project, please ensure a mix.exs file is available

D:\Elixir>

Major issue: bench block contents not referentially transparent

The following gist runs the same benchmark twice. The first calls `--` on two lists directly inside the bench; the second calls a function that applies `--` to the same lists.

defmodule ProblemBench do
  use Benchfella
  @list1 1..10_000 |> Enum.to_list
  @list2 1..10_000 |> Enum.to_list |> Enum.reverse

  bench "-- directly" do
    @list1 -- @list2
  end

  bench "-- indirectly" do
    subtract(@list1, @list2)
  end

  def subtract(list1, list2) do
    list1 -- list2
  end
end

Results:

## ProblemBench
-- directly    1000000000   0.01 µs/op
-- indirectly          10   213363.50 µs/op

The indirect bench returns the correct result: `--` on two long lists is indeed a very slow operation. I cannot fathom why calling `--` directly makes any difference. I've tried

  bench "-- directly" do
    foo = @list1
    bar = @list2
    foo -- bar
  end

but that had no effect. This is unfortunately a rather serious issue, as I think it has misled people about the performance characteristics of `--` in some cases. I haven't yet had time to see whether it extends beyond that particular operator; my guess is it does.
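A plausible explanation (my assumption, not confirmed in this thread): module attributes are inlined as literals at compile time, and the Erlang compiler can constant-fold a pure operation like `--` on two literal operands, so the "directly" bench body reduces to a precomputed constant. A minimal sketch, in plain Elixir, of keeping the operands out of the compiler's sight:

```elixir
# Sketch only: build both operands at runtime so `--` cannot be folded away.
defmodule SubtractAtRuntime do
  # `n` is unknown at compile time, so the lists are constructed on each
  # call and the `--` below is genuinely executed.
  def subtract(n) do
    list1 = Enum.to_list(1..n)
    list2 = Enum.reverse(list1)
    list1 -- list2
  end
end
```

Since `list2` contains exactly the elements of `list1`, the result is `[]`, but only after the full subtraction has actually run.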

mix bench throws an error

I tried to run the example as described in the README, but got an error:

== Compilation error on file lib/benchmark.ex ==
** (RuntimeError) Benchfella is not started
    lib/benchfella.ex:359: Benchfella.add_bench/2
    lib/benchmark.ex:7: (module)
    (stdlib) erl_eval.erl:669: :erl_eval.do_apply/6

Is there a way to distinguish normal run and bench?

Hello, and thank you for your work!
It's very useful that I can distinguish test runs from normal runs using Mix.env. Is it possible to run benchmarks in a different environment, or to distinguish benchmark runs another way?

setup(..) before each bench

What I'd use a lot is a setup macro run before each bench. The simplest use case is setting a common random seed.
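The seeding step itself can be sketched in plain Elixir; the `setup` hook that would invoke it before each bench is only the proposal here, not existing Benchfella API (the `Seed` module name is made up for illustration):

```elixir
# Plain-Elixir sketch of what a per-bench `setup` hook would do: fix the
# random seed so every bench sees the same sequence from :rand.
defmodule Seed do
  def common_seed do
    # :rand.seed/2 is real OTP; :exsss is a standard algorithm (OTP 22+).
    :rand.seed(:exsss, {1, 2, 3})
  end
end

Seed.common_seed()
a = :rand.uniform(1_000_000)
Seed.common_seed()
^a = :rand.uniform(1_000_000)  # same seed, same value
```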

Graph is empty

The graph is empty in both Chromium and Firefox. I've installed the latest version, 0.3.3.

$ mix bench.cmp
bench/snapshots/2016-12-19_03-30-53.snapshot

## LifeGame.WorldBench
benchmark name                 iterations   average time 
next_step/1 for Glider figure       50000   34.29 µs/op

$ mix bench.graph
Wrote bench/graphs/index.html

[screenshots: empty graph page in Chromium and Firefox]

<script id="json-data" type="application/json" charset="utf-8">{"bench/snapshots/2016-12-19_03-30-53.snapshot": {
  "options": {"duration":1.0,"mem stats":false,"sys mem stats":false},
  "tests": [{"elapsed":1714685,"iter":50000,"module":"LifeGame.WorldBench","tags":[],"test":"next_step/1 for Glider figure"}]
}}</script>

Full page code: https://gist.github.com/artemrizhov/ff882e54aba9cc6e34d65b177d4ca1fa

Allow `mix bench` to run on umbrella projects

Similar to how mix test runs all tests in each of the subprojects in my apps folder, I would like mix bench to run all benchmarks across subprojects when run in the root folder of my umbrella projects.

Allow grouping of benches

E.g.

group "Strings" do
  bench "Poison" do
    # ...
  end
end

which would be reported as:

EncoderBench.Strings.Poison:              5000   384.85 µs/op

Mem stats are emitted in human-readable form even with the machine format

Hi!
This is really cool piece of code :)

Now, the thing with mem stats: even when using the -f machine option, they come out in human-readable format, which breaks bench.cmp.

Can they be ignored in machine format, or put into a form that bench.cmp can then parse?

Make a new release?

0.1.0 is a little dated at this point, and I'd love to use the new features.

Wrong order of last 2 snapshots in bench.cmp

Hello! :)

I recently discovered the parameterless bench.cmp, which compares the last two snapshots.

This is very cool and intuitive, but! I believe the order of these two snapshots is wrong:

% mix bench; mix bench; mix bench.cmp -d percent
Settings:
  duration:      1.0 s

## Extt.Bench
[20:23:25] 1/1: auth

Finished in 2.46 seconds

## Extt.Bench
auth       10000   214.48 µs/op

Settings:
  duration:      1.0 s

## Extt.Bench
[20:23:28] 1/1: auth

Finished in 1.99 seconds

## Extt.Bench
auth       10000   173.41 µs/op

bench/snapshots/2016-02-04_20-23-30.snapshot vs
bench/snapshots/2016-02-04_20-23-27.snapshot

## Extt.Bench
auth    +23.69%

You see: my first result was worse and my second was better, but the report (red/bad in this case) reads as if it were the other way around :)

The comparison should go earlier vs. later ...
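The ordering being asked for can be sketched in a few lines: Benchfella's snapshot names embed a sortable timestamp, so lexicographic order is chronological order (file names taken from the output above):

```elixir
# Sort the two newest snapshots so the comparison reads earlier vs. later.
snapshots = [
  "bench/snapshots/2016-02-04_20-23-30.snapshot",
  "bench/snapshots/2016-02-04_20-23-27.snapshot"
]

[earlier, later] = Enum.sort(snapshots)
IO.puts("#{earlier} vs\n#{later}")
# Prints the 20-23-27 snapshot first, then the 20-23-30 one.
```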

cheers!

Wojtek

Bench names cannot contain ";"

** (FunctionClauseError) no function clause matching in anonymous fn/1 in Benchfella.Snapshot.parse/1
    lib/benchfella/snapshot.ex:22: anonymous fn(["EncoderBench", "string escaping (JSEX", " unsupported)", "", "1", "1001019"]) in Benchfella.Snapshot.parse/1
    (elixir) lib/enum.ex:977: Enum."-map/2-lc$^0/1-0-"/2
    (elixir) lib/enum.ex:977: Enum."-map/2-lc$^0/1-0-"/2
    lib/benchfella/snapshot.ex:22: Benchfella.Snapshot.parse/1
    lib/benchfella.ex:114: Benchfella.print_formatted_data/3
    (elixir) lib/kernel/cli.ex:70: anonymous fn/3 in Kernel.CLI.exec_fun/2

Error when writing snapshot on Windows.

When using {:benchfella, "~> 0.2.0"} I get an error when running a benchmark on Windows, at the point where the snapshot is written.

The error message is:

Finished in 4.81 seconds
** (File.Error) could not write to file bench/snapshots/2015-07-30T10:40:52.snapshot: I/O error
    (elixir) lib/file.ex:635: File.write!/3
    lib/benchfella.ex:184: Benchfella.print_formatted_data/3
    (elixir) lib/kernel/cli.ex:70: anonymous fn/3 in Kernel.CLI.exec_fun/2

Does not compile

Erlang 19. Elixir 1.3.2

** (CompileError) lib/benchfella/snapshot.ex:46: Benchfella.Snapshot.__struct__/1 is undefined, cannot expand struct Benchfella.Snapshot

Implement test phases

It should be possible to have a setup phase, running phase, and teardown phase, all measured separately.

This will allow comparing sequential algorithms to parallel ones without adding the overhead of spawning processes into every iteration.

How about running Benchfella on Nerves?

Benchfella writes an intermediate file under bench/snapshots, but it cannot run on Nerves because the Nerves file system is immutable, that is, the file cannot be generated.

I therefore propose that Benchfella generate no intermediate files, and instead store the data in a local database such as Mnesia or ETS.

What do you think?
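A minimal sketch of the in-memory idea (the module, table, and function names are made up for illustration, not Benchfella API): keep snapshot rows in a named ETS table instead of writing bench/snapshots.

```elixir
# Store bench results in ETS so nothing touches the (read-only) file system.
defmodule SnapshotStore do
  @table :bench_snapshots

  def init do
    :ets.new(@table, [:named_table, :bag, :public])
  end

  # One row per finished bench, mirroring the snapshot fields.
  def record(module, test, iter, elapsed) do
    :ets.insert(@table, {module, test, iter, elapsed})
  end

  def all, do: :ets.tab2list(@table)
end
```

A tool like bench.cmp could then read results back with `SnapshotStore.all/0` instead of parsing snapshot files.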

Why is memory benchmarking disabled?

I noticed that memory benchmarking is disabled in the library.

if mem_stats or sys_mem_stats do
   log ">> 'mem stats' flag is currently ignored"
end

Are there any problems with the current logic?
I am ready to work on them and fix them, but will need some help.

Also, is this calculation correct?

mem_used = mem_after - mem_before

Benchmarking with Ecto

More of a n00b question, actually; perhaps resulting in an addition to the README 😉

I created a bench in my Phoenix app to see how a particular query was performing.

At first, the bench just did import Ecto.Query and went for it.

To make it work, I did:

  setup_all do: Application.ensure_all_started(:my_app)
  teardown_all _, do: Application.stop(:my_app)

Is this how you would do as well?

Perhaps this would be a question from others, valuable for the README, which for now uses only simple Elixir examples that don't interact with an app.

I'd be glad to contribute to the docs if this is the expected approach.

Thanks!


EDIT: this ended up being a more elegant way:

mix do app.start, bench

Unused variable warnings

lib/mix/tasks/bench_graph.ex:54: warning: variable no_js is unused
lib/benchfella.ex:148: warning: function b2kib/1 is unused
lib/benchfella.ex:112: warning: function print_mem_stats/3 is unused

bench.graph doesn't generate js/css

Hi,

currently bench.graph does not generate ui.css or ui.js in any way (neither as separate files nor embedded into the HTML page), regardless of whether the --no-js option is set.

Builtin charting tool

Provide a tool for generating HTML/SVG charts from bench snapshots.

It should also be able to visualize difference between multiple snapshots (as a speedup or slowdown) and build a performance chart over a period of time (to track the history of a particular algorithm's regressions or improvements).

New Release

@alco, thanks for creating this great project.

This isn't a real issue but just a request for a new release archive. I know this project is still in prerelease but would it be possible to get an updated release archive? One that includes 1bbe6b2 would be helpful.

I started out with benchfella by following the README and installing benchfella-0.0.2.ez, but I couldn't run any of my code in the benchmark. I did a little digging until I finally realized that support for loading the application modules was added after the release archive was built.

I'm up and running now after building and installing from ToT, but a new release might help others.

Thanks,
Joseph Kain

Allow skipping a bench

  bench "String Escaping (JSEX)", [string: gen_string] do
    # JSX doesn't support escaping unicode, it's unsupported
    :timer.sleep(1000)
  end

Maybe color it red in the results?

Elixir version compatibility is incorrect

The package details say anything above Elixir v1.0, but a number of the stdlib functions used were introduced in v1.3 (String.trim*).

Either the details should be updated, or the incompatible functions removed.
