Comments (4)
It is. Before running the actual performance experiment, BenchmarkTools runs a "tuning" process in which the provided kernel is executed several times in order to gauge how many kernel evaluations per sample are needed to minimize error due to timer precision/accuracy.
See https://github.com/JuliaCI/BenchmarkTools.jl/blob/master/doc/manual.md#introduction
from benchmarktools.jl.
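A minimal sketch of that tuning step, assuming BenchmarkTools is installed (the benchmarked expression here is just an illustrative placeholder):

```julia
using BenchmarkTools

# Define a benchmark without running it yet.
b = @benchmarkable sum($(rand(1000)))

# tune! executes the kernel repeatedly to estimate how many
# evaluations per sample are needed to overcome timer granularity.
tune!(b)
println(b.params.evals)  # evals/sample chosen by the tuning process

# run(b) then performs the actual experiment with the tuned parameters.
results = run(b)
```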
Would you please keep this discussion open for a while?
I don't think it is reasonable for the benchmark to conclude that only a single sample is sufficient. Clearly, even when the tuning decides that the execution time for one sample is reasonably long, there is still scatter. In the referenced example, the scatter is actually between something like 9.6 and 10.1 over five runs. In my opinion that is significant.
> Would you please keep this discussion open for a while?
People can still comment on a closed issue, so feel free to discuss. But this is intended behavior, so there's no action item, and hence I closed the issue.
> I don't think it is reasonable for the benchmark to conclude that only a single sample is sufficient.
BenchmarkTools doesn't ever make any statistical decisions about how many samples to take. It takes at least one sample, and then as many additional samples as possible within the user-provided time and sample budget. As described in the docs, the default time budget is 5 seconds, so BenchmarkTools took as many samples as it could (which, of course, is only one).
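The time and sample budgets described above can be raised explicitly so that a slow kernel yields more than one sample. A sketch using BenchmarkTools' keyword interface (the values here are arbitrary examples):

```julia
using BenchmarkTools

# The defaults allow at most 10_000 samples within a 5-second budget.
# For a kernel that takes several seconds per sample, raise `seconds`
# so that multiple samples fit inside the budget:
@benchmark sleep(0.1) samples=50 seconds=20

# The budgets can also be changed globally for subsequent benchmarks:
BenchmarkTools.DEFAULT_PARAMETERS.seconds = 20
BenchmarkTools.DEFAULT_PARAMETERS.samples = 50
```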
I see. I missed the part about being able to control the number of samples. Thanks a bunch.