
Comments (2)

shakthimaan commented on July 24, 2024

Some thoughts:

  • It will be good to categorize the benchmarks, so that a user can simply include a family of benchmarks for an experiment or a test. For example, a researcher interested in studying graph algorithms or analyzing performance for time-series data can run only those relevant benchmarks.

  • All the configuration should exist in a config/ folder in the project repository, so that the user knows where to look in order to make a change.

  • For any test, it will be good to allow the user to set up one or more profiles or experiments that describe in detail the relevant parameters used for benchmarking.

  • It will be useful to have everything included in a single configuration file, including environment variables, so that any experiment can be re-run from that one file. This also helps in repeating the test(s) to average the results, without having to worry about whether a particular environment variable was defined or what its value was during a test.

  • For a given set of well-defined test configurations, we will also be able to reproduce the results using just these files. These can be easily revision controlled.

  • A test run against the tip of dune and stock OCaml will be useful for developers pushing changes upstream.

  • Using a single file format for the configuration will maintain consistency, and using s-expressions will greatly help leverage the tools in the OCaml ecosystem.

  • Given that the stack spans application benchmarks, the compiler, the operating system, and the hardware, it will be good to have options or knobs for the user to configure the tests at various levels of the stack. For example, to study the results on different operating system kernels (Linux, FreeBSD, OpenBSD, etc.) or processor architectures (AMD, Intel, PPC, etc.), and their respective variants.
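The points above could be folded into a single s-expression experiment profile. A minimal sketch, assuming a hypothetical schema (none of the field names below are an existing sandmark format):

```scheme
;; Hypothetical single-file experiment profile (illustrative field names).
(experiment
  (name "graph-algorithms-linux-amd64")
  ;; Category tags let a user select a whole family of benchmarks,
  ;; e.g. graph algorithms or time-series workloads.
  (benchmarks (tags (graphs time-series)))
  ;; Compiler variant under test.
  (compiler (variant stock) (version "4.14.0"))
  ;; Environment variables are recorded in the file itself, so repeated
  ;; runs never depend on what happened to be set in the shell.
  (env (OCAMLRUNPARAM "b"))
  ;; Stack-level knobs: OS kernel and processor architecture.
  (platform (os linux) (arch amd64)))
```

Because one file captures the benchmark selection, compiler, environment, and platform, it can be placed under revision control and replayed later to reproduce a result.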

from sandmark.

ctk21 commented on July 24, 2024

While discussing tasksetting, we have come across another type of parameter: machine specific config for an experiment. This might be the same thing as the environment within which to run an executable.

The use case discussion is here: #159 (comment)
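One way to express such a machine-specific parameter would be a wrapper command in the same configuration file, for example pinning the benchmark executable to particular cores with taskset. Again a hypothetical sketch, not an existing schema:

```scheme
;; Hypothetical machine-specific section (illustrative field names).
(machine
  (hostname "bench-node-1")
  ;; Wrap the benchmark executable so it runs pinned to cores 2-5.
  (wrapper "taskset --cpu-list 2-5")
  (isolated-cpus (2 3 4 5)))
```

Keeping this section separate from the experiment description would let the same experiment be replayed on a different machine by swapping only the machine block.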

from sandmark.
