
Comments (9)

nathansobo commented on May 22, 2024

That just laid the framework. I think we could use more benchmarks of core functionality, though.

from xray.

nathansobo commented on May 22, 2024

@anderoonies I honestly haven't developed strong opinions on this. My initial thought would be that we'd want to focus on situations involving more complexity. If we optimize those, presumably we'd do well in simpler cases, and conversely, if we're fast on a line without edits and slow on a line with lots of edits it still seems like we'd be too slow. That said, I think each scenario might be different. I have a lot of experience optimizing, but not much experience designing a long-lived benchmark suite. Would be happy to hear perspectives on the best design considerations.


max-sixty commented on May 22, 2024

If anyone wants to get started on this, this is a config for running bench on nightly without preventing compiles on stable: max-sixty@bc950a8

Run it with:

cd xray_core/
cargo +nightly bench --features "dev"

...or reply here if there are other ways to do it.


cmyr commented on May 22, 2024

Another option for micro benchmarks that is stable/nightly friendly is criterion: https://github.com/japaric/criterion.rs.
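For illustration, a minimal criterion setup might look like the fragment below. The crate version and bench target name are assumptions, not taken from xray's actual manifest:

```toml
# Cargo.toml (fragment)
[dev-dependencies]
criterion = "0.3"

# Opt out of libtest's built-in harness so `cargo bench`
# runs criterion's own main, which works on stable.
[[bench]]
name = "editor"
harness = false
```

With `harness = false`, the benchmark target in `benches/editor.rs` provides its own `main`, so no nightly features are needed.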


ysimonson commented on May 22, 2024

You can put the benchmarks in a benches directory in the crate root. They'll still run when you call cargo bench, but then you don't have to put them behind a feature flag.
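If the nightly-only `#[bench]` attribute is the sticking point, a benchmark in `benches/` can also be a plain binary (with `harness = false` in Cargo.toml) that does its own timing with `std::time::Instant`. A rough sketch, using a stand-in edit function rather than xray's real API:

```rust
use std::time::Instant;

// Stand-in for an editor edit operation (hypothetical; not xray's API).
fn insert_chars(buf: &mut String, at: usize, text: &str) {
    buf.insert_str(at, text);
}

// Time `iters` runs of `f` and report the mean cost per iteration.
fn bench<F: FnMut()>(name: &str, iters: u32, mut f: F) -> f64 {
    let start = Instant::now();
    for _ in 0..iters {
        f();
    }
    let ns = start.elapsed().as_nanos() as f64 / iters as f64;
    println!("{name}: {ns:.0} ns/iter");
    ns
}

fn main() {
    let mut buf = String::new();
    // Sequential edits: simulate typing at the end of the buffer.
    bench("append one char", 10_000, || {
        let at = buf.len();
        insert_chars(&mut buf, at, "x");
    });
}
```

This trades away the statistical machinery criterion provides (warmup, outlier detection), but it keeps the whole thing on the stable toolchain with no feature flags.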


pranaygp commented on May 22, 2024

Can this be closed now that #62 is merged?


anderoonies commented on May 22, 2024

i'm taking a look at this but had a question about how atomic the benchmarks should be.
right now benchmarks are done for individual functions in one scenario. should the same functions be tested in multiple scenarios—e.g. selecting to the end of a line without any edits, selecting to the end of a line that has multiple edits, etc. how should benchmarks for the same functions—but with different setups—be organized?

thanks!


anderoonies commented on May 22, 2024

i'm curious to hear others' experience and input as well, being new to writing benchmarks myself.
the existing benchmarks @rleungx added establish a pattern of testing individual functions of the editor API under single, pretty "intense" scenarios. i'm happy to extend that to benchmark the rest of the core API.
@nathansobo, as someone very familiar with the underlying implementations, are there any behaviors of the editor you feel benchmarking should focus on? in the rgasplit paper, Briot et al. use "randomly generated traces" for performance evaluation, but i'm not sure what the consensus is on randomness in benchmarking.


nathansobo commented on May 22, 2024

I'm not sure random edits are as important as sequential edits that simulate what a human would do. But testing against documents containing lots of edits will likely be important.
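As a sketch of the difference between the two workload styles discussed here, the traces could be generated like this (the function names and the `(offset, char)` trace shape are illustrative, not xray's types):

```rust
/// Sequential "typing" trace: one character at a time, each inserted
/// right after the previous one, like a human typing.
fn typing_trace(text: &str) -> Vec<(usize, char)> {
    text.chars().enumerate().collect()
}

/// Random trace in the style of the rgasplit evaluation: inserts at
/// pseudo-random offsets. A small LCG keeps the trace reproducible
/// across runs without pulling in an external RNG crate.
fn random_trace(n: usize, seed: u64) -> Vec<(usize, char)> {
    let mut state = seed;
    (0..n)
        .map(|i| {
            state = state
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            // Offset bounded by the document length so far (i chars).
            let at = (state >> 33) as usize % (i + 1);
            (at, 'x')
        })
        .collect()
}
```

Replaying a sequential trace stresses the append/insert-at-cursor fast path, while a random trace scatters edits across the document and stresses lookup in a heavily fragmented buffer; the fixed seed makes the random case repeatable run-to-run, which matters for a long-lived benchmark suite.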

