Comments (4)
I think it would be helpful to rework the results in terms of characters per cycle/nanosecond (or the reciprocal of that) - possibly a time-based approach is better, since the machine used for the benchmarks is a Turbo Legend (the famous 8086K, base at 2.8 GHz but turbo at 5.0 GHz). Use cycles if you can get consistent results and use performance counters; otherwise use time.
Measurements that involve tasks of unknown length ("how long is a sentence") are hard to understand.
It's my belief that your workloads are quite large - I would be surprised if Hyperscan can handle the bigger ones nearly as well as you can. Similarly, the 'match rates' (number of matches per sentence/character) are quite high relative to the target applications of Hyperscan. However, we can probably imagine some purpose-built Rabin-Karp-style matching algorithms that may achieve good parallelism, as well as come up with some better ways to get parallelism in Aho-Corasick. But first, it would be good to figure out where you are.
from daachorse.
Thank you for your comments.
Measurements that involve tasks of unknown length ("how long is a sentence") are hard to understand.
Your comment made us notice that we did not provide statistics about the length of sentences. To supplement the paper, we have created a page that provides those statistics:
https://github.com/daac-tools/daachorse/wiki/Supplemental-statistics-for-SPE-paper
We will consider your other helpful comments when we have time.
Thank you for your prompt response. This is very helpful.
From a strictly byte-based perspective, and just picking one case (bytewise, EnWord, 1K strings; Table 7), we can see 527 ms for 1M sentences. Thus, with 60.5 bytes per sentence and 1M sentences, we have 8.7 ns per character (almost doubling as we go up to 1M strings, which is very good scaling!).
If the benchmarking was single-core and the processor was not prevented from using Turbo, I would expect this to be 43.5 cycles per character on average. Without turbo, at the base speed of 2.8 GHz, it would be around 24 cycles per character.
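As a sanity check, the unit conversion above can be sketched in Rust. The figures are the ones quoted from Table 7; the 5.0 GHz turbo clock is the stated assumption, and the helper name is mine:

```rust
// Hypothetical helper: converts a benchmark result into per-character
// metrics. GHz equals cycles per nanosecond, so cycles/char is simply
// ns/char times the clock in GHz.
fn per_char_metrics(total_secs: f64, sentences: f64, bytes_per_sentence: f64, ghz: f64) -> (f64, f64) {
    let total_bytes = sentences * bytes_per_sentence;
    let ns_per_char = total_secs * 1e9 / total_bytes;
    let cycles_per_char = ns_per_char * ghz;
    (ns_per_char, cycles_per_char)
}

fn main() {
    // 527 ms over 1M sentences of 60.5 bytes each, clocked at 5.0 GHz turbo.
    let (ns, cycles) = per_char_metrics(0.527, 1_000_000.0, 60.5, 5.0);
    println!("{:.1} ns/char, {:.1} cycles/char", ns, cycles);
}
```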
I would expect that other algorithms outside the realm of Aho-Corasick would significantly outperform this, particularly for the smaller cases (1K or 10K strings).
Thank you very much for the useful observation!
Very interesting. When handling smaller pattern sets like these in our applications, we would like to try other algorithms (such as Rabin-Karp-based ones).
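For illustration, a Rabin-Karp-style multi-pattern matcher might look like the toy sketch below. This is not daachorse's algorithm, and it makes a simplifying assumption: all patterns share one length, so a single rolling hash covers every window. Hash collisions are resolved by direct byte comparison.

```rust
use std::collections::HashMap;

// Toy Rabin-Karp multi-pattern matcher (assumes equal-length patterns).
// Returns (start offset, pattern index) pairs for every occurrence.
fn rabin_karp(text: &[u8], patterns: &[&[u8]]) -> Vec<(usize, usize)> {
    let m = patterns[0].len();
    debug_assert!(patterns.iter().all(|p| p.len() == m));
    const B: u64 = 257; // hash base (arbitrary choice); arithmetic is mod 2^64
    let hash = |s: &[u8]| {
        s.iter()
            .fold(0u64, |h, &c| h.wrapping_mul(B).wrapping_add(c as u64))
    };
    // Bucket patterns by hash; collisions are verified before reporting.
    let mut table: HashMap<u64, Vec<usize>> = HashMap::new();
    for (i, p) in patterns.iter().enumerate() {
        table.entry(hash(p)).or_default().push(i);
    }
    // B^(m-1): weight of the byte leaving the window on each slide.
    let top = (0..m - 1).fold(1u64, |acc, _| acc.wrapping_mul(B));
    let mut out = Vec::new();
    if text.len() < m {
        return out;
    }
    let mut h = hash(&text[..m]);
    for start in 0..=text.len() - m {
        if let Some(cands) = table.get(&h) {
            for &i in cands {
                if &text[start..start + m] == patterns[i] {
                    out.push((start, i));
                }
            }
        }
        if start + m < text.len() {
            // Slide the window: drop text[start], append text[start + m].
            h = h
                .wrapping_sub((text[start] as u64).wrapping_mul(top))
                .wrapping_mul(B)
                .wrapping_add(text[start + m] as u64);
        }
    }
    out
}
```

Whether this beats Aho-Corasick on small pattern sets would still need measuring; the appeal is that hashing one window is branch-light compared to walking automaton transitions.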