Comments (7)
In fuzzing it is pretty normal that every fuzzer fuzzes the whole queue. This makes sense especially because you would want the fuzzers to be different: different in how they weight a queue entry, different in their mutations, different in the depth of their mutations, only one or a few doing cmplog, etc.
So IMHO the in_use is not helping fuzzing and rather slows it down.
Having a central everything means locks, locks, locks: not only the queue but also the dictionary, statistics, etc.
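The "every fuzzer fuzzes the whole queue, but weights it differently" idea can be sketched in C. This is a minimal illustration, not libafl-legacy code; the struct fields and scoring functions are made up for the example:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: every fuzzer walks the same queue but scores
   entries with its own weighting function, so instances diverge even
   on identical corpora. All names and fields are illustrative. */
typedef struct entry_meta {
  size_t   len;     /* testcase size in bytes */
  uint32_t exec_us; /* execution time in microseconds */
  uint32_t depth;   /* mutation depth that produced this entry */
} entry_meta_t;

/* Fuzzer A favors fast, small entries (high exec/s). */
static uint64_t score_fast_small(const entry_meta_t *e) {
  return 1000000ull / ((uint64_t)e->len + e->exec_us + 1);
}

/* Fuzzer B favors deep entries to push further into the target. */
static uint64_t score_deep(const entry_meta_t *e) {
  return (uint64_t)e->depth * 1000;
}

/* Pick the best entry according to this fuzzer's own strategy. */
static size_t pick_next(const entry_meta_t *q, size_t n,
                        uint64_t (*score)(const entry_meta_t *)) {
  size_t best = 0;
  for (size_t i = 1; i < n; i++)
    if (score(&q[i]) > score(&q[best])) best = i;
  return best;
}
```

With two entries in the queue, the two strategies pick different ones, which is the diversity argument in a nutshell.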
from libafl-legacy.
The dictionary and everything else would not need to be shared, just the queue.
Everything else is independent. Each fuzzer could have a small stats section that can be read by main (in fuzzers[fuzzer_id]).
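A per-fuzzer stats section readable by main could look like the sketch below. This is an assumption of how such a layout might work, not the actual libafl-legacy structures; the field names and `fuzzers[]` array are illustrative. Each fuzzer only writes its own slot, so main can aggregate without locks:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical sketch: each fuzzer owns one stats slot; the main
   process only reads, so no lock is needed. Names are illustrative. */
typedef struct fuzzer_stats {
  _Atomic uint64_t execs_done;  /* total executions so far */
  _Atomic uint64_t queue_finds; /* new queue entries found */
  _Atomic uint64_t crashes;     /* crashing inputs found */
} fuzzer_stats_t;

#define MAX_FUZZERS 64
static fuzzer_stats_t fuzzers[MAX_FUZZERS]; /* e.g. in shared memory */

/* Writer side: only the owning fuzzer updates its own slot. */
static void stats_on_exec(uint32_t fuzzer_id, int found_new) {
  atomic_fetch_add(&fuzzers[fuzzer_id].execs_done, 1);
  if (found_new) atomic_fetch_add(&fuzzers[fuzzer_id].queue_finds, 1);
}

/* Reader side: main sums across fuzzers[fuzzer_id] lock-free. */
static uint64_t stats_total_execs(uint32_t n_fuzzers) {
  uint64_t sum = 0;
  for (uint32_t i = 0; i < n_fuzzers; i++)
    sum += atomic_load(&fuzzers[i].execs_done);
  return sum;
}
```

The design choice here is single-writer per slot: atomics make the per-field reads consistent, and slight staleness in the aggregate is acceptable for statistics.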
from libafl-legacy.
Why does the queue need to be shared?
I mean sure it can - but what are the advantages of having it shared in the first place? I don't see it yet. But I am coming from the other side ... so please explain what you would find better with that.
from libafl-legacy.
Mainly due to KISS.
- If a queue entry changes its content, for example during trim, we have a single source of truth.
- We can have proper stats at any time.
- I wouldn't have guessed the overhead was too high (only maybe in the first few minutes where there might be more new queue entries; I don't really care about this).
from libafl-legacy.
Oh, and the current concept of libafl++ has multiple queues (feedback specific queue, ...) which would complicate matters further.
from libafl-legacy.
Mainly due to KISS.
* If a queue entry changes its content, for example during trim, we have a single source of truth.
IMHO there is no single truth. A queue entry is uninteresting to one fuzzer but interesting to another. One will want to trim everything as much as possible, another would not want any trimming.
That is not really a difference between the approaches, but it reflects what I called "independent" in andrea's issue.
IMHO a fuzzer should work on its own and use whatever it wants from the other fuzzers.
A central queue is a big equalizer where everything turns to mush :)
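The "independent fuzzers that pick what they want from each other" model could be sketched like this. It is a simplified assumption of the design being argued for, not actual libafl-legacy code; every name is illustrative. Each fuzzer owns an append-only queue, peers scan it up to a published count, and each applies its own interestingness filter:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: per-fuzzer append-only queues. A peer reads a
   foreign queue up to its published count and copies what it wants,
   using its own filter instead of a global notion of "interesting". */
typedef struct queue_entry {
  const uint8_t *buf;
  size_t         len;
} queue_entry_t;

#define QUEUE_MAX 256

typedef struct fuzzer_queue {
  queue_entry_t   entries[QUEUE_MAX];
  volatile size_t count; /* published last so peers see complete entries */
} fuzzer_queue_t;

/* Owner appends; bumping count makes the entry visible to peers.
   (In real multi-process code this would be a release-store.) */
static int queue_add(fuzzer_queue_t *q, const uint8_t *buf, size_t len) {
  if (q->count >= QUEUE_MAX) return -1;
  q->entries[q->count].buf = buf;
  q->entries[q->count].len = len;
  q->count++;
  return 0;
}

/* Example per-fuzzer filter: only take entries of at least 4 bytes. */
static int min_len_4(const queue_entry_t *e) { return e->len >= 4; }

/* A peer syncs: copy foreign entries it has not seen yet, applying
   its own filter. *seen tracks progress per foreign queue. */
static size_t queue_sync(fuzzer_queue_t *mine, const fuzzer_queue_t *theirs,
                         size_t *seen,
                         int (*interesting)(const queue_entry_t *)) {
  size_t copied = 0;
  for (; *seen < theirs->count; (*seen)++) {
    const queue_entry_t *e = &theirs->entries[*seen];
    if (interesting(e) && queue_add(mine, e->buf, e->len) == 0) copied++;
  }
  return copied;
}
```

No lock is taken on the hot path: the owner is the only writer, and peers only chase the published count, which is the scaling argument made below.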
* we can have proper stats at any time
It's more to read in one place vs. shorter reads in more places. I agree it is less work to do stats.
* I wouldn't have guessed the overhead was too high (only maybe in the first few minutes where there might be more new queue entries; I don't really care about this)
If you have many fuzzers - and that is the overall goal, otherwise what we have with afl++ is fine, it can handle 32-48 cores for a target - then there will be frequent finds.
A central queue does not scale.
I did a week-long fuzzing campaign, for example. Most fuzzers in the campaign were rarely finding anything after the first 18 hours, but e.g. the laf instances (I had 2) were constantly finding new things until the end. And that was just 20 cores. Now imagine how that would be with 100-200 cores.
If you have 100 cores and one seed there will be very little progress and a slow start, because everyone fuzzes the same thing, finds lots of paths, and is fighting and waiting to get a lock.
But: it has more appeal to me than the messaging approach where a central point acts as a telegram service.
from libafl-legacy.
Closing this for now as the central queue didn't spark too much joy.
from libafl-legacy.