gluster / glusterfs-perf
Ansible roles and tools to create a test environment and run performance tests
License: GNU General Public License v3.0
Since we start the processes fresh for each run, we need to capture the gluster profile information after every run. This helps with debugging later.
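A minimal sketch of how that capture could look, assuming profiling has already been enabled with `gluster volume profile <vol> start` and that `resdir` is the run's result directory (both names are placeholders, not something the roles define today):

```python
import subprocess
from pathlib import Path

def capture_profile(volname, resdir, run_id):
    """Dump `gluster volume profile <vol> info` into the run's result directory."""
    out = subprocess.run(
        ["gluster", "volume", "profile", volname, "info"],
        capture_output=True, text=True, check=True,
    ).stdout
    dest = Path(resdir) / f"profile-{run_id}.txt"
    dest.write_text(out)
    return dest
```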
Once all the tests and the different volume types are added, it would be good to run the benchmark against the last 3-4 major releases and plot the results. Make this available in the Gluster readthedocs or in the release notes.
Right now we take just one run as the result. Since these are performance runs, we should take the average over multiple runs.
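For instance, a hedged sketch of averaging N runs instead of reporting a single one (the metric names are illustrative, not the exact keys the harness emits):

```python
from statistics import mean, stdev

def summarize(runs):
    """Average each metric across runs; keep the stdev so noisy runs stay visible."""
    return {
        metric: {
            "avg": mean(r[metric] for r in runs),
            "stdev": stdev(r[metric] for r in runs) if len(runs) > 1 else 0.0,
        }
        for metric in runs[0]
    }

# summarize([{"iops": 1200, "latency_ms": 4.1}, {"iops": 1180, "latency_ms": 4.3}])
```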
This is just a question:
We may end up with more or less 'inventory' over time, so how are we deciding how many clients and how many servers there are?
Isn't it better to define the tests and clearly identify the minimum inventory each test requires? If that minimum is not satisfied, the test shouldn't run at all.
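One way the 'minimum inventory' idea could be expressed, as a hedged sketch; the `clients`/`servers` group names are assumptions about how the inventory is laid out:

```python
def inventory_satisfies(inventory, min_clients, min_servers):
    """Return True only if the inventory has enough hosts for this test."""
    return (len(inventory.get("clients", [])) >= min_clients
            and len(inventory.get("servers", [])) >= min_servers)

# Skip the test rather than running it on an undersized setup.
inventory = {"clients": ["c1"], "servers": ["s1", "s2", "s3"]}
if not inventory_satisfies(inventory, min_clients=2, min_servers=3):
    print("SKIP: minimum inventory not satisfied")
```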
For each perf run, along with IOPS, latency, etc., it would be good to collect the following if possible:
While trying to run a test with v5.0, I hit this error. It looks like a previous glusterd workdir was left behind (the stored op-version 70000 belongs to a newer release than v5.0 supports).
[2019-02-21 05:01:16.847028] W [rpcsvc.c:1789:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2019-02-21 05:01:16.847044] E [MSGID: 106244] [glusterd.c:1798:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2019-02-21 05:01:18.059357] E [MSGID: 106022] [glusterd-store.c:2276:glusterd_restore_op_version] 0-management: wrong op-version (70000) retrieved [Invalid argument]
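A hedged sketch of the cleanup that avoids this: wipe the stale glusterd working directory before provisioning an older release. The path below is the default location and is an assumption; adjust it if the roles configure a different workdir:

```python
import shutil
from pathlib import Path

GLUSTERD_WORKDIR = Path("/var/lib/glusterd")  # default workdir; an assumption here

def reset_glusterd_workdir():
    """Remove state left behind by a newer release so a fresh install starts clean."""
    if GLUSTERD_WORKDIR.exists():
        shutil.rmtree(GLUSTERD_WORKDIR)
    GLUSTERD_WORKDIR.mkdir(parents=True, exist_ok=True)
```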
The baselines should be committed to the repository itself, whereas the current result should come from the local run, and a graph image should be emailed.
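A hedged sketch of that flow, assuming the committed baseline and the local result are both plain metric dicts and that 'regression' means the current value falls more than some tolerance below the baseline (the 5% threshold is an assumption):

```python
import json
import matplotlib
matplotlib.use("Agg")  # headless: we only want the image file
import matplotlib.pyplot as plt

def compare_and_plot(baseline_file, current, image_out, tolerance=0.05):
    """Plot baseline vs. current and return True if any metric regressed."""
    with open(baseline_file) as f:
        baseline = json.load(f)
    names = sorted(baseline)
    xs = range(len(names))
    plt.bar([x - 0.2 for x in xs], [baseline[n] for n in names], width=0.4, label="baseline")
    plt.bar([x + 0.2 for x in xs], [current[n] for n in names], width=0.4, label="current")
    plt.xticks(list(xs), names, rotation=45, ha="right")
    plt.legend()
    plt.tight_layout()
    plt.savefig(image_out)
    return any(current[n] < baseline[n] * (1 - tolerance) for n in names)
```

The returned flag is also what would let the mail step stay quiet when nothing regressed, as suggested further down.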
We need to capture pbench [1] output as part of the performance runs and store it somewhere under resdir/.
Also see if we can expose resdir/ through a webserver.
Anyway, this issue is for installing pbench and capturing its details.
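Exposing resdir/ could be as simple as pointing a static file server at it; a minimal standard-library sketch (port and path are placeholders):

```python
# Serve the result directory read-only over HTTP, e.g. http://host:8080/
# Equivalent to: python3 -m http.server 8080 --directory /var/lib/glusterfs-perf/resdir
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

handler = partial(SimpleHTTPRequestHandler, directory="/var/lib/glusterfs-perf/resdir")
ThreadingHTTPServer(("0.0.0.0", 8080), handler).serve_forever()
```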
Each run should produce a unique set of information that is kept together. This is what I would record:
We should store this in an organized way so that it's easily searchable.
This is very useful when we have lots of runs and want to use them for deeper analysis or to extract statistics.
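A hedged sketch of one such layout: one timestamped directory per run with a small metadata file, so runs can be searched later (the field names are illustrative):

```python
import json
import time
from pathlib import Path

def new_run_dir(resdir, meta):
    """Create <resdir>/<timestamp>/ and drop a metadata.json describing the run."""
    run_dir = Path(resdir) / time.strftime("%Y%m%d-%H%M%S")
    run_dir.mkdir(parents=True)
    (run_dir / "metadata.json").write_text(json.dumps(meta, indent=2))
    return run_dir

# new_run_dir("results", {"gluster_version": "6.1", "volume_type": "replica 3",
#                         "clients": 4, "servers": 6, "test": "smallfile"})
```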
It would be good to run the benchmark with both a cold cache and a hot cache. This would help us identify regressions in the cache xlators and also give us an idea of how much the cache layers actually help.
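A hedged sketch of how the two passes could be driven: flush the kernel caches and remount the client (so the client-side xlator caches start empty) before the cold run, then repeat the same workload immediately for the hot run. The mount point and the exact invalidation steps are assumptions about the environment:

```python
import subprocess

def drop_caches(mountpoint):
    """Cold-cache preparation: flush kernel caches and remount the client (needs root)."""
    subprocess.run(["sync"], check=True)
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3\n")
    # Remounting clears the client-side caches (md-cache, io-cache, ...) as well.
    subprocess.run(["umount", mountpoint], check=True)
    subprocess.run(["mount", mountpoint], check=True)  # relies on an /etc/fstab entry

def run_workload():
    pass  # placeholder for the actual benchmark invocation

drop_caches("/mnt/glusterfs")
run_workload()  # cold-cache pass
run_workload()  # hot-cache pass
```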
Once the performance tests run, we capture files/sec, IOPS, bandwidth usage, etc. We need to keep a history of all the runs and present it in a visual format the developers can easily comprehend. One suggestion would be:
We have a GitHub repo to which we push all the metrics collected on the nightly runs. The last 30 (or n) runs are compared and plotted to show the performance trend.
The other option suggested is a database where we store the metrics of all the runs.
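A hedged sketch of the first option: read the metrics pushed by the nightly runs (assumed here to be one CSV row per run, with hypothetical column names) and plot the last n runs:

```python
import csv
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

def plot_trend(csv_path, metric, last_n=30, out="trend.png"):
    """Plot the trend of one metric over the last N nightly runs."""
    with open(csv_path) as f:
        rows = list(csv.DictReader(f))[-last_n:]
    plt.plot([r["date"] for r in rows], [float(r[metric]) for r in rows], marker="o")
    plt.title(f"{metric} over the last {len(rows)} runs")
    plt.xticks(rotation=45, ha="right")
    plt.tight_layout()
    plt.savefig(out)

# plot_trend("nightly-metrics.csv", "files_per_sec")
```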
Currently we only run against the FUSE mount of Gluster volumes. We should be running against Gluster block as well.
We should be able to pass in a volume profile or volume options to run the tests against (the sketch after the next item covers this).
Currently the tests are run only for replica 3. We need to be able to run them for different volume types.
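A hedged sketch that parameterizes both of the previous points: take the volume type and a set of volume options as input and build the corresponding `gluster` commands (brick paths and option names are illustrative):

```python
import subprocess

def create_volume(name, vol_type, bricks, options):
    """Create a volume of the requested type and apply the requested options."""
    subprocess.run(["gluster", "volume", "create", name, *vol_type, *bricks, "force"],
                   check=True)
    for key, value in options.items():
        subprocess.run(["gluster", "volume", "set", name, key, value], check=True)
    subprocess.run(["gluster", "volume", "start", name], check=True)

# Replica 3 (what the tests use today) vs. a dispersed volume:
# create_volume("perfvol", ["replica", "3"], bricks, {"performance.stat-prefetch": "off"})
# create_volume("perfvol", ["disperse", "6", "redundancy", "2"], bricks, {})
```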
We needn't get an email if no regression is seen.