
cbt's Introduction

Ceph - a scalable distributed storage system

See https://ceph.com/ for current information about Ceph.

Contributing Code

Most of Ceph is dual-licensed under the LGPL version 2.1 or 3.0. Some miscellaneous code is either public domain or licensed under a BSD-style license.

The Ceph documentation is licensed under Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0).

Some headers included in the ceph/ceph repository are licensed under the GPL. See the file COPYING for a full inventory of licenses by file.

All code contributions must include a valid "Signed-off-by" line. See the file SubmittingPatches.rst for details on this and instructions on how to generate and submit patches.

Assignment of copyright is not required to contribute code. Code is contributed under the terms of the applicable license.

Checking out the source

Clone the ceph/ceph repository from github by running the following command on a system that has git installed:

git clone git@github.com:ceph/ceph

Alternatively, if you are not a github user, you should run the following command on a system that has git installed:

git clone https://github.com/ceph/ceph.git

When the ceph/ceph repository has been cloned to your system, run the following commands to move into the cloned ceph/ceph repository and to check out the git submodules associated with it:

cd ceph
git submodule update --init --recursive --progress

Build Prerequisites

section last updated 27 Jul 2023

Make sure that curl is installed. The Debian and Ubuntu apt command is provided here, but if you use a system with a different package manager, then you must use whatever command is the proper counterpart of this one:

apt install curl

Install Debian or RPM package dependencies by running the following command:

./install-deps.sh

Install the python3-routes package:

apt install python3-routes

Building Ceph

These instructions are meant for developers who are compiling the code for development and testing. To build binaries that are suitable for installation we recommend that you build .deb or .rpm packages, or refer to ceph.spec.in or debian/rules to see which configuration options are specified for production builds.

To build Ceph, follow this procedure:

  1. Make sure that you are in the top-level ceph directory that contains do_cmake.sh and CONTRIBUTING.rst.

  2. Run the do_cmake.sh script:

    ./do_cmake.sh

    do_cmake.sh by default creates a "debug build" of Ceph, which can be up to five times slower than a non-debug build. Pass -DCMAKE_BUILD_TYPE=RelWithDebInfo to do_cmake.sh to create a non-debug build.

  3. Move into the build directory:

    cd build

  4. Use the ninja buildsystem to build the development environment:

    ninja

    [!TIP] Ninja is the build system used by the Ceph project to build test builds. The number of jobs used by ninja is derived from the number of CPU cores of the building host if unspecified. Use the -j option to limit the job number if build jobs are running out of memory. If you attempt to run ninja and receive a message that reads g++: fatal error: Killed signal terminated program cc1plus, then you have run out of memory.

    Using the -j option with an argument appropriate to the hardware on which the ninja command is run is expected to result in a successful build. For example, to limit the job number to 3, run the command ninja -j 3. On average, each ninja job run in parallel needs approximately 2.5 GiB of RAM.
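For example, a small helper along these lines (a sketch, not something shipped with Ceph) could suggest a job count from the memory available on the build host, using the ~2.5 GiB-per-job estimate above:

#!/usr/bin/env python3
# Sketch: suggest a "ninja -j" value from available memory (~2.5 GiB per job),
# capped at the number of CPU cores.
import os

def suggested_jobs(gib_per_job=2.5):
    with open("/proc/meminfo") as f:
        meminfo = dict(line.split(":", 1) for line in f)
    avail_kib = int(meminfo["MemAvailable"].split()[0])
    by_memory = int(avail_kib / (gib_per_job * 1024 * 1024))
    return max(1, min(by_memory, os.cpu_count() or 1))

if __name__ == "__main__":
    print("ninja -j %d" % suggested_jobs())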

This documentation assumes that your build directory is a subdirectory of the ceph.git checkout. If the build directory is located elsewhere, point CEPH_GIT_DIR to the correct path of the checkout. Additional CMake args can be specified by setting ARGS before invoking do_cmake.sh. See cmake options for more details. For example:

ARGS="-DCMAKE_C_COMPILER=gcc-7" ./do_cmake.sh

To build only certain targets, run a command of the following form:

ninja [target name]

To install:

ninja install

CMake Options

The -D flag can be used with cmake to speed up the process of building Ceph and to customize the build.

Building without RADOS Gateway

The RADOS Gateway is built by default. To build Ceph without the RADOS Gateway, run a command of the following form:

cmake -DWITH_RADOSGW=OFF [path to top-level ceph directory]

Building with debugging and arbitrary dependency locations

Run a command of the following form to build Ceph with debugging and alternate locations for some external dependencies:

cmake -DCMAKE_INSTALL_PREFIX=/opt/ceph -DCMAKE_C_FLAGS="-Og -g3 -gdwarf-4" \
..

Ceph has several bundled dependencies such as Boost, RocksDB and Arrow. By default, cmake builds these bundled dependencies from source instead of using libraries that are already installed on the system. You can opt to use these system libraries, as long as they meet Ceph's version requirements. To use system libraries, use cmake options like WITH_SYSTEM_BOOST, as in the following example:

cmake -DWITH_SYSTEM_BOOST=ON [...]

To view an exhaustive list of -D options, invoke cmake -LH:

cmake -LH

Preserving diagnostic colors

If you pipe ninja to less and would like to preserve the diagnostic colors in the output in order to make errors and warnings more legible, run the following command:

cmake -DDIAGNOSTICS_COLOR=always ...

The above command works only with supported compilers.

The diagnostic colors will be visible when the following command is run:

ninja | less -R

Other available values for DIAGNOSTICS_COLOR are auto (default) and never.

Building a source tarball

To build a complete source tarball with everything needed to build from source and/or build a (deb or rpm) package, run

./make-dist

This will create a tarball like ceph-$version.tar.bz2 from git. (Ensure that any changes you want to include in your working directory are committed to git.)

Running a test cluster

From the ceph/ directory, run the following commands to launch a test Ceph cluster:

cd build
ninja vstart        # builds just enough to run vstart
../src/vstart.sh --debug --new -x --localhost --bluestore
./bin/ceph -s

Most Ceph commands are available in the bin/ directory. For example:

./bin/rbd create foo --size 1000
./bin/rados -p foo bench 30 write

To shut down the test cluster, run the following command from the build/ directory:

../src/stop.sh

Use the sysvinit script to start or stop individual daemons:

./bin/init-ceph restart osd.0
./bin/init-ceph stop

Running unit tests

To build and run all tests (in parallel using all processors), use ctest:

cd build
ninja
ctest -j$(nproc)

(Note: Many targets built from src/test are not run using ctest. Targets starting with "unittest" are run in ninja check and thus can be run with ctest. Targets starting with "ceph_test" can not, and should be run by hand.)

When failures occur, look in build/Testing/Temporary for logs.

To build and run all tests and their dependencies without other unnecessary targets in Ceph:

cd build
ninja check -j$(nproc)

To run an individual test manually, run ctest with -R (regex matching):

ctest -R [regex matching test name(s)]

(Note: ctest does not build the test it's running or the dependencies needed to run it)

To run an individual test manually and see all the tests output, run ctest with the -V (verbose) flag:

ctest -V -R [regex matching test name(s)]

To run tests manually and run the jobs in parallel, run ctest with the -j flag:

ctest -j [number of jobs]

There are many other flags you can give ctest for better control over manual test execution. To view these options run:

man ctest

Building the Documentation

Prerequisites

The list of package dependencies for building the documentation can be found in doc_deps.deb.txt:

sudo apt-get install `cat doc_deps.deb.txt`

Building the Documentation

To build the documentation, ensure that you are in the top-level /ceph directory, and execute the build script. For example:

admin/build-doc

Reporting Issues

To report an issue and view existing issues, please visit https://tracker.ceph.com/projects/ceph.

cbt's People

Contributors

aisakaki, amnonhanuhov, athanatos, bengland2, criptik, cyx1231st, ivotron, jdurgin, koder-ua, ksingh7, liu-chunmei, ljx023, marcosmamorim, markhpc, matan-b, mmgaggle, mogeb, neha-ojha, nitzanmordhai, nobuto-m, perezjosibm, rzarzynski, sidhant-agrawal, sseshasa, svelar, tchaikov, vasukulkarni, wadeholler, wwdillingham, xuechendi

cbt's Issues

Can't find ceph config dump in output directories

In librbdfio.py, before every test, dump_config is called to save the ceph config into a file called ceph_settings.out. I don't think this ever completes successfully because that file is never created.

radosbench should not depend on ceph-common being installed on the head node

cbt checks the rados version to determine the syntax of the object size for rados bench. It does so locally on the head node (by running rados -v) which fails if the rados binary (from ceph-common on Red Hat systems) is not installed.
Instead, cbt should ask the first client node for the version, assuming all clients have the same rados version installed.
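A rough sketch of that approach, assuming cbt's common.pdsh and settings helpers behave as they do in the tracebacks quoted elsewhere on this page:

# Sketch: ask the first client node for the rados version instead of running
# rados locally on the head node.
import re

import common
import settings

def rados_version_from_first_client():
    first_client = settings.getnodes('clients').split(',')[0]
    # communicate() is assumed to return (stdout, stderr), as with Popen.
    stdout, _ = common.pdsh(first_client, 'rados -v').communicate()
    match = re.search(r'version\s+(\d+)\.(\d+)', stdout)
    if match is None:
        raise RuntimeError('could not parse rados version from %r' % stdout)
    return int(match.group(1)), int(match.group(2))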

Client concurrency scaling

It would be great to have the ability to specify the clients, eg:

clients: [client1, client2, client3, client4, client5, client6]

Then also specify an array with the number of clients to use for each iteration, eg:

num_clients: [1, 2, 4, 6]

This would be useful for finding the client concurrency contention threshold for a cluster.
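A rough sketch of how a benchmark loop could consume such settings (the option names mirror the proposal above; they are not existing cbt settings):

# Sketch: run the same workload against growing subsets of the client list.
clients = ['client1', 'client2', 'client3', 'client4', 'client5', 'client6']
num_clients = [1, 2, 4, 6]

for count in num_clients:
    active = clients[:count]
    # A real implementation would dispatch the benchmark to just these clients.
    print('iteration with %d client(s): %s' % (count, ', '.join(active)))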

Option to restart Ceph services

When changing Ceph parameters, some of them require Ceph services, such as the OSDs, to be restarted. CBT can only recreate a brand-new cluster or use an existing cluster. It would be nice to have something like use_existing that also restarts all Ceph services and checks health before starting the test(s).
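A minimal sketch of what such an option might do, assuming pdsh access to the OSD nodes and systemd-managed daemons (neither is guaranteed by cbt):

# Sketch: restart the Ceph daemons on each OSD node, then wait for HEALTH_OK
# before starting the tests.
import subprocess
import time

def restart_and_wait(osd_nodes, timeout=600):
    for node in osd_nodes:
        subprocess.check_call(
            ['pdsh', '-R', 'ssh', '-w', node,
             'sudo systemctl restart ceph-osd.target'])
    deadline = time.time() + timeout
    while time.time() < deadline:
        health = subprocess.check_output(['ceph', 'health']).decode()
        if health.startswith('HEALTH_OK'):
            return
        time.sleep(5)
    raise RuntimeError('cluster did not return to HEALTH_OK in time')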

ImportError: No module named lxml.etree

Hi
I was trying to setup CBT on a fresh system and encountered the following problem

[root@ceph-node1 cbt]# python cbt.py
Traceback (most recent call last):
  File "cbt.py", line 9, in <module>
    import benchmarkfactory
  File "/root/cbt/benchmarkfactory.py", line 10, in <module>
    from benchmark.cosbench import Cosbench
  File "/root/cbt/benchmark/cosbench.py", line 8, in <module>
    import lxml.etree as ET
ImportError: No module named lxml.etree
[root@ceph-node1 cbt]#

The fix was simple: yum -y install python-lxml. I am creating this issue just for documentation purposes, and I will submit a PR to update the documentation.

Readahead for use_existing

When using use_existing, readahead is not set on the OSDs because they do not have the labels CBT expects. We should probably determine which devices are mounted in /var/lib/osd/* and then set their readahead values.
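Something along these lines could work, assuming the OSD data directories really are mounted under /var/lib/osd/* as described above (a sketch, not current cbt behaviour):

# Sketch: find the devices backing /var/lib/osd/* mounts on a node and set
# their readahead with blockdev --setra (value is in 512-byte sectors).
import subprocess

def set_osd_readahead(node, readahead_sectors=8192):
    # Print the source device of every mount under /var/lib/osd/ on the node.
    out = subprocess.check_output(
        ['pdsh', '-R', 'ssh', '-w', node,
         "awk '$2 ~ /^\\/var\\/lib\\/osd\\// {print $1}' /proc/mounts"]).decode()
    # pdsh prefixes each line with "host:", so the device is the last field.
    devices = {line.split()[-1] for line in out.splitlines() if line.strip()}
    for dev in devices:
        subprocess.check_call(
            ['pdsh', '-R', 'ssh', '-w', node,
             'sudo blockdev --setra %d %s' % (readahead_sectors, dev)])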

OSError: [Errno 2] No such file or directory while running radosbench test

17:56:52 - DEBUG - cbt - pdsh -R ssh -w root@ceph echo 3 | sudo tee /proc/sys/vm/drop_caches
17:56:53 - ERROR - cbt - During tests
Traceback (most recent call last):
  File "./cbt.py", line 71, in main
    b.run()
  File "/root/cbt/benchmark/radosbench.py", line 68, in run
    self._run('write', '%s/write' % self.run_dir, '%s/write' % self.out_dir)
  File "/root/cbt/benchmark/radosbench.py", line 81, in _run
    rados_version_str = subprocess.check_output(["rados", "-v"])
  File "/usr/lib64/python2.7/subprocess.py", line 568, in check_output
    process = Popen(stdout=PIPE, *popenargs, **kwargs)
  File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.7/subprocess.py", line 1308, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

wrong collectl syntax for rawdskfilt in monitoring.py

wrong syntax in monitoring.py causing collectl to not run:

collectl -s+mYZ -i 1:10 --rawdskfilt "+cciss/c\d+d\d+ |hd[ab] | sd[a-z]+ |dm-\d+ |xvd[a-z] |fio[a-z]+ | vd[a-z]+ |emcpower[a-z]+ |psv\d+ |nvme[0-9]n[0-9]+p[0-9]+ " -F0 -f /tmp/cbt/00000000/LibrbdFio/osd_ra-00001024/op_size-00004094/concurrent_procs-004/iodepth-064/read/pool_monitoring/collectl
Quantifier follows nothing in regex; marked by <-- HERE in m/+ <-- HERE cciss/c\d+d\d+ |hd[ab] | sd[a-z]+ |dm-\d+ |xvd[a-z] |fio[a-z]+ | vd[a-z]+ |emcpower[a-z]+ |psv\d+ |nvme[0-9]n[0-9]+p[0-9]+ / at /usr/share/collectl/formatit.ph line 235.

So instead of:
rawdskfilt = '+cciss/c\d+d\d+ |hd[ab] | sd[a-z]+ |dm-\d+ |xvd[a-z] |fio[a-z]+ | vd[a-z]+ |emcpower[a-z]+ |psv\d+ |nvme[0-9]n[0-9]+p[0-9]+ '

it should be:
rawdskfilt = 'cciss/c\d+d\d+ |hd[ab] | sd[a-z]+ |dm-\d+ |xvd[a-z] |fio[a-z]+ | vd[a-z]+ |emcpower[a-z]+ |psv\d+ |nvme[0-9]n[0-9]+p[0-9]+ '

create example files

Make example test run configs for the most common workloads, to help get people started.

OSError: [Errno 2] No such file or directory 09:45:52 - ERROR - cbt - During tests

I have installed Ceph on my head and client nodes, and the Ceph cluster is up and running. I have no idea which file or directory it needs!

09:41:23 - ERROR - cbt - During tests
Traceback (most recent call last):
  File "./cbt.py", line 64, in main
    b.initialize()
  File "/root/cbt-master/benchmark/librbdfio.py", line 68, in initialize
    super(LibrbdFio, self).initialize()
  File "/root/cbt-master/benchmark/benchmark.py", line 44, in initialize
    self.cluster.cleanup()
  File "/root/cbt-master/cluster/ceph.py", line 210, in cleanup
    common.pdsh(nodes, 'sudo rm -rf %s' % self.tmp_dir).communicate()
  File "/root/cbt-master/common.py", line 70, in pdsh
    return CheckedPopen(args,continue_if_error=continue_if_error)
  File "/root/cbt-master/common.py", line 20, in __init__
    self.popen_obj = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
  File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

Trigger deep scrubs

Ability to trigger deep scrubs to test the impact of deep scrubbing. In Hammer there is supposed to be a ceph.conf tunable that allows specifying a time. If that value can be injected, then we can probably do something along the lines of:

current_time = time
schedule_time = current_time + ramp_time

ceph tell * injectargs --some_scrubbing_schedule_key $schedule_time
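A sketch of that idea in Python; the configuration key is left as a placeholder because the issue does not name a real one:

# Sketch: compute a scrub schedule time and inject it into all OSDs.
# The option name below is a placeholder taken from the issue text.
import subprocess
import time

def schedule_deep_scrub(ramp_time_secs):
    schedule_time = int(time.time() + ramp_time_secs)
    subprocess.check_call(
        ['ceph', 'tell', 'osd.*', 'injectargs',
         '--some_scrubbing_schedule_key=%d' % schedule_time])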

monitoring.py - collectl issue

Wrong in the script:
rawdskfilt = 'cciss/c\d+d\d+ |hd[ab] | sd[a-z]+ |dm-\d+ |xvd[a-z] |fio[a-z]+ | vd[a-z]+ |emcpower[a-z]+ |psv\d+ |nvme[0-9]n[0-9]+p[0-9]+ '

It should be (with a '+' at the beginning):
rawdskfilt = '+cciss/c\d+d\d+ |hd[ab] | sd[a-z]+ |dm-\d+ |xvd[a-z] |fio[a-z]+ | vd[a-z]+ |emcpower[a-z]+ |psv\d+ |nvme[0-9]n[0-9]+p[0-9]+ '

Also, it is better to run collectl with sudo and with log file rotation (after midnight, for example), so:
common.pdsh(nodes, 'sudo collectl -s+mYZ -i 1:10 -r00:00,7 --rawdskfilt "%s" -F0 -f %s' % (rawdskfilt, collectl_dir))

script hangs while performing rpdcp

Hi
While performing rpdcp the script just hangs. However, all files and folders are copied to the target ARCH directory, but the script doesn't continue to the other steps.

Any idea?


16:42:18 - DEBUG - cbt - Nodes : ceph@cbt-node1,cbt-node2,cbt-node3
16:42:18 - DEBUG - cbt - CheckedPopen continue_if_error=True args=pdsh -f 3 -R ssh -w ceph@cbt-node1,cbt-node2,cbt-node3 sudo pkill -SIGINT -f blktrace
16:42:19 - DEBUG - cbt - Nodes : ceph@cbt-node1,ceph@cbt-node2,ceph@cbt-node3,ceph@cbt-node1,cbt-node2,cbt-node3
16:42:19 - DEBUG - cbt - CheckedPopen continue_if_error=False args=pdsh -S -f 6 -R ssh -w ceph@cbt-node1,ceph@cbt-node2,ceph@cbt-node3,ceph@cbt-node1,cbt-node2,cbt-node3 sudo chown -R ceph.ceph /tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-04194304/concurrent_procs-003/iodepth-064/read/*

16:42:19 - DEBUG - cbt - CheckedPopen continue_if_error=False args=rpdcp -f 10 -R ssh -w ceph@cbt-node1,ceph@cbt-node2,ceph@cbt-node3,ceph@cbt-node1,cbt-node2,cbt-node3 -r /tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-04194304/concurrent_procs-003/iodepth-064/read/* /tmp/ARCH1/00000000/LibrbdFio/osd_ra-00004096/op_size-04194304/concurrent_procs-003/iodepth-064/read

Thanks

how to parse cbt results?

Hi,
I just finished a benchmark of our Ceph cluster with the CBT tool and now I'm confused about how to deal with the output data. I got a large directory structure with a lot of "id" directories like:

results/00000000/id3309284191688069537/

which contain fio log files and some files in JSON format.

My question is whether there is any tool or simple way to parse this output data into a human-readable format, so that I would also be able to compare two or more CBT benchmark runs.

Thank you

clocksource option for fio

Adding a clocksource option for fio benchmarks would be useful so that gettimeofday can be selected when there are multiple fio processes running on multiple hosts. Assuming ntp is properly configured, this would make aggregating data much easier than it would be otherwise with independent, relative timestamps.
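For reference, fio itself accepts a clocksource option, so a cbt-side change would mostly amount to appending it to the generated command line, roughly like this (illustrative, not existing cbt code):

# Sketch: append a clocksource option to a generated fio command line.
# fio accepts gettimeofday, clock_gettime and cpu as clock sources; the
# command string below is purely illustrative.
def with_clocksource(fio_cmd, clocksource='gettimeofday'):
    return '%s --clocksource=%s' % (fio_cmd, clocksource)

print(with_clocksource('/usr/bin/fio --output-format=json jobfile.fio'))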

cbt hangs while executing the rpdcp command

cbt hangs while trying to copy collectl data with the rpdcp command, because it gets the nodes via
nodes = settings.getnodes('clients', 'osds', 'mons', 'rgws')
When the same hosts are defined under clients, osds, mons and rgws, it executes the collectl command on each of them four times. rpdcp then cannot finish copying the files because they are still open.
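One possible fix is to deduplicate the node list before launching collectl and rpdcp, for example (a sketch against the behaviour described above):

# Sketch: collapse duplicate hosts so each one is only handled once. Hosts
# listed both with and without a user prefix (e.g. ceph@cbt-node1 vs
# cbt-node1) would still need normalising.
def dedup_nodes(nodes):
    seen = set()
    unique = []
    for node in nodes.split(','):
        if node and node not in seen:
            seen.add(node)
            unique.append(node)
    return ','.join(unique)

print(dedup_nodes('ceph@cbt-node1,cbt-node2,cbt-node2,cbt-node3'))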

OSD expansion testing

The current recovery machinery for CBT marks an OSD or group of OSDs out, then later marks them back in. It would be great if there were a way to mark an OSD or group of OSDs out, destroy them, then add them back in to the cluster.

Resources required to delete a pool affect subsequent test run

Many benchmarks delete and recreate the test pool between runs. However, the Ceph command to delete a pool returns immediately, and the work of deleting the objects in the pool takes place in the background. Unfortunately, experience has shown that the disk and CPU resources used while deleting the objects are great enough to influence the test results for the subsequent run.

One way to avoid the problem is to have the cluster.rmpool() function wait until the disk and CPU utilization on the OSD nodes drops to a reasonable level before returning to the caller. I will be issuing a pull request with this change.
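A sketch of what such a wait loop could look like, polling a simple load signal on the OSD nodes until it settles (the thresholds and the use of /proc/loadavg are illustrative):

# Sketch: after rmpool() returns, poll load on the OSD nodes until it settles.
import subprocess
import time

def wait_for_idle_osds(osd_nodes, max_load=1.0, timeout=1800):
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = subprocess.check_output(
            ['pdsh', '-R', 'ssh', '-w', ','.join(osd_nodes),
             'cat /proc/loadavg']).decode()
        # pdsh prefixes each line with "host:", so field 1 is the 1-minute load.
        loads = [float(line.split()[1]) for line in out.splitlines() if line.strip()]
        if loads and max(loads) < max_load:
            return
        time.sleep(10)
    raise RuntimeError('OSD nodes never settled after pool deletion')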

add simple cephfs support (cephfsfio)

Update CBT to include simple CephFS support:

Reuse the rbdfio benchmark
Add a new cephfsfio.py benchmark and examples
Update ceph.py to create MDSs

Update CBT to support RBD Erasure Coding testing

CBT currently does not support testing RBD erasure coding (Ceph 12.2.2). ceph.py needs to be updated so that when librbdfio.py calls mkpool with a data pool specified, ceph.py uses the data_pool_profile instead of the pool_profile.

Since librbdfio will append "-data" to the pool name if the user has specified a data_pool, the following could be a possible solution.

cbt/cluster/ceph.py:

def mkpool(self, name, profile_name, application, base_name=None):
    if 'data' in name:
        pool_profiles = self.config.get('data_pool_profiles', {'default': {}})
    else:
        pool_profiles = self.config.get('pool_profiles', {'default': {}})
    . . .

Option to save results archive in S3

It would be great to have the ability to save the results archive to an S3 bucket instead of it only being on the local filesystem of the head node.
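A minimal sketch of what that could look like with boto3 (the bucket and key prefix are illustrative, and AWS credentials are assumed to be configured):

# Sketch: tar up a results directory and push it to S3.
import os
import tarfile

import boto3

def archive_to_s3(results_dir, bucket, key_prefix='cbt-results'):
    tarball = results_dir.rstrip('/') + '.tar.gz'
    with tarfile.open(tarball, 'w:gz') as tar:
        tar.add(results_dir, arcname=os.path.basename(results_dir.rstrip('/')))
    key = '%s/%s' % (key_prefix, os.path.basename(tarball))
    boto3.client('s3').upload_file(tarball, bucket, key)
    return key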

More explicit failure indication in cbt run.

When executing the cbt.py test suite, it is very hard to figure out which steps failed/passed.
My experience with this tool is very limited as I just started using it, but I see that the pdsh commands fail without any error output, so it is hard to decipher why.

Also, the use_existing flag in the cluster: configuration in the yaml file should be highlighted when running against an existing cluster. Once I get through a successful execution I will create a pull request for any doc changes that make sense, and file other issues if I see any.

Another issue I see is that the username and group name are assumed to be the same, which is not always the case. It might be useful to add a groups field as well.

Lastly: I think I have now gotten past some of my initial hurdles and am able to execute an fio benchmark, but I am not sure what comes next.

The last step I see is:

21:30:37 - DEBUG - cbt - pdsh -R ssh -w [email protected],[email protected],[email protected] sudo chown -R behzad_dastur.behzad_dastur /tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-01048576/concurrent_procs-001/iodepth-064/randwrite/* 21:30:37 - DEBUG - cbt - rpdcp -f 1 -R ssh -w [email protected],[email protected],[email protected] -r /tmp/cbt/00000000/LibrbdFio/osd_ra-00004096/op_size-01048576/concurrent_procs-001/iodepth-064/randwrite/* /tmp/00000000/LibrbdFio/osd_ra-00004096/op_size-01048576/concurrent_procs-001/iodepth-064/randwrite

I can see logs created at:

[root@cbtvm001-d658 cbt]# ls /tmp/00000000/LibrbdFio/osd_ra-00004096/op_size-01048576/concurrent_procs-001/iodepth-064/read/ collectl.b-stageosd001-r19f29-prod.acme.symcpe.net collectl.v-stagemon-002-prod.abc.acme.net output.0.v-stagemon-001-prod.abc.acme.net collectl.v-stagemon-001-prod.abc.acme.net historic_ops.out.b-stageosd001-r19f29-prod.abc.acme.net
Are there ways to visualize this data now?

population of fio files should use fio_cmd provided in yaml

When using kvmrbdfio.py to run fio on my OpenStack guests, the population of the fio files does not use the path to the fio executable (fio_cmd) provided in the yaml file. I define the fio command path ...

----------- yaml excerpt -----------
benchmarks:
  kvmrbdfio:
    fio_cmd: "/usr/local/bin/fio"
----------- end excerpt ------------

... however, the populate call instead uses no path and relies on the fio executable being on the $PATH defined for sudo users, which excludes /usr/local/bin where my fio resides, so the population fails on the target systems.
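The fix is presumably to build the population command from the configured fio_cmd rather than a bare fio, roughly like this (illustrative, not the actual kvmrbdfio.py code):

# Sketch: use the configured fio binary for the population pass as well.
# The option values and file name below are illustrative.
def populate_cmd(fio_cmd, filename, size_mb):
    return ('sudo %s --rw=write --bs=4M --size=%dM '
            '--name=populate --filename=%s' % (fio_cmd, size_mb, filename))

print(populate_cmd('/usr/local/bin/fio', '/srv/cbt/fio-populate-0', 4096))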

Create Module for All-to-All Network Tests

Use iperf for all-to-all network tests. Base the code on something vaguely like:

Client to OSD tests:

#!/bin/bash
# client-to-OSD: start iperf servers on the OSD nodes (172.27.50.x)
for i in 8 9 10 11 12 13
do
    val=$((62+$i))
    pdsh -R ssh -w osd[$i] iperf -s -B 172.27.50.$val &
done

#!/bin/bash
# client-to-OSD: run iperf from each client node against each OSD server
for i in 0 1 2 3 4 5 6 7
do
    for val in 70 71 72 73 74 75
    do
        pdsh -R ssh -w client[$i] iperf -c 172.27.50.$val -f m -t 60 -P 1 > /tmp/iperf_client${i}to${val}.out &
    done
done

#!/bin/bash
# OSD-to-OSD: start iperf servers on the OSD nodes (172.27.49.x)
for i in 8 9 10 11 12 13
do
    val=$((62+$i))
    pdsh -R ssh -w osd[$i] iperf -s -B 172.27.49.$val &
done

#!/bin/bash
# OSD-to-OSD: run iperf from each OSD node against each server
for i in 8 9 10 11 12 13
do
    for val in 70 71 72 73 74 75
    do
        pdsh -R ssh -w osd[$i] iperf -c 172.27.49.$val -f m -t 60 -P 1 > /tmp/iperf_${i}to${val}.out &
    done
done

librbdfio fails when client's hostname differs from identifier in CBT job file

The librbdfio benchmark creates RBDs using node names specified in the job file. However, the clients try to access the RBDs using their own "hostname -s" name. If the names are not the same then tests will fail, with output like this in the output files:

Starting 1 process
rbd engine: RBD version: 0.1.9
rbd_open failed.
fio_rbd_connect failed.

I will be submitting a fix.

Concurrent take root testing

A cluster composed of nodes with a mix of SSDs and HDDs, where the SSDs are being used as OSDs and not for a cache tier. Currently we can test either the flash take root with CBT or the HDD take root, but not both concurrently. Add the ability to have clients test pools with different take roots concurrently.

concurrent_procs - unable to change the value

Hi
Changing the concurrent_procs option value in the yaml file doesn't have any effect on cbt. It always works with the value '3'. I found this while using the librbdfio benchmark.

THX

radosbench fails if client cannot read admin keyring

The radosbench test uses the 'rados' tool, which requires the client admin key to access the cluster. The rados command will fail unless the user specified in the config file has read access to the client admin keyring.

A workaround exists, and that is to set "cmd_path: 'sudo /usr/bin/rados'" in the test configuration. But this is cumbersome and breaks the ability to run the command under valgrind.
