
flood's Introduction

🌊🌊 flood 🌊🌊

flood is a load testing tool for benchmarking EVM nodes over RPC


For each RPC method, flood measures how load affects metrics such as:

  1. throughput
  2. latency (mean, P50, P90, P95, P99, max)
  3. error rate

flood makes it easy to compare the performance of:

  • different node clients (e.g. geth vs erigon vs reth)
  • different hardware/software configurations (e.g. local vs cloud vs low memory vs RAID-0)
  • different RPC providers (e.g. Alchemy vs Quicknode vs Infura)

flood can generate tables, figures, and reports for easy sharing of results (example report here)

Contents

    1. Installation
        1. Prerequisites
        2. Installing flood
        3. Docker
    2. Usage Guide
        1. Basic Load Tests
        2. Remote Load Tests
        3. Printing Test Results
        4. Report Generation
        5. Differential Tests
        6. From Python
        7. Performing Deep Checks

Installation

Prerequisites

Install vegeta:

  • on mac: brew update && brew install vegeta
  • on linux: go install github.com/tsenart/vegeta/v12@latest

After installation, make sure vegeta is on your $PATH. Running which vegeta should output a path. If it does not, you probably have not set up go to install items to your $PATH. You may need to add something like export PATH=$PATH:~/go/bin/ to your terminal config file (e.g. ~/.profile).

flood also requires python >= 3.7

Installing flood

pip install paradigm-flood

Typing flood help in your terminal should show help output. If it does not, you probably have not set up pip to install items to your $PATH. You may need to add something like export PATH=$PATH:~/.local/bin to your terminal config file (e.g. ~/.profile). Alternatively, you can avoid setting up your $PATH and just type python3 -m flood instead of flood.

Docker

Alternatively, flood can be used as a Docker image.

Usage guide

flood works by bombarding an RPC endpoint with different patterns of RPC calls. Measurements of the RPC endpoint's performance under different controlled loads are then used to paint a detailed view of the node's performance.

Every time flood runs, it saves its parameters and test results to an output directory. You can specify this output directory with the --output parameter, otherwise a temporary directory will be created. Running a test will populate the folder with the following files:

  • figures/: directory containing PNGs summarizing node performance
  • results.json: results of the test, including performance metrics
  • summary.txt: printed summary of the test that was output to the console
  • test.json: metadata and parameters used to create and run the test
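
For example, a minimal sketch of inspecting a finished run from Python, assuming only that results.json is a JSON object (its exact schema is not described here, so the snippet just lists the top-level keys; the directory path is a placeholder):

    import json
    import os

    output_dir = '/tmp/flood_output'  # placeholder: whatever --output directory the run used

    # results.json holds the performance metrics of the run
    with open(os.path.join(output_dir, 'results.json')) as f:
        results = json.load(f)

    # list whatever top-level sections the results file contains
    print(sorted(results.keys()))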

Basic load tests

Here is an example of a basic test with flood. It benchmarks block retrieval from two different nodes at 3 different rates (10, 100, and 1000 requests per second), testing each rate for 30 seconds.

flood eth_getBlockByNumber NODE1_NAME=NODE1_URL NODE2_NAME=NODE2_URL --rates 10 100 1000 --duration 30

To see all of the parameters available for controlling flood tests, use flood --help.

Remote load tests

Instead of broadcasting RPC calls from whatever machine is running the flood CLI command, flood can broadcast the calls from a remote process on a remote machine. In particular, flood can broadcast the calls from the same machine that is running the EVM node in order to eliminate any noise or bottlenecks associated with networking.

This can be accomplished by installing flood on the remote machine and then providing flood with login credentials and routing details using the following syntax:

flood <test> [node_name=][username@]hostname:[test_url] ...

For example, the following command will test a reth node and an erigon node remotely:

flood eth_call [email protected]:localhost:8545 [email protected]:localhost:8545

If there are multiple remote tests, these tests will be run in parallel. After the tests are complete, flood will retrieve the results and summarize them using the same methodology as a local test.

Printing test results

By default flood produces verbose output of each test as it runs. This can be disabled with the --quiet parameter. To re-print the results of an old test, use flood print <TEST_DIR>. To print a summary of multiple tests, use flood print <test_1_dir> <test_2_dir>.

Report generation

After running tests, you can generate an HTML + Jupyter report similar to this one. This is done by running flood report <TEST_DIR>. Multiple tests can be combined into one report with flood report <TEST_DIR_1> <TEST_DIR_2> ....

Differential tests

Instead of testing the raw performance of an RPC node, flood can be used to test the correctness of a node's responses using a differential testing approach. This works by using two nodes and making sure that their responses match under a variety of RPC calls. This is done using the --equality parameter. For example:

flood all reth=91.91.91.91 erigon=92.92.92.92 --equality
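
To illustrate the differential idea itself (this is just a sketch using the requests library, not flood's implementation; the endpoint URLs are placeholders taken from the example above), the same JSON-RPC call is sent to both nodes and the results are compared:

    import requests

    nodes = {
        'reth': 'http://91.91.91.91:8545',    # placeholder endpoints
        'erigon': 'http://92.92.92.92:8545',
    }
    call = {
        'jsonrpc': '2.0',
        'id': 1,
        'method': 'eth_getBlockByNumber',
        'params': ['0x100000', False],
    }

    # send the identical request to each node and collect the result fields
    results = {
        name: requests.post(url, json=call, timeout=10).json().get('result')
        for name, url in nodes.items()
    }
    if results['reth'] != results['erigon']:
        print('responses differ:', results)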

From python

All of flood's functionality can be used from python instead of the CLI. Some functions:

  description                                 python
  Import flood                                import flood
  Run tests                                   flood.run(...)
  Load the parameters of a test               flood.load_single_run_test_payload(output_dir)
  Load the results of a test                  flood.load_single_run_results_payload(output_dir)
  Run a live version of a results notebook    jupyter notebook <TEST_DIR>
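
For example, loading the outputs of a previous run from a Python session or notebook (a minimal sketch; output_dir is a placeholder for whatever --output directory the run used):

    import flood

    output_dir = '/tmp/flood_output'  # placeholder path to a previous run

    # parameters and metadata of the test (test.json)
    test_payload = flood.load_single_run_test_payload(output_dir)

    # performance metrics of the test (results.json)
    results_payload = flood.load_single_run_results_payload(output_dir=output_dir)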

Performing deep checks

Under normal operation flood relies on vegeta to compute performance summaries of each test. This works well, but sometimes it is desirable to implement custom introspection not available in vegeta.

In particular, vegeta counts any status-200 response as a success, even if the contents of the response are an RPC error. Running with the --deep-check argument will check every response to make sure that it returns well-formed JSON with no RPC errors. With --deep-check, flood also computes separate performance statistics for successful vs failed calls.

If you want to save the timing information and raw contents of every single response from the test to the results.json output, use the --save-raw-output argument. This allows for performing your own custom analyses on the raw data.
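
As an illustration of what such a check involves (a conceptual sketch, not flood's actual implementation), a JSON-RPC error can arrive inside a perfectly healthy HTTP 200 response, so the body itself has to be parsed:

    import json

    def response_is_successful(body: str) -> bool:
        # a deep check parses the body rather than trusting the HTTP status code
        try:
            payload = json.loads(body)
        except ValueError:
            return False  # malformed JSON counts as a failure
        # a JSON-RPC error reply carries an 'error' member instead of 'result'
        return isinstance(payload, dict) and 'error' not in payload and 'result' in payload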

Contributing

Contributions are welcome in the form of issues, PRs, and commentary. Check out the contributor guide in CONTRIBUTING.md.

flood's People

Contributors

andremiras, gakonst, jonaspf, lgingerich, mattsse, sslivkoff, zanepeycke


flood's Issues

Send tests to different nodes concurrently

when using the remote feature like

rpc_bench eth_getBlockByNumber node=root@xxx:localhost:8545 node2=root@xxxxx:localhost:8545 -r 10000 12500 15000 17500 20000 22500

sending and receiving tests could be handled concurrently, right? Because they are executed on the remote machines?

Missing dependencies

Version

I'm using the main branch

Platform

Darwin Q7Y7WJYHMX 22.4.0 Darwin Kernel Version 22.4.0: Mon Mar 6 20:59:28 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T6000 arm64

Description

When creating a report I got:

No module named nbconvert

After installing nbconvert using pip3 install nbconvert I got:

jupyter_client.kernelspec.NoSuchKernel: No such kernel named python3

After installing ipykernel using pip3 install ipykernel it worked and I could create a report.

I believe the dependencies ipykernel and nbconvert are missing from pyproject.toml. Unfortunately, my knowledge of Python packaging isn't up to date, so I didn't trust myself to create a useful PR for this problem.

Add flood --version

now that versions are enforced,

Exception: local version of flood 0.3.0a-dcf73c57 does not match remote version of flood 0.3.0a-c8974030

it would be useful to get version info via flood --version

Show the default for each flag in the help command

Is your feature request related to a problem? Please describe.
I'd like to be able to see what the default is for each flag from within the help subcommand output.

Describe the solution you'd like
Something like the Python core argparse module has.

Describe alternatives you've considered
Alternatively we do get the actual value used in the run output, but ideally I'd like to know beforehand.
It could also be documented in the README.md

Additional context
argparse example, defaults documented in the help output:
https://docs.python.org/3/library/argparse.html
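
For reference, this is what the requested behavior looks like with plain argparse (a standalone sketch, not flood's CLI code; the defaults shown are illustrative only):

    import argparse

    parser = argparse.ArgumentParser(
        # this formatter appends "(default: ...)" to each option's help text
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
    )
    parser.add_argument('--duration', type=int, default=30, help='seconds per sample rate')
    parser.add_argument('--rates', type=int, nargs='+', default=[10, 100, 1000], help='requests per second')
    parser.print_help()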

checkout action example, defaults documented in the README.md
https://github.com/actions/checkout

About in-function imports.

I saw that some modules are imported inside functions. Is there a reason for this, or is it a mistake? If it is a bug, I will send a PR to fix all the imports I see.

Thank you, have a good day.

System Requirements

Hi,

Can I know the recommended system requirements (e.g. CPU, Memory) for running this tool for load testing?

Docker run part of the CI

Is your feature request related to a problem? Please describe.
Make sure the docker image runs as expected.
The Docker image could build, but we could have runtime issues.

Describe the solution you'd like
Integration testing, by doing a docker run from the CI.
We could start easy and run only the --help flag.
Then in another iteration we could decide to actually run light tests against a node.
This could be a public node or a private node we tunnel to beforehand.

Describe alternatives you've considered
🤷‍♂️

Additional context
N/A

Formatting on github action is a bit broken

When starting a run using flood on GitHub Actions, I noticed that the formatting is a bit broken, which makes a comparison of 2 nodes harder:

It would be nice to think about adjusting that - this is just started by the flood all %NODES% --equality command.

SSH connection doesn't work

Version
flood@flood-load-2:~$ pip list
Package Version


aiohttp 3.8.5
aiosignal 1.3.1
aiosqlite 0.18.0
asttokens 2.2.1
async-timeout 4.0.3
attrs 21.2.0
Automat 20.2.0
Babel 2.8.0
backcall 0.2.0
bcrypt 3.2.0
beautifulsoup4 4.12.2
bleach 6.0.0
blinker 1.4
certifi 2020.6.20
chardet 4.0.0
charset-normalizer 3.2.0
checkthechain 0.3.9
click 8.0.3
cloud-init 23.2.2
colorama 0.4.4
comm 0.1.4
command-not-found 0.3
configobj 5.0.6
connectorx 0.3.2a6
constantly 15.1.0
contourpy 1.1.0
cryptography 3.4.8
cycler 0.11.0
dbus-python 1.2.18
debugpy 1.6.7.post1
decorator 5.1.1
defusedxml 0.7.1
distro 1.7.0
distro-info 1.1+ubuntu0.1
entrypoints 0.4
eth-abi-lite 3.2.0
executing 1.2.0
fastjsonschema 2.18.0
fonttools 4.42.1
frozenlist 1.4.0
httplib2 0.20.2
hyperlink 21.0.0
idna 3.3
importlib-metadata 4.6.4
incremental 21.3.0
ipykernel 6.25.1
ipython 8.14.0
ipython-genutils 0.2.0
jedi 0.19.0
jeepney 0.7.1
Jinja2 3.0.3
jsonpatch 1.32
jsonpointer 2.0
jsonschema 3.2.0
jupyter_client 8.3.0
jupyter_core 5.3.1
jupyterlab-pygments 0.2.2
keyring 23.5.0
kiwisolver 1.4.4
launchpadlib 1.10.16
lazr.restfulclient 0.14.4
lazr.uri 1.0.6
loguru 0.6.0
lxml 4.9.3
markdown-it-py 3.0.0
MarkupSafe 2.0.1
matplotlib 3.7.2
matplotlib-inline 0.1.6
mdurl 0.1.2
mistune 0.8.4
more-itertools 8.10.0
msgspec 0.14.2
multidict 6.0.4
nbclient 0.8.0
nbconvert 6.5.4
nbformat 5.9.2
nest-asyncio 1.5.7
netifaces 0.11.0
numpy 1.23.5
oauthlib 3.2.0
orjson 3.9.5
packaging 23.1
pandocfilters 1.5.0
paradigm-data-portal 0.2.2
paradigm-flood 0.3.0
parso 0.8.3
pexpect 4.8.0
pickleshare 0.7.5
Pillow 10.0.0
pip 22.0.2
platformdirs 3.10.0
polars 0.17.15
prompt-toolkit 3.0.39
psutil 5.9.5
psycopg 3.1.10
ptyprocess 0.7.0
pure-eval 0.2.2
pyarrow 12.0.1
pyasn1 0.4.8
pyasn1-modules 0.2.1
pycryptodome 3.18.0
Pygments 2.16.1
PyGObject 3.42.1
PyHamcrest 2.0.2
PyJWT 2.3.0
pyOpenSSL 21.0.0
pyparsing 2.4.7
pyrsistent 0.18.1
pyserial 3.5
python-apt 2.4.0+ubuntu2
python-dateutil 2.8.2
python-debian 0.1.43+ubuntu1.1
python-magic 0.4.24
pytz 2022.1
PyYAML 5.4.1
pyzmq 25.1.1
requests 2.25.1
rich 13.5.2
SecretStorage 3.3.1
service-identity 18.1.0
setuptools 59.6.0
six 1.16.0
sos 4.4
soupsieve 2.4.1
ssh-import-id 5.11
stack-data 0.6.2
systemd-python 234
tinycss2 1.2.1
toml 0.10.2
toolcli 0.6.16
toolconf 0.1.2
tooljob 0.1.6
toolplot 0.3.5
toolsql 0.6.2
toolstr 0.9.7
tooltime 0.2.11
tornado 6.3.3
tqdm 4.66.1
traitlets 5.9.0
Twisted 22.1.0
typing_extensions 4.7.1
ubuntu-advantage-tools 8001
ubuntu-drivers-common 0.0.0
ufw 0.36.1
unattended-upgrades 0.1
urllib3 1.26.5
wadllib 1.3.6
wcwidth 0.2.6
webencodings 0.5.1
wheel 0.37.1
xkit 0.0.0
yarl 1.9.2
zipp 1.0.0
zope.interface 5.4.0

Platform
Ubuntu 22.04

Description
I think this change broke the ssh function: #36

$ flood eth_getBlockByNumber [email protected]:my.host [email protected]:my.host --rates 2 2 2 --duration 5 --save-raw-output --output ~/flood
┌─────────────────────────────────┐
│ Load test: eth_getBlockByNumber │
└─────────────────────────────────┘
- sample rates: [2, 2, 2]
- sample duration: 5
- extra args: None
- output directory: /root/flood

Gathering node data...
──────────────────────
[WARNING] ctc config file does not exist; use `ctc setup` on command line to generate a config file

         node  │                                     url  │        metadata
    ───────────┼──────────────────────────────────────────┼──────────────────
               │                                          │            Geth
      server1  │                     [email protected]  │  v1.9.10-stable
               │  https://my.host                         │     linux-amd64
               │                                          │        go1.19.4
    ───────────┼──────────────────────────────────────────┼──────────────────
               │                                          │            Geth
      server2  │                    [email protected]  │  v1.9.10-stable
               │  https://my.host                         │     linux-amd64
               │                                          │        go1.19.4


Running load tests...
─────────────────────
[2023-08-22 17:10:11] Starting
Process Process-3:
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/local/lib/python3.11/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.11/site-packages/flood/tests/load_tests/load_test_runs.py", line 179, in run_load_test
    result = _run_load_test_remotely(
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/flood/tests/load_tests/load_test_runs.py", line 308, in _run_load_test_remotely
    remote_flood_version = remote_installation['flood_version']
                           ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
TypeError: 'int' object is not subscriptable
Process Process-2:
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/local/lib/python3.11/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.11/site-packages/flood/tests/load_tests/load_test_runs.py", line 179, in run_load_test
    result = _run_load_test_remotely(
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/flood/tests/load_tests/load_test_runs.py", line 308, in _run_load_test_remotely
    remote_flood_version = remote_installation['flood_version']
                           ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
TypeError: 'int' object is not subscriptable

Source

[email protected]:my.host is fake data for public, my hosts work and commands like ssh and ping working from my server.

This is the response from my other server; it returns the version as an integer (?), and the python code expects a boolean?

flood@load-flood-1:~$ flood --version
1
flood@load-flood-1:~$ python3 -m flood version --json
1

this command from my main server returns 1 too

$ ssh [email protected] bash -c 'source ~/.profile; echo $(python3 -m flood version --json)'
/home/flood/.profile: line 1: source: filename argument required
source: usage: source filename [arguments]
1

support json file dumps if responses are different

for huge responses with multiple discrepancies it would be beneficial to dump both responses to json files,

suggesting:

a --json-dump flag (or something like that) that writes the results to <method>.<node>.json. I'm ok with overwriting existing files.

Count and display JSON rpc error response codes

currently, flood or rather vegeta treats a JSON RPC error the same as a JSON RPC result because both are "successful" HTTP responses

for load tests, this can skew the final results, because if a node simply returns errors early and doesn't execute anything its throughput is of course higher.

I briefly looked at the vegeta commands and report output; I'm not entirely sure where we can extract this, but perhaps from the attack_output:

attack_output = _vegeta_attack(
    schedule_dir=attack['schedule_dir'],
    duration=duration,
    rate=rate,
    vegeta_kwargs=vegeta_kwargs,
    verbose=verbose,
)

It would be great to have at least the number of rpc errors as part of the test results object, ideally even the error messages themselves.
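
As a rough sketch of the kind of tally being requested (assuming the raw response bodies are available, e.g. via --save-raw-output; how they are stored inside results.json is not shown here, so the function just takes an iterable of body strings):

    import collections
    import json

    def count_rpc_error_codes(raw_bodies):
        # raw_bodies: iterable of JSON-RPC response bodies as strings
        counts = collections.Counter()
        for body in raw_bodies:
            try:
                payload = json.loads(body)
            except ValueError:
                counts['malformed_json'] += 1
                continue
            error = payload.get('error') if isinstance(payload, dict) else None
            if error:
                counts[error.get('code', 'unknown')] += 1
            else:
                counts['ok'] += 1
        return counts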

Keep the Docker image small

Is your feature request related to a problem? Please describe.
The docker image is 1.8G; making it lighter would make it easier to publish, refs #23

Describe the solution you'd like
We could leverage multi-stage builds.

Describe alternatives you've considered
Doing it the old-fashioned way, deleting things as we go for each layer, but it's harder to deal with.

Additional context
Tested on the current main c897403; this is the result of a docker build followed by a docker images command.

`flood all` does not work

Version
0.2.1

Platform
Linux pop-os 6.2.6-76060206-generic #202303130630168375320722.04~77c1465 SMP PREEMPT_DYNAMIC Wed M x86_64 x86_64 x86_64 GNU/Linux

Description
I'm using the CLI and I can't run tests with the all flag. Using individual calls does work, e.g. flood eth_call.

I tried this command:

flood all NODE1_NAME=<REDACTED> NODE2_NAME=<REDACTED> --rates 10 100 1000 --duration 30

I expected this to work, as it is shown in the help command:

examples:
    flood eth_getBlockByNumber localhost:8545
    flood eth_getLogs localhost:8545 localhost:8546 localhost:8547
    flood all client1=0.0.0.0:8545 client2=0.0.0.0:8546 --equality

Parametrize start/end block ranges for all tests

Is your feature request related to a problem? Please describe.
flood is good for testing eth; however, it's not very easy to test other networks, because it hardcodes the block ranges for which it generates requests.

Describe the solution you'd like
Parametrize block ranges for tests (start/end block number)

Publish the Docker image to a public container registry

Is your feature request related to a problem? Please describe.
I want to try flood, but I'm lazy.
I wish I could simply docker run --rm paradigmxyz/flood --help.
But I don't want to git clone and build the Docker image myself.

Describe the solution you'd like
CI would push to a public container registry like the one provided by GitHub or DockerHub.

Describe alternatives you've considered
Of course I built the image locally; it took me 3m60s and the result is 1.6G.
By the way, we could leverage a multi-stage build to make this image much, much smaller.

Additional context
Thanks for providing the Dockerfile already, that was the hardest part.
By the way, publishing to DockerHub can be as easy as pointing the DockerHub repository to this repository; it will automatically clone the code, build, and publish without having to configure any CI. So rather than pushing from GitHub to DockerHub, you can pull the repo from DockerHub; this is pretty straightforward to configure.

Add tests for `eth_feeHistory`

Is your feature request related to a problem? Please describe.

To test the correctness of eth_feeHistory (and benchmark it) I would like to add an equality test for eth_feeHistory.

This involves:

  1. Selecting a pivot block
  2. Selecting a number of blocks to query fee history for (pivot - count must not go outside finalized blocks, incl. genesis)
  3. Optionally (that is, sometimes) generating some percentiles as floats from 0-100 (inclusive)

Blocked on checkthechain/checkthechain#72
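
A rough sketch of generating such a call following the steps above (the parameter encoding follows the standard eth_feeHistory signature of block count, newest block, and reward percentiles; the bounds used here are placeholders):

    import random

    def generate_fee_history_call(latest_block: int) -> dict:
        # 1. select a pivot block within the finalized range
        pivot = random.randint(1, latest_block)
        # 2. select a block count that does not reach past genesis
        block_count = random.randint(1, min(1024, pivot))
        # 3. sometimes attach reward percentiles as floats in [0, 100]
        if random.random() < 0.5:
            percentiles = sorted(round(random.uniform(0, 100), 2) for _ in range(random.randint(1, 5)))
        else:
            percentiles = []
        return {
            'jsonrpc': '2.0',
            'id': 1,
            'method': 'eth_feeHistory',
            'params': [hex(block_count), hex(pivot), percentiles],
        }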

FileNotFoundError results.json on report

Version
0.2.5

Platform

Linux xps-13-9305 6.2.10-arch1-1 #1 SMP PREEMPT_DYNAMIC Fri, 07 Apr 2023 02:10:43 +0000 x86_64 GNU/Linux

Description

I'm running the tests from Docker (built the image locally).

docker run --rm --volume ${PWD}/output:/output paradigmxyz/flood:latest eth_getBlockByNumber $NODE:8545 --output /output/eth_getBlockByNumber --rates 64 128 256 512 1024 --duration 30

This gets me the following output:

tree output/
output/
โ””โ”€โ”€ eth_getBlockByNumber
    โ””โ”€โ”€ test.json

2 directories, 1 file

But then when I want to get the report with:

docker run --rm --volume ${PWD}/output:/output paradigmxyz/flood:latest report /output/eth_getBlockByNumber 

I'm getting the following error:

[NbConvertApp] Converting notebook /home/flood/report.ipynb to html                                                                                                                            
0.00s - Debugger warning: It seems that frozen modules are being used, which may                                                                                                               
0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off                                                                                                                   
0.00s - to python to disable frozen modules.                                                                                                                                                   
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.                                                                                         
0.00s - Debugger warning: It seems that frozen modules are being used, which may                                                                                                               
0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off                                                                                                                   
0.00s - to python to disable frozen modules.                                                                                                                                                   
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.                                                                                         
Traceback (most recent call last):                                                             
  File "<frozen runpy>", line 198, in _run_module_as_main                                      
  File "<frozen runpy>", line 88, in _run_code                                                                                                                                                 
  File "/home/flood/.local/lib/python3.11/site-packages/nbconvert/__main__.py", line 3, in <module>                                                                                            
    main()                                                                                                                                                                                     
  File "/home/flood/.local/lib/python3.11/site-packages/jupyter_core/application.py", line 285, in launch_instance                                                                             
    return super().launch_instance(argv=argv, **kwargs)                                                                                                                                        
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                        
  File "/home/flood/.local/lib/python3.11/site-packages/traitlets/config/application.py", line 1043, in launch_instance                                                                                                                                                                                                                                                                       
    app.start()                                                                                                                                                                                
  File "/home/flood/.local/lib/python3.11/site-packages/nbconvert/nbconvertapp.py", line 423, in start                                                                                         
    self.convert_notebooks()                                                                   
  File "/home/flood/.local/lib/python3.11/site-packages/nbconvert/nbconvertapp.py", line 597, in convert_notebooks                                                                             
    self.convert_single_notebook(notebook_filename)                                            
  File "/home/flood/.local/lib/python3.11/site-packages/nbconvert/nbconvertapp.py", line 560, in convert_single_notebook                                                                       
    output, resources = self.export_single_notebook(                                           
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                           
  File "/home/flood/.local/lib/python3.11/site-packages/nbconvert/nbconvertapp.py", line 488, in export_single_notebook                                                                                                                                                                                                                                                                       
    output, resources = self.exporter.from_filename(                                           
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                           
  File "/home/flood/.local/lib/python3.11/site-packages/nbconvert/exporters/exporter.py", line 189, in from_filename                                                                           
    return self.from_file(f, resources=resources, **kw)                                        
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                        
  File "/home/flood/.local/lib/python3.11/site-packages/nbconvert/exporters/exporter.py", line 206, in from_file                                                                               
    return self.from_notebook_node(                                                            
           ^^^^^^^^^^^^^^^^^^^^^^^^                                                            
  File "/home/flood/.local/lib/python3.11/site-packages/nbconvert/exporters/html.py", line 223, in from_notebook_node                                                                          
    return super().from_notebook_node(nb, resources, **kw)                                     
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                     
  File "/home/flood/.local/lib/python3.11/site-packages/nbconvert/exporters/templateexporter.py", line 397, in from_notebook_node                                                              
    nb_copy, resources = super().from_notebook_node(nb, resources, **kw)                                                                                                                       
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                                                                       
  File "/home/flood/.local/lib/python3.11/site-packages/nbconvert/exporters/exporter.py", line 146, in from_notebook_node                                                                      
    nb_copy, resources = self._preprocess(nb_copy, resources)                                                                                                                                  
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                                                                                  
  File "/home/flood/.local/lib/python3.11/site-packages/nbconvert/exporters/exporter.py", line 335, in _preprocess                                                                             
    nbc, resc = preprocessor(nbc, resc)                                                        
                ^^^^^^^^^^^^^^^^^^^^^^^                                                        
  File "/home/flood/.local/lib/python3.11/site-packages/nbconvert/preprocessors/base.py", line 47, in __call__                                                                                 
    return self.preprocess(nb, resources)                                                      
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                      
  File "/home/flood/.local/lib/python3.11/site-packages/nbconvert/preprocessors/execute.py", line 89, in preprocess                                                                            
    self.preprocess_cell(cell, resources, index)                                               
  File "/home/flood/.local/lib/python3.11/site-packages/nbconvert/preprocessors/execute.py", line 110, in preprocess_cell                                                                      
    cell = self.execute_cell(cell, index, store_history=True)                                                                                                                                  
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                                                                                  
  File "/home/flood/.local/lib/python3.11/site-packages/jupyter_core/utils/__init__.py", line 166, in wrapped                                                                                  
    return loop.run_until_complete(inner)                                                      
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                      
  File "/usr/local/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete                                                                                                     
    return future.result()                                                                     
           ^^^^^^^^^^^^^^^                                                                     
  File "/home/flood/.local/lib/python3.11/site-packages/nbclient/client.py", line 1058, in async_execute_cell                                                                                  
    await self._check_raise_for_error(cell, cell_index, exec_reply)                                                                                                                            
  File "/home/flood/.local/lib/python3.11/site-packages/nbclient/client.py", line 914, in _check_raise_for_error                                                                               
    raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)                                                                                                                       
nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:                                                                                                  
------------------                                                                             
# load data                                                                                    

test_payloads = {                                                                              
    test_name: flood.load_single_run_test_payload(test_path)                                                                                                                                   
    for test_name, test_path in test_paths.items()                                             
}                                                                                              

results_payloads = {                                                                           
    test_name: flood.load_single_run_results_payload(output_dir=test_path)                                                                                                                     
    for test_name, test_path in test_paths.items()                                             
}                                                                                              
------------------                                                                             


---------------------------------------------------------------------------                                                                                                                    
FileNotFoundError                         Traceback (most recent call last)                                                                                                                    
Cell In[3], line 8                                                                             
      1 # load data                                                                            
      3 test_payloads = {                                                                      
      4     test_name: flood.load_single_run_test_payload(test_path)                                                                                                                           
      5     for test_name, test_path in test_paths.items()                                     
      6 }                                                                                      
----> 8 results_payloads = {                                                                   
      9     test_name: flood.load_single_run_results_payload(output_dir=test_path)                                                                                                             
     10     for test_name, test_path in test_paths.items()                                     
     11 }                                                                                      

Cell In[3], line 9, in <dictcomp>(.0)                                                          
      1 # load data                                                                            
      3 test_payloads = {                                                                      
      4     test_name: flood.load_single_run_test_payload(test_path)                                                                                                                           
      5     for test_name, test_path in test_paths.items()                                     
      6 }                                                                                      
      8 results_payloads = {                                                                   
----> 9     test_name: flood.load_single_run_results_payload(output_dir=test_path)                                                                                                             
     10     for test_name, test_path in test_paths.items()                                     
     11 }                                                                                      

File ~/.local/lib/python3.11/site-packages/flood/runners/single_runner/single_runner_io.py:124, in load_single_run_results_payload(output_dir)
    121 import json                                                                            
    123 path = get_single_run_results_path(output_dir=output_dir)                                                                                                                              
--> 124 with open(path) as f:                                                                  
    125     results: flood.SingleRunResultsPayload = json.load(f)                                                                                                                              
    126 return results                                                                         

FileNotFoundError: [Errno 2] No such file or directory: '/output/eth_getBlockByNumber/results.json'

Instead, I expect not to get any error.
Additional context: I've patched the pyproject.toml to fix a couple of dependencies (will make a bug report and PR for that too).

diff --git a/pyproject.toml b/pyproject.toml
index 7254379..926ee74 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -39,7 +39,9 @@ dependencies = [
     'matplotlib >= 3.7.1, <4',
     'paradigm-data-portal >= 0.2.2, <0.3',
     'checkthechain >= 0.3.6, <0.4.0',
-    'nbconvert > 5.6.0, <6',
+    'nbconvert > 6, <7',
+    'ipython_genutils > 0.1, <1',
+    'ipykernel > 6, <7',
 ]
 
 [project.optional-dependencies]

Report files don't build

Version
1

Platform
Linux Ubuntu22.04 x64
Description
Hi Paradigm team,
The problem is that when I run the command:

flood eth_getBlockByNumber https://metis-rpc.gateway.pokt.network/ --rates 5 10 15 --duration 5 --output /home/test

I only got the JSON file; the figures and other reports are not built.
Then I tried to run the Docker version of flood:

docker run -v /tmp:/tmp ghcr.io/paradigmxyz/flood eth_getBlockByNumber test1=https://metis-rpc.gateway.pokt.network/ --rates 1 2 1 --duration 5

But I got the same result. Can you please tell me what the solution is?
The screenshots are also attached; please take a look.

Thank you for any help you can offer


Tests should treat connection errors as failures

the purpose of the equality tests is to compare results from two different endpoints.

It currently treats connectivity errors as equal:

HTTPSConnectionPool(host='', port=8545): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x105492950>: Failed to establish a new connection: [Errno 61] Connection refused'))
HTTPSConnectionPool(host='', port=8545): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1002)')))

which would result in no differences
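
A minimal sketch of the requested behavior (illustrative only, using the requests library; not flood's implementation): a transport-level failure on either endpoint should fail the test rather than being compared as if it were a node response:

    import requests

    def fetch_result(url: str, call: dict):
        try:
            return 'ok', requests.post(url, json=call, timeout=10).json().get('result')
        except requests.RequestException as exc:
            # connection refused, SSL errors, timeouts, etc.
            return 'connection_error', str(exc)

    def responses_match(url_a: str, url_b: str, call: dict) -> bool:
        status_a, result_a = fetch_result(url_a, call)
        status_b, result_b = fetch_result(url_b, call)
        if status_a != 'ok' or status_b != 'ok':
            # do not report "equal" just because both endpoints were unreachable
            raise RuntimeError('equality check failed: could not reach both endpoints')
        return result_a == result_b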

Does this tool support other EVM compatible refresh chains?

I know from the README that flood can be used to benchmark Ethereum mainnet RPC client performance, whether the client is implemented in Rust, Go, or Python, because all of the clients rely on the same chain with public blocks and transactions.

My question is: does it support other Ethereum-compatible chains? I read the codebase briefly and found that many tests rely on hardcoded values or on https://datasets.paradigm.xyz/evm_samples. That means it needs at least some fork changes to do that for other chains. Or are there other ways to achieve this?

Custom test generation

Is your feature request related to a problem? Please describe.
Currently flood creates the tests using sample files it downloads. The tests and data make assumptions about the state of the chain (e.g., number of blocks, existing transactions, contracts, etc).

Describe the solution you'd like
A tool to customise and generate the sample data for the tests.

Describe alternatives you've considered
A detailed description of the parquet files and how the sample data was obtained could help create our own automation.

Additional context
🤷‍♂️

Provide extra info to the report

Is your feature request related to a problem? Please describe.
I'd like the HTML report to provide extra essential info for future records, as we export the report to PDF for later reference.

Describe the solution you'd like
Extra info such as:

  • date of the report
  • version of flood used
  • version of dependencies used
  • flood command used
  • flood effective parameters used

Describe alternatives you've considered
Currently I'm editing the report manually to add a section before "Report Generation" with all this extra info.

Additional context
It would probably be OK to start simple and add the easy ones such as date and version; that would already be a good start to iterate on with the rest.

Make it possible to add headers to the Vegeta request

Is your feature request related to a problem? Please describe.
I'd like to be able to test nodes that sit behind a proxy requiring a set of headers, e.g. node providers that require an API key in the header.

Describe the solution you'd like
Ideally I'd like to be able to pass headers the way Vegeta allows (e.g. the -header flag).
It could be via a command line flag or environment variable.

Describe alternatives you've considered
I hardcoded my header in the code to make it work for now:
https://github.com/paradigmxyz/flood/blob/0.2.5/flood/tests/load_tests/vegeta.py#L53
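
One possible shape for this (purely a sketch; the FLOOD_EXTRA_HEADERS variable name and its comma-separated "Name: value" format are made up for illustration):

    import os

    def extra_headers_from_env() -> dict:
        # e.g. FLOOD_EXTRA_HEADERS='x-api-key: SECRET,Accept-Encoding: gzip'
        raw = os.environ.get('FLOOD_EXTRA_HEADERS', '')
        headers = {}
        for item in raw.split(','):
            if ':' in item:
                name, _, value = item.partition(':')
                headers[name.strip()] = value.strip()
        return headers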
