
goth's Introduction

Golem Test Harness


goth is an integration testing framework intended to aid the development process of yagna itself, as well as apps built on top of it.

Dependencies on other Golem projects

  • golemfactory/gnt2 - Dockerized environment with Ganache and contracts
  • golemfactory/pylproxy - Python proxy for intercepting HTTP calls between actors (a replacement for the previously used mitmproxy)

How it works

Key features:

  • creates a fully local, isolated network of Golem nodes including an Ethereum blockchain (through ganache)
  • provides an interface for controlling the local Golem nodes using either yagna's REST API or CLI
  • includes tools for defining complex integration testing scenarios, e.g. HTTP traffic and log assertions
  • configurable through a YAML file as well as using a number of CLI parameters

Within a single goth invocation (i.e. a test session), the framework executes all tests defined in the given directory tree.

Internally, goth uses pytest, therefore each integration test is defined as a function with the test_ prefix in its name.

Every test run consists of the following steps:

  1. docker-compose is used to start the so-called "static" containers (e.g. local blockchain, HTTP proxy) and create a common Docker network for all containers participating in the given test.
  2. The test runner creates a number of Yagna containers (as defined in goth-config.yml) which are then connected to the docker-compose network.
  3. For each Yagna container started, an interface object called a Probe is created and made available inside the test via the Runner object.
  4. The integration test scenario is executed as defined in the test function itself.
  5. Once the test is finished, all previously started Docker containers (both "static" and "dynamic") are removed and other cleanup is performed before repeating these steps for the next test.
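The steps above map onto ordinary pytest collection: any function with the test_ prefix is picked up as a test. A minimal sketch of what such a test looks like, with goth's actual Runner and Probe classes replaced by hypothetical stand-ins (the real classes manage Docker containers; these only illustrate the shape of steps 2-4):

```python
# Hypothetical stand-ins for goth's Runner and Probe; illustrative only.

class Probe:
    """Per-node interface object created for each Yagna container (step 3)."""
    def __init__(self, name: str):
        self.name = name

class Runner:
    """Starts the network and exposes one Probe per yagna node (steps 2-3)."""
    def __init__(self, node_names):
        self.probes = [Probe(name) for name in node_names]

def test_network_setup():
    # pytest collects this function because of the ``test_`` prefix.
    runner = Runner(["requestor", "provider-1", "provider-2"])
    # Step 4: the scenario itself; here we only check the probes exist.
    assert [p.name for p in runner.probes] == [
        "requestor", "provider-1", "provider-2"
    ]
```

In a real goth test the Runner would also perform the docker-compose startup (step 1) and the teardown (step 5) around the scenario body.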

Requirements

  • Linux (tested on Ubuntu 18.04 and 20.04)
  • Python 3.8+
  • Docker

Python 3.8+

You can check your currently installed Python version by running:

python3 --version

If you don't have Python installed, download the appropriate package and follow the instructions from the releases page.

Docker

To run goth you will need to have Docker installed. To install the Docker engine on your system, follow these instructions.

To verify your installation you can run the hello-world Docker image:

docker run hello-world

Installation

goth is available as a PyPI package:

pip install goth

It is encouraged to use a Python virtual environment.

Usage

Getting a GitHub API token

When starting the local Golem network, goth uses the GitHub API to fetch metadata and download artifacts and images. Though all of these assets are public, using this API still requires basic authentication. Therefore, you need to provide goth with a personal access token.

To generate a new token, go to your account's developer settings.

You will need to grant your new token the public_repo scope, as well as the read:packages scope. The packages scope is required in order to pull Docker images from GitHub.

Once your token is generated you need to do two things:

  1. Log in to GitHub's Docker registry by calling: docker login ghcr.io -u {username}, replacing {username} with your GitHub username and pasting in your access token as the password. You only need to do this once on your machine.
  2. Export an environment variable named GITHUB_TOKEN and use the access token as its value. This environment variable will need to be available in the shell from which you run goth.

Starting a local network

First, create a copy of the default assets:

python -m goth create-assets your/output/dir

Where your/output/dir is the path to a directory under which the default assets should be created. The path can be relative, but it must not point to an existing directory. These assets do not need to be re-created between test runs.
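The constraint on the output path can be expressed as a small check (this helper is illustrative, not part of goth's API):

```python
from pathlib import Path

def check_assets_dir(path_str: str) -> Path:
    """Validate a target directory for create-assets: the path may be
    relative, but it must not already exist."""
    path = Path(path_str)
    if path.exists():
        raise FileExistsError(f"{path} already exists; pick a fresh directory")
    return path
```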

With the default assets created you can run the local test network like so:

python -m goth start your/output/dir/goth-config.yml

If everything went well you should see the following output:

Local goth network ready!

You can now load the requestor configuration variables to your shell:

source /tmp/goth_interactive.env

And then run your requestor agent from that same shell.

Press Ctrl+C at any moment to stop the local network.

This is a special case of goth's usage. Running this command does not execute a test, but rather sets up a local Golem network which can be used for debugging purposes. The parameters required to connect to the requestor yagna node running in this network are output to the file /tmp/goth_interactive.env and can be sourced from your shell.

Creating and running test cases

Take a look at the yagna integration tests README to learn more about writing and launching your own test cases.

Logs from goth tests

All containers launched during an integration test record their logs in a pre-determined location. By default, this location is: $TEMP_DIR/goth-tests, where $TEMP_DIR is the path of the directory used for temporary files.

This path depends on the shell environment and/or the operating system on which the tests are being run (see tempfile.gettempdir for more details).
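For example, the default log location can be computed like this (a sketch; goth's actual path handling may differ):

```python
import tempfile
from pathlib import Path

# $TEMP_DIR/goth-tests, where $TEMP_DIR is resolved by tempfile.gettempdir()
# (it honors TMPDIR/TEMP/TMP and falls back to a platform default such as /tmp).
log_base_dir = Path(tempfile.gettempdir()) / "goth-tests"
print(log_base_dir)
```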

Log directory structure

.
└── goth_20210420_093848+0000
    ├── runner.log                      # debug console logs from the entire test session
    ├── test_e2e_vm                     # directory with logs from a single test
    │   ├── ethereum-mainnet.log
    │   ├── ethereum-holesky.log
    │   ├── ethereum-polygon.log
    │   ├── provider_1.log              # debug logs from a single yagna node
    │   ├── provider_1_ya-provider.log  # debug logs from an agent running in a yagna node
    │   ├── provider_2.log
    │   ├── provider_2_ya-provider.log
    │   ├── proxy-nginx.log
    │   ├── proxy.log                   # HTTP traffic going into the yagna daemons recorded by a "sniffer" proxy
    │   ├── requestor.log
    │   ├── router.log
    │   └── test.log                    # debug console logs from this test case only, duplicated in `runner.log`
    └── test_e2e_wasi
        └── ...

Test configuration

goth-config.yml

goth can be configured using a YAML file. The default goth-config.yml is located in goth/default-assets/goth-config.yml and looks something like this:

docker-compose:

  docker-dir: "docker"                          # Where to look for docker-compose.yml and Dockerfiles

  build-environment:                            # Fields related to building the yagna Docker image
    # binary-path: ...
    # deb-path: ...
    # branch: ...
    # commit-hash: ...
    # release-tag: ...
    # use-prerelease: ...

  compose-log-patterns:                         # Log message patterns used for container ready checks
    ethereum-mainnet: ".*Wallets supplied."
    ethereum-holesky: ".*Wallets supplied."
    ethereum-polygon: ".*Wallets supplied."
    ...

key-dir: "keys"                                 # Where to look for pre-funded Ethereum keys

node-types:                                     # User-defined node types to be used in `nodes`
  - name: "Requestor"
    class: "goth.runner.probe.RequestorProbe"

  - name: "Provider"
    class: "goth.runner.probe.ProviderProbe"
    mount: ...

nodes:                                          # List of yagna nodes to be run in the test
  - name: "requestor"
    type: "Requestor"

  - name: "provider-1"
    type: "Provider"
    use-proxy: True

When you generate test assets using the command python -m goth create-assets your/output/dir, this default config file will be present in the output location of your choice. You can make changes to that generated file and always fall back to the default one by re-generating the assets.
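The compose-log-patterns values shown above are regular expressions matched against container log lines; a container counts as ready once a line matches. A minimal sketch of such a check (the helper below is illustrative, not goth's actual implementation):

```python
import re

def is_ready(log_lines, pattern: str) -> bool:
    """Return True once any log line matches the readiness pattern."""
    compiled = re.compile(pattern)
    return any(compiled.match(line) for line in log_lines)

lines = ["starting ganache...", "INFO Wallets supplied."]
print(is_ready(lines, ".*Wallets supplied."))  # True: the second line matches
```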

Local development setup

Poetry

goth uses poetry to manage its dependencies and provide a runner for common tasks.

If you don't have poetry available on your system then follow its installation instructions before proceeding. Verify your installation by running:

poetry --version

Project dependencies

To install the project's dependencies run:

poetry install

By default, poetry looks for the required Python version on your PATH and creates a virtual environment for the project if there's none active (or already configured by Poetry).

All of the project's dependencies will be installed to that virtual environment.

goth's People

Contributors

azawlocki, dependabot[bot], evik42, filipgolem, jiivan, johny-b, kamirr, kmazurek, lucekdudek, maaktweluit, mdtanrikulu, mfranciszkiewicz, mrdarthshoe, nieznanysprawiciel, pnowosie, pradeepbbl, pwalski, scx1332, shadeofblue, staszek-krotki, stranger80, tworec, wiezzel, wkargul


goth's Issues

REST API unification recommendation

In order to have a good dev UX while developing an LWG provider / requestor agent, as an LWG provider / requestor agent developer, I want the REST API to be consistent.

AC:

  • finished consistency analysis of the REST API with the usage of the points suggested by Piotr Chromiec:
    • IDs unification in REST API -> maybe not a problem -> implementation difference, not a specification one
    • event API consistency in REST API -> possibly addressed in the Events API https://github.com/golemfactory/ya-client/pull/2/files
    • timeout consistency in REST API -> possibly all APIs should use timeout definition from common.yaml -> or should otherwise be aligned
    • date format consistency -> probably more a problem of implementation than specification
  • REST API change suggestions formulated and written down
  • REST API change suggestions consulted with all the teams (payment/market/exe unit)
  • final REST API change recommendation written down and passed to all the teams; this might be done in the form of a workshop

Develop the Provider Probe from submodules

Assemble the probe modules to form Provider node:

  • Daemon lifecycle module
  • Daemon CLI module
  • API proxies (Provider side)
  • Output/log/event queue module
    • including MITM "sniffer" module

TODO

  • Specify assertion module as part of test scenario
  • Notify proxy modules that the test ends and end-of-events assertions have to be checked
  • Improve assertions to actually check useful properties

Define guidelines for interactive test scenario execution

Consider designing test scenarios in a way that allows for "stepped" execution, i.e. a user (developer? tester?) can execute the test scenario step by step to investigate/troubleshoot execution. Assume debugger features are the way to go.

Documentation generation POC

In order for the documentation to stay up to date, as an LWG requestor / core developer that uses the SDK, I would like the LWG documentation to be automatically generated from the source documentation files.

AC:

  • start with the 14-day trial at https://www.gitbook.com/
  • automatic documentation generation working on example source files (proper documentation of the source is outside of the POC)
  • output from #88 used
  • output from #111 used

Reusable test framework implementation

In order to test my requestor agent code, as a requestor agent developer, I want the SDK's support for testing requestor agent code to be implemented.

AC:

  • output from #212 used as scope reference
  • all the deliverables defined in #212 are delivered

[Lvl0] Automated test - Run agents against test network

Create a toolset to run a complete scenario including:

  1. Launch test yagna network in a dockerized environment
  2. Run yagna sample agents (ya-requestor, ya-provider) as command-line processes.
  3. Validate the outputs of yagna sample agents
  • Assert no errors are returned
  • Assert correct sequence of actions/responses

Design the mechanism to refer from yagna-integration tests to yagna binaries

Research and recommend a way to "build" the yagna-integration scenario packages referring to a known set of yagna binaries.

Agree on a binary publishing mechanism with the yagna team, and a "staging area" from which the binaries can be pulled for testing. PREFERRED, but may require some additional work by the yagna team.

Move unit tests to a separate job in the CI pipeline

Unit tests should be run separately from integration tests. Should building the yagna Docker image be required for running unit tests, the image should somehow be shared between the two jobs (see this thread for possible approaches).

  • Create a GitHub Actions environment locally
  • Ensure the shared/cached Docker image works as intended
  • Split the unit tests and integration tests into separate pipelines

Improve CLI tests to separate unit tests from integration tests

Both kinds of tests should be located in separate modules.

Unit tests should not rely on a running docker container but use appropriate mocks.

Also consider separating integration tests that test yagna commands from those that test the testing framework itself (if there are any). This may provide some insights for the future.

Prepare an action plan & estimate for integration test LVL 1

First milestone - have the "demo scenario" automated.

Weeks 1,2:

  1. Design the yagna binary referencing mechanism
  2. Dockerize the test network setup
  3. Level 0 - Automate running the out-of-the-box agents and validate their results (as read from console)
  4. Brainstorm features required from Test Harness when it comes to defining scenarios
    • Be able to operate directly on Daemon APIs (Subscribe, Collect, etc.)
    • Share experiences of Selenium UI
    • Propose the test scenario definition format/layout
  5. Write a draft of Demo Scenario specs in the agreed scenario format
  6. Assess tools/modules required to execute the Demo Scenario using the draft specs

Weeks 3...:

...depends on outcomes of points 5,6 above.

Research test frameworks and utilities

Research test frameworks and utilities, consider their applicability for Yagna Integration for following aspects:

  • Support for clear and readable test scenario specifications
  • Support for scaling, stress & performance testing

Documentation process definition

In order to implement the automatic LWG documentation process, as an SDK team member, I would like the specification of the LWG's automatic documentation process to be defined and written down.

AC:

Reusable test framework documentation

In order to test my requestor agent code, as a requestor agent developer, I want the SDK's support for testing requestor agent code to be described in the SDK documentation.

AC:

  • output from #212 used as scope reference
  • all the deliverables defined in #212 are delivered

Design yagna Daemon CLI proxy module

Write a python module to orchestrate Daemons via CLI options so that it can become a part of Requestor/Provider Probe modules:

  • Identity mgmt
  • App Key mgmt
  • Payment Platform mgmt

It may be worth splitting the proxy features into topic areas aligned with the CLI command groups (such as the ones mentioned above).

Separate commands' standard output from standard error in CLI wrappers

Stdout is mixed with stderr in the output from CLI wrapper commands. This causes errors when parsing e.g. the output of yagna app-key create with RUST_LOG set to debug, since the command then outputs diagnostic messages to stderr and they get mixed with the app key printed to standard output.
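One way to address this (a sketch, not the actual goth fix) is to capture the two streams separately, so stderr diagnostics can never pollute the value parsed from stdout:

```python
import subprocess
import sys

def run_cli(args):
    """Run a command, capturing stdout and stderr as separate strings."""
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return result.stdout, result.stderr

# Demonstration with a process that writes to both streams:
out, err = run_cli([
    sys.executable, "-c",
    "import sys; print('the-app-key'); print('DEBUG noise', file=sys.stderr)",
])
print(out.strip())  # only the value from stdout: 'the-app-key'
```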

Unify documentation for all

To make the documentation UX smooth and reading of it easy, as an LWG requestor / core developer that uses the SDK, I would like the LWG documentation to be consistent.

AC:
