
Harvester Tests


Overview

This repo contains the Harvester end-to-end (e2e) test suite, implemented using the Python pytest framework. It is also the home of all the Harvester manual test cases. The e2e test suite is a subset of the Harvester manual test cases and is intended to be used by CI (i.e. Jenkins) to automatically test Harvester backend functionality.

Prerequisites

The e2e tests are expected to be run against a given Harvester cluster. In addition, the tests are executed via tox, so make sure it is installed, either via Python pip or your vendor's package manager.

Optionally, in order to run the Rancher integration tests, an external Rancher cluster is also required. The Rancher integration tests are disabled unless the Rancher (API) endpoint is specified.

To run the NFS backup & restore tests, an NFS endpoint is required. Likewise, in order to run the AWS S3 backup & restore tests, an S3 bucket name and access credentials must be provided. The backup & restore tests are disabled unless the required NFS endpoint or S3 access parameters are specified.

Virtual Harvester Cluster

For test case development, we recommend using a virtual Harvester cluster as it is self-contained, disposable, and repeatable. Please refer to the instructions in https://github.com/harvester/ipxe-examples/tree/main/vagrant-pxe-harvester on how to set up a virtual Harvester cluster.

Networking

Some tests require VLAN networking. Those tests are disabled unless a VLAN ID and VLAN interface (NIC) are specified. For those tests to succeed, VLAN routing and DHCP must be properly set up prior to running the tests. Setting up VLAN networking is infrastructure-specific and therefore outside the scope of this document. Please work with your IT infrastructure team to create the appropriate VLANs.

When using the virtual Harvester cluster (i.e. vagrant-pxe-harvester), the VLAN ID must be 1, and the VLAN NIC must be harvester-mgmt.

Host Management Scripts

Some tests require manipulating the hosts where the VMs are running in order to test scheduling resiliency and disaster recovery scenarios. Therefore, external scripts are needed to power on, power off, and reboot a given Harvester node. The host management scripts are expected to be provided by users out-of-band for the following reasons:

  1. The scripts are specific to the Harvester environment. For example, for a virtual vagrant environment, the scripts may simply run vagrant halt <node name> and vagrant up <node name>. However, for a bare-metal environment managed by IPMI, the scripts may need to use the IPMI CLI.
  2. For certain environments (e.g. IPMI, Redfish, etc.), credentials are required.

The host management scripts must all be placed into the same directory and must be named power_on.sh, power_off.sh, and reboot.sh. All the scripts must accept exactly two parameters, which are host name and host IP. Please see the scripts/vagrant directory for examples.
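As an illustrative sketch only (run_host_script is a hypothetical helper name, not the suite's actual code), the two-argument contract described above could be exercised from Python like this:

```python
# Hypothetical helper illustrating the script contract described above:
# the scripts live in one directory, are named power_on.sh / power_off.sh /
# reboot.sh, and each takes exactly two arguments: host name and host IP.
import subprocess
from pathlib import Path

def run_host_script(scripts_dir, action, host_name, host_ip):
    """Invoke one of the host management scripts with the required arguments."""
    script = Path(scripts_dir) / f"{action}.sh"
    return subprocess.run(
        [str(script), host_name, host_ip],
        check=True,          # raise if the script exits non-zero
        capture_output=True,
        text=True,
    )
```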

Host management tests must be invoked with the host_management marker, along with the --node-scripts-location parameter which points to the directory that contains the host management shell scripts.

For example, to run the host management tests:

tox -e py38 -- -m "host_management" --node-scripts-location ./scripts

Terraform CLI

The Harvester terraform test requires the terraform CLI. Please refer to the Terraform documentation on how to download and install the CLI.

Image Caching

While running the tests, the image fixtures will attempt to create the test images by providing the download URLs for the various cloud image providers (e.g. https://download.opensuse.org/repositories/Cloud:/Images:/Leap_15.3/images/openSUSE-Leap-15.3.x86_64-NoCloud.qcow2). Sometimes a given cloud image provider URL can be slow or inaccessible, which causes the underlying tests to fail. Therefore, it is recommended to create a local web server to cache the images that the tests depend on. The --image-cache-url parameter can then be used to convey the image cache URL to the tests. In the absence of the --image-cache-url parameter, the tests will attempt to download the images directly from the cloud image providers instead.
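The cache-vs-upstream fallback behavior amounts to the following sketch (resolve_image_url is a hypothetical name for illustration, not the suite's actual fixture code; it assumes the cache serves images by their upstream file names):

```python
# Hypothetical sketch of the --image-cache-url fallback described above.
from urllib.parse import urlsplit

def resolve_image_url(upstream_url, image_cache_url=None):
    """Prefer the local cache (--image-cache-url) when given, else upstream."""
    if not image_cache_url:
        return upstream_url
    # Assumption: the cache serves each image under its upstream file name.
    filename = urlsplit(upstream_url).path.rsplit("/", 1)[-1]
    return image_cache_url.rstrip("/") + "/" + filename
```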

Running Tests

There are two types of tests: API tests and scenario tests. API tests are designed to test simple resource creation using backend APIs, while scenario tests are intended for testing workflows involving multiple resources.

Tests are executed in Python tox environments. Both Python 3.6 and Python 3.8 are supported.

Prior to running the (e2e) tests, you must edit config.yml to provide the correct configuration. Each configuration option is self-documented.


NOTE:

The configuration options in config.yml can be overwritten by command line parameters. For example, to overwrite the endpoint option, we can use the --endpoint parameter while running the tests.
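The precedence rule (a command line parameter, when given, beats the corresponding config.yml option) can be sketched as follows; merged_options is a hypothetical illustration, not the suite's actual code:

```python
# Hypothetical sketch of the override rule described above: CLI parameters,
# when provided, take precedence over the corresponding config.yml options.
def merged_options(config, cli_overrides):
    merged = dict(config)
    # Only non-None CLI values override the config file.
    merged.update({k: v for k, v in cli_overrides.items() if v is not None})
    return merged
```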

The --do-not-cleanup option

All the resources/artifacts created by the tests will be cleaned up upon exit. If you need to preserve them for debugging purposes, you may use the --do-not-cleanup parameter when running the tests. For example:

tox -e py38 -- harvester_e2e_tests --html=test_result.html --do-not-cleanup

Run All Tests

To run the entire test suite, all the configuration options in config.yml are required.

tox -e py38 -- harvester_e2e_tests --html=test_result.html

Run API Tests Only

API tests are designed to test the REST APIs for one resource (e.g. keypairs, virtualmachines, virtualmachineimages, etc.) at a time. For example:

tox -e py36 -- harvester_e2e_tests/apis --html=test_result.html

NOTE:

Since deleting a host is an irreversible process, run the delete_host tests after running all the other tests.

For example, to run the API tests in a Python 3.6 environment, skipping the delete_host tests:

tox -e py36 -- harvester_e2e_tests/apis --html=test_result.html -m "not delete_host"

To run the API tests in a Python 3.8 environment:

tox -e py38 -- harvester_e2e_tests/apis --html=test_result.html -m "not delete_host"

To skip tests with multiple markers, for example delete_host, host_management, and multi_node_scheduling:

tox -e py38 -- harvester_e2e_tests/apis --html=test_result.html -m "not delete_host and not host_management and not multi_node_scheduling"

Example Output:

============================= test session starts ==============================
platform linux -- Python 3.6.12, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: <Folder from where tests are running> 
plugins: metadata-1.11.0, html-3.1.1
collected 22 items                                                             

apis/test_images.py .....                                                [ 22%]
apis/test_keypairs.py .....                                              [ 45%]
apis/test_settings.py ..                                                 [ 54%]
apis/test_users.py ...                                                   [ 68%]
apis/test_vm_templates.py ...                                            [ 81%]
apis/test_volumes.py ....                                                [100%]

----------- generated html file: file:///<home Folder>/tests/test_result.html -----------

Run Scenario Tests Only

Scenario tests are designed to test a specific use case or scenario at a time, which may involve a combination of resources in an orchestrated workflow.

To run the scenario tests in a Python 3.6 environment:

tox -r -e py36 -- harvester_e2e_tests/scenarios --html=test_result.html

To skip the multi_node_scheduling tests, which run in a multi-node cluster where some hosts have more resources than others in order to test VM scheduling behavior:

tox -r -e py36 -- harvester_e2e_tests/scenarios --html=test_result.html -m "not multi_node_scheduling"

To run just the multi_node_scheduling tests:

tox -r -e py36 -- harvester_e2e_tests/scenarios --html=test_result.html -m multi_node_scheduling

To run the scenario tests in a Python 3.8 environment:

tox -r -e py38 -- harvester_e2e_tests/scenarios --html=test_result.html

By default the tests will clean up after themselves. If you want to preserve the test artifacts for debugging purposes, you may specify the --do-not-cleanup flag. For example:

tox -r -e py38 -- harvester_e2e_tests/scenarios --html=test_result.html --do-not-cleanup

Run Terraform Tests Only

To run the tests that use terraform to create each resource (these use the scripts in the terraform_test_artifacts folder), for example:

tox -e py38 -- harvester_e2e_tests --html=test_result.html -m terraform

Run Delete Host Tests Only

Run the delete_host tests at the end, after all the other tests are done. For example:

tox -e py38 -- harvester_e2e_tests/apis --html=test_result.html -m delete_host

Run Backup & Restore Tests Only

To run the backup tests for both the S3 and NFS endpoints, provide the S3 parameters and NFS endpoint either on the command line or in config.yml.

tox -e py38 -- harvester_e2e_tests --html=test_result.html --accessKeyId <accessKey> --secretAccessKey <secretaccesskey> --bucketName <bucket> --region <region> --nfs-endpoint nfs://<IP>/<path> -m backup

Run Rancher Integration Tests Only

An external Rancher instance is required in order to run the Harvester Rancher integration tests. Furthermore, the external Rancher instance must be reachable by the Harvester nodes. Conversely, the Harvester VIP must also be reachable by Rancher during cluster provisioning. Both the --rancher-endpoint and --rancher-admin-password arguments must be specified in order to run the Rancher integration tests. Optionally, the user may specify the --kubernetes-version argument to select a specific Kubernetes version to use when provisioning an RKE cluster via the Harvester node driver. If --kubernetes-version is absent, Kubernetes version v1.21.6+rke2r1 will be used.

To run Rancher integration tests, for example:

tox -e py38 -- harvester_e2e_tests/scenarios/test_rancher_integration.py --endpoint https://192.168.0.131 --rancher-endpoint https://rancher-instance --rancher-admin-password rancher_password --kubernetes-version v1.27.1+rke2r2

If the external Rancher instance is shared by multiple Harvester environments, the user should also provide the --test-environment argument to distinguish the artifacts created by the current test environment in case manual cleanup is needed. All the artifacts (e.g. RKE2 clusters, cloud credentials, the imported Harvester cluster, etc.) have the test environment name in their names (e.g. harvester--). For example:

tox -e py38 -- harvester_e2e_tests/scenarios/test_rancher_integration.py --endpoint https://192.168.0.131 --rancher-endpoint https://rancher-instance --rancher-admin-password rancher_password --kubernetes-version v1.27.1+rke2r2 --test-environment browns

Running Linter

We are using the standard flake8 linter to enforce coding style. To run the linter:

tox -e pep8

Adding New Tests

The e2e tests are implemented using the Python pytest framework. An e2e test case should correspond to one or more manual test cases here. Likewise, if a manual test case is implemented by e2e, it should have the (e2e_be) designation in its title.

Pytest expects tests to be located in files whose names begin with test_ or end with _test.py.
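For illustration only (the file name, function, and assertion here are hypothetical, not part of the actual suite), a minimal test module that pytest would collect looks like:

```python
# Hypothetical contents of a file named e.g. test_example.py -- pytest
# collects it because the file name starts with test_ and the function
# name starts with test_.
def test_example_resource_name():
    # Real suite tests exercise the Harvester backend APIs; this dict
    # merely stands in for an API response.
    resource = {"metadata": {"name": "example"}}
    assert resource["metadata"]["name"] == "example"
```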

Here are the general guidelines for adding a new e2e test:

Manual Test Cases

Some scenarios are hard to test using the automation tests and are documented as manual test cases that need to be verified before release. The manual test cases are accessible here.

The manual test case pages can be edited under docs/content/manual/.

To categorize tests, place them in sub-directories under docs/content/manual/. These sub-directories must contain a file named _index.md with the following:

---
title: Name of Test Category
---
Optional description regarding the test category.

Each test page should be structured as such:

---
title: Name of Test Case
---
Description of the test case.

Both of these files can contain Markdown in the title and page body.

Preview The Website

To preview the website changes, you will need to install Hugo. Once Hugo is installed, run the following:

hugo server --buildDrafts --buildFuture

The site will be accessible at http://localhost:1313.
