
kangal's Introduction

Kangal - Automatic loader


Run performance tests in a Kubernetes cluster with Kangal.


Table of contents

  • Why Kangal?
  • Key features
  • How it works
  • Supported backends
  • Architectural diagram
  • Components
  • Quickstart guide
  • Documentation
  • Contributing
  • Support

Why Kangal?

In the Kangal project, the name stands for "Kubernetes and Go Automatic Loader". Originally, though, Kangal is a breed of shepherd dog. Let the smart, protective dog herd your load testing projects.

With Kangal, you can spin up an isolated environment in a Kubernetes cluster to run performance tests using different load generators.

Key features

  • create an isolated Kubernetes environment with an opinionated load generator installation
  • run load tests against any desired environment
  • monitor load tests metrics in Grafana
  • save the report for the successful load test
  • clean up after the test finishes

How it works

The Kangal application is built on Kubernetes Custom Resources.

The LoadTest custom resource (CR) is the main working entity. Its custom resource definition (CRD) can be found in charts/kangal/crds/loadtest.yaml.

The Kangal application consists of two main parts:

  • Proxy - to create, delete and check load tests and reports via REST API requests
  • Controller - to operate on the LoadTest custom resource and other Kubernetes entities

Kangal also uses S3-compatible storage to save test reports.

Supported backends

Currently, Kangal implements several load generator backends, including JMeter, Locust, and k6.

Read more about each of them in docs/index.md.

Architectural diagram

The diagram below illustrates the workflow for Kangal in Kubernetes infrastructure.

Architectural diagram

Components

LoadTest Custom Resource

A new custom resource in the Kubernetes cluster which contains requirements for performance testing environments.

More info about Custom Resources can be found in the official Kubernetes documentation.
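
Because a LoadTest is a regular custom resource, you can also inspect load tests directly with kubectl once the CRD is installed; a minimal sketch (the load test name is only an example):

$ kubectl get loadtests
$ kubectl describe loadtest loadtest-dunking-hedgehog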

Kangal Proxy

Provides the following HTTP methods for the /load-test endpoint:

  • POST - allows the user to create a new LoadTest
  • GET - allows the user to see information (status/logs/report) for a specific LoadTest and to get an overview of all currently existing load tests
  • DELETE - allows the user to stop and delete an existing LoadTest
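
For example, the overview and delete operations are plain HTTP calls; a minimal sketch (KANGAL_PROXY_ADDRESS and the load test name are placeholders):

$ curl http://${KANGAL_PROXY_ADDRESS}/load-test
$ curl -X DELETE http://${KANGAL_PROXY_ADDRESS}/load-test/loadtest-dunking-hedgehog

A complete POST example (multipart form data) is shown in the quickstart guide below.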

The Kangal Proxy is documented using the OpenAPI Spec.

If you prefer to use Postman, you can also import the openapi.json file into Postman to create a new collection.

Kangal Controller

This component is responsible for managing all aspects of the performance testing process.

Quickstart guide

This tutorial will guide you through the Kangal installation process and usage.

Installing using helm

First, add the repository to Helm:

helm repo add kangal https://hellofresh.github.io/kangal

Now, install the chart using the following command:

helm install kangal kangal/kangal
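
Optionally, if you prefer to keep Kangal in its own namespace, the standard Helm flags apply (this is not Kangal-specific):

$ helm install kangal kangal/kangal --namespace kangal --create-namespace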

That's it, Kangal should now be installed. Check that everything is running correctly:

$ kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
kangal-controller-588677b854-r9qcs   1/1     Running   0          44s
kangal-openapi-ui-7c5dd8997c-jj4mk   1/1     Running   0          44s
kangal-openapi-ui-7c5dd8997c-vgm8c   1/1     Running   0          44s
kangal-proxy-7d95c9d65-6t44b         1/1     Running   0          44s
kangal-proxy-7d95c9d65-75dv4         1/1     Running   0          44s

$ kubectl get crd
NAME                              CREATED AT
loadtests.kangal.hellofresh.com   2020-10-05T13:22:59Z

For more information about the Helm Chart check charts/kangal/README.md.

Creating first LoadTest

To run a LoadTest, you first need to find the Kangal Proxy endpoint. Use this command:

$ kubectl get ingress
NAME                HOSTS                           ADDRESS     PORTS   AGE
kangal-openapi-ui   kangal-openapi-ui.example.com   localhost   80      5m48s
kangal-proxy        kangal-proxy.example.com        localhost   80      5m48s

This assumes you have a properly configured Ingress controller. If that is not the case, you can use port forwarding instead.
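
For example, a minimal port-forwarding sketch; the service name kangal-proxy and port 80 are assumptions based on the resources shown above, so adjust them to what kubectl get services reports in your cluster:

$ kubectl port-forward service/kangal-proxy 8080:80
$ export KANGAL_PROXY_ADDRESS=localhost:8080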

With this information, you can now make a request to create your first load test. Let's start by downloading an example JMeter test and POSTing it to the Kangal Proxy.

$ curl -s -O https://raw.githubusercontent.com/hellofresh/kangal/master/examples/constant_load.jmx
$ curl \
    -F "distributedPods=1" \
    -F "testFile=@constant_load.jmx" \
    -F "type=JMeter" \
    http://${KANGAL_PROXY_ADDRESS}/load-test
{
    "type": "JMeter",
    "distributedPods": 1,
    "loadtestName": "loadtest-dunking-hedgehog",
    "phase": "creating",
    "hasEnvVars": false,
    "hasTestData": false
}

Your first load test was created successfully; in this example it is named loadtest-dunking-hedgehog.

Check the load test status with:

curl http://${KANGAL_PROXY_ADDRESS}/load-test/loadtest-dunking-hedgehog
{
    "type": "JMeter",
    "distributedPods": 1,
    "loadtestName": "loadtest-dunking-hedgehog",
    "phase": "running",
    "hasEnvVars": false,
    "hasTestData": false
}

Kangal Controller will automatically create a namespace for your load test and deploy the backend (in this case JMeter). Check that by running:

$ kubectl get namespaces
NAME                        STATUS   AGE
...
loadtest-dunking-hedgehog   Active   3s

You can check whether the pods started correctly using:

$ kubectl get pods --namespace=loadtest-dunking-hedgehog
NAME                    READY   STATUS    RESTARTS   AGE
loadtest-master-f6xpb   1/1     Running   0          18s
loadtest-worker-000     1/1     Running   0          22s
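
Once the load test reaches the Finished phase, the report for a successful run is persisted to the configured S3-compatible storage and can be retrieved through the proxy; the same endpoint family is used to stop and remove the load test when you are done. A minimal sketch, assuming the /load-test/{name}/report endpoint referenced elsewhere on this page:

$ curl http://${KANGAL_PROXY_ADDRESS}/load-test/loadtest-dunking-hedgehog/report
$ curl -X DELETE http://${KANGAL_PROXY_ADDRESS}/load-test/loadtest-dunking-hedgehog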

Documentation

Read more at docs/index.md.

Contributing

Please check the contribution guide before opening a PR.

Support

If you need support, start with the troubleshooting guide, and work your way through the process that we've outlined.

kangal's People

Contributors

aj-vrod, aleferreiranogueira, alexander-bruun, amandahla, crhuber, dependabot[bot], diegomarangoni, fabioserra, gh-automation-app[bot], hf-ghactions-bot, jasonbgilroy, ldeitos, lucasmdrs, mustafavelioglu, nhatthm, s-radyuk, s4nji, shashaneranasinghe, st3vev, taofeeqib, thiagoarrais, vgarvardt, walterwanderley


kangal's Issues

Attach annotation to the SA of the test pods?

We are currently attempting to execute tests that rely on a specific role assignment in order to access in-cluster resources. Regrettably, the service account and its associated annotations from the controller configuration fail to carry over to the new namespaces that are generated for each test iteration.

Is it feasible to pursue one of the following options:

  1. Pass through the annotation from the controller service account to the default test run service account.
  2. Generate a completely new service account that includes the same annotation, and subsequently employ this newly created service account for the test run pods?
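
For reference, a rough sketch of a manual workaround in the spirit of these options: annotating the default service account in the generated test namespace by hand. The namespace, annotation key (an IRSA-style example) and role ARN below are all placeholders; Kangal does not do this automatically today:

$ kubectl annotate serviceaccount default \
    --namespace loadtest-dunking-hedgehog \
    eks.amazonaws.com/role-arn=arn:aws:iam::123456789012:role/my-test-role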

ARM64 Support

Would it be possible to have an ARM64 compatible image?
Thanks.

There is a conflict with loadtest between datastore and cache

Hi guys, what does this controller error mean? Every time I run a test, I see this error:

datastore E0222 08:42:22.954016 1 loadtest.go:428] there is a conflict with loadtest 'loadtest-right-seal' between datastore and cache. it might be because object has been removed or modified in the datastore

Can't query info about load tests that are persisted to backend but no longer in "Finished" state

A short time after a load test goes from Running to Finished state, it is "cleaned up" and no longer shows as Finished. The report associated with it will have been persisted, but as far as the API is concerned, there is no way to query a test that has finished but been cleaned up. If we know the name, we can fetch the test report, but if we do not know the name, I am not aware of any way (via the API) to learn anything about the test.

Perhaps this could be accomplished with a "Persisted" state that could be queried, in combination with tags or a time range.

Improving OpenAPI definition

Hi everyone,

The OpenAPI definition doesn't include descriptions or examples for the parameters and schemas, which makes using the API quite difficult. For example, what are the expectations for testData and testFile in #/components/schemas/LoadTest?

I read "How to use Kangal" and "Kangal User Flow", I there's no example in there either.

curl -X POST http://${KANGAL_PROXY_ADDRESS}/load-test \
  -H 'Content-Type: multipart/form-data' \
  -F distributedPods=1 \
  -F testFile=@./examples/constant_load.jmx \
  -F testData=@./artifacts/loadtests/testData.csv \
  -F envVars=@./artifacts/loadtests/envVars.csv \
  -F type=JMeter

An example of testData.csv and envVars.csv would help.

Edit: I found testData.csv and envVars.csv in artifacts/loadtests. I can guess how to use envVars.csv, but what's the layout for testData.csv?
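
For illustration only, a hedged sketch of what these files might look like: the envVars.csv format follows the quoted name/value pairs shown in the "Custom locust images" issue further down this page, and testData.csv is an arbitrary CSV that is split across worker pods, so its columns are whatever the test script expects (all values below are invented):

$ cat > envVars.csv <<'EOF'
"TARGET_HOST","my-service.default.svc.cluster.local"
"RAMP_UP","200"
EOF
$ cat > testData.csv <<'EOF'
user1,password1
user2,password2
EOF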

Kangal UI

There were some internal proposals to build a simple UI for Kangal, to improve usability when creating, managing and deleting load tests and checking results.

We want some feedback from the community about this topic:

  • Would this be a valuable feature?
  • Should it cover anything beyond the API?
  • Should it be a separate project?

Add configuration to limit the number of distributed pods

Some clusters may have limitations on the number of IP addresses available in the cluster.

We need a configuration to limit the number of pods the Controller can work with in the Kubernetes cluster to avoid reaching those limits.

Add configuration validation and warnings

As discussed previously here, some sanity checks on the provided configuration should be implemented, to warn the user about unreasonable setups with missing properties or incompatible values (e.g. syncHandlerTimeout being less than kubeClientTimeout).

Default Helm chart value causes Swagger queries to return 404

I found an issue with the default Helm value on the following line: https://github.com/hellofresh/kangal/blob/master/charts/kangal/values.yaml#L79. It causes a 404 error because the /openapi prefix is prepended to the GET request endpoint.

Removing /openapi from the default Helm chart variable on L79 fixed the issue, since requests to the API endpoints no longer have /openapi prepended.

Is this expected behavior?

Allow user specified Storage Class for JMeter Custom Data PVC

As far as I can tell, there is no way for the user to specify the storage class with which a JMeter Custom Data PVC will be created.

In my cluster, the default storage class does not support ReadWriteMany, so I am unable to use this feature. If I could, I would select one of the alternative storage classes that do support ReadWriteMany.

Thanks!

Improve Kangal installation process

Report persistence does not work with the default installation due to the predefined values for KANGAL_PROXY_URL; see the example in #122.

We should try to build the URL based on context information during chart installation, or provide some detail about functionality that is unavailable due to missing configuration, as mentioned in #193.

Backend agnostic testData behaviour

I have noticed that the testData compression introduced by #177 is not implemented for some load generators (it may only be supported by the JMeter integration code). Does it make sense to move this behaviour into Kangal itself instead of delegating it to specific load generators/backends? The main engine could, for example, deflate the CSV before calling backend.Sync. Conversely, some way would be needed to include the init container that the JMeter backend uses to inflate it back.

Would the project be willing to accept a PR in that vein? This change could potentially include some other capabilities, such as the code that splits the testData CSV between workers. Are there any caveats or potential roadblocks I should be aware of?

Proposal: Add /customdata

Goal

Provide a /customdata mount point in the JMeter container where custom JAR files or images could be made available to tests.

Requirements

Dynamic volume provisioning enabled on Kubernetes cluster with ReadWriteMany (RWX) support

Implementation

  • Before pod creation, create a PVC
  • Wait for the PVC to be bound to a PV
  • Specify an init container that synchronizes /customdata via rclone with an S3 bucket provided by the user (see the sketch after this list)
  • Files in /customdata can then be used by JMeter tests
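
A minimal sketch of what the init container might run, assuming an rclone remote named s3 is already configured for the user-provided bucket (remote and bucket names are placeholders):

rclone sync s3:my-custom-data-bucket /customdata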

Add new backend: Vegeta

Goal

Add tsenart/vegeta as a new backend for Kangal.

Acceptance Criteria:

Vegeta is added to Kangal as a new load generator backend, the README is updated accordingly, and Kangal can create load tests of the vegeta type.

Allow secure access to S3

Firstly, thank you for this project!

The MinIO client, configured for report uploading, is initialised with secure set to false (in the NewWithCredentials method):

minioClient, err = minio.NewWithCredentials(endpoint, creds, false, cfg.AWSRegion)

Being able to enable encryption in transit would be a good option to have from a security perspective.

Perhaps this could be implemented in a backwards-compatible manner, so that the default remains secure set to false, but it can be set to true via configuration?

Upload multiple test scripts to k6 via API?

Hello, I have several k6 load tests which depend on imports of external scripts for reporting and for interacting with the backend, like:

  • test.js
    • reporting.js
    • auth.js
    • shared.js

Is it somehow possible to upload multiple testFile entries for execution via the API?

I only found some remnants of uploading a zip via the mysterious /api/v1/artifacts/upload, but it looks like it's no longer exposed by the controller/OpenAPI?

Kangal loadtest-worker never exiting

How come when a load test has finished and the master pod has exited, the worker pod is still Running?


Shouldn't the pods and/or the namespace be deleted after some time so they don't take up resources forever?

Larger test data files

When trying to start a test with a ~1 MB test data (CSV) file, I noticed that Kangal silently rejects the test. An error message is printed to the controller logs, but the POST request yields a 201 response, and the error can only be detected externally by examining the response to a subsequent GET to /load-test/:test_name and noticing that it does not include a test name.

Aside from noting that some kind of error message should be returned to the end user, I'd like to start a discussion here: should test data size be limited? It can be argued that we don't need a lot of test data, since most load testing can be done by simply repeating requests. On the other hand, there are very legitimate use cases for large data sets in a load test scenario, such as cache busting.

Investigate why JobStatus is not persisted for LoadTest resources

We have seen cases where loadTestStatus.JobStatus is not persisted after we process the object in SyncStatus.

That requires us to process the finished loadTest object every time in SyncStatus to update loadTestStatus.JobStatus and check whether it can be deleted.

Further investigation is required to understand why UpdateStatus doesn't save the updated status.

hellofresh/kangal images not updated?

I've noticed that the Kangal images on Docker Hub are pretty old, even though there was a 2.0.10 release recently. Are newer images being published somewhere else?

Possibly related: when installing via Helm into a brand new cluster, I get pods with imageID docker.io/hellofresh/kangal@sha256:8b64b247581[...], which is the ID for version 1.2.0 according to Docker Hub. If the images are somewhere else, maybe the Helm charts need updating?

Kangal JMeter tests frequently do not start

Symptom:

When starting a JMeter load test, Kangal creates the appropriate namespace and the master and worker pods as required. When querying for test status, Kangal reports the test as "Running". However, examining the pods shows that the test script is not actually running on the worker pod. From Kangal's perspective, the test will remain running forever until it is manually deleted.

Note that if I get a shell on the master pod and run

/launcher.sh /tests/testfile.jmx

the test then launches successfully, 100% of the time (and proceeds to the "finished" state). Because of this, I wonder whether the master is trying to launch the test before the worker is fully ready. I intend to experiment with custom JMeter master images that introduce a delay to see if this is in fact the issue.

Note that I have not observed this issue in a local Minikube cluster, but I have observed it frequently in an EKS test cluster.

When this occurs, the tail of the master log hangs at this point:

Setting property 'jmeter.reportgenerator.outputdir' to:'/results' MDC{}
Default base='/opt/apache-jmeter-5.4.1' MDC{}
Set new base='/tests' MDC{}
Testplan (JMX) version: 2.2. Testlog (JTL) version: 2.2 MDC{}
Using SaveService properties version 5.0 MDC{}
Using SaveService properties file encoding UTF-8 MDC{}
Loading file: /tests/testfile.jmx MDC{}
Settings: Delete null: true Check: true Allow variable: true Save: false Prefix: COOKIE_ MDC{}
ReportGenerator will use for Parsing the separator: ',' MDC{}
Will generate report at end of test from results file: results.csv MDC{}
Reading report generator properties from: /opt/apache-jmeter-5.4.1/bin/reportgenerator.properties MDC{}
Merging with JMeter properties MDC{}
Property 'jmeter.reportgenerator.temp_dir' not found, using default value 'temp' instead. MDC{}
Property 'jmeter.reportgenerator.apdex_per_transaction' not found, using default value 'null' instead. MDC{}
apdex_per_transaction is empty, not APDEX per transaction customization MDC{}
Property 'jmeter.reportgenerator.sample_filter' not found, using default value 'null' instead. MDC{}
Property 'jmeter.reportgenerator.start_date' not found, using default value 'null' instead. MDC{}
Property 'jmeter.reportgenerator.end_date' not found, using default value 'null' instead. MDC{}
Property 'jmeter.reportgenerator.date_format' not found, using default value 'null' instead. MDC{}
Will use date range start date: null, end date: null MDC{}
Property 'jmeter.reportgenerator.graph.totalTPS.exclude_controllers' not found, using default value 'false' instead. MDC{}
Property 'jmeter.reportgenerator.graph.activeThreadsOverTime.exclude_controllers' not found, using default value 'false' instead. MDC{}
Property 'jmeter.reportgenerator.graph.timeVsThreads.exclude_controllers' not found, using default value 'false' instead. MDC{}
Property 'jmeter.reportgenerator.graph.responseTimeDistribution.exclude_controllers' not found, using default value 'false' instead. MDC{}
Property 'jmeter.reportgenerator.graph.transactionsPerSecond.exclude_controllers' not found, using default value 'false' instead. MDC{}
Property 'jmeter.reportgenerator.graph.responseTimePercentiles.exclude_controllers' not found, using default value 'false' instead. MDC{}
Property 'jmeter.reportgenerator.graph.responseTimePercentilesOverTime.exclude_controllers' not found, using default value 'false' instead. MDC{}
Property 'jmeter.reportgenerator.graph.responseTimesOverTime.exclude_controllers' not found, using default value 'false' instead. MDC{}
Property 'jmeter.reportgenerator.graph.connectTimeOverTime.exclude_controllers' not found, using default value 'false' instead. MDC{}
Property 'jmeter.reportgenerator.graph.latenciesOverTime.exclude_controllers' not found, using default value 'false' instead. MDC{}
Property 'jmeter.reportgenerator.exporter.json.filters_only_sample_series' not found, using default value 'true' instead. MDC{}
Property 'jmeter.reportgenerator.exporter.json.series_filter' not found, using default value '' instead. MDC{}
Property 'jmeter.reportgenerator.exporter.json.show_controllers_only' not found, using default value 'false' instead. MDC{}
Property 'jmeter.reportgenerator.exporter.html.filters_only_sample_series' not found, using default value 'true' instead. MDC{}
Property 'jmeter.reportgenerator.exporter.html.series_filter' not found, using default value '' instead. MDC{}
Property 'jmeter.reportgenerator.exporter.html.show_controllers_only' not found, using default value 'false' instead. MDC{}
Created the tree successfully using /tests/testfile.jmx
Starting distributed test with remote engines: [] @ Mon Sep 19 15:45:08 UTC 2022 (1663602308681) MDC{}
Starting distributed test with remote engines: [] @ Mon Sep 19 15:45:08 UTC 2022 (1663602308681)
Remote engines have been started:[] MDC{}
Remote engines have been started:[]
Waiting for possible Shutdown/StopTestNow/HeapDump/ThreadDump message on port 4445
2022-09-19 15:45:08,730 pool-1-thread-1 DEBUG Stopping LoggerContext[name=31befd9f, org.apache.logging.log4j.core.LoggerContext@4be29ed9]
2022-09-19 15:45:08,730 pool-1-thread-1 DEBUG Stopping LoggerContext[name=31befd9f, org.apache.logging.log4j.core.LoggerContext@4be29ed9]...
2022-09-19 15:45:08,732 pool-1-thread-1 DEBUG Appender FLOW stopped with status true
2022-09-19 15:45:08,732 pool-1-thread-1 DEBUG Shutting down OutputStreamManager SYSTEM_OUT.false.false
2022-09-19 15:45:08,732 pool-1-thread-1 DEBUG OutputStream closed
2022-09-19 15:45:08,732 pool-1-thread-1 DEBUG Shut down OutputStreamManager SYSTEM_OUT.false.false, all resources released: true
2022-09-19 15:45:08,733 pool-1-thread-1 DEBUG Appender STDOUT stopped with status true
2022-09-19 15:45:08,733 pool-1-thread-1 DEBUG Stopped XmlConfiguration[location=/opt/apache-jmeter-5.4.1/log4j2.xml] OK
2022-09-19 15:45:08,733 pool-1-thread-1 DEBUG Stopped LoggerContext[name=31befd9f, org.apache.logging.log4j.core.LoggerContext@4be29ed9] with status true

CLEANUP_THRESHOLD usage

I'm not sure what the expected behaviour of the env var CLEANUP_THRESHOLD is.

From the docs: "Life time of a load test"

But it only applies to tests that are Finished or Errored.

Would it be useful to have separate variables to control the lifetime of each state (finished, errored and running)?

Simplify result uploading

Currently, the JMeter master pods upload the results to S3 using a presigned URL once the load tests are done.
This process could be simplified by using a sidecar to upload the results.

Support google cloud for results uploading

We use Kangal in our organisation, but the organisation does not use AWS, so I would like to suggest implementing results uploading to Google Cloud Storage, or an even more generic solution for persisting test results.
P.S. We run Kangal from GitHub Actions.

Kangal can use JMeter report to measure performance of the service under test

Goal

Add a new functionality to Kangal to measure performance by some key metrics.

Problem:
Currently, Kangal doesn't have a way to estimate how well the service responded to the load. Perhaps all requests got 4xx responses, or latency was too high, etc. Users have to monitor the logs and metrics of the service under test to understand how it behaves under load, and they can't see whether a previous test run showed better results than the next one.

Possible solution:
The JMeter backend has built-in functionality to generate a report showing some statistics. It can provide the following information:

  • average response time during the test
  • max response time during the test
  • % and the number of errors during the test
  • Max hits per second during the test
  • ...

Examples of JMeter report graphs: (screenshots omitted)

Kangal could read the key values from the JMeter report and calculate some simple metrics based on this data. These metrics could be used as thresholds for subsequent runs to spot performance degradation or improvement.

Acceptance Criteria:

Users can understand whether the service behaved well under load or not.

Persistence options

Are there options other than AWS S3 for persistence?

Personally, I would like to use my own StorageClass that is already set up for persisting data across the cluster; maybe the Helm chart / image could be updated to support both approaches instead of being tied to AWS.

Grafana docs

The following docs did not work for me: https://github.com/hellofresh/kangal/blob/37a89a3569a310c2a501478fd8b0c055ad436293/docs/jmeter/writing-tests.md. After doing everything that was documented, the measurements dropdown in the Grafana UI was empty, and the InfluxDB query CLI also showed nothing.

To fix this, I changed the .jmx file to use the Graphite BackendListener and updated my InfluxDB Helm chart to enable Graphite.

The following .jmx file worked for me:

<?xml version="1.0" encoding="UTF-8"?>
<jmeterTestPlan version="1.2" properties="5.0" jmeter="5.4.1">
  <hashTree>
    <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="Example GET request." enabled="true">
      <stringProp name="TestPlan.comments"></stringProp>
      <boolProp name="TestPlan.functional_mode">false</boolProp>
      <boolProp name="TestPlan.serialize_threadgroups">false</boolProp>
      <elementProp name="TestPlan.user_defined_variables" elementType="Arguments" guiclass="ArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
        <collectionProp name="Arguments.arguments"/>
      </elementProp>
      <stringProp name="TestPlan.user_define_classpath"></stringProp>
    </TestPlan>
    <hashTree>
      <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Thread Group" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">1</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">2000</stringProp>
        <stringProp name="ThreadGroup.ramp_time">200</stringProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration"></stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
        <boolProp name="ThreadGroup.same_user_on_next_iteration">true</boolProp>
      </ThreadGroup>
      <hashTree>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="HTTP Request" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain">kangal-proxy.default</stringProp>
          <stringProp name="HTTPSampler.port"></stringProp>
          <stringProp name="HTTPSampler.protocol">http</stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path">/</stringProp>
          <stringProp name="HTTPSampler.method">GET</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree>
          <HeaderManager guiclass="HeaderPanel" testclass="HeaderManager" testname="HTTP Header Manager" enabled="true">
            <collectionProp name="HeaderManager.headers">
              <elementProp name="" elementType="Header">
                <stringProp name="Header.name">Content-Type</stringProp>
                <stringProp name="Header.value">application/json, text/plain, */*</stringProp>
              </elementProp>
            </collectionProp>
          </HeaderManager>
          <hashTree/>
        </hashTree>
      </hashTree>
      <BackendListener guiclass="BackendListenerGui" testclass="BackendListener" testname="Backend Listener" enabled="true">
        <elementProp name="arguments" elementType="Arguments" guiclass="ArgumentsPanel" testclass="Arguments" enabled="true">
          <collectionProp name="Arguments.arguments">
            <elementProp name="graphiteMetricsSender" elementType="Argument">
              <stringProp name="Argument.name">graphiteMetricsSender</stringProp>
              <stringProp name="Argument.value">org.apache.jmeter.visualizers.backend.graphite.TextGraphiteMetricsSender</stringProp>
              <stringProp name="Argument.metadata">=</stringProp>
            </elementProp>
            <elementProp name="graphiteHost" elementType="Argument">
              <stringProp name="Argument.name">graphiteHost</stringProp>
              <stringProp name="Argument.value">influxdb.default</stringProp>
              <stringProp name="Argument.metadata">=</stringProp>
            </elementProp>
            <elementProp name="graphitePort" elementType="Argument">
              <stringProp name="Argument.name">graphitePort</stringProp>
              <stringProp name="Argument.value">2003</stringProp>
              <stringProp name="Argument.metadata">=</stringProp>
            </elementProp>
            <elementProp name="rootMetricsPrefix" elementType="Argument">
              <stringProp name="Argument.name">rootMetricsPrefix</stringProp>
              <stringProp name="Argument.value">jmeter.</stringProp>
              <stringProp name="Argument.metadata">=</stringProp>
            </elementProp>
            <elementProp name="summaryOnly" elementType="Argument">
              <stringProp name="Argument.name">summaryOnly</stringProp>
              <stringProp name="Argument.value">false</stringProp>
              <stringProp name="Argument.metadata">=</stringProp>
            </elementProp>
            <elementProp name="samplersList" elementType="Argument">
              <stringProp name="Argument.name">samplersList</stringProp>
              <stringProp name="Argument.value">.*</stringProp>
              <stringProp name="Argument.metadata">=</stringProp>
            </elementProp>
            <elementProp name="useRegexpForSamplersList" elementType="Argument">
              <stringProp name="Argument.name">useRegexpForSamplersList</stringProp>
              <stringProp name="Argument.value">true</stringProp>
              <stringProp name="Argument.metadata">=</stringProp>
            </elementProp>
            <elementProp name="percentiles" elementType="Argument">
              <stringProp name="Argument.name">percentiles</stringProp>
              <stringProp name="Argument.value">90;95;99</stringProp>
              <stringProp name="Argument.metadata">=</stringProp>
            </elementProp>
          </collectionProp>
        </elementProp>
        <stringProp name="classname">org.apache.jmeter.visualizers.backend.graphite.GraphiteBackendListenerClient</stringProp>
      </BackendListener>
      <hashTree/>
    </hashTree>
  </hashTree>
</jmeterTestPlan>

with the following influxd config defined through its Helm chart:

[[graphite]]
  enabled = true
  bind-address = ":2003"
  database = "jmeter"
  retention-policy = ""
  protocol = "tcp"
  batch-size = 5000
  batch-pending = 10
  batch-timeout = "1s"
  consistency-level = "one"
  separator = "."
  udp-read-buffer = 0

With the changes above in place, I could finally see the measurements defined through Graphite.

Maybe this is not a bug on Kangal's side, but perhaps the docs could be updated a bit.

Please just close this ticket if it is not relevant to Kangal; I just thought I would share my experience with the Kangal -> InfluxDB -> Grafana integration.

Locust Report not uploading

@diegomarangoni I have installed 1.1.11 as you mentioned in my previous issue. After spinning up the pods, when I try to upload a sample file, I get the error below.

curl -X PUT -T kangal.yaml -L http://localhost:8081/load-test/loadtest-orderly-armadillo/report -v
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8081 (#0)
> PUT /load-test/loadtest-orderly-armadillo/report HTTP/1.1
> Host: localhost:8081
> User-Agent: curl/7.58.0
> Accept: */*
> Content-Length: 2637
> Expect: 100-continue
>
* Done waiting for 100-continue
* We are completely uploaded and fine
< HTTP/1.1 500 Internal Server Error
< Content-Type: application/json; charset=utf-8
< Vary: Origin
< Date: Tue, 20 Apr 2021 16:29:12 GMT
< Content-Length: 75
< Connection: close
<
{"error":"Presigned URLs cannot be generated with anonymous credentials."}
* Closing connection 0

Backend agnostic integration tests

Currently, the integration tests depend on the JMeter load generator backend.

Ideally they should be agnostic, using a dummy load generator and leaving backend-dependent tests to their own backend-specific integration tests.

Define a specific namespace when running JMeter tests?

Hi guys, thank you for cool project!

One question: when running tests, a new namespace with a random name is always created. This is not always convenient, since there is no way to limit the namespace's CPU and memory resources ...
Is it possible to define a specific namespace when running JMeter tests?

JMeter test closed prematurely

I started a JMeter test for 240 minutes, but it was shut down automatically after ~30 minutes. Is there a limit on the maximum test duration?

Upgrade minio-go to v7

Version v7.0.22 of minio-go is already available; let's upgrade to it.

Some changes in the client and the report handler will be needed.

Not installing the latest version

Following the instructions in the README, Helm is not installing the latest version (1.1.11); it is installing 1.4.9. Could you please publish a new Helm release?

apiVersion: v1
description: Run JMeter performance tests in Kubernetes cluster with Kangal
home: https://github.com/hellofresh/kangal
icon: https://raw.githubusercontent.com/hellofresh/kangal/master/logo.svg
keywords:
- kubernetes
- jmeter
- performance tests
- tests runner
maintainers:
- email: [email protected]
  name: Kangal
name: kangal
version: 1.4.9

Node selector for distributedPods

Hi there,
First of all, great work on the project. We're already using it here at @leroy-merlin-br and will soon start working on supporting k6 as a backend. :)

Now, getting to the point: do the load test pods (distributedPods) run on the same node group / node pool that I set for the Kangal Controller?

Cheers! 🍻

Fix kubernetes code gen

Currently the code generator script is failing, preventing new contributions from making changes to the CRD.

Prometheus exporter - Running tests

Is there a way to get the number of running tests as Prometheus metrics, i.e. a Prometheus exporter for kangal-controller?

We can check that via /load-test, but we would like to monitor how many tests are running using Prometheus/Grafana.

Cleanup some unused configurations

As previously discussed here, there are some configurations in the controller and proxy packages that are not required anywhere in the code and should be removed.

Remove MasterURL, KubeConfig and any other configuration that is duplicated, unused, or only used by the cmd package.

Locust Report - HTTP 404

I was able to run the Locust test successfully using Kangal, but when I query the report API, I get a 404 page not found.

curl http://127.0.0.1:8081/load-test/loadtest-terrific-chinchilla/report/

Error:

404 page not found

Custom locust images for master and worker

The helm chart documentation says this for the locust image:

configmap.LOCUST_IMAGE_NAME: Default Locust image name/repository if none is provided when creating a new loadtest

I understand this to mean there is a way to specify, for each new loadtest, the image to use for Locust.
To do so, I tried the envVars file with the following entries:

"LOCUST_IMAGE_NAME","thisisnotaninamge/locust"
"LOCUST_IMAGE","thisisnotaninamge/locust"
"LOCUST_IMAGE_TAG","noimage"

But even in that case the default image is used.

Is this a bug? Is this the right mechanism for using a custom Locust image on a per-loadtest basis?
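
A possible cluster-wide alternative (not per load test) would be to override the value at chart level instead; a sketch, assuming the configmap.LOCUST_IMAGE_NAME key quoted above is a Helm value, with a placeholder image name:

$ helm upgrade kangal kangal/kangal --set configmap.LOCUST_IMAGE_NAME=myrepo/custom-locust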
