
locust-k8s-operator's People

Contributors

abdelrhmanhamouda, dependabot[bot], fernandezcuesta, jachinte, metacosm


locust-k8s-operator's Issues

[Feature request] how to download locust report after test execution

Is your feature request related to a problem? Please describe.
Download detailed report after test execution

Describe the solution you'd like
Integrate slack notification with download report option

Additional context
It's very difficult to keep track of results while a test is executing.

[Feature request] support loading a test directly from a config map

Is your feature request related to a problem? Please describe.
The image deployed on the cluster doesn't offer an easy way of picking up tests unless they were pre-packaged in the image.

Describe the solution you'd like
To enable dynamically mounting tests deployed to the cluster, without building a new image or creating a PVC / PV for each test, support for dynamic ConfigMap mounting is needed.

Describe alternatives you've considered
Build an image with the test pre-packaged, or use a PVC / PV.
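For illustration, dynamic ConfigMap mounting could look roughly like the following fragment of a generated pod spec (the volume name is illustrative; the ConfigMap name and /lotest/src mount path are taken from examples elsewhere on this page):

```yaml
volumes:
  - name: locust-tests            # illustrative volume name
    configMap:
      name: demo-cm               # the ConfigMap holding the locustfile
containers:
  - name: locust
    image: locustio/locust:latest
    volumeMounts:
      - name: locust-tests
        mountPath: /lotest/src    # tests appear here without rebuilding the image
```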

[Feature request] Add a TTL period to deployed jobs

Is your feature request related to a problem? Please describe.
When a test ends, LocustTest resources have to be manually deleted.

Describe the solution you'd like
I'd like to set the time-to-live after finished attribute in the Job spec, so that master and worker jobs delete themselves when a test ends. This would be optional.

Describe alternatives you've considered
Call method withTtlSecondsAfterFinished(X) when creating job specs in class ResourceCreationHelpers. Parameter X would be defined in Helm's values.yaml and passed to the operator as an environment variable.

Additional context
I'm willing to implement this change if you think it'd be a good addition to the repository.
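For reference, the underlying field is part of the standard batch/v1 Job spec; a sketch (the 300-second value is just an example):

```yaml
apiVersion: batch/v1
kind: Job
spec:
  # Kubernetes deletes the Job (and its pods) this many seconds
  # after the Job finishes:
  ttlSecondsAfterFinished: 300
```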

[Custom request] Increase test coverage to cover boot sequence

Using @MicronautTest does not work as intended because the operator needs a Kubernetes mock server to be set up before the application is spawned. This is difficult to achieve with @MicronautTest. An easy way around this is to use Testcontainers; however, that approach would be a last resort due to the added complexity of setting up a seamless flow.

The best approach: enable a CRUD Kubernetes mock server to be initialised before application injection takes place.

[Feature request] Allow automatic resolution of completed tests

Is your feature request related to a problem? Please describe.
Since I'm launching tests programmatically, one thing I'm missing from the operator is how to determine whether a test was successful. I created a k8s admission webhook that is notified whenever a master pod is updated. However, since the locust-exporter container never stops running, the tests can never be marked as successful. This is because the master pod doesn't transition to a Complete state when the test ends.

I noticed that the latest version of locust-exporter now has an endpoint to terminate the container (/quitquitquit). Indeed, calling this endpoint makes the test successful because the pod is marked as Complete. However, one would have to call this endpoint manually every time a test ends.

Describe the solution you'd like
An ideal solution would be terminating the locust-exporter once the test is done, so that the master pod is marked as Complete.

Describe alternatives you've considered
Since metrics are scraped, one would have to ensure that Prometheus pulled the metrics before terminating the exporter container.

  • An alternative here is to set a time-to-live (TTL) period after finished (e.g., Prometheus's scraping frequency), so that metrics are scraped at least once.
  • Another alternative would be to push metrics to Prometheus using pushgateway, which was designed for ephemeral batch jobs.

The second alternative seems more appropriate, however, it requires a lot more work. The first alternative would require either modifying locust-exporter, so that it terminates itself (e.g., when locust_up = 0) after the TTL period, or adding a sidecar container querying the metrics endpoint, looking for locust_up = 0, to call /quitquitquit eventually.

Additional context

What do you think of these alternatives? I'm willing to implement one of these.
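The locust_up check from the first alternative could be sketched as follows (a hypothetical sidecar helper, not part of the operator; it only parses the Prometheus text format that locust-exporter emits):

```python
def locust_is_up(metrics_text: str) -> bool:
    """Return True if the scraped metrics report locust_up as non-zero."""
    for line in metrics_text.splitlines():
        line = line.strip()
        # Skip blank lines and Prometheus comment/metadata lines.
        if not line or line.startswith("#"):
            continue
        parts = line.split()
        if parts[0] == "locust_up" and len(parts) >= 2:
            return parts[1] != "0"
    # Metric missing: treat as not up.
    return False
```

A sidecar could poll the exporter's /metrics endpoint and, once this returns False and at least one Prometheus scrape interval has elapsed, call /quitquitquit.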

[Feature request] Enable HELM for the operator

Is your feature request related to a problem? Please describe.
It is not possible to deploy the operator with Helm as of now.

Describe the solution you'd like
Enable HELM for the operator

Describe alternatives you've considered
A plain Kubernetes deployment. However, it is not as convenient as Helm.

Set CPU and Memory resources for the metrics exporter container

Is your feature request related to a problem? Please describe.
We have a custom OPA admission controller policy that rejects containers without CPU and memory resources defined. The locust-metrics-exporter container does not have these resources set, so when spinning up the locust master and worker jobs, creation errors out as seen in the message below.

2023-08-16 19:34:49,414 INFO  [ReconcilerExecutor-locusttestreconciler-26] com.locust.operator.controller.utils.resource.manage.ResourceCreationManager: Creating Job for: loadtest-master in namespace: locust
2023-08-16 19:34:57,014 ERROR [ReconcilerExecutor-locusttestreconciler-26] com.locust.operator.controller.utils.resource.manage.ResourceCreationManager: Exception occurred during Job creation: Failure executing: POST at: https://172.20.0.1/apis/batch/v1/namespaces/locust/jobs. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. admission webhook "validation.gatekeeper.sh" denied the request: [container-must-have-resources] container <locust-metrics-exporter> has no resource limits.
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://172.20.0.1/apis/batch/v1/namespaces/locust/jobs. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. admission webhook "validation.gatekeeper.sh" denied the request: [container-must-have-resources] container <locust-metrics-exporter> has no resource limits.
	at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:713)
	at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:693)
	at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.assertResponseCode(OperationSupport.java:642)
	at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.lambda$handleResponse$0(OperationSupport.java:581)
	at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown Source)
	at java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
	at java.base/java.util.concurrent.CompletableFuture.complete(Unknown Source)
	at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.lambda$retryWithExponentialBackoff$2(OperationSupport.java:622)
	at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(Unknown Source)
	at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(Unknown Source)
	at java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
	at java.base/java.util.concurrent.CompletableFuture.complete(Unknown Source)
	at io.fabric8.kubernetes.client.okhttp.OkHttpClientImpl$4.onResponse(OkHttpClientImpl.java:268)
	at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
	at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.base/java.lang.Thread.run(Unknown Source)

Describe the solution you'd like
I've implemented a solution in this PR: #131
which reuses the same resources defined for the load gen container.


[BUG] Operator doesn't manage the metric exporter sidecar

Describe the bug
After the custom resource has been applied and the test has run for the specified duration, the process fails to finish: the worker job completes, while the master job continues to run.

To Reproduce
Steps to reproduce the behavior:

  1. Apply the LocustTest manifest to start the test
  2. Once the test runs for specified duration, list available jobs and pods
  3. You should see that the worker job and worker pods have completed; however, the master job has not completed and its pod is in a NotReady state.

Expected behavior
Once the test has completed, both the worker and the master pods should be in a Completed state and eventually removed.

Screenshots
Pods status after the test has completed.

Jobs status after the test has completed.

Additional context
I suspect that the problem is that, on the master pod, the locust-metrics-exporter container never stops running, which prevents the job from being signalled as complete.

[Custom request] document a full flow example

To help viewers better understand the project, document an example from start to finish. The example should include:

  • How to deploy the operator
  • Optional serviceAccounts, in case they are needed for the user's cluster
  • simple locust test
  • deploy test as configMap
  • deploy test CR
  • resources running

[Feature request] Customizable "locust_exporter" Image

Is your feature request related to a problem? Please describe.
I often encounter timeouts and instability while pulling the locust_exporter image due to network issues. This creates a frustrating experience.

Describe the solution you'd like
I would like to have the ability to customize the locust_exporter image in the locust-k8s-operator. This customization feature would allow me to configure the image repository URL and tag, enabling me to choose a mirror or address that suits my network environment and preferences.

Not only does this modification address network-related issues, but it also makes it possible for users to further customize the exported monitoring metrics.

No resources are deployed

Describe the bug
No jobs or services are run when I deploy a LocustTest.

To Reproduce
Steps to reproduce the behavior:

  1. Create a local Kubernetes cluster using k3d:
k3d cluster create mycluster
  2. Deploy the Helm chart of the operator:
cd locust-k8s-operator/charts/locust-k8s-operator
helm install -f values.yaml locust-operator . -n locust-operator --create-namespace --debug --wait
  3. Create a ConfigMap:
kubectl create configmap demo-cm --from-file PATH_TO_FILE/demo.py
  4. Create and deploy the LocustTest specification:
cat <<EOF >test.yaml
apiVersion: locust.io/v1
kind: LocustTest
metadata:
  name: demo.test
spec:
  image: locustio/locust:latest
  masterCommandSeed:
    --locustfile /lotest/src/demo.py
    --host https://www.google.com
    --users 100
    --spawn-rate 3
    --run-time 3m
  workerCommandSeed: --locustfile /lotest/src/demo.py
  workerReplicas: 3
  configMap: demo-cm
EOF

kubectl apply -f test.yaml

Expected behavior
I would expect to see two jobs and one service, but I see none, although I do see the locusttest resource and the operator.

Screenshots

# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   45h

# kubectl get job
No resources found in default namespace.

# kubectl get locusttest
NAME        MASTER_CMD                                                                                                         WORKER_REPLICA_COUNT   IMAGE                    AGE
demo.test   --locustfile /lotest/src/robot-shop.py --host http://35.237.9.210:8080/ --users 100 --spawn-rate 3 --run-time 3m   3                      locustio/locust:latest   9m2s

# kubectl get all -n locust-operator
NAME                                                       READY   STATUS    RESTARTS       AGE
pod/locust-operator-locust-k8s-operator-56d88bfcc4-zbgpx   1/1     Running   1 (118m ago)   120m

NAME                                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/locust-operator-locust-k8s-operator   1/1     1            1           120m

NAME                                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/locust-operator-locust-k8s-operator-56d88bfcc4   1         1         1       120m

Additional context

# helm list -n locust-operator
NAME           	NAMESPACE      	REVISION	UPDATED                             	STATUS  	CHART                    	APP VERSION
locust-operator	locust-operator	1       	2022-10-17 15:53:09.454928 -0400 EDT	deployed	locust-k8s-operator-0.1.0	0.1.0

Here are the logs, which contain more detail about the problem:

# kubectl logs pod/locust-operator-locust-k8s-operator-56d88bfcc4-zbgpx -n locust-operator
 __  __ _                                  _   
|  \/  (_) ___ _ __ ___  _ __   __ _ _   _| |_ 
| |\/| | |/ __| '__/ _ \| '_ \ / _` | | | | __|
| |  | | | (__| | | (_) | | | | (_| | |_| | |_ 
|_|  |_|_|\___|_|  \___/|_| |_|\__,_|\__,_|\__|
  Micronaut (v3.6.2)

2022-10-17 19:54:53,983 INFO [main] io.micronaut.context.env.DefaultEnvironment: Established active environments: [k8s, cloud]
2022-10-17 19:54:56,958 INFO [main] com.locust.LocustTestOperatorStarter: Starting Kubernetes reconciler!
2022-10-17 19:54:58,361 WARN [main] io.javaoperatorsdk.operator.api.config.BaseConfigurationService: Configuration for reconciler 'locusttestreconciler' was not found. Known reconcilers: None.
2022-10-17 19:54:58,366 INFO [main] io.javaoperatorsdk.operator.api.config.BaseConfigurationService: Created configuration for reconciler com.locust.operator.controller.LocustTestReconciler with name locusttestreconciler
2022-10-17 19:54:58,544 INFO [main] io.javaoperatorsdk.operator.Operator: Registered reconciler: 'locusttestreconciler' for resource: 'class com.locust.operator.customresource.LocustTest' for namespace(s): [all namespaces]
2022-10-17 19:54:58,545 INFO [main] io.javaoperatorsdk.operator.Operator: Operator SDK 3.2.0 (commit: edd12e5) built on Mon Sep 05 08:26:50 UTC 2022 starting...
2022-10-17 19:54:58,574 INFO [main] io.javaoperatorsdk.operator.Operator: Client version: 6.1.1
2022-10-17 19:54:58,580 INFO [main] io.javaoperatorsdk.operator.processing.Controller: Starting 'locusttestreconciler' controller for reconciler: com.locust.operator.controller.LocustTestReconciler, resource: com.locust.operator.customresource.LocustTest
2022-10-17 19:54:59,547 INFO [main] io.javaoperatorsdk.operator.processing.Controller: 'locusttestreconciler' controller started, pending event sources initialization
2022-10-17 19:54:59,614 INFO [main] io.micronaut.runtime.Micronaut: Startup completed in 7597ms. Server Running: http://locust-operator-locust-k8s-operator-56d88bfcc4-zbgpx:8080
2022-10-17 21:42:36,550 INFO [ReconcilerExecutor-locusttestreconciler-39] com.locust.operator.controller.LocustTestReconciler: LocustTest created: 'demo.test'
2022-10-17 21:42:36,601 INFO [ReconcilerExecutor-locusttestreconciler-39] com.locust.operator.controller.utils.resource.manage.ResourceCreationManager: Creating service for: demo-test-master in namespace: default
2022-10-17 21:42:36,894 ERROR [ReconcilerExecutor-locusttestreconciler-39] com.locust.operator.controller.utils.resource.manage.ResourceCreationManager: Exception occurred during service creation: Failure executing: POST at: https://10.43.0.1/api/v1/namespaces/default/services. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. services is forbidden: User "system:serviceaccount:locust-operator:locust-operator-locust-k8s-operator" cannot create resource "services" in API group "" in the namespace "default".
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://10.43.0.1/api/v1/namespaces/default/services. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. services is forbidden: User "system:serviceaccount:locust-operator:locust-operator-locust-k8s-operator" cannot create resource "services" in API group "" in the namespace "default".
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:713)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:693)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.assertResponseCode(OperationSupport.java:642)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.lambda$handleResponse$0(OperationSupport.java:581)
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.complete(Unknown Source)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.lambda$retryWithExponentialBackoff$2(OperationSupport.java:622)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.complete(Unknown Source)
at io.fabric8.kubernetes.client.okhttp.OkHttpClientImpl$4.onResponse(OkHttpClientImpl.java:268)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
2022-10-17 21:42:36,934 INFO [ReconcilerExecutor-locusttestreconciler-39] com.locust.operator.controller.utils.resource.manage.ResourceCreationManager: Creating Job for: demo-test-master in namespace: default
2022-10-17 21:42:37,880 ERROR [ReconcilerExecutor-locusttestreconciler-39] com.locust.operator.controller.utils.resource.manage.ResourceCreationManager: Exception occurred during Job creation: Failure executing: POST at: https://10.43.0.1/apis/batch/v1/namespaces/default/jobs. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. jobs.batch is forbidden: User "system:serviceaccount:locust-operator:locust-operator-locust-k8s-operator" cannot create resource "jobs" in API group "batch" in the namespace "default".
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://10.43.0.1/apis/batch/v1/namespaces/default/jobs. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. jobs.batch is forbidden: User "system:serviceaccount:locust-operator:locust-operator-locust-k8s-operator" cannot create resource "jobs" in API group "batch" in the namespace "default".
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:713)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:693)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.assertResponseCode(OperationSupport.java:642)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.lambda$handleResponse$0(OperationSupport.java:581)
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.complete(Unknown Source)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.lambda$retryWithExponentialBackoff$2(OperationSupport.java:622)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.complete(Unknown Source)
at io.fabric8.kubernetes.client.okhttp.OkHttpClientImpl$4.onResponse(OkHttpClientImpl.java:268)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
2022-10-17 21:42:37,917 INFO [ReconcilerExecutor-locusttestreconciler-39] com.locust.operator.controller.utils.resource.manage.ResourceCreationManager: Creating Job for: demo-test-worker in namespace: default
2022-10-17 21:42:37,997 ERROR [ReconcilerExecutor-locusttestreconciler-39] com.locust.operator.controller.utils.resource.manage.ResourceCreationManager: Exception occurred during Job creation: Failure executing: POST at: https://10.43.0.1/apis/batch/v1/namespaces/default/jobs. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. jobs.batch is forbidden: User "system:serviceaccount:locust-operator:locust-operator-locust-k8s-operator" cannot create resource "jobs" in API group "batch" in the namespace "default".
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://10.43.0.1/apis/batch/v1/namespaces/default/jobs. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. jobs.batch is forbidden: User "system:serviceaccount:locust-operator:locust-operator-locust-k8s-operator" cannot create resource "jobs" in API group "batch" in the namespace "default".
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:713)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:693)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.assertResponseCode(OperationSupport.java:642)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.lambda$handleResponse$0(OperationSupport.java:581)
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.complete(Unknown Source)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.lambda$retryWithExponentialBackoff$2(OperationSupport.java:622)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.complete(Unknown Source)
at io.fabric8.kubernetes.client.okhttp.OkHttpClientImpl$4.onResponse(OkHttpClientImpl.java:268)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)

[Feature request] Allow resource deployments into specific node groups

Is your feature request related to a problem? Please describe.
Enable the operator to support deploying cluster resources into specific node groups

Describe the solution you'd like
Through the custom resource, support a field that dictates which Kubernetes node group should be used.

[Feature request] Switch pod cmd injection from env var to container CMD

Is your feature request related to a problem? Please describe.
Currently the operator sets the Locust command as an environment variable. This is not aligned with the native Locust image, which expects the container to be invoked with a CMD.

Describe the solution you'd like
Create the resources with a CMD instead of an env var.

Describe alternatives you've considered
A custom build of Locust that accepts the env var.
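In standard Kubernetes container-spec terms, the proposed change amounts to something like this (a sketch; the flags and locustfile path are illustrative):

```yaml
containers:
  - name: locust
    image: locustio/locust:latest
    # Invoke the native image the way it expects, instead of
    # passing the command through an environment variable:
    command: ["locust"]
    args: ["--worker", "--locustfile", "/lotest/src/demo.py"]
```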

[Feature request] Inject kafka auth data based on a CR flag

To avoid having kafka auth info being injected to the pod all the time, allow this functionality to be controlled by a CR optional flag

Example:

apiVersion: locust.io/v1
kind: LocustTest
metadata:
  name: <CR_NAME>
spec:
  ...
  kafka: enabled | disabled (default: disabled)

[Feature request] Adding labels and annotations to pods

Is your feature request related to a problem? Please describe.
I would like to get Prometheus metrics for tests I launch programmatically. Since Prometheus scrapes all pods annotated with prometheus.io/scrape: true, metrics are stored as if they belong to the same test / system.

Describe the solution you'd like
I'd like to attach labels and annotations to the deployed pods, so that I can customize the Prometheus config. This is what I have in mind: use __meta_kubernetes_pod_label_<labelname> and __meta_kubernetes_pod_annotation_<annotationname> in kubernetes_sd_config.

Describe alternatives you've considered
I am not aware of other solutions.

Additional context
I already implemented this feature in looflow/locust-k8s-operator. Would you be interested in having this as a contribution?
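A sketch of the Prometheus kubernetes_sd_configs relabeling this feature would enable (the `team` pod label is a hypothetical example):

```yaml
scrape_configs:
  - job_name: locust-tests
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods annotated with prometheus.io/scrape: "true".
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Copy a pod label onto the stored series so tests can be told apart.
      - source_labels: [__meta_kubernetes_pod_label_team]
        target_label: team
```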

[Feature request] Make it possible to inject kubernetes secrets into the locust worker pods

Is your feature request related to a problem? Please describe.
I'd like to have secrets (such as tokens) available in the locust files to be used in the API calls

Describe the solution you'd like
The idea here is to create a Secret with the tokens in the locust namespace and point the LocustTest to that Secret (similarly to how the locust file is mounted from a ConfigMap). The controller would then mount the Secret into the worker pods (either as environment variables or as files), and the locust files executed by the workers would be able to read the tokens from env variables (or from the mounted files).

Describe alternatives you've considered
Currently, I need to hard-code the tokens into the locust file directly.

Additional context
n/a
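A sketch of what the resulting worker pod spec could contain (the Secret name "api-tokens" is hypothetical):

```yaml
containers:
  - name: locust
    # Option 1: expose all keys of the Secret as environment variables.
    envFrom:
      - secretRef:
          name: api-tokens
    # Option 2: mount the Secret as files instead.
    volumeMounts:
      - name: tokens
        mountPath: /etc/locust/secrets
        readOnly: true
volumes:
  - name: tokens
    secret:
      secretName: api-tokens
```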

[BUG] service selector is not applied correctly

Describe the bug
service selector is not applied correctly between pod and service

To Reproduce
deploy a locusttest resource and describe the generated service and the pod

Expected behavior
The pod labels and the service selector should match.


[Custom request] enhance "How does it work" docs section

Main points cover:

  • Add more information on LocustTest CRD
    • HELM flag responsible for CRD deployment
    • Mandatory fields
    • Optional fields
    • kubectl "display" fields
  • Add more information about LocustTest CR
    • Impact of each added section
    • Assumptions made by having (or leaving) configMap field empty

[Custom request] Metrics & Dashboard

Hi, I would like to learn more about "Metrics & Dashboard". Unfortunately, there is no documentation. What functionality exists today? What integrations are required? Is there a GUI? Thank you

[Feature request] Enable setting the exporter image repository and tag from Helm's values file

Is your feature request related to a problem? Please describe.
It would be great if I could upgrade the locust-exporter image without having to compile the operator and create a new release.

Describe the solution you'd like
Move constants EXPORTER_IMAGE_* from Constants.java to values.yaml.

Describe alternatives you've considered
Perhaps the SysConfig class is the most appropriate place to declare these parameters? They'd have to be declared in several places, including Micronaut's application.yml, and Helm's values.yaml and deployment.yaml.

Additional context
I'm willing to implement this change, in case you think it'd be a good addition to the repository.
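For illustration, the values could be surfaced like this (key names are hypothetical, not the chart's actual API):

```yaml
# values.yaml (sketch)
config:
  loadGenerationPods:
    metricsExporter:
      imageRepository: containersol/locust_exporter
      imageTag: v0.5.0
```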

[Custom request] Quality of life change: Use Google's release-please

This project is already compliant with the conventional commits standard. Currently, releasing a new version requires a manual trigger by running "cz bump". This is extremely easy, but it would be more convenient if that part were also automated. One possible way is to use the "Release Please" utility from Google.

Starting point: https://github.com/googleapis/release-please

Criteria to check:

  • compatibility with current project
  • how it can be integrated
  • how it can bump versions for gradle
  • any conflicts with what has been done previously with "cz"
  • any conflicts with how "releases" are currently generated in the project

[Custom request] Enhance "How does it work" docs to run it with Locust UI

Locust can run in a headless state, in auto mode, or with the UI used to pass in the host and other information. What changes can I make to the spec file with Kind: LocustTest to be able to connect to the Locust master running on port 8089? Or should I create a separate spec file and somehow tie it to the LocustTest spec? In plain Kubernetes one would provide a spec file with Kind: Service, so I am looking for an equivalent to that.
