abdelrhmanhamouda / locust-k8s-operator
Deploy and manage Locust tests on Kubernetes.
Home Page: https://abdelrhmanhamouda.github.io/locust-k8s-operator/
License: Apache License 2.0
Is your feature request related to a problem? Please describe.
Download detailed report after test execution
Describe the solution you'd like
Integrate Slack notifications with a download-report option
Additional context
It is very difficult to keep track of a test's progress while it is executing.
Is your feature request related to a problem? Please describe.
The image deployed on the cluster doesn't offer an easy way of discovering tests unless they were pre-packaged in the image.
Describe the solution you'd like
To enable dynamic mounting of tests deployed to the cluster, without the need to build a new image or maintain a PVC / PV for each test, support for dynamic ConfigMap mounting is needed.
Describe alternatives you've considered
Build an image with the test pre-packaged, or use a PVC / PV.
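For illustration, a hedged sketch of what the requested ConfigMap-based mounting could look like on the custom resource (the configMap field name follows the usage shown elsewhere in this document; the ConfigMap name is hypothetical):

```yaml
apiVersion: locust.io/v1
kind: LocustTest
metadata:
  name: dynamic-mount-demo
spec:
  image: locustio/locust:latest
  masterCommandSeed: --locustfile /lotest/src/demo.py
  workerCommandSeed: --locustfile /lotest/src/demo.py
  workerReplicas: 3
  # Mount an existing ConfigMap holding the locustfile, instead of
  # baking the test into the image or binding a PVC / PV per test.
  configMap: demo-cm
```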
Is your feature request related to a problem? Please describe.
When a test ends, LocustTest resources have to be manually deleted.
Describe the solution you'd like
I'd like to set the time-to-live after finished attribute in the Job spec, so that master and worker jobs delete themselves when a test ends. This would be optional.
Describe alternatives you've considered
Call the method withTtlSecondsAfterFinished(X) when creating job specs in class ResourceCreationHelpers. Parameter X would be defined in Helm's values.yaml and passed to the operator as an environment variable.
Additional context
I'm willing to implement this change if you think they'd be a good addition to the repository.
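The proposal maps onto the standard Kubernetes Job field ttlSecondsAfterFinished; a minimal sketch of the resulting Job spec (the job name and the 300-second value are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: demo-test-master
spec:
  # Set via fabric8's withTtlSecondsAfterFinished(X); the Job is
  # garbage-collected X seconds after it finishes.
  ttlSecondsAfterFinished: 300
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: locust-master
          image: locustio/locust:latest
```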
Using @MicronautTest is not working as intended because the operator needs a Kubernetes mock server to be set up before the application is spawned. This is difficult to achieve with @MicronautTest. An easy way around this is to utilise Testcontainers. However, that approach would be a last resort due to the added complexity of setting up a seamless flow for it.
The best approach: enable a CRUD Kubernetes mock server to be initialised before application injection takes place.
Is your feature request related to a problem? Please describe.
Since I'm launching tests programmatically, one thing I'm missing from the operator is how to determine whether a test was successful. I created a k8s admission webhook that is notified whenever a master pod is updated. However, since the locust-exporter container never stops running, the tests can never be marked as successful. This is because the master pod doesn't transition to a Complete state when the test ends.
I noticed that the latest version of locust-exporter now has an endpoint to terminate the container (/quitquitquit). Indeed, calling this endpoint makes the test successful because the pod is marked as Complete. However, one would have to call this endpoint manually every time a test ends.
Describe the solution you'd like
An ideal solution would be terminating the locust-exporter once the test is done, so that the master pod is marked as Complete.
Describe alternatives you've considered
Since metrics are scraped, one would have to ensure that Prometheus pulled the metrics before terminating the exporter container.
The second alternative seems more appropriate; however, it requires a lot more work. The first alternative would require either modifying locust-exporter so that it terminates itself (e.g., when locust_up = 0) after the TTL period, or adding a sidecar container that queries the metrics endpoint, looks for locust_up = 0, and eventually calls /quitquitquit.
Additional context
What do you think of these alternatives? I'm willing to implement one of these.
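As a rough, hedged sketch of the sidecar alternative, assuming the exporter serves its metrics on port 9646 and that /quitquitquit accepts a plain HTTP request (both are assumptions; the container name and image are hypothetical):

```yaml
# Hypothetical sidecar: poll the exporter's metrics, and once locust
# reports down (locust_up 0), ask the exporter to terminate itself.
- name: exporter-terminator
  image: curlimages/curl:latest
  command: ["sh", "-c"]
  args:
    - |
      while true; do
        if curl -s localhost:9646/metrics | grep -q '^locust_up 0'; then
          curl -s localhost:9646/quitquitquit
          exit 0
        fi
        sleep 10
      done
```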
Is your feature request related to a problem? Please describe.
It is not possible to deploy the operator with Helm as of now.
Describe the solution you'd like
Enable Helm deployment for the operator.
Describe alternatives you've considered
Plain k8s deployment manifests. However, they are not as effective as Helm.
Is your feature request related to a problem? Please describe.
We have a custom OPA admission controller policy that watches for and rejects containers that do not have CPU and memory resources defined. The locust-metrics-exporter container does not have these resources set, so when spinning up the locust master/worker it errors out, as seen in the error message below.
2023-08-16 19:34:49,414 INFO [ReconcilerExecutor-locusttestreconciler-26] com.locust.operator.controller.utils.resource.manage.ResourceCreationManager: Creating Job for: loadtest-master in namespace: locust
2023-08-16 19:34:57,014 ERROR [ReconcilerExecutor-locusttestreconciler-26] com.locust.operator.controller.utils.resource.manage.ResourceCreationManager: Exception occurred during Job creation: Failure executing: POST at: https://172.20.0.1/apis/batch/v1/namespaces/locust/jobs. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. admission webhook "validation.gatekeeper.sh" denied the request: [container-must-have-resources] container <locust-metrics-exporter> has no resource limits.
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://172.20.0.1/apis/batch/v1/namespaces/locust/jobs. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. admission webhook "validation.gatekeeper.sh" denied the request: [container-must-have-resources] container <locust-metrics-exporter> has no resource limits.
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:713)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:693)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.assertResponseCode(OperationSupport.java:642)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.lambda$handleResponse$0(OperationSupport.java:581)
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.complete(Unknown Source)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.lambda$retryWithExponentialBackoff$2(OperationSupport.java:622)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.complete(Unknown Source)
at io.fabric8.kubernetes.client.okhttp.OkHttpClientImpl$4.onResponse(OkHttpClientImpl.java:268)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
Describe the solution you'd like
I've implemented a solution in PR #131, which reuses the same resources defined for the load-gen container.
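In pod-spec terms, the effect of reusing the load-gen container's resources would be that the exporter container also carries a resources block, roughly (values illustrative):

```yaml
containers:
  - name: locust-metrics-exporter
    resources:
      requests:
        cpu: 250m
        memory: 128Mi
      limits:
        cpu: 250m
        memory: 128Mi
```

With both requests and limits set, the container satisfies an OPA policy like the container-must-have-resources constraint in the error above.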
Describe the bug
After the CRD has been applied and the test has run for the specified duration, teardown fails: the worker job completes, while the master job continues to run.
To Reproduce
Steps to reproduce the behavior:
Apply a LocustTest manifest to start the test
Expected behavior
Once the test has completed, both the worker and the master pods should be in a Completed state and eventually removed.
Screenshots
Pods status after the test has completed.
Jobs status after the test has completed.
Additional context
I suspect that the problem is that on the master pod, the locust-metrics-exporter container never stops and continues to run, failing to signal job completion.
Currently, Dependabot only scans build.gradle. This project declares dependencies in another file that is not being scanned.
To help viewers better understand the project, document an example from start to finish. The example should include:
Is your feature request related to a problem? Please describe.
I often encounter timeouts and instability while pulling the locust_exporter image due to network issues. This creates a frustrating experience.
Describe the solution you'd like
I would like to have the ability to customize the locust_exporter image in the locust-k8s-operator. This customization feature would allow me to configure the image repository URL and tag, enabling me to choose a mirror or address that suits my network environment and preferences.
Not only does this modification address network-related issues, but it also makes it possible for users to further customize the exported monitoring metrics.
Replace the main README.md with proper in-depth documentation hosted on gh-pages.
Bounce: enable versioning
In order to ease tracing of generated resources, add a managed-by label to all generated resources.
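Kubernetes already reserves a well-known label for this purpose; generated resources could carry, for example:

```yaml
metadata:
  labels:
    # Recommended Kubernetes label for the tool managing the resource.
    app.kubernetes.io/managed-by: locust-k8s-operator
```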
Describe the bug
No jobs or services are run when I deploy a LocustTest.
To Reproduce
Steps to reproduce the behavior:
k3d cluster create mycluster
cd locust-k8s-operator/charts/locust-k8s-operator
helm install -f values.yaml locust-operator . -n locust-operator --create-namespace --debug --wait
kubectl create configmap demo-cm --from-file PATH_TO_FILE/demo.py
cat <<EOF >test.yaml
apiVersion: locust.io/v1
kind: LocustTest
metadata:
  name: demo.test
spec:
  image: locustio/locust:latest
  masterCommandSeed: >-
    --locustfile /lotest/src/demo.py
    --host https://www.google.com
    --users 100
    --spawn-rate 3
    --run-time 3m
  workerCommandSeed: --locustfile /lotest/src/demo.py
  workerReplicas: 3
  configMap: demo-cm
EOF
kubectl apply -f test.yaml
Expected behavior
I would expect to see two jobs and one service but I see none. Although I do see the locusttest resource and the operator.
Screenshots
# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 45h
# kubectl get job
No resources found in default namespace.
# kubectl get locusttest
NAME MASTER_CMD WORKER_REPLICA_COUNT IMAGE AGE
demo.test --locustfile /lotest/src/robot-shop.py --host http://35.237.9.210:8080/ --users 100 --spawn-rate 3 --run-time 3m 3 locustio/locust:latest 9m2s
# kubectl get all -n locust-operator
NAME READY STATUS RESTARTS AGE
pod/locust-operator-locust-k8s-operator-56d88bfcc4-zbgpx 1/1 Running 1 (118m ago) 120m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/locust-operator-locust-k8s-operator 1/1 1 1 120m
NAME DESIRED CURRENT READY AGE
replicaset.apps/locust-operator-locust-k8s-operator-56d88bfcc4 1 1 1 120m
Additional context
# helm list -n locust-operator
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
locust-operator locust-operator 1 2022-10-17 15:53:09.454928 -0400 EDT deployed locust-k8s-operator-0.1.0 0.1.0
Here are the logs, which contain more detail about the problem:
# kubectl logs pod/locust-operator-locust-k8s-operator-56d88bfcc4-zbgpx -n locust-operator
Micronaut (v3.6.2)
2022-10-17 19:54:53,983 INFO [main] io.micronaut.context.env.DefaultEnvironment: Established active environments: [k8s, cloud]
2022-10-17 19:54:56,958 INFO [main] com.locust.LocustTestOperatorStarter: Starting Kubernetes reconciler!
2022-10-17 19:54:58,361 WARN [main] io.javaoperatorsdk.operator.api.config.BaseConfigurationService: Configuration for reconciler 'locusttestreconciler' was not found. Known reconcilers: None.
2022-10-17 19:54:58,366 INFO [main] io.javaoperatorsdk.operator.api.config.BaseConfigurationService: Created configuration for reconciler com.locust.operator.controller.LocustTestReconciler with name locusttestreconciler
2022-10-17 19:54:58,544 INFO [main] io.javaoperatorsdk.operator.Operator: Registered reconciler: 'locusttestreconciler' for resource: 'class com.locust.operator.customresource.LocustTest' for namespace(s): [all namespaces]
2022-10-17 19:54:58,545 INFO [main] io.javaoperatorsdk.operator.Operator: Operator SDK 3.2.0 (commit: edd12e5) built on Mon Sep 05 08:26:50 UTC 2022 starting...
2022-10-17 19:54:58,574 INFO [main] io.javaoperatorsdk.operator.Operator: Client version: 6.1.1
2022-10-17 19:54:58,580 INFO [main] io.javaoperatorsdk.operator.processing.Controller: Starting 'locusttestreconciler' controller for reconciler: com.locust.operator.controller.LocustTestReconciler, resource: com.locust.operator.customresource.LocustTest
2022-10-17 19:54:59,547 INFO [main] io.javaoperatorsdk.operator.processing.Controller: 'locusttestreconciler' controller started, pending event sources initialization
2022-10-17 19:54:59,614 INFO [main] io.micronaut.runtime.Micronaut: Startup completed in 7597ms. Server Running: http://locust-operator-locust-k8s-operator-56d88bfcc4-zbgpx:8080
2022-10-17 21:42:36,550 INFO [ReconcilerExecutor-locusttestreconciler-39] com.locust.operator.controller.LocustTestReconciler: LocustTest created: 'demo.test'
2022-10-17 21:42:36,601 INFO [ReconcilerExecutor-locusttestreconciler-39] com.locust.operator.controller.utils.resource.manage.ResourceCreationManager: Creating service for: demo-test-master in namespace: default
2022-10-17 21:42:36,894 ERROR [ReconcilerExecutor-locusttestreconciler-39] com.locust.operator.controller.utils.resource.manage.ResourceCreationManager: Exception occurred during service creation: Failure executing: POST at: https://10.43.0.1/api/v1/namespaces/default/services. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. services is forbidden: User "system:serviceaccount:locust-operator:locust-operator-locust-k8s-operator" cannot create resource "services" in API group "" in the namespace "default".
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://10.43.0.1/api/v1/namespaces/default/services. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. services is forbidden: User "system:serviceaccount:locust-operator:locust-operator-locust-k8s-operator" cannot create resource "services" in API group "" in the namespace "default".
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:713)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:693)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.assertResponseCode(OperationSupport.java:642)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.lambda$handleResponse$0(OperationSupport.java:581)
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.complete(Unknown Source)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.lambda$retryWithExponentialBackoff$2(OperationSupport.java:622)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.complete(Unknown Source)
at io.fabric8.kubernetes.client.okhttp.OkHttpClientImpl$4.onResponse(OkHttpClientImpl.java:268)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
2022-10-17 21:42:36,934 INFO [ReconcilerExecutor-locusttestreconciler-39] com.locust.operator.controller.utils.resource.manage.ResourceCreationManager: Creating Job for: demo-test-master in namespace: default
2022-10-17 21:42:37,880 ERROR [ReconcilerExecutor-locusttestreconciler-39] com.locust.operator.controller.utils.resource.manage.ResourceCreationManager: Exception occurred during Job creation: Failure executing: POST at: https://10.43.0.1/apis/batch/v1/namespaces/default/jobs. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. jobs.batch is forbidden: User "system:serviceaccount:locust-operator:locust-operator-locust-k8s-operator" cannot create resource "jobs" in API group "batch" in the namespace "default".
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://10.43.0.1/apis/batch/v1/namespaces/default/jobs. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. jobs.batch is forbidden: User "system:serviceaccount:locust-operator:locust-operator-locust-k8s-operator" cannot create resource "jobs" in API group "batch" in the namespace "default".
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:713)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:693)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.assertResponseCode(OperationSupport.java:642)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.lambda$handleResponse$0(OperationSupport.java:581)
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.complete(Unknown Source)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.lambda$retryWithExponentialBackoff$2(OperationSupport.java:622)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.complete(Unknown Source)
at io.fabric8.kubernetes.client.okhttp.OkHttpClientImpl$4.onResponse(OkHttpClientImpl.java:268)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
2022-10-17 21:42:37,917 INFO [ReconcilerExecutor-locusttestreconciler-39] com.locust.operator.controller.utils.resource.manage.ResourceCreationManager: Creating Job for: demo-test-worker in namespace: default
2022-10-17 21:42:37,997 ERROR [ReconcilerExecutor-locusttestreconciler-39] com.locust.operator.controller.utils.resource.manage.ResourceCreationManager: Exception occurred during Job creation: Failure executing: POST at: https://10.43.0.1/apis/batch/v1/namespaces/default/jobs. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. jobs.batch is forbidden: User "system:serviceaccount:locust-operator:locust-operator-locust-k8s-operator" cannot create resource "jobs" in API group "batch" in the namespace "default".
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://10.43.0.1/apis/batch/v1/namespaces/default/jobs. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. jobs.batch is forbidden: User "system:serviceaccount:locust-operator:locust-operator-locust-k8s-operator" cannot create resource "jobs" in API group "batch" in the namespace "default".
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:713)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.requestFailure(OperationSupport.java:693)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.assertResponseCode(OperationSupport.java:642)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.lambda$handleResponse$0(OperationSupport.java:581)
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.complete(Unknown Source)
at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.lambda$retryWithExponentialBackoff$2(OperationSupport.java:622)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.complete(Unknown Source)
at io.fabric8.kubernetes.client.okhttp.OkHttpClientImpl$4.onResponse(OkHttpClientImpl.java:268)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
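The Forbidden errors above indicate the operator's ServiceAccount lacks RBAC permissions to create services and jobs outside its own namespace. A minimal sketch of the kind of grant that would unblock it (role names and verb lists are illustrative; the subject matches the ServiceAccount named in the logs):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: locust-operator-role
rules:
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["create", "get", "list", "watch", "delete"]
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: locust-operator-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: locust-operator-role
subjects:
  - kind: ServiceAccount
    name: locust-operator-locust-k8s-operator
    namespace: locust-operator
```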
codacy-coverage-reporter is not playing well with PRs from forked repos.
Is your feature request related to a problem? Please describe.
Enable the operator to support deploying cluster resources into specific node groups
Describe the solution you'd like
Through the custom resource, support a field that dictates which Kubernetes node group to use.
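A hedged sketch of what such a field could look like on the CR, mirroring the pod-spec nodeSelector (the field and the node-group label are hypothetical):

```yaml
apiVersion: locust.io/v1
kind: LocustTest
metadata:
  name: node-group-demo
spec:
  image: locustio/locust:latest
  workerReplicas: 3
  # Hypothetical field: constrain generated pods to a node group.
  nodeSelector:
    eks.amazonaws.com/nodegroup: perf-testing
```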
Update to gradle 8
https://docs.gradle.org/current/userguide/upgrading_version_7.html
Is your feature request related to a problem? Please describe.
Currently the operator sets the command as an env var. This is not aligned with the native locust image, which expects the container to be invoked with a CMD.
Describe the solution you'd like
Create the resource with a CMD instead of an env var.
Describe alternatives you've considered
A custom build of locust that accepts the env var.
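In pod-spec terms, the request is to populate the container's command/args instead of an environment variable; a sketch (the env var name in the comment is hypothetical):

```yaml
containers:
  - name: locust-master
    image: locustio/locust:latest
    # Requested: invoke the native image with a CMD...
    command: ["locust"]
    args: ["--locustfile", "/lotest/src/demo.py", "--master"]
    # ...instead of passing the command through an env var, e.g.:
    # env:
    #   - name: LOCUST_OPTS   # hypothetical name
    #     value: "--locustfile /lotest/src/demo.py --master"
```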
Setup documentation
Cover the following:
Update the Gradle wrapper to v7.6.1 to avoid an error with jib version 3.3.2.
https://github.com/AbdelrhmanHamouda/locust-k8s-operator/blob/master/charts/locust-k8s-operator/values.yaml#L14 is always pointing at the latest tag; this should be changed so it holds the value of the app version.
Most likely, this line needs to be updated to include an override to the image.tag value: https://github.com/AbdelrhmanHamouda/locust-k8s-operator/blob/master/.github/workflows/release.yaml#L65
To help users start using the operator, add a getting started section to the documentation
For better maintainability, investigate the GitHub dependency submission API along with the following GitHub Action: https://github.com/marketplace/actions/gradle-dependency-submission
To avoid having Kafka auth info injected into the pod all the time, allow this functionality to be controlled by an optional CR flag.
Example:
apiVersion: locust.io/v1
kind: LocustTest
metadata:
  name: <CR_NAME>
spec:
  ...
  kafka: enabled | disabled (default: disabled)
Is your feature request related to a problem? Please describe.
I would like to get Prometheus metrics for tests I launch programmatically. Since Prometheus scrapes all pods annotated with prometheus.io/scrape: true, metrics are stored as if they belong to the same test / system.
Describe the solution you'd like
I'd like to attach labels and annotations to the deployed pods, so that I can customize the Prometheus config. This is what I have in mind: use __meta_kubernetes_pod_label_<labelname> and __meta_kubernetes_pod_annotation_<annotationname> in kubernetes_sd_config.
Describe alternatives you've considered
I am not aware of other solutions.
Additional context
I already implemented this feature in looflow/locust-k8s-operator. Would you be interested in having this as a contribution?
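For illustration, a sketch of a Prometheus scrape job that would surface such pod labels (the locust_test label name is hypothetical):

```yaml
scrape_configs:
  - job_name: locust
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods annotated prometheus.io/scrape: "true".
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Copy a pod label onto the scraped series so tests can be told apart.
      - source_labels: [__meta_kubernetes_pod_label_locust_test]
        target_label: locust_test
```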
Establish a release process and tie it to image releases
Is your feature request related to a problem? Please describe.
Add CPU requests and limits for master and workers.
Describe the solution you'd like
Add the requests and limits for CPU and memory in the container specs.
Is your feature request related to a problem? Please describe.
I'd like to have secrets (such as tokens) available in the locust files to be used in the API calls
Describe the solution you'd like
The idea here is to create a Secret with the tokens in the locust namespace and point the LocustTest to that Secret (similarly to how the locust file is mounted from a ConfigMap). The controller would then mount the Secret into the worker pods (either as env variables or as files), and the locust files executed by the workers would then be able to get those tokens from env variables (or by reading the mounted files).
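A hedged sketch of the idea, assuming a new optional secret field on the CR analogous to the existing configMap field (the field and all names are hypothetical):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: locust-api-tokens
  namespace: locust
stringData:
  API_TOKEN: changeme
---
apiVersion: locust.io/v1
kind: LocustTest
metadata:
  name: secret-demo
spec:
  configMap: demo-cm
  # Hypothetical field: mount this Secret into worker pods as env
  # vars (or files) so locust files can read the tokens.
  secret: locust-api-tokens
```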
Describe alternatives you've considered
Currently, I need to hard-code the tokens into the locust file directly.
Additional context
n/a
GitHub Actions provides two pull-request event types, pull_request and pull_request_target; the latter allows PRs from forks to run with access to repo secrets, while the former doesn't.
The intended outcome here is to redesign the workflows so they react correctly to PRs both from within the repo and from forks.
Is your feature request related to a problem? Please describe.
Add support for imagePullSecrets to pull the customImage from a private registry.
Describe the solution you'd like
Add imagePullSecrets, together with image and imagePullPolicy, similar to Deployments.
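A hedged sketch of what this could look like on the CR, mirroring the pod-spec fields of the same names (the registry and secret names are hypothetical):

```yaml
spec:
  image: registry.example.com/custom/locust:1.0
  imagePullPolicy: IfNotPresent
  # Hypothetical field, forwarded to the generated pod specs:
  imagePullSecrets:
    - name: my-registry-credentials
```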
Describe the bug
The service selector is not applied correctly between the pod and the service.
To Reproduce
Deploy a locusttest resource and describe the generated service and pod.
Expected behavior
Both the label and the selector should be the same.
Is your feature request related to a problem?
Allow the flexibility to specify a custom namespace other than the operator namespace.
Describe the solution you'd like
Add the ServiceAccount and Role that will be created in the custom namespace.
[BUG] Github action regression: failing build on false condition
Main points to cover:
Hi, I would like to learn more about "Metrics & Dashboard". Unfortunately, there is no documentation. What functionality exists today? What integrations are required? Is there a GUI? Thank you
Is your feature request related to a problem? Please describe.
It would be great if I could upgrade the locust-exporter image without having to compile the operator and create a new release.
Describe the solution you'd like
Move the EXPORTER_IMAGE_* constants from Constants.java to values.yaml.
Describe alternatives you've considered
Perhaps the SysConfig class is the most appropriate place to declare these parameters? They'd have to be declared in several places, including Micronaut's application.yml, and Helm's values.yaml and deployment.yaml.
Additional context
I'm willing to implement this change, in case you think it'd be a good addition to the repository.
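A hedged sketch of how the exporter image could surface in the chart's values.yaml (the key structure, image name, and tag are illustrative):

```yaml
config:
  loadGenerationPods:
    metricsExporter:
      # Replaces the hard-coded EXPORTER_IMAGE_* constants.
      image: containersol/locust_exporter
      tag: latest
      pullPolicy: IfNotPresent
```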
Update to JOSDK v 4.4.1
This project is already compliant with the Conventional Commits standard. Currently, to release a new version, a manual trigger needs to take place by running "cz bump". This is extremely easy, but it would be more convenient if that part were also automated. One possible way is by using the "Release Please" utility from Google.
Starting point: https://github.com/googleapis/release-please
Criteria to check:
Describe the bug
After the last security update, the release.yaml workflow lost its permissions to the project.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
HELM and docs are updated
Allow for "-" in the test key
We want to be able to run Locust in a headless state, in auto mode, and lastly using the UI to pass in the host and other information. What changes can I make to the spec file with Kind: LocustTest to be able to connect to the locust master running on port 8089? Or should I be creating a separate spec file and somehow tying it to the Kind: LocustTest spec? The way Kubernetes does it is that we have to provide a spec file with Kind: Service, so I am looking for an equivalent to that.
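One hedged way to reach the master UI is indeed a plain Kind: Service selecting the generated master pod, analogous to what Kubernetes does natively (the selector label below is hypothetical; inspect the generated master pod for its real labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: locust-master-ui
spec:
  selector:
    # Hypothetical label; must match the operator-generated master pod.
    app: demo-test-master
  ports:
    - name: web-ui
      port: 8089
      targetPort: 8089
```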