
distributed-load-testing-on-aws's Introduction

Distributed Load Testing on AWS

The Distributed Load Testing Solution leverages managed, highly available, and highly scalable AWS services to effortlessly create and simulate thousands of connected users generating a selected number of transactions per second, originating from up to 5 simultaneous AWS regions. As a result, developers can understand the behavior of their applications at scale and under load, and identify any bottlenecks before they deploy to production.


Architecture Overview

Architecture diagram

Deployment

The solution is deployed using a CloudFormation template with a Lambda-backed custom resource. To simulate users from regions other than the region where the solution is initially deployed, a regional template must be deployed in each of the other desired regions. For details on deploying the solution, please see the solution implementation guide: Distributed Load Testing
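As an illustration, a regional stack could also be launched from the command line; a minimal sketch, assuming the regional template has already been uploaded to S3 (the stack name and template URL below are placeholders, not values defined by this solution):

# Hypothetical example: launch the regional template in a second region (eu-west-1).
aws cloudformation create-stack \
  --region eu-west-1 \
  --stack-name dlt-regional \
  --template-url https://my-bucket-eu-west-1.s3.amazonaws.com/distributed-load-testing-on-aws/latest/regional-template.template \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM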

Source Code

source/api-services
A NodeJS Lambda function for the API microservices. It is integrated with Amazon API Gateway and used to manage test scenarios.

source/console
A ReactJS single-page application that provides a GUI for the solution. Authenticated through Amazon Cognito, this dashboard allows users to create tests and view the final results.

source/custom-resource
A NodeJS Lambda function used as a CloudFormation custom resource for sending anonymized metrics, configuring the regional testing infrastructure, and handling IoT configuration.

source/infrastructure
A TypeScript AWS Cloud Development Kit (AWS CDK) v2 package that defines the infrastructure resources needed to run the Distributed Load Testing on AWS solution.

It also uses the AWS Solutions Constructs aws-cloudfront-s3 package to define the CloudFront distribution and the S3 bucket that stores the content that makes up the UI.

source/real-time-data-publisher
A NodeJS Lambda function used to publish the real time load test data to an IoT topic.

source/results-parser
A NodeJS Lambda function used to write the XML output from the Docker images to Amazon DynamoDB and generate the final results for each test.

source/solution-utils
A NodeJS package that contains commonly used functionality that is imported by other packages in this solution.

source/task-canceler
A NodeJS Lambda function used to stop tasks for a test that has been cancelled.

source/task-runner
A NodeJS Lambda function that runs the Amazon ECS task definition for each test.

source/task-status-checker
A NodeJS Lambda function that checks if the Amazon ECS tasks are running or not.

Creating a custom build

The solution can be deployed through the CloudFormation template available on the solution home page: Distributed Load Testing.

To make changes to the solution, download or clone this repository, update the source code and then run the deployment/build-s3-dist.sh script to deploy the updated Lambda code to an Amazon S3 bucket in your account.

Prerequisites

  • Node.js 16.x or later
  • An S3 bucket that includes the AWS region as a suffix in its name, for example my-bucket-us-east-1. The bucket and the CloudFormation stack must be in the same region. The solution's CloudFormation template expects the source code to be located in a bucket matching that name (see the sketch below for one way to create such a bucket).
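A bucket with this naming convention can be created with the AWS CLI; a minimal sketch, assuming us-east-1 and the example bucket name above:

# Create the source bucket with the region code as a suffix (the name is an example).
aws s3 mb s3://my-bucket-us-east-1 --region us-east-1
# Optionally block all public access on the new bucket.
aws s3api put-public-access-block --bucket my-bucket-us-east-1 \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true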

Running unit tests for customization

  • Clone the repository and make the desired code changes.
git clone https://github.com/aws-solutions/distributed-load-testing-on-aws.git
cd distributed-load-testing-on-aws
export BASE_DIRECTORY=$PWD
  • Run unit tests to make sure the updates pass the tests.
cd $BASE_DIRECTORY/deployment
chmod +x ./run-unit-tests.sh
./run-unit-tests.sh

Building distributable for customization

  • Configure the environment variables.
export REGION=aws-region-code # the AWS region to launch the solution (e.g. us-east-1)
export BUCKET_PREFIX=my-bucket-name # prefix of the bucket name without the region code
export BUCKET_NAME=$BUCKET_PREFIX-$REGION # full bucket name where the code will reside
export SOLUTION_NAME=my-solution-name
export VERSION=my-version # version number for the customized code
export PUBLIC_ECR_REGISTRY=public.ecr.aws/aws-solutions # replace with the container registry and image if you want to use a different container image
export PUBLIC_ECR_TAG=v3.2.5 # replace with the container image tag if you want to use a different container image
  • Build the distributable.
cd $BASE_DIRECTORY/deployment
chmod +x ./build-s3-dist.sh
./build-s3-dist.sh $BUCKET_PREFIX $SOLUTION_NAME $VERSION

Note: The build-s3-dist script expects the bucket name without the region suffix as one of its parameters.

  • Deploy the distributable to the Amazon S3 bucket in your account.

    • Make sure you upload the files in deployment/global-s3-assets and deployment/regional-s3-assets to $BUCKET_NAME/$SOLUTION_NAME/$VERSION (see the sketch after this list).
  • Get the link of the solution template uploaded to your Amazon S3 bucket.

  • Deploy the solution to your account by launching a new AWS CloudFormation stack using the link of the solution template in Amazon S3.
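The upload and template-link steps can be scripted; a minimal sketch, assuming the environment variables set earlier in this section and that the main template keeps the name distributed-load-testing-on-aws.template used by the hosted solution:

cd $BASE_DIRECTORY/deployment
# Copy the built assets into your bucket under the solution/version prefix.
aws s3 sync ./global-s3-assets s3://$BUCKET_NAME/$SOLUTION_NAME/$VERSION/
aws s3 sync ./regional-s3-assets s3://$BUCKET_NAME/$SOLUTION_NAME/$VERSION/
# Print the template URL to use when launching the CloudFormation stack.
echo "https://$BUCKET_NAME.s3.amazonaws.com/$SOLUTION_NAME/$VERSION/distributed-load-testing-on-aws.template"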

Creating a custom container build

This solution uses a public Amazon Elastic Container Registry (Amazon ECR) image repository managed by AWS to store the solution container image that is used to run the configured tests. If you want to customize the container image, you can rebuild and push the image into an ECR image repository in your own AWS account. For details on how to customize the container image, please see the Container image customization section of the implementation guide.
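For illustration, a typical build-and-push sequence into a private ECR repository looks like the following; a minimal sketch in which the account ID, region, and repository name are placeholders and the Dockerfile is assumed to be in the current directory:

export AWS_ACCOUNT_ID=123456789012   # placeholder account ID
export AWS_REGION=us-east-1          # placeholder region
export IMAGE_REPO=load-testing       # placeholder ECR repository name
# Authenticate Docker to the private ECR registry.
aws ecr get-login-password --region $AWS_REGION | \
  docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
# Build, tag, and push the customized image.
docker build -t $IMAGE_REPO:custom .
docker tag $IMAGE_REPO:custom $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:custom
docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:custom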

Collection of operational metrics

This solution collects anonymized operational metrics to help AWS improve the quality and features of the solution. For more information, including how to disable this capability, please see the implementation guide.


Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
SPDX-License-Identifier: Apache-2.0

distributed-load-testing-on-aws's People

Contributors

amazon-auto, bassemwanis, beomseoklee, dacgray, dimitri-lopez, drumadrian, emcfins, evanwieren, g-lenz, georgebearden, gsingh04, jpeddicord, kamyarz-aws, naxxster, pyranja, tabdunabi, tomnight


distributed-load-testing-on-aws's Issues

Failed to check ECR.

Describe the bug
The newer version of the deployment throws the error below when we start a test using a single endpoint or a JMX file.

To Reproduce
Deploy the solution using the latest release.

Expected behavior
Test should start successfully.

Please complete the following information about the solution:

  • Version: 1.3.0
  • Region: N Virginia
  • Was the solution modified from the version published on this repository? No
  • If the answer to the previous question was yes, are the changes available on GitHub?
  • Have you checked your service quotas for the services this solution uses? Yes
  • Were there any errors in the CloudWatch Logs? No

Screenshots
image

Additional context
None

Docker rate limiting

Describe the bug
I cannot run a single set of tests because I am being rate limited by the new Docker Hub policies.

To Reproduce
Installed via CloudFormation, standard install. Nothing special was done to authenticate to Docker Hub.

Expected behavior
CodeBuild builds and pushes containers to ECR.

Please complete the following information about the solution:

  • Version: 1.2.0
  • Region: us-east-1
  • Was the solution modified from the version published on this repository? No
  • If the answer to the previous question was yes, are the changes available on GitHub?
  • Have you checked your service quotas for the services this solution uses? Yes
  • Were there any errors in the CloudWatch Logs? Yes, see below

Screenshots
"Failed to check ECR"

Additional context
Could this be pointed to a public AWS container repository?

2021-01-09T21:21:37.870-06:00 | [Container] 2021/01/10 03:21:35 Entering phase BUILD
2021-01-09T21:21:37.870-06:00 | [Container] 2021/01/10 03:21:35 Running command docker build -t $REPOSITORY:latest .
2021-01-09T21:21:37.870-06:00 | Sending build context to Docker daemon 1.312MB
2021-01-09T21:21:37.870-06:00 | Step 1/8 : FROM blazemeter/taurus
2021-01-09T21:21:37.870-06:00 | toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
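A common workaround (not part of this solution's documented configuration) is to authenticate the build's Docker client to Docker Hub before the docker build step, which raises the anonymous pull limit; a minimal sketch, where DOCKERHUB_USERNAME and DOCKERHUB_TOKEN are hypothetical values you would supply yourself (for example from Secrets Manager):

# Hypothetical pre-build step: log in so the FROM blazemeter/taurus pull is authenticated.
echo "$DOCKERHUB_TOKEN" | docker login --username "$DOCKERHUB_USERNAME" --password-stdin
docker build -t $REPOSITORY:latest .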

Support for integrating with a DB (for seed data) and InfluxDB outside this solution

Hi,

In cases where the seed data used in JMeter scripts (such as a pool of UIDs/passwords, or application transaction data used in scripted workflows) is stored in a database (in our case a MySQL DB), how do we have the JMeter master/slaves communicate with a persistent DB outside this solution via a JDBC connection configuration?

Similarly, we use an InfluxDB backend listener to write test results to, which can be visualized in a Grafana dashboard in real time or afterwards.

So, if there were a way for the AWS services running the JMeter tests in this solution to communicate with a persistent seed-data DB and an InfluxDB hosted on-premises or on EC2 instances in AWS, it would greatly expand the scenarios in which this solution can be used and adopted by many.

Can you please clarify whether this can be done now, and how? If not, can you please add this capability to the feature roadmap?

Group tests for repeatable more complete test cases

This tool has some really great potential to serve more complex needs. To do that, we'll need the ability to group tests together into something like a test suite. The entire suite of tests could run against the same base_url.

The test suite should have a view that allows users to look across the various tests and see overall performance trends for the entire suite.

I'm looking to see how I can contribute but my react is a bit weak :)

Cannot reset the password in the web console

When I try to reset the password from the console URL, it displays the error message below:
User password cannot be reset in the current state.

This error occurs if the user did not confirm their account the first time and then tries to reset the password.

Steps to reproduce

  1. Create a stack using the template
  2. After successful creation, launch the console URL from the Outputs tab in the CloudFormation service
  3. Click on Reset password.
  4. Enter username
  5. Click on SEND CODE

Actual Behavior

It is displaying User password cannot be reset in the current state.

Expected Behavior

The displayed message could be more meaningful, e.g. "Account not confirmed, please check your administrator email".

How to split CSV input file amongst all containers/tasks?

I am trying to use a single large input CSV file split amongst all of the containers/tasks on which I am executing load tests. It appears that each container uses the same users (rows 1-100, for example), which results in only the first 100 users being used for the authentication step in the JMeter test, even when trying to test 1000 concurrent users.

How do you solve this? If I have a large CSV input file, how do I split it between the containers so that each has a set of unique users to use for testing?

Perpetual "Running" no metrics

Describe the bug
Larger tests get stuck in "Running" state forever and metrics never load.

The test seemingly runs fine but once the tasks spin down the process stalls and metrics never load.

To Reproduce
Test Settings

Expected behavior
Metrics to appear after test is complete

Please complete the following information about the solution:

  • [y] Version: [e.g. v1.1.0]
  • [y] Region: [e.g. us-east-1]
  • [n] Was the solution modified from the version published on this repository?
  • [n/a] If the answer to the previous question was yes, are the changes available on GitHub?
  • [y] Have you checked your service quotas for the services this solution uses?
  • [n] Were there any errors in the CloudWatch Logs?

Screenshots
Screenshot of issue

API execution example

Can you please share an example of how to execute the API from the command line? e.g. using curl.

Is there any configuration change that needs to be done on API Gateway or Cognito to allow the APIs to work?

Thanks,
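For reference, a later issue on this page ("Ability to trigger Jmeter tests from API") includes a working curl POST against the /prod/scenarios endpoint. A minimal sketch of a call from the command line is shown below; the API ID, stage, HTTP method, and the way the Authorization token is obtained from Cognito are assumptions, not details confirmed on this page:

# Hypothetical example: list test scenarios. The API ID and token are placeholders.
curl --location --request GET 'https://<api-id>.execute-api.us-east-1.amazonaws.com/prod/scenarios' \
     --header 'Authorization: <Cognito-issued token>'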

Auto Refresh Button in the Dashboard

Could you please add a configurable auto-refresh button for tests in progress? This would refresh and display the latest metrics without having to refresh the page manually.

Unable to access objects in logsbucket

Version v1.2.0
image
I'm unable to access any object that is put in the logs bucket; I believe this is because the ownership of the objects is set to the object writer. I'm also concerned that, after checking the canonical ID of the objects inside the logs bucket, it differs from the canonical ID of my AWS account. Is it supposed to be that way?

Ability to compare tests (or tests suites)

Once tests are done, it would be great to be able to select multiple tests or test suites to compare. This would allow the team to see good or bad performance trends from test to test. Graphics often speak volumes, so overlaying two tests would provide a good visual of the performance.

Ambiguous requests count in test report

I am not sure I have understood well, but if I set:

  • tasks count: 1
  • concurrency: 100
  • ramp up: 0
  • hold for: 1 minute

the total request count should be 100 * 60 * 1 = 6000; however, I got a totally different number.

image

Can someone explain why?

Thanks

Not able to resolve custom DNS

Not able to resolve a custom DNS from the Fargate VPC with a DHCP options set.
Our DNS servers are hosted in another VPC in the same account, and we set up peering between the Fargate VPC and the VPC that hosts the DNS servers. The target resource, an ALB, also lies in the VPC that hosts the DNS service. We modified the DHCP options set of the Fargate VPC, added a NAT gateway in its public subnet, and edited the security group of the target ALB to allow incoming requests from the Fargate VPC CIDR, but the tests seem to fail to launch after these changes. We hoped that the Fargate tasks, which use awsvpc networking by default, would be able to resolve the custom DNS using the VPC DHCP options set, but they didn't. Any help would be appreciated in getting the Fargate tasks to resolve the custom DNS. Thanks.

Debug error

We need a way to look at logs when an error occurs.

For example when I did a GET on an object in S3 using the system I got this error:

Error SocketException:12

This would help with debugging.

Is there any reason to create two subnets in the same AZ?

When the solution creates a VPC and two subnets, the two subnets are located in the same AZ.
I am not sure whether this is an intended implementation.
It would be better to locate them in different AZs, for a more reliable test cluster and more resource capacity.

Need a way to see actual requests being made to a target server

Is your feature request related to a problem? Please describe.
Thanks for this great solution!

Unfortunately, because all requests are made from within the container (which is fine), we aren't able to track requests that are made on the fly if the target server is unable to see them.

I would love to have a way to see real-time requests being made, or else have near real-time logs available via CloudWatch Logs or similar, so that even if the target server is continuously returning 5xx or connection timeout errors, we can quickly find out why.

Also, if a TPS graph could be shown, that would be awesome.

Ability to leverage external libraries

Is your feature request related to a problem? Please describe.

I find it difficult to add external libraries and use them with JSR223 Sampler for JMeter scripts.

Describe the feature you'd like

Ability to provide external libraries (from dashboard UI) and use them in JSR223 Sampler of JMeter scripts.

Additional context

I am trying to prepare a script to log in via Amazon Cognito, simulating the SRP flow for the client-side authentication flow (the InitiateAuth and RespondToAuthChallenge operations).

My understanding is that, to do this, I need to use an external library (the AWS SDK) via the JMeter JSR223 Sampler to generate access tokens.

Ref: https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-authentication-flow.html#amazon-cognito-user-pools-client-side-authentication-flow

Any alternative solutions are welcome.

JMeter "Ultimate Thread Group" plugin support

Describe the bug
I'd like to use the JMeter "Ultimate Thread Group" plugin to ramp up the users with a delay between each group of users. I added the plugin in JMeter and can see the config in the .jmx file. However, when I upload the .zip (.jmx & .csv users) and run the job, the job completes but does not appear to delay the ramp-up period for the groups I have configured. Instead of using those values from the JMeter script, the solution uses the ramp-up time and the concurrency (the number of virtual users) to run the load test.

To Reproduce
Add "Ultimate Thread Group" plugin to JMeter and configure threads schedule
Save the .jmx and zip along with users .csv
Upload the zipped file and run the job. I set the following as part of my testing
TASK COUNT: 1
CONCURRENCY: 1
RAMP UP: 0m
HOLD FOR: 5m

Expected behavior
I expected the solution to take the values from the JMeter script and run the load test. Instead, it used the ramp-up time and concurrency values.

Please complete the following information about the solution:

  • Version: 1.2.0
  • Region: eu-west-2
  • Was the solution modified from the version published on this repository? No
  • If the answer to the previous question was yes, are the changes available on GitHub? N/A
  • Have you checked your service quotas for the services this solution uses? Yes
  • Were there any errors in the CloudWatch Logs? No

Use of JMeter Include controller and use of top level variables in included file

What is your question?
I have a main JMeter script that references another script via the JMeter Include Controller.
This works when run outside the solution and variable values are passed, but there's an issue when run via the solution.

For the solution, the zip file contains the main JMX, plus a folder with the included JMX.
When run in the solution I can see, using debug statements, that the other script is accessed, but a variable set in the first is not set in the second.

Here's how I do it:
first JMX: in a JSR223 Sampler (Groovy), the value is set using vars.put("var", varvalue);
second JMX: accessed as ${var}
Can you tell me if this is supported?

Recent update from Diego Magalhaes - our AWS contact - indicates the following:
It goes through the files in the zip file to find the first JMX file, so if JMX 1 is listed first, the solution will run JMX 1, and if JMX 2 is listed first, the solution will run JMX 2 instead, which would cause an issue for the customer.

To summarize what I'm looking for based on the above:

  • How do I make sure my root JMX is seen first?
  • How do I pass the variable from the root to the included JMX (could it be solved by just ensuring the root is seen first)?

I'm prepping for an upcoming series of very high load tests, so I would appreciate any insight you can provide on this.
Thx, Nicola

JMeter Support

Executing JMeter scripts from the web console by uploading *.jmx and its dependencies would be a great feature to have.

Uploading large zip file for testing fails

Describe the bug
When I create a test case using a .zip file, the upload of the zip file fails.

To Reproduce

  1. Write any JMX test file
  2. Archive the test file into a .zip file (which is over 12MB)
  3. Create a test case with the .zip file
  4. The upload fails

Expected behavior
The upload should succeed and the scenario bucket should contain the uploaded zip file under the newly created scenario ID.

Please complete the following information about the solution:

  • Version: v1.2.0
  • Region: ap-northeast-1
  • Was the solution modified from the version published on this repository? No
  • If the answer to the previous question was yes, are the changes available on GitHub?
  • Have you checked your service quotas for the services this solution uses?
  • Were there any errors in the CloudWatch Logs?

Screenshots
Screen Shot 2021-04-19 at 11 34 41 PM

Screen Shot 2021-04-19 at 11 45 32 PM

Additional context

Ability to see the TPS in the results

For now, the result set doesn't contain TPS. It would be great to see the TPS (transactions per second) in the result set once the test is done (the average TPS across all tasks).

Support for JMeter plugins

Is there a way to identify whether Taurus downloads the relevant JMeter plugins referenced by the uploaded JMX? Does this solution support JMeter plugins? Please clarify.

Test id starting with "-" will cause failure of uploading result files.

The test ID, which is generated automatically and randomly when creating test scenarios, sometimes starts with "-" (e.g. "-jafifg"). Such a test ID causes an error in load-test.sh.

    cat $TEST_ID.jmx | grep filename > results.txt

If the test ID is "-jafifg", the command above is equivalent to "cat -jafifg.jmx" and will cause an error like cat: invalid option -- 'j'. This results in failure of the whole upload process in load-test.sh.

I expect that a test ID starting with "-" should never be generated.
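A minimal sketch of a defensive rewrite of the quoted line (assuming load-test.sh is a plain shell script, as the snippet suggests) that tolerates IDs beginning with "-":

# "--" ends option parsing and quoting guards against word splitting,
# so an ID such as "-jafifg" is treated as a filename rather than an option.
cat -- "$TEST_ID.jmx" | grep filename > results.txt
# An equivalent alternative is to prefix the path so it can never look like an option:
cat "./$TEST_ID.jmx" | grep filename > results.txt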

Is it still possible to use load test scenario yaml file?

In a previous version it was possible to create a load test YAML file and run it through CodePipeline. Is it still possible to do that in this version? If so, can you please point me to the documentation, as I can't find it in the official implementation guide.

It would be even better if there were a place in the UI where we could upload a list of scenarios (not JMX thread groups).

Thanks!

Incorrect Concurrency count on the initial test

Describe the bug
On the initial test after implementing DLT 2.0, the concurrency count in the Result History section is not correct. The count on subsequent tests seems fine.

To Reproduce

  1. Implement DLT 2.0
  2. Execute any JMeter test script
  3. Compare the values of CONCURRENCY on the Load Test Details page

Expected behavior
On the Load Test Details page, the value of CONCURRENCY in both the first section of the page and the Result History section should be the same.

Please complete the following information about the solution:

  • Version: 2.0
  • Region: us-east-1
  • Was the solution modified from the version published on this repository? No
  • If the answer to the previous question was yes, are the changes available on GitHub? N/A
  • Have you checked your service quotas for the services this solution uses? Yes
  • Were there any errors in the CloudWatch Logs? No

Screenshots

image

Additional context

SSO Integration

What is your question?

Is it possible for the solution to be SSO-integrated? I've been trying to configure the Cognito user pool to work with Azure AD SSO, but so far no luck. Any guidance or assistance would be much appreciated.

create EC

environment:
Region: us-west-2

When I create a stack normally and start a test task, I find that the Lambda function load-test-TaskRunner cannot start ECS tasks. The error log is as follows:

2019-11-19T15:56:23.543Z	28972b44-f39d-502e-bfad-7d0ae15297ac	INFO	{ InvalidParameterException: Unable to assume the service linked role. Please verify that the ECS service linked role exists.
    at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/protocol/json.js:51:27)
    at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
    at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:683:14)
    at Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:685:12)
    at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
  message:
   'Unable to assume the service linked role. Please verify that the ECS service linked role exists.',
  code: 'InvalidParameterException',
  time: 2019-11-19T15:56:23.499Z,
  requestId: '7a6d7064-768a-425f-bfae-698c0515e607',
  statusCode: 400,
  retryable: false,
  retryDelay: 71.86203577055117 }

I checked that the execution role of the Lambda function is there.
What is wrong with my environment?
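The error message indicates the ECS service-linked role may be missing from the account. One way to create it, using the standard AWS CLI call rather than anything specific to this solution, is sketched below:

# Create the ECS service-linked role if it does not already exist in the account.
aws iam create-service-linked-role --aws-service-name ecs.amazonaws.com
# Verify that the role now exists.
aws iam get-role --role-name AWSServiceRoleForECS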

Jtl log file access

We did some research on where to find the JTL result file, but we couldn't find an answer. Is there any way to access the JTL log file, or is it not created on completion of a test?
Also, we can't see what the errors are (other than 500 internal server errors) after running a load test. We can only see how many errors have occurred.
InkedConcurrency-150-Total_LI

Please provide sample of HTTP Headers

What is your question?

When trying to pass an API key as a header parameter, I am getting the error "WARNING: headers text is not valid JSON".

Please provide a sample with the expected format.
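The warning suggests the headers field expects a JSON object of header name/value pairs. A minimal sketch is shown below; the header name x-api-key and its value are examples, not values required by the solution:

{"x-api-key": "YOUR_API_KEY_VALUE"}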

Variables/Parameter Support

If you need to test some sort of data creation (say, creating random customers or addresses), we need a way to specify this in the POST body so the performance test can call a 'create-data' type of API with random values.

Right now, it seems the POST body is set once and data calls will be repeated with the same body.

Or am I missing something?

Connection timed out

I am facing a "Connection timed out" error.

Task : 466726df7d984f67ac487636425fd957

Fargate Tasks fail to finish

Context:

Task Count: 40
TPS per Task: 100

Time to peak: 5m
Time to hold: 10m

Of the 40 tasks, around 5-10 of them usually fail to finish, and I cannot see the result since the task is still in the 'Running' state. My options are to stop the task or to evict the tasks from ECS and review the results manually in DynamoDB.

Even waiting 1-5 hours after the defined time to reach peak and hold doesn't seem to have any effect. There are still some tasks hanging.

Ability to trigger Jmeter tests from API

@beomseoklee I saw your answer regarding uploading zip files to include CSV configurations in an upcoming release. Would it also be possible to point DLT to a zip in S3? I am currently kicking off simple tests from Jenkins via cURL. When I look at the JSON test scenario after kicking off a JMeter load test from the UI, I see: {"script":"***.jmx"}. Is it possible to do the following:

curl --location --request POST 'https://****.us-east-1.amazonaws.com/prod/scenarios' \
                       --header 'Content-Type: text/plain' \
                       --data-raw '{
                           "testDescription": "'"$TEST_DESCRIPTION"'",
                           "testName": "'"$TEST_NAME"'",
                           "testType": "'"$TEST_TYPE"'",
                           "taskCount": "'"$TASK_COUNT"'",
                           "testScenario": {
                               "execution": [
                                   {
                                       "concurrency": "'"$CONCURRENT_USERS"'",
                                       "ramp-up": "'"$RAMP_UP_TIME"''"$RAMP_UP_TIME_UNIT"'",
                                       "hold-for": "'"$HOLD_CONCURRENCY_TIME"''"$HOLD_CONCURRENCY_TIME_UNIT"'",
                                       "scenario": "'"$TEST_NAME"'"
                                   }
                               ],
                               "scenarios": {
                                   "'"$TEST_NAME"'": {
                                       "script": "Jmeter-scripts/***.jmx"
                                   }
                               }
                           }
                       }'

I tried the above and I am able to trigger a test, but it fails while parsing an empty array.

2020-10-26T19:23:28.063Z	362131ee-2f7c-45cd-987e-830a20c2d552	ERROR	TypeError: Reduce of empty array with no initial value
    at Array.reduce (<anonymous>)
    at Object.finalResults (/var/task/lib/parser/index.js:178:50)
    at Runtime.exports.handler (/var/task/index.js:37:26)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)

When I run the same script via the UI, it works.

Order Dashboard jobs listing by Last Run Descending

The current ordering by job name is a bit unpredictable if multiple teams are doing various custom runs. Ordering by last run descending means the most recent runs will show up at the top and make this tool easier to use.

Alternatively, adding a sort toggle on each column would also be a suitable solution, but I think what I've suggested above is a better default than the current ordering by name.

If it helps, I expect I can submit a pull request for this. I haven't dug into the particular code details yet.

Cost estimate

Is there any estimate of what running this solution costs?

Uploading CSV Data Set Config for JMeter

How can we upload the CSV Data Set Config file that will be used in the JMeter script? Right now, I do not see any option to upload it in the web console. Please clarify.

Test Failed: Test might be failed to run.

What is your question?

I deployed the v2 version of the template and ran a Simple test successfully. However, when I ran a JMX script that was working previously, I got "Test Failed". I don't see much in the way of logs as to why. I see the ECS container spinning up and in the running state, and the Step Functions status comes back as "Complete" when the JMX script runs. My question is where we can look to see why this JMX test failed. I looked in DynamoDB, and in the errorReason field I see "Test might be failed to run", but nothing else. Please let me know the recommended next steps.

Update README to explain deployment process clearly

This file:
https://github.com/awslabs/distributed-load-testing-on-aws/blob/master/deployment/distributed-load-testing-on-aws.yaml

includes the following Bucket and Key Mapping:

SourceCode:
  General:
    S3Bucket: CODE_BUCKET
    KeyPrefix: SOLUTION_NAME/CODE_VERSION

In the template published here:
https://aws.amazon.com/solutions/distributed-load-testing-on-aws/?did=sl_card&trk=sl_card

that is hosted at this path:
https://s3.amazonaws.com/solutions-reference/distributed-load-testing-on-aws/latest/distributed-load-testing-on-aws.template

The Mapping is this:

SourceCode:
  General:
    S3Bucket: solutions
    KeyPrefix: distributed-load-testing-on-aws/v1.0.0

This should be explained in more detail so that a contributor planning to build and improve this solution knows that the template needs to be updated with the bucket and key of the new owner's AWS account.
