
jmeter-ecs's Introduction

JMeter for ECS

JMeter Images for Distributed Testing on EC2 Container Service (ECS)

This application uses two images:

  • smithmicro/jmeter - Contains the JMeter software that is deployed in ECS
  • smithmicro/lucy - The orchestration image that can run behind a corporate firewall and manages AWS resources

Warning: Using these Docker images will incur compute and storage costs in AWS. Care is taken to terminate all instances and volumes after JMeter tests complete, but bugs could allow these resources to continue to run. See the issues list for more detail.

How to Use

The smithmicro/lucy Docker image can be run as-is with a number of required environment variables.

Prerequisites to use this image:

  • Create a VPC with at least one subnet, as ECS requires the use of a VPC **
  • Create a VPC security group that allows ports 22, 1099, 50000 and 51000 (TCP) and 4445 (UDP) within the VPC **
  • Create a security key pair and place it in the keys subdirectory
  • Have your AWS CLI Access Key ID/Secret Access Key handy
  • Replace or edit the included plans/demo.jmx to run your specific tests
  • Ensure you have a Role named ecsInstanceRole. This is created by the ECS first-run experience.

** If you do not have a VPC created, you can use the included aws-setup.sh script to create the VPC, Subnet and required Security Group.

Docker run template:

docker run -v <path to jmx>:/plans -v <path to pem>:/keys -v <path to logs>:/logs \
    --env AWS_ACCESS_KEY_ID=<key id> \
    --env AWS_SECRET_ACCESS_KEY=<access key> \
    --env AWS_DEFAULT_REGION=<region> \
    --env SECURITY_GROUP=<security group within your VPC> \
    --env SUBNET_ID=<subnet IDs within your VPC> \
    --env KEY_NAME=<key pair name without extension> \
    --env MINION_COUNT=<number of minions> \
    smithmicro/lucy /plans/demo.jmx

For 5 test instances in N. Virginia, docker run would look like this, assuming your jmeter-key.pem file is located in the keys subdirectory:

docker run -v $PWD/plans:/plans -v $PWD/keys:/keys -v $PWD/logs:/logs \
    --env AWS_ACCESS_KEY_ID=ABCDEFGHIJKLMNOPQRST \
    --env AWS_SECRET_ACCESS_KEY=abcdefghijklmnopqrstuvwxyz0123456789ABCDEF \
    --env AWS_DEFAULT_REGION=us-east-1 \
    --env SECURITY_GROUP=sg-12345678 \
    --env SUBNET_ID=subnet-12345678,subnet-87654321 \
    --env KEY_NAME=jmeter-key \
    --env MINION_COUNT=5 \
    smithmicro/lucy /plans/demo.jmx

Architecture

This Docker image replaces the JMeter master/slave nomenclature with Gru, Minion and Lucy. Gru manages the Minions from within ECS, but Lucy orchestrates the entire process.

+--------------------------------------+
|  EC2                                 |
|  +--------------------------------+  |
|  |  ECS                           |  |
|  |                +--------+      |  |
|  |  +-------+     | +--------+    |  |      +--------+
|  |  |       |---->| | +--------+ ---------->|        |
|  |  |  Gru  |<----| | |        | ---------->| Target |
|  |  |       |     +-| | Minion | ---------->|        |
|  |  +-------+       +-|        |  |  |      +--------+
|  |     ^ |            +--------+  |  |
|  +-----|-|------------------------+  |
+--------|-|---------------------------+
         | |
    .jmx | | .log/.jtl
         | v
     +----------+
     |          |
     |   Lucy   |
     |          |
     +----------+

Lucy runs the lucy.sh script to perform the following steps:

  • Step 1 - Create the ECS Cluster
  • Step 2 - Wait for the cluster to have all container instances registered
  • Step 3 - Run a Minion Task with the requested instance count
  • Step 4 - Get Gru's and the Minions' instance IDs
  • Step 5 - Get IP addresses for Gru and the Minions
  • Step 6 - Copy all files (or just the JMX) to the Minions and Gru
  • Step 7 - Run Gru with the specified JMX
  • Step 8 - Fetch the results from Gru
  • Step 9 - Delete the cluster

Volumes

The lucy container uses 3 volumes:

  • /plans - mapped into the orchestrator to provide the input JMX files
  • /keys - mapped into the orchestrator to provide the PEM file
  • /logs - mapped into the orchestrator to provide the output jmeter.log and results.jtl

Environment Variables

The following required and optional environment variables are supported:

| Variable | Required | Default | Notes |
| --- | --- | --- | --- |
| AWS_DEFAULT_REGION | Yes | None | AWS Region (e.g. us-east-1) |
| AWS_ACCESS_KEY_ID | Yes | None | AWS Access Key |
| AWS_SECRET_ACCESS_KEY | Yes | None | AWS Secret Key |
| INPUT_JMX | Yes | None | File path of the JMeter test file (.jmx) to run. You can optionally pass this as the first command-line argument to docker run |
| KEY_NAME | Yes | None | AWS key pair .pem file (do not include the .pem extension) |
| SECURITY_GROUP | Yes | None | AWS security group that allows ports 22, 1099, 50000 and 51000 (TCP) and 4445 (UDP) (e.g. sg-12345678) |
| SUBNET_ID | Yes | None | One or more subnets (comma-separated) that are assigned to your VPC |
| VPC_ID | No | VPC assigned to SUBNET_ID | Automatically derived from your SUBNET_ID |
| JMETER_VERSION | No | latest | smithmicro/jmeter image tag. See Docker Hub for available versions |
| INSTANCE_TYPE | No | t2.micro | To double your memory, pass t2.small |
| MEM_LIMIT | No | 950m | If you are using t2.small, set MEM_LIMIT to 1995m |
| MINION_COUNT | No | 2 | Number of Minion instances to launch |
| PEM_PATH | No | /keys | Must match your volume map. See the Volumes section above |
| CLUSTER_NAME | No | JMeter | Name that appears in your AWS cluster UI |
| GRU_PRIVATE_IP | No | None | Set to true if you would like to run Lucy within AWS. See GitHub issue 8 for details |
| JMETER_FLAGS | No | None | Custom JMeter command-line options. For example, passing -X will tell the Minion to exit at the end of the test |
| RETAIN_CLUSTER | No | None | Set to true if you want to re-use your cluster for future tests. Warning: you will incur AWS charges if you leave your cluster running |
| CUSTOM_PLUGIN_URL | No | None | URL of a custom plugin to install in the Minions. The file will be copied to $JMETER_HOME/lib/ext |
| COPY_DIR | No | None | Set to true to copy the directory containing the .jmx file to all Minions and Gru. The files will be located in /plans in all Docker containers; update your JMX file to reference external files at /plans/... |

Notes

All current JMeter Plugins are installed via the Plugins Manager.

For more information on JMeter Distributed Testing, see https://jmeter.apache.org/usermanual/remote-test.html

Inspired by...

https://en.wikipedia.org/wiki/Despicable_Me_2


jmeter-ecs's People

Contributors

dsperling, kostanos


jmeter-ecs's Issues

SSL certificate parameters

Hi, the server I am testing against needs an SSL certificate. I am aware that we can copy the whole /plans folder to ECS now, so I copied the p12 file into the /plans folder and added "--env JMETER_FLAGS=-Djavax.net.ssl.keyStore=/plans/someCert.p12" to set it as a system property when running lucy.

After running it, the results are still all 401. Going through the logs, the log on Gru has the line "Setting System property: javax.net.ssl.keyStore=/plans/.....p12", but the logs on the minions don't have this line. So I am wondering if the variable I added is enough, or whether some other changes need to be made?

Ability to Pass entire contents of /plans directory to Gru

This would be more of a feature request, but having lucy pass all files from the /plans directory to gru's /plans directory would be very useful.

We have a few jmx files that use test data contained in a csv file; the master reads it in when it launches, creates specific test plans for each slave and hands them out.

With only the jmx file being provided to gru, it is unable to load the test data.

The minion instances are not recognized

Trying to run the lucy docker image, I got an issue with detecting the Minion and Gru instances. In my case, all 3 instances are detected as Gru and there are no minions at all:

...
lucy_1  | Gru instance ID: i-**********
lucy_1  | i-*********
lucy_1  | i-**********
lucy_1  | Minion instances IDs: 
lucy_1  | Gru at **.***.***.*****.**.**.****.***.***.***
...

For some reason there was no instance with a running task.
Is there any other way to determine Gru vs Minion than by running-task count, as the code expects:

...
# Step 4 - Get Gru and Minion's instance ID's.  Gru is the container with a runningTasksCount = 0
...

Don't wait for instances when `ecs-cli up` fails

Related to #52

If for some reason the ecs-cli up command fails, the script ignores the situation, steps over it, and then hangs waiting for instances to show up. Also, if you Ctrl-C to stop the process, the cluster that was created will not be deleted.

java.lang.SecurityException: class "org.bouncycastle.asn1.pkcs.PBES2Algorithms"'s signer information does not match signer information of other classes in the same package

Hi,
we were using JMeter 4.0 all this time, and the last successful run we got for our project was on 01-Apr-2020. But when we ran the same project (without any change) on 12-May-2020 it suddenly stopped working with the error "java.lang.SecurityException: class "org.bouncycastle.asn1.pkcs.PBES2Algorithms"'s signer information does not match signer information of other classes in the same package".
We then tried updating JMeter to the latest version, 5.1, but it just does not start the run: it gets stuck at "Starting the test @ Mon May 18 17:48:34 GMT 2020" after "Downloaded newer image for smithmicro/jmeter:5.1", and the jmeter.log file is repeatedly updated with the above error. Here is the full stack trace for reference; any help/suggestions are greatly appreciated.

2020-05-18 17:57:32,058 ERROR o.a.j.r.ClassFinder: Error filtering class org.bouncycastle.asn1.pkcs.PBES2Algorithms, it will be ignored
java.lang.SecurityException: class "org.bouncycastle.asn1.pkcs.PBES2Algorithms"'s signer information does not match signer information of other classes in the same package
at java.lang.ClassLoader.checkCerts(ClassLoader.java:898) ~[?:1.8.0_212]
at java.lang.ClassLoader.preDefineClass(ClassLoader.java:668) ~[?:1.8.0_212]
at java.lang.ClassLoader.defineClass(ClassLoader.java:761) ~[?:1.8.0_212]
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) ~[?:1.8.0_212]
at java.net.URLClassLoader.defineClass(URLClassLoader.java:468) ~[?:1.8.0_212]
at java.net.URLClassLoader.access$100(URLClassLoader.java:74) ~[?:1.8.0_212]
at java.net.URLClassLoader$1.run(URLClassLoader.java:369) ~[?:1.8.0_212]
at java.net.URLClassLoader$1.run(URLClassLoader.java:363) ~[?:1.8.0_212]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_212]
at java.net.URLClassLoader.findClass(URLClassLoader.java:362) ~[?:1.8.0_212]
at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[?:1.8.0_212]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[?:1.8.0_212]
at java.lang.Class.forName0(Native Method) ~[?:1.8.0_212]
at java.lang.Class.forName(Class.java:348) ~[?:1.8.0_212]
at org.apache.jorphan.reflect.ClassFinder$ExtendsClassFilter.isChildOf(ClassFinder.java:104) ~[jorphan.jar:5.1 r1853635]
at org.apache.jorphan.reflect.ClassFinder$ExtendsClassFilter.accept(ClassFinder.java:90) ~[jorphan.jar:5.1 r1853635]
at org.apache.jorphan.reflect.ClassFinder.applyFiltering(ClassFinder.java:496) [jorphan.jar:5.1 r1853635]
at org.apache.jorphan.reflect.ClassFinder.findClassesInOnePath(ClassFinder.java:455) [jorphan.jar:5.1 r1853635]
at org.apache.jorphan.reflect.ClassFinder.findClasses(ClassFinder.java:340) [jorphan.jar:5.1 r1853635]
at org.apache.jorphan.reflect.ClassFinder.findClassesThatExtend(ClassFinder.java:306) [jorphan.jar:5.1 r1853635]
at org.apache.jorphan.reflect.ClassFinder.findClassesThatExtend(ClassFinder.java:238) [jorphan.jar:5.1 r1853635]
at org.apache.jorphan.reflect.ClassFinder.findClassesThatExtend(ClassFinder.java:220) [jorphan.jar:5.1 r1853635]
at org.apache.jorphan.reflect.ClassFinder.findClassesThatExtend(ClassFinder.java:186) [jorphan.jar:5.1 r1853635]
at org.apache.jmeter.threads.RemoteThreadsListenerImpl.(RemoteThreadsListenerImpl.java:61) [ApacheJMeter_core.jar:5.1 r1853635]
at org.apache.jmeter.engine.ConvertListeners.addNode(ConvertListeners.java:64) [ApacheJMeter_core.jar:5.1 r1853635]
at org.apache.jorphan.collections.HashTree.traverse(HashTree.java:976) [jorphan.jar:5.1 r1853635]
at org.apache.jmeter.engine.ClientJMeterEngine.runTest(ClientJMeterEngine.java:137) [ApacheJMeter_core.jar:5.1 r1853635]
at org.apache.jmeter.engine.DistributedRunner.start(DistributedRunner.java:132) [ApacheJMeter_core.jar:5.1 r1853635]
at org.apache.jmeter.engine.DistributedRunner.start(DistributedRunner.java:149) [ApacheJMeter_core.jar:5.1 r1853635]
at org.apache.jmeter.JMeter.runNonGui(JMeter.java:1084) [ApacheJMeter_core.jar:5.1 r1853635]
at org.apache.jmeter.JMeter.startNonGui(JMeter.java:986) [ApacheJMeter_core.jar:5.1 r1853635]
at org.apache.jmeter.JMeter.start(JMeter.java:563) [ApacheJMeter_core.jar:5.1 r1853635]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_212]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_212]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_212]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_212]
at org.apache.jmeter.NewDriver.main(NewDriver.java:253) [ApacheJMeter.jar:5.1 r1853635]

The Gui generated jtl file ,the JTL report showing No data to display in view results tree

we are facing the following challenges using the JMeter docker image; kindly give your feedback:

  1. "Save response to a file" stores the file in HTML format on the slave server when errors occur.

  2. For the GUI-generated jtl file, the JTL report shows "No data to display" in View Results Tree.

  3. The failed request response time and request count are shown incorrectly in the Aggregate report when using the JTL output file.

  4. After successfully downloading the report, clicking the index HTML report shows wrong details for the following graphs:

    a. Response Times Over Time
    b. Response Time Percentiles Over Time (successful responses)
    c. Active Threads Over Time
    d. Bytes Throughput Over Time
    e. Latencies Over Time
    f. Connect Time Over Time
    g. Hits Per Second
    h. Codes Per Second

How to add SSL p12 file to Jmeter?

My test server requires an SSL connection, which needs a p12 file.
When I run JMeter locally, I must change system.properties and add javax.net.ssl.keyStore=xxxxx.p12.
How can I add my certificate file xxxx.p12 and the system.properties file to JMeter?

Environment variable "JMETER_FLAGS" is passed to Lucy, but is not passed to Gru

I added JMETER_FLAGS to the docker run command "docker run -v $PWD/plans:/plans -v $PWD/keys:/keys -v $PWD/logs:/logs --env AWS_ACCESS_KEY_ID=XXXXXXXXXX --env AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXX --env AWS_DEFAULT_REGION=us-west-1 --env SECURITY_GROUP=XXXXXXXXXXXXX --env SUBNET_ID=XXXXXXXXXXXXX,XXXXXXXXXXXXX --env KEY_NAME=XXXXXXXXXXXXX --env JMETER_FLAGS=-Jduration=60 --env MINION_COUNT=2 smithmicro/lucy /plans/xxx.jmx", but "JMETER_FLAGS" is only passed to Lucy, not to Gru.

I think the problem is in lucy.sh, line 144:
"docker run -p 1099:1099 -p 51000:51000 -v /tmp:/plans -v /logs:/logs --env MINION_HOSTS=$MINION_HOSTS smithmicro/jmeter:$JMETER_VERSION $JMX_IN_COMTAINER"

It should be:
"docker run -p 1099:1099 -p 51000:51000 -v /tmp:/plans -v /logs:/logs --env MINION_HOSTS=$MINION_HOSTS --env JMETER_FLAGS=$JMETER_FLAGS smithmicro/jmeter:$JMETER_VERSION $JMX_IN_COMTAINER"

My load test runs for 3 hours; how can I check what is happening during the run?

Hi smith,
I have done load testing using this interface and it is really very good, but I need some clarification. When I run a long load test against my existing environment, sometimes it succeeds and sometimes an error occurs after 5 or 10 minutes. On a Windows machine I can spot this easily on the fly, but in this ECS environment I have to wait for full completion, which wastes time. Kindly guide me on the correct way to get results while the test is still running.

Slaves can't connect back to master

First, I tried to incorporate the 5.0 version of JMeter in a local fork. The containers seem to work fine, no crashing, with the heap reduced a little. The test starts on the master and the slaves seemingly receive commands from it, but they can't connect back to the master to send reports, and Lucy stays at:

[01:49:48] Creating summariser <summary>
[01:49:48] Created the tree successfully using /plans/demo.jmx
[01:49:48] Configuring remote engine: 172.30.2.60
[01:49:48] Configuring remote engine: 172.30.2.40
[01:49:48] Starting remote engines
[01:49:48] Starting the test @ Thu Jan 17 01:49:48 GMT 2019 (1547689788786)
[01:49:52] Remote engines have been started
[01:49:52] Waiting for possible Shutdown/StopTestNow/Heapdump message on port 4445

Looking at the ECS dashboard I can only see the task definitions of the slaves. The master's container instance indicates that all memory resources are free ("Registered 985, Available 985"), ports 1099 and 50000 are not in use (or not bound) and there are 0 tasks running. But SSH-ing into Gru's EC2 instance shows that the container is running and the ports are bound:

0070afad1be6        innokentiyt/jmeter:5.0h          "/opt/jmeter/entrypo…"   15 minutes ago      Up 15 minutes       0.0.0.0:1099->1099/tcp, 0.0.0.0:51000->51000/tcp, 4445/udp, 50000/tcp   amazing_elion

The master can connect to the RMI ports of the slaves (tested using telnet and nc, both from the host and from inside the container). The slaves cannot connect back. I tried to loosen the VPC's Security Group rules, but it did not help.

Sample send mode

In the current constellation, when there are too many clients, the bandwidth is eaten up by the samples being sent to the server. Samples cannot reach the server in time, so the client slows down the test because it waits until the server has received each sample.

To fix that, another sender mode has to be set, for example disk or async:
https://jmeter.apache.org/usermanual/remote-test.html#sendermode
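As a sketch of what is being requested (the property names come from the JMeter remote-testing manual linked above; the values shown are illustrative, not project defaults), the sender mode could be set in the JMeter properties baked into the images, or passed per run through the JMETER_FLAGS environment variable:

```
# user.properties fragment: use an asynchronous sender mode so samples
# are queued on the client instead of blocking the running test
mode=StrippedAsynch
asynch.batch.queue.size=100

# or, per run via lucy (illustrative):
#   --env JMETER_FLAGS=-Jmode=StrippedAsynch
```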

Option to use Instance Private IPs

This isn't necessarily an issue, but adding a flag or environment variable to choose whether the script collects Gru's and the Minions' private IPs instead of their public ones would be useful.

For example, my environment is entirely in Amazon including where lucy would run from. Having the instances within a vpc and security group that is locked down so the systems can only talk to each other and reach the internet allows for better overall security versus opening up port 22 to the entire world, along with the other necessary ports.

I modified the script and rebuilt lucy to give this a try and it worked just fine:

# Step 7 - Get private IP addresses from Gru and Minions
GRU_HOST=$(aws ec2 describe-instances --instance-ids $GRU_INSTANCE_ID \
      --query 'Reservations[*].Instances[*].[PrivateIpAddress]' --output text | tr -d '\n')
echo "Gru at $GRU_HOST"

MINION_HOSTS=$(aws ec2 describe-instances --instance-ids $MINION_INSTANCE_IDS \
      --query 'Reservations[*].Instances[*].[PrivateIpAddress]' --output text | tr '\n' ',')
echo "Minions at at $MINION_HOSTS"

Again, not necessarily an issue, as the package currently works as-is, but it would be a nice feature.

java.net.ConnectException: Connection refused (Connection refused)

I achieved your master/slave setup, but sometimes the result reports are generated and sometimes they are not. When I checked the logs I got this: Connection refused to host: 10.74.2.147; nested exception is:
java.net.ConnectException: Connection refused (Connection refused)
in the AWS ECS container. Kindly let me know.

JMeter 3.1 Docker image not working correctly

Hi smith,

When I run the 3.1 version I get the execution error below. Please tell me where the problem is; I think it's in the docker images.

Execution error:

Starting remote engines
Starting the test @ Tue Apr 02 13:16:24 GMT 2019 (1554210984775)
Error in rconfigure() method java.rmi.MarshalException: error marshalling arguments; nested exception is:
java.net.SocketException: Broken pipe (Write failed)
Remote engines have been started
Waiting for possible Shutdown/StopTestNow/Heapdump message on port 4445

Unable to run more than 10 minion tasks

There is an ECS limit that restricts a single run-task command to a maximum count of 10.

Adding the ability to break the command up when there are more than 10 minions would be helpful when a load test requires high volume.
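The 10-task ceiling can be worked around by looping over batches. A minimal sketch (the commented aws call is illustrative, not the actual lucy.sh code):

```shell
# Split MINION_COUNT into run-task batches of at most 10, since a single
# `aws ecs run-task --count` call is capped at 10 tasks.
MINION_COUNT=23
REMAINING=$MINION_COUNT
while [ "$REMAINING" -gt 0 ]; do
  BATCH=$(( REMAINING > 10 ? 10 : REMAINING ))
  # In lucy.sh this echo would instead be a call such as:
  #   aws ecs run-task --cluster "$CLUSTER_NAME" --task-definition jmeter --count "$BATCH"
  echo "run-task --count $BATCH"
  REMAINING=$(( REMAINING - BATCH ))
done
# prints: run-task --count 10, run-task --count 10, run-task --count 3
```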

Support Control-C to stop tests

Currently, aborting the lucy.sh script will leave the following AWS resources running:

  • Gru
  • Minions
  • ECS Cluster
  • ECS Task

Run Gru and Minions in one cluster

Now that we are using ECS CLI to create all clusters and instances, we are creating two clusters:

  • JMeterGru - running 1 instance
  • JMeterMinion - running all minion instances

This currently slows the setup and teardown time by 30 seconds.

Once ECS CLI supports the ability to scale instances per service, we could use the following procedure:

  1. Create Cluster with MINION_COUNT + 1 instances
  2. compose up with an updated lucy.yml creating MINION_COUNT minions and 1 Gru
  3. Upgrade Gru to pull the JMX file from an s3 bucket

Another option:

  1. Create Cluster with MINION_COUNT + 1 instances
  2. compose up with a lucy.yml creating MINION_COUNT minions
  3. Find the IP address of the free instance within the cluster (procedure????)
  4. Start Gru on the free instance

Retaining the cluster causes containers to keep the port busy

If the cluster is retained, the following happens when trying to do another test run:

lucy_1 | Minions at 172.31.36.28,172.31.30.216,172.31.35.238,172.31.18.154,172.31.43.111,172.31.46.132,172.31.33.178,172.31.30.205,172.31.37.225,172.31.43.145,172.31.43.198,172.31.30.29,172.31.11.207,172.31.32.122,172.31.2.35,172.31.14.72,172.31.9.104,172.31.15.244,172.31.15.123,172.31.1.211,172.31.5.13,172.31.36.53,172.31.41.171,10.74.1.22,10.74.1.119,172.31.32.24,172.31.45.162,172.31.37.187,172.31.41.38,172.31.34.139,172.31.24.30,172.31.9.165,172.31.36.205,172.31.38.45,10.0.1.177,172.31.41.12,172.31.12.89,172.31.32.49,172.31.0.239,172.31.34.248,None,172.31.25.152,172.31.46.203,172.31.5.71,172.31.24.167,172.31.24.135,172.31.33.56,172.31.33.17,172.31.44.126,172.31.46.158,172.31.5.203,172.31.33.136,172.31.37.190,172.31.33.176,172.31.28.80,172.31.1.166,172.31.11.91,172.31.45.134,None,172.31.4.6,172.31.3.89,172.31.28.126,172.31.37.114,172.31.15.153,172.31.10.180,172.31.44.44,172.31.41.248,172.31.35.119,172.31.41.46,10.0.0.121,172.31.35.74,172.31.22.195,172.31.32.129,172.31.38.115,172.31.36.18,172.31.33.150,172.31.26.188,172.31.39.6,172.31.0.110,172.31.40.125,172.31.44.231,10.0.0.180,172.31.2.177,172.31.40.134,172.31.15.182,172.31.36.44,172.31.46.174,172.31.16.199,172.31.42.191,None,172.31.39.72,None,172.31.40.202,172.31.7.248,172.31.33.53,172.31.16.193,172.31.0.209,172.31.34.62,None,172.31.43.154,172.31.38.100,172.31.9.249,172.31.10.0,172.31.43.78,172.31.24.144,172.31.31.235,172.31.37.111,172.31.24.85,172.31.42.17,172.31.16.131,172.31.40.163,172.31.14.237,172.31.46.173,172.31.32.112,None,172.31.4.1,172.31.39.60,172.31.15.206,172.31.40.59,172.31.2.192,172.31.10.41,10.0.1.107,172.31.46.185,172.31.3.93,
lucy_1 | Copying /plans/demo.jmx to Gru
lucy_1 | ssh: Could not resolve hostname 54.233.252.21754.207.105.77: Name does not resolve
lucy_1 | lost connection
lucy_1 | Running Docker to start JMeter in Gru mode
lucy_1 | ssh: Could not resolve hostname 54.233.252.21754.207.105.77: Name does not resolve
lucy_1 | Copying results from Gru
lucy_1 | ssh: Could not resolve hostname 54.233.252.21754.207.105.77: Name does not resolve
lucy_1 | cluster/JMeter is retained upon request.

or:

docker: Error response from daemon: driver failed programming external connectivity on endpoint lucid_clarke (355735cf1d09ee6038688d2e5f61b2ede2680f76b8d4d7c9e05561d2518f6711): Bind for 0.0.0.0:51000 failed: port is already allocated

Also, I had to comment out or remove the line # - RETAIN_CLUSTER=false in order to get the cluster deleted; probably another issue altogether.

Allow multiple test runs per Cluster lifetime

AWS minimum billing granularity is per hour, so allow the Cluster to accept multiple test runs.

Idea:
Keep the cluster alive for 50 minutes and allow additional test runs using the lucy image. Lucy will need to detect whether the cluster is running, determine the IP addresses for Gru and all the Minions, and execute a new job.

https://aws.amazon.com/ec2/faqs/

Script can't handle more than 100 instances

Hi,

First, thanks for this very cool and easy-to-set-up "script"!

I planned a huge load test ;)
The script has problems with more than 99 instances.

Error: when calling the DescribeContainerInstances operation: instanceIds can have at most 100 items.

Also there are a lot of "nonenonenonenonenonenonenonenonenonenone" runs in the output:

Instance count is 100
Instance count is 101
time="2018-08-15T16:55:36Z" level=warning msg="Environment variable is unresolved. Setting it to a blank value..." key name=CUSTOM_PLUGIN_URL
time="2018-08-15T16:55:36Z" level=info msg="Using ECS task definition" TaskDefinition="jmeter:51"
time="2018-08-15T16:55:37Z" level=info msg="Couldn't run containers" reason="RESOURCE:MEMORY"
time="2018-08-15T16:55:37Z" level=warning msg="Environment variable is unresolved. Setting it to a blank value..." key name=CUSTOM_PLUGIN_URL
time="2018-08-15T16:55:37Z" level=info msg="Using ECS task definition" TaskDefinition="jmeter:51"
time="2018-08-15T16:55:37Z" level=info msg="Couldn't run containers" reason="RESOURCE:MEMORY"
time="2018-08-15T16:55:37Z" level=info msg="Couldn't run containers" reason="RESOURCE:MEMORY"
time="2018-08-15T16:55:38Z" level=info msg="Couldn't run containers" reason="RESOURCE:MEMORY"
time="2018-08-15T16:55:38Z" level=info msg="Couldn't run containers" reason="RESOURCE:MEMORY"
time="2018-08-15T16:55:38Z" level=info msg="Couldn't run containers" reason="RESOURCE:MEMORY"
time="2018-08-15T16:55:38Z" level=info msg="Couldn't run containers" reason="RESOURCE:MEMORY"
time="2018-08-15T16:55:38Z" level=info msg="Couldn't run containers" reason="RESOURCE:MEMORY"
time="2018-08-15T16:55:38Z" level=info msg="Couldn't run containers" reason="RESOURCE:MEMORY"
time="2018-08-15T16:55:38Z" level=info msg="Couldn't run containers" reason="RESOURCE:MEMORY"
time="2018-08-15T16:55:39Z" level=info msg="Couldn't run containers" reason="RESOURCE:MEMORY"
Container instances IDs: arn:aws:ecs:eu-central-1:530324959084:container-instance/00431776-7cbb-4fb5-ba76-0161d18c0c47 arn:aws:ecs:eu-central-1:530324959084:container-instance/0240eb1b-684b-41c4-9584-48ddd66ca3ba arn:aws:ecs:eu-central-1:530324959084:container-instance/0a93f309-ffa9-4d7d-8d75-b285c623b023 arn:aws:ecs:eu-central-1:530324959084:container-instance/0aa603ae-ddf5-4459-b8d6-60a44fa54965 arn:aws:ecs:eu-central-1:530324959084:container-instance/0b070e27-557b-4c24-9377-e178d7f75be2 arn:aws:ecs:eu-central-1:530324959084:container-instance/0bc56275-867f-432c-a747-71187b13c592 arn:aws:ecs:eu-central-1:530324959084:container-instance/0f293991-a548-4832-88a1-2bcd78fbdb66 arn:aws:ecs:eu-central-1:530324959084:container-instance/13a88b31-269c-44e6-91a7-d683d830e214 arn:aws:ecs:eu-central-1:530324959084:container-instance/16aafbe6-ba03-4b96-9ed4-e647036eb405 arn:aws:ecs:eu-central-1:530324959084:container-instance/16e20247-bc51-4b4c-bb4e-86bb4a95cb7d arn:aws:ecs:eu-central-1:530324959084:container-instance/177ee7f1-6f3e-4c9a-9b16-0aefc942fc7b arn:aws:ecs:eu-central-1:530324959084:container-instance/18e590a0-eefa-4b29-ab73-4c9a6ab67b46 arn:aws:ecs:eu-central-1:530324959084:container-instance/1a89d5d0-8023-43e1-ac81-c3505c41ff2d arn:aws:ecs:eu-central-1:530324959084:container-instance/1f8dfef5-e263-41fe-8552-225722228676 arn:aws:ecs:eu-central-1:530324959084:container-instance/21bad63f-e2d9-4b99-961b-e13f35f57b76 arn:aws:ecs:eu-central-1:530324959084:container-instance/23a709b2-b896-426d-a2c4-00cbb0cf2f5f arn:aws:ecs:eu-central-1:530324959084:container-instance/25ed026e-ae2e-4a66-9bea-bf8b4e979b46 arn:aws:ecs:eu-central-1:530324959084:container-instance/299b9bd0-c5c2-4821-a047-c29cf52e1314 arn:aws:ecs:eu-central-1:530324959084:container-instance/2c4e17a3-104c-4b16-9909-68131950010c arn:aws:ecs:eu-central-1:530324959084:container-instance/2cebaaf3-0334-4f87-bb58-61778800a1a4 arn:aws:ecs:eu-central-1:530324959084:container-instance/305354a7-c378-440c-9d9e-9cbcd5c71089 
arn:aws:ecs:eu-central-1:530324959084:container-instance/3625bd5a-1d11-4211-8ffd-a0e3ae1c03f5 arn:aws:ecs:eu-central-1:530324959084:container-instance/3a97b1f1-9d0b-4e53-827a-97809d753838 ... arn:aws:ecs:eu-central-1:530324959084:container-instance/fdac2474-b374-4e66-9e98-3038f38868ad (long list of container-instance ARNs truncated)

An error occurred (InvalidParameterException) when calling the DescribeContainerInstances operation: instanceIds can have at most 100 items.
Gru instance ID: 

An error occurred (InvalidParameterException) when calling the DescribeContainerInstances operation: instanceIds can have at most 100 items.
Minion instances IDs: 
Gru at 18.184.165.179NoneNoneNoneNoneNoneNoneNoneNoneNoneNone18.184.77.12054.93.62.18518.185.124.1118.196.88.186... (concatenated public IPs, including None entries, truncated)
Minions at 10.74.2.228,None,None,None,None,None,None,None,None,None,None,10.74.2.90,10.74.2.58,10.74.2.122,... (private IP list with None entries, truncated)
Copying /plans/example.jmx to Gru
ssh: Could not resolve hostname 18.184.165.179nonenonenonenonenonenonenonenonenonenone18.184.77.12054.93.62.18518.185.124.1118.196.8: Name does not resolve
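The two InvalidParameterException failures above come from the documented 100-item cap on the DescribeContainerInstances API, which then leaves the Gru and Minion IP lists full of None entries. A possible fix is to page through the ARN list in batches of 100 and merge the results; this is a sketch, and `describe_in_batches` is a hypothetical helper, not part of lucy:

```shell
#!/bin/bash
# Hypothetical helper: DescribeContainerInstances accepts at most 100 IDs
# per call, so issue one call per batch of 100 ARNs and concatenate output.
describe_in_batches() {
  local cluster="$1"; shift
  while [ "$#" -gt 0 ]; do
    # Take up to 100 ARNs for this API call (unquoted expansion below
    # intentionally splits the newline-joined batch back into arguments)
    local batch
    batch=$(printf '%s\n' "$@" | head -n 100)
    aws ecs describe-container-instances --cluster "$cluster" \
        --container-instances $batch \
        --query 'containerInstances[].ec2InstanceId' --output text
    # Drop the ARNs we just described
    shift $(( $# < 100 ? $# : 100 ))
  done
}
```

Called as `describe_in_batches JMeter $INSTANCE_ARNS`, this keeps every call under the API limit regardless of how many minions are running.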

Adjust volume size

Hi, I think it would be good if we could adjust the volume size. Currently it is not possible to make the volume bigger to host more logs, which is needed for big tests.

ecs-cli does not appear to have an option to set the EBS volume size, although the cluster setup GUI in the AWS Web Console does.
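One possible workaround (a sketch, not something lucy supports today) is to launch the container instances directly with the AWS CLI, which does let you set the EBS volume size via --block-device-mappings, and have them join the ECS cluster through user data. The device name, sizes, and cluster name below are assumptions; fill in the placeholders as in the docker run template above:

```shell
aws ec2 run-instances \
    --image-id <ECS-optimized AMI ID for your region> \
    --count <number of instances> \
    --instance-type t2.micro \
    --key-name <key pair name> \
    --security-group-ids <security group ID> \
    --subnet-id <subnet ID> \
    --iam-instance-profile Name=ecsInstanceRole \
    --block-device-mappings '[{"DeviceName":"/dev/xvdcz","Ebs":{"VolumeSize":50,"VolumeType":"gp2"}}]' \
    --user-data '#!/bin/bash
echo ECS_CLUSTER=JMeter >> /etc/ecs/ecs.config'
```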

Support CloudFormation to create the VPC

Port aws-setup.sh to a CloudFormation template. Setup would then be a single AWS CLI call:

aws cloudformation create-stack --stack-name JMeter --template-body file://jmeter-vpc.yaml
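A minimal sketch of what such a jmeter-vpc.yaml could contain, based on the prerequisites listed above (ports 22, 1099, 50000 and 51000 tcp, 4445 udp). The resource names and CIDR ranges are assumptions, and the internet gateway and route table are omitted for brevity:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal VPC for JMeter ECS testing (sketch)
Resources:
  JMeterVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true
  JMeterSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref JMeterVPC
      CidrBlock: 10.0.0.0/24
      MapPublicIpOnLaunch: true
  JMeterSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: JMeter Gru/Minion traffic
      VpcId: !Ref JMeterVPC
      SecurityGroupIngress:
        - {IpProtocol: tcp, FromPort: 22, ToPort: 22, CidrIp: 0.0.0.0/0}
        - {IpProtocol: tcp, FromPort: 1099, ToPort: 1099, CidrIp: 10.0.0.0/16}
        - {IpProtocol: tcp, FromPort: 50000, ToPort: 50000, CidrIp: 10.0.0.0/16}
        - {IpProtocol: tcp, FromPort: 51000, ToPort: 51000, CidrIp: 10.0.0.0/16}
        - {IpProtocol: udp, FromPort: 4445, ToPort: 4445, CidrIp: 10.0.0.0/16}
Outputs:
  SubnetId:
    Value: !Ref JMeterSubnet
  SecurityGroupId:
    Value: !Ref JMeterSecurityGroup
```

The Outputs section would let lucy read the SUBNET_ID and SECURITY_GROUP values back with aws cloudformation describe-stacks.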
