marcosborges / terraform-aws-loadtest-distribuited

This module provides a simple, straightforward way to run your load tests created with JMeter, Taurus (bzt), or Locust on AWS as IaaS.

Home Page: https://registry.terraform.io/modules/marcosborges/loadtest-distribuited/aws/latest

Makefile 1.45% HCL 65.72% Shell 32.82%
terraform aws terraform-modules jmeter locust k6 taurus

terraform-aws-loadtest-distribuited's Introduction

AWS LoadTest Distribuited Terraform Module

This module provides a simple, straightforward way to run your load tests created with JMeter, Locust, K6, or Taurus (bzt) on AWS as IaaS.


Basic usage with JMeter

module "loadtest-distribuited" {

    source  = "marcosborges/loadtest-distribuited/aws"

    name = "nome-da-implantacao"
    executor = "jmeter"
    loadtest_dir_source = "examples/plan/"
    nodes_size = 2
    
    loadtest_entrypoint = "jmeter -n -t jmeter/basic.jmx  -R \"{NODES_IPS}\" -l /var/logs/loadtest -e -o /var/www/html -Dnashorn.args=--no-deprecation-warning -Dserver.rmi.ssl.disable=true "

    subnet_id = data.aws_subnet.current.id
}

data "aws_subnet" "current" {
    filter {
        name   = "tag:Name"
        values = ["my-subnet-name"]
    }
}
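
Once applied, the leader and node addresses are exposed as module outputs (see Outputs below); with the entrypoint above, the JMeter HTML report lands in /var/www/html on the leader. A minimal sketch, assuming the module label from this example, for surfacing the leader address from your root module:

output "leader_public_ip" {
    value = module.loadtest-distribuited.leader_public_ip
}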



Basic usage with Taurus

For basic use, you need to provide the network to run on (a subnet), the location of your test plan scripts, and the number of nodes needed to generate the desired load.

module "loadtest-distribuited" {

    source  = "marcosborges/loadtest-distribuited/aws"

    name = "nome-da-implantacao"
    executor = "jmeter"
    loadtest_dir_source = "examples/plan/"
    nodes_size = 2
    
    loadtest_entrypoint = "bzt -q -o execution.0.distributed=\"{NODES_IPS}\" taurus/basic.yml"

    subnet_id = data.aws_subnet.current.id
}

data "aws_subnet" "current" {
    filter {
        name   = "tag:Name"
        values = ["my-subnet-name"]
    }
}

Basic usage with Locust

module "loadtest-distribuited" {

    source  = "marcosborges/loadtest-distribuited/aws"

    name = "nome-da-implantacao"
    nodes_size = 2
    executor = "locust"

    loadtest_dir_source = "examples/plan/"
    locust_plan_filename = "basic.py"
    
    loadtest_entrypoint = <<-EOT
        nohup locust \
            -f ${var.locust_plan_filename} \
            --web-port=8080 \
            --expect-workers=${var.nodes_size} \
            --master > locust-leader.out 2>&1 &
    EOT

    node_custom_entrypoint = <<-EOT
        nohup locust \
            -f ${var.locust_plan_filename} \
            --worker \
            --master-host={LEADER_IP} > locust-worker.out 2>&1 &
    EOT

    subnet_id = data.aws_subnet.current.id
}

data "aws_subnet" "current" {
    filter {
        name   = "tag:Name"
        values = ["my-subnet-name"]
    }
}
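
With the entrypoint above, the Locust web UI binds to port 8080 on the leader. A hedged sketch of a convenience output for the dashboard URL (the output name is hypothetical, and reaching it assumes the security group allows traffic on that port; see the "Locust listening port" issue below):

output "locust_dashboard_url" {
    value = "http://${module.loadtest-distribuited.leader_public_ip}:8080"
}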

Advanced Config:

The module also provides advanced settings.

  1. It is possible to automatically split the contents of a data mass file between the load nodes.

  2. It is possible to export the ssh key used in remote access.

  3. We can define a pre-configured and customized image.

  4. We can customize many instance provisioning parameters: tags, monitoring, public IP, security group, etc.

module "loadtest" {


    source  = "marcosborges/loadtest-distribuited/aws"

    name = "nome-da-implantacao"
    executor = "bzt"
    loadtest_dir_source = "examples/plan/"

    loadtest_dir_destination = "/loadtest"
    loadtest_entrypoint = "bzt -q -o execution.0.distributed=\"{NODES_IPS}\" taurus/basic.yml"
    nodes_size = 3

    
    subnet_id = data.aws_subnet.current.id
    
    #AUTO SPLIT
    split_data_mass_between_nodes = {
        enable = true
        data_mass_filenames = [
            "data/users.csv"
        ]
    }

    #EXPORT SSH KEY
    ssh_export_pem = true

    #CUSTOMIZE IMAGE
    leader_ami_id = data.aws_ami.my_image.id
    nodes_ami_id = data.aws_ami.my_image.id

    #CUSTOMIZE TAGS
    leader_tags = {
        "Name"        = "deployment-name-leader"
        "Owner"       = "owner-name"
        "Environment" = "production"
        "Role"        = "leader"
    }
    nodes_tags = {
        "Name"        = "deployment-name"
        "Owner"       = "owner-name"
        "Environment" = "production"
        "Role"        = "node"
    }
    tags = {
        "Name"        = "deployment-name"
        "Owner"       = "owner-name"
        "Environment" = "production"
    }
 
    # SETUP INSTANCE SIZE
    leader_instance_type = "t2.medium"
    nodes_instance_type = "t2.medium"
 
    # SETUP JVM PARAMETERS (size the heap to fit the chosen instance type)
    leader_jvm_args = " -Xms2g -Xmx3g -XX:MaxMetaspaceSize=512m -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:G1ReservePercent=20 "
    nodes_jvm_args = " -Xms2g -Xmx3g -XX:MaxMetaspaceSize=512m -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:G1ReservePercent=20 "

    # DISABLE AUTO SETUP
    auto_setup = false

    # SET JMETER VERSION. WORK ONLY WHEN AUTO-SETUP IS TRUE
    jmeter_version = "5.4.1"

    # ASSOCIATE PUBLIC IP
    leader_associate_public_ip_address = true
    nodes_associate_public_ip_address = true
    
    # ENABLE MONITORING
    leader_monitoring = true
    nodes_monitoring = true

    #  SETUP SSH USERNAME
    ssh_user = "ec2-user"

    # SET ALLOWED CIDR BLOCKS FOR SSH ACCESS
    ssh_cidr_ingress_blocks = ["0.0.0.0/0"]
    
}

data "aws_subnet" "current" {
    filter {
        name   = "tag:Name"
        values = ["my-subnet-name"]
    }
}

data "aws_ami" "my_image" {
    most_recent = true
    filter {
        name   = "owner-alias"
        values = ["amazon"]
    }
    filter {
        name   = "name"
        values = ["amzn2-ami-hvm*"]
    }
}
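
When auto_setup is disabled, instance preparation falls to the leader_custom_setup_base64 and nodes_custom_setup_base64 inputs (see Inputs below). A minimal sketch, assuming a hypothetical setup.sh shipped alongside the root module:

locals {
    # Hypothetical setup script, base64-encoded as the inputs expect.
    custom_setup = base64encode(file("${path.module}/setup.sh"))
}

module "loadtest-custom-setup" {
    source = "marcosborges/loadtest-distribuited/aws"

    name                = "deployment-name"
    executor            = "jmeter"
    loadtest_dir_source = "examples/plan/"
    subnet_id           = data.aws_subnet.current.id

    auto_setup                 = false
    leader_custom_setup_base64 = local.custom_setup
    nodes_custom_setup_base64  = local.custom_setup
}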

Suggestion

The C5n instance family is a good choice for load testing, given its compute-optimized profile and high network bandwidth.

| Model | vCPU | Mem (GiB) | Storage | Network Bandwidth (Gbps) | EBS Bandwidth (Mbps) |
|---|---|---|---|---|---|
| c5n.large | 2 | 5.25 | EBS-only | Up to 25 | Up to 4,750 |
| c5n.xlarge | 4 | 10.5 | EBS-only | Up to 25 | Up to 4,750 |
| c5n.2xlarge | 8 | 21 | EBS-only | Up to 25 | Up to 4,750 |
| c5n.4xlarge | 16 | 42 | EBS-only | 25 | 4,750 |
| c5n.9xlarge | 36 | 96 | EBS-only | 50 | 9,500 |
| c5n.18xlarge | 72 | 192 | EBS-only | 100 | 19,000 |
| c5n.metal | 72 | 192 | EBS-only | 100 | 19,000 |
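
To adopt one of these, set the instance type inputs shown in the advanced example above, for instance:

    leader_instance_type = "c5n.xlarge"
    nodes_instance_type  = "c5n.2xlarge"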

Requirements

| Name | Version |
|---|---|
| terraform | >= 0.13.1 |
| aws | >= 3.63 |

Providers

| Name | Version |
|---|---|
| aws | >= 3.63 |
| null | n/a |
| tls | n/a |

Modules

No modules.

Resources

| Name | Type |
|---|---|
| aws_iam_instance_profile.loadtest | resource |
| aws_iam_role.loadtest | resource |
| aws_instance.leader | resource |
| aws_instance.nodes | resource |
| aws_key_pair.loadtest | resource |
| aws_security_group.loadtest | resource |
| null_resource.executor | resource |
| null_resource.key_pair_exporter | resource |
| null_resource.publish_split_data | resource |
| null_resource.split_data | resource |
| tls_private_key.loadtest | resource |
| aws_ami.amazon_linux_2 | data source |
| aws_subnet.current | data source |
| aws_vpc.current | data source |

Inputs

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| auto_execute | Execute the load test once the leader and nodes are available | bool | true | no |
| auto_setup | Install and configure the Amazon Linux 2 instances with JMeter and Taurus | bool | true | no |
| executor | Executor of the load test | string | "jmeter" | no |
| jmeter_version | JMeter version | string | "5.4.1" | no |
| leader_ami_id | ID of the AMI for the leader | string | "" | no |
| leader_associate_public_ip_address | Associate a public IP address with the leader | bool | true | no |
| leader_custom_setup_base64 | Custom bash script, base64-encoded, to set up the leader | string | "" | no |
| leader_instance_type | Instance type of the cluster leader | string | "t2.medium" | no |
| leader_jvm_args | Leader JVM_ARGS | string | " -Xms2g -Xmx2g -XX:MaxMetaspaceSize=256m -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:G1ReservePercent=20 " | no |
| leader_monitoring | Enable detailed monitoring for the leader | bool | true | no |
| leader_tags | Tags for the cluster leader | map | {} | no |
| loadtest_dir_destination | Path to the destination load test directory | string | "/loadtest" | no |
| loadtest_dir_source | Path to the source load test directory | string | n/a | yes |
| loadtest_entrypoint | Entrypoint command for the load test | string | "bzt -q -o execution.0.distributed=\"{NODES_IPS}\" *.yml" | no |
| name | Name of the provision | string | n/a | yes |
| nodes_ami_id | ID of the AMI for the nodes | string | "" | no |
| nodes_associate_public_ip_address | Associate public IP addresses with the nodes | bool | true | no |
| nodes_custom_setup_base64 | Custom bash script, base64-encoded, to set up the nodes | string | "" | no |
| nodes_instance_type | Instance type of the cluster nodes | string | "t2.medium" | no |
| nodes_jvm_args | Nodes JVM_ARGS | string | "-Xms2g -Xmx2g -XX:MaxMetaspaceSize=256m -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:G1ReservePercent=20 -Dnashorn.args=--no-deprecation-warning -XX:+HeapDumpOnOutOfMemoryError " | no |
| nodes_monitoring | Enable detailed monitoring for the nodes | bool | true | no |
| nodes_size | Total number of nodes in the cluster | number | 2 | no |
| nodes_tags | Tags for the cluster nodes | map | {} | no |
| region | Name of the region | string | "us-east-1" | no |
| split_data_mass_between_nodes | Split a data mass file between the nodes | object({ enable = bool, data_mass_filename = string }) | { enable = false, data_mass_filename = "../plan/data/data.csv" } | no |
| ssh_cidr_ingress_blocks | CIDR blocks allowed SSH access to the instances | list | ["0.0.0.0/0"] | no |
| ssh_export_pem | Export the generated SSH private key as a .pem file | bool | false | no |
| ssh_user | SSH user for the leader | string | "ec2-user" | no |
| subnet_id | ID of the subnet | string | n/a | yes |
| tags | Common tags | map | {} | no |
| taurus_version | Taurus version | string | "1.16.0" | no |
| web_cidr_ingress_blocks | CIDR blocks allowed web access to the leader | list | ["0.0.0.0/0"] | no |

Outputs

| Name | Description |
|---|---|
| leader_private_ip | The private IP address of the leader server instance. |
| leader_public_ip | The public IP address of the leader server instance. |
| nodes_private_ip | The private IP addresses of the node instances. |
| nodes_public_ip | The public IP addresses of the node instances. |

terraform-aws-loadtest-distribuited's People

Contributors

cloutiermat, julnamoo, luanelioliveira, marcosborges, moveurbody, xorphitus


terraform-aws-loadtest-distribuited's Issues

basic.jmx doesn't exist or can't be opened

Hi! I followed the steps but got an error saying basic.jmx doesn't exist or can't be opened (screenshot omitted):

module "loadtest" {
  source = "marcosborges/loadtest-distribuited/aws"
  version = "0.4.0"

  name = "execution-name-jmeter-loadtest"

  executor            = "jmeter"
  loadtest_dir_source = local.loadtest_dir_source
  nodes_size          = local.node_size
  nodes_instance_type = "t3.medium"
  leader_instance_type = "t3.medium"

  loadtest_entrypoint = <<-EOT
    jmeter \
        -n \
        -t basic.jmx  \
        -R "{NODES_IPS}" \
        -l /loadtest/logs -e -o /var/www/html/jmeter \
        -Dnashorn.args=--no-deprecation-warning \
        -Dserver.rmi.ssl.disable=true
  EOT

  subnet_id = var.public_subnet_ids[0]
}

locals {
    node_size = 2
    loadtest_dir_source = "${path.module}/plan"
    loadtest_dir_destination = "/loadtest"
}


I googled but can't find a configuration that works.
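
Since the module copies loadtest_dir_source to loadtest_dir_destination (default /loadtest), one likely cause is that the relative -t basic.jmx path does not resolve from the entrypoint's working directory. A hedged variant using an absolute path, assuming basic.jmx sits at the root of the uploaded plan directory:

  loadtest_entrypoint = <<-EOT
    jmeter \
        -n \
        -t /loadtest/basic.jmx \
        -R "{NODES_IPS}" \
        -l /loadtest/logs -e -o /var/www/html/jmeter \
        -Dnashorn.args=--no-deprecation-warning \
        -Dserver.rmi.ssl.disable=true
  EOT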

feat: Allow dynamic setting of Locust version

Overview

I've encountered an issue where my current Locust scripts are incompatible with the Locust package version hardcoded inside the module (2.4.3).
I noticed that the Locust version is installed by the following script (see reference to script).

To resolve this, I propose introducing a variable that allows users to specify the version of Locust they wish to use.

I look forward to your feedback and ideas on this proposed feature and possible ways of implementation.
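
A sketch of the proposed input, mirroring the existing jmeter_version and taurus_version variables (the locust_version name is hypothetical):

variable "locust_version" {
    description = "Locust version to install when auto_setup is enabled"
    type        = string
    default     = "2.4.3" # the version currently hardcoded in the setup script
}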

Too many nodes causes excessive tag length

Having too many nodes results in a tag like:

"nodes"       = "172.31.5.52,172.31.7.105,172.31.15.75,172.31.11.163,172.31.2.94,172.31.14.217,172.31.10.93,172.31.3.237,172.31.10.112,172.31.15.102,172.31.9.36,172.31.8.191,172.31.5.171,172.31.2.136,172.31.3.135,172.31.10.140,172.31.3.202,172.31.5.59,172.31.2.215,172.31.13.198,172.31.1.47,172.31.2.69,172.31.5.86,172.31.8.116,172.31.9.174,172.31.2.49,172.31.5.27,172.31.3.94,172.31.7.210,172.31.0.52,172.31.13.162,172.31.0.81,172.31.6.4,172.31.15.212,172.31.11.96,172.31.6.218,172.31.6.81,172.31.1.18,172.31.10.253,172.31.15.203,172.31.12.18,172.31.8.108,172.31.1.130,172.31.12.222,172.31.14.206,172.31.4.249,172.31.6.175,172.31.0.49,172.31.3.152,172.31.11.94"

This exceeds the 256-character limit for a single tag value, so the EC2 API returns a 400 error.

[Question] Is the k6 support official or not yet?

Hello,
I see that k6 is mentioned in the official docs; however, I don't see any documentation for it, and when I check the examples, it looks like the implementation is still ongoing.

Is this a thing that can be used?

[Locust] Support a way to install additional Python packages

I use some additional Python packages in my load tests, such as redis, pymongo, and pika. It would be nice to support an install command as an input that executes as part of the setup process. Currently I work around this by adding the install commands to the entrypoint shell scripts, but that is very manual and I am unable to publish the code for team use.

Or is there a way to do it that I missed?
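
As a stopgap with the existing inputs, the extra installs could ride in the custom setup scripts instead of the entrypoint; a hedged sketch, assuming pip3 is available on the node image (package list taken from the issue above):

locals {
    extra_python_setup = base64encode(<<-EOT
        #!/bin/bash
        # Install additional Python packages before the Locust workers start.
        pip3 install redis pymongo pika
    EOT
    )
}

# Then pass it to the module:
# nodes_custom_setup_base64 = local.extra_python_setup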

Cannot `terraform apply` minor changes with multiple workers due to a module cycle

Problem

I tried to reuse the AWS resources after the first terraform apply by running the apply command with -var='locust_plan_filename="script1.py"', but the apply failed with the logs below.

Logs

⚡  terraform plan
Acquiring state lock. This may take a few moments...
module.locust.tls_private_key.loadtest: Refreshing state... [id=a5b616973aaf0f278d108eb3d874fed573c0ca87]
module.locust.aws_key_pair.loadtest: Refreshing state... [id=locuster-dist-loadtest-keypair]
module.locust.data.aws_subnet.current: Reading...
module.locust.data.aws_ami.amazon_linux_2: Reading...
module.locust.aws_iam_role.loadtest: Refreshing state... [id=locuster-dist-loadtest-role]
module.locust.null_resource.key_pair_exporter: Refreshing state... [id=7139382739481599406]
module.locust.data.aws_subnet.current: Read complete after 1s [id=subnet-****]
module.locust.data.aws_vpc.current: Reading...
module.locust.data.aws_vpc.current: Read complete after 0s [id=vpc-****]
module.locust.aws_security_group.loadtest: Refreshing state... [id=sg-0cc7e185d0b6035e6]
module.locust.aws_iam_instance_profile.loadtest: Refreshing state... [id=locuster-dist-loadtest-profile]
module.locust.data.aws_ami.amazon_linux_2: Read complete after 3s [id=ami-0d08ef957f0e4722b]
module.locust.aws_instance.nodes[3]: Refreshing state... [id=i-0dcf1bab9c079d17d]
module.locust.aws_instance.nodes[2]: Refreshing state... [id=i-0e9a8ccbe930343a8]
module.locust.aws_instance.nodes[0]: Refreshing state... [id=i-0cc320e0c6964a3d4]
module.locust.aws_instance.nodes[1]: Refreshing state... [id=i-0500f93dc3e480a58]
module.locust.aws_instance.leader: Refreshing state... [id=i-01069a4aa417686e4]
module.locust.null_resource.spliter_execute_command: Refreshing state... [id=4082036881688020582]
module.locust.null_resource.setup_leader: Refreshing state... [id=6433090945558305086]
module.locust.null_resource.setup_nodes[2]: Refreshing state... [id=7349787731921181745]
module.locust.null_resource.setup_nodes[0]: Refreshing state... [id=4438619351419084083]
module.locust.null_resource.setup_nodes[1]: Refreshing state... [id=87202048137344834]
module.locust.null_resource.setup_nodes[3]: Refreshing state... [id=462707723291235854]
module.locust.null_resource.executor[0]: Refreshing state... [id=1416692590300092713]
╷
│ Error: Cycle: module.locust.null_resource.setup_nodes[2] (destroy), module.locust.null_resource.setup_nodes[0] (destroy), module.locust.null_resource.setup_nodes[3] (destroy), module.locust.null_resource.executor[0] (destroy), module.locust.null_resource.setup_nodes[1] (destroy)
│
│
╵
Releasing state lock. This may take a few moments...

Proposal

  • Change the dependency between null_resource.executor and null_resource.setup_nodes.
    Locust requires the master node to launch first, and only then do the workers register with it. IMHO, the cycle would be resolved by reversing the dependency, unless null_resource.setup_nodes has a critical reason to run before null_resource.executor.
  • poc: #29
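
A minimal sketch of the proposed reversal using the null provider (illustrative only, not the module's actual code):

resource "null_resource" "executor" {
    # Starts the Locust master first (provisioners elided in this sketch).
}

resource "null_resource" "setup_nodes" {
    count = 2

    # The workers now wait on the master, which should break the destroy-time cycle.
    depends_on = [null_resource.executor]
}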

new release?

v0.4.0 was released on Nov 12, 2021, while the latest commit is from Aug 3, 2022. I am using v0.4.0 and waiting for a bug fix that is in the latest version.

Locust listening port

Hi,
Thanks for this project, it is great.

An issue I have noticed is that the Locust "leader" listens on port 8080 (note the --web-port=8080 here):

https://github.com/locustio/locust/blob/master/examples/terraform/aws/main.tf

However the security group is opening ports 80,443

https://github.com/marcosborges/terraform-aws-loadtest-distribuited/blob/master/security.tf

Since 8080 matches neither 80 nor 443, the dashboard is inaccessible, right? Or am I missing something?

Solution 1:

Add port 8080 in security.tf, and show port 8080 in the dashboard_url when displaying the terraform output.

Solution 2:

Have Terraform install an nginx proxy on the leader instance. Locust still runs on port 8080; nginx proxies requests from port 80 to 8080 in the background.

Let me know if you have a preference for solution 1 or 2. Maybe I could work on it, if there is time.
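
A hedged sketch of Solution 1, as an extra rule attached to the module's existing aws_security_group.loadtest (the rule name is hypothetical):

resource "aws_security_group_rule" "locust_web" {
    type              = "ingress"
    from_port         = 8080
    to_port           = 8080
    protocol          = "tcp"
    cidr_blocks       = ["0.0.0.0/0"]
    security_group_id = aws_security_group.loadtest.id
}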

UI not accessible

Hi,

After running for some time, the UI was no longer accessible.

sudo iptables -A INPUT -i eth0 -p tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -i eth0 -p tcp --dport 8080 -j ACCEPT
sudo iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080

It appears that the iptables rules no longer existed; maybe the host rebooted?

This could be fixed by putting the iptables rules in an init script run at bootstrap, or by removing the iptables rules and making Locust listen on port 80 directly.
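
A hedged sketch of the init-script approach, re-applying the rules at every boot via the leader's custom setup input (assumes an Amazon Linux 2 image where rc.local is available):

locals {
    persist_iptables = base64encode(<<-EOT
        #!/bin/bash
        # Persist the web-port redirect across reboots via rc.local.
        cat >> /etc/rc.d/rc.local <<'RULES'
        iptables -A INPUT -i eth0 -p tcp --dport 80 -j ACCEPT
        iptables -A INPUT -i eth0 -p tcp --dport 8080 -j ACCEPT
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
        RULES
        chmod +x /etc/rc.d/rc.local
        systemctl enable rc-local
    EOT
    )
}

# Then pass it to the module:
# leader_custom_setup_base64 = local.persist_iptables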
