terraform-aws-bastion's People

Contributors

asagage, bpar476, cernytomas, dannyibishev, e-bourhis, figuerascarlos, fllaca, gowiem, guimove, henadzit, jeremietharaud, jirkabs, maartenvanderhoef, michal-adamek, naineel, ohayak, paullucas-iw, plmaheux, protip, rahloff, remusss, rob-unearth, robbiet480, sturman, timcosta, toddlj, va3093, willaustin, wobondar, zollie

terraform-aws-bastion's Issues

Dependency violation when changing name_prefix.

I had trouble creating multiple bastions in the same AWS account but different Terraform workspaces, so I added the bastion_launch_configuration_name variable to fix that. Now, however, it forces recreation of some security groups and, as a result, produces a dependency violation. I think this can be fixed by adding create_before_destroy = true to the security group lifecycles.
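A minimal sketch of the suggested fix, applied inside the module (the resource name is taken from the plan output further down; the other arguments are illustrative):

```hcl
# Hypothetical sketch: create the replacement security group before the old
# one is destroyed, so dependent resources can be re-pointed without a
# dependency violation.
resource "aws_security_group" "private_instances_security_group" {
  name   = "${var.bastion_launch_configuration_name}-priv-instances"
  vpc_id = var.vpc_id

  lifecycle {
    create_before_destroy = true
  }
}
```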

module "bastion" {
  source                      = "Guimove/bastion/aws"
  bucket_name                 = "bastion-${var.customer}"
  region                      = var.region
  vpc_id                      = aws_vpc.main.id
  cidrs                       = var.bastion_allowed_cidrs
  is_lb_private               = false
  bastion_host_key_pair       = aws_key_pair.ec2key.id
  create_dns_record           = false
  elb_subnets                 = aws_subnet.pub_sn.*.id
  auto_scaling_group_subnets  = aws_subnet.pub_sn.*.id
  bastion_launch_configuration_name = "bastion-${var.customer}-${var.environment}"
  tags = {
    Name = "bastion-${var.customer}"
    Customer = var.customer
  }
}

Plan:

  # aws_instance.mgmt[1] will be updated in-place
  ~ resource "aws_instance" "mgmt" {
        ami                          = "ami-0dd655843c87b6930"
        arn                          = "arn:aws:ec2:us-west-1:redacted:instance/i-0441redacted"
        associate_public_ip_address  = false
        availability_zone            = "us-west-1b"
        cpu_core_count               = 2
        cpu_threads_per_core         = 1
        disable_api_termination      = false
        ebs_optimized                = false
        get_password_data            = false
        id                           = "i-0441redacted"
        instance_state               = "running"
        instance_type                = "t2.medium"
        ipv6_address_count           = 0
        ipv6_addresses               = []
        key_name                     = "key-dev"
        monitoring                   = false
        primary_network_interface_id = "eni-030bredacted"
        private_dns                  = "ip-172-22-2-22.us-west-1.compute.internal"
        private_ip                   = "172.22.2.22"
        security_groups              = []
        source_dest_check            = true
        subnet_id                    = "subnet-0cferedacted"
        tags                         = {
            "Customer" = "dev"
            "Name"     = "mgmt-ui-instance-dev"
        }
        tenancy                      = "default"
        volume_tags                  = {}
      ~ vpc_security_group_ids       = [
            "sg-07acredacted",
            "sg-0c439redacted",
        ] -> (known after apply)

        credit_specification {
            cpu_credits = "standard"
        }

        root_block_device {
            delete_on_termination = true
            encrypted             = false
            iops                  = 100
            volume_id             = "vol-03ddredacted"
            volume_size           = 8
            volume_type           = "gp2"
        }
    }

  # module.bastion.aws_security_group.bastion_host_security_group (deposed object cbcdd9a2) will be destroyed
  - resource "aws_security_group" "bastion_host_security_group" {
      - arn                    = "arn:aws:ec2:us-west-1:redacted:security-group/sg-0ed3redacted" -> null
      - description            = "Enable SSH access to the bastion host from external via SSH port" -> null
      - egress                 = [] -> null
      - id                     = "sg-0ed3redacted" -> null
      - ingress                = [] -> null
      - name                   = "lc-host" -> null
      - owner_id               = "redacted" -> null
      - revoke_rules_on_delete = false -> null
      - tags                   = {
          - "Customer" = "dev"
          - "Name"     = "bastion-dev"
        } -> null
      - vpc_id                 = "vpc-0e28redacted" -> null
    }

  # module.bastion.aws_security_group.private_instances_security_group must be replaced
-/+ resource "aws_security_group" "private_instances_security_group" {
      ~ arn                    = "arn:aws:ec2:us-west-1:redacted:security-group/sg-0c43redacted" -> (known after apply)
        description            = "Enable SSH access to the Private instances from the bastion via SSH port"
      ~ egress                 = [] -> (known after apply)
      ~ id                     = "sg-0c43redacted" -> (known after apply)
      ~ ingress                = [] -> (known after apply)
      ~ name                   = "lc-priv-instances" -> "bastion-dev-dev-priv-instances" # forces replacement
      ~ owner_id               = "redacted" -> (known after apply)
        revoke_rules_on_delete = false
        tags                   = {
            "Customer" = "dev"
            "Name"     = "bastion-dev"
        }
        vpc_id                 = "vpc-0e28redacted"
    }

  # module.bastion.aws_security_group_rule.ingress_instances will be created
  + resource "aws_security_group_rule" "ingress_instances" {
      + description              = "Incoming traffic from bastion"
      + from_port                = 22
      + id                       = (known after apply)
      + protocol                 = "tcp"
      + security_group_id        = (known after apply)
      + self                     = false
      + source_security_group_id = "sg-09c2redacted"
      + to_port                  = 22
      + type                     = "ingress"
    }

unable to access private instances from bastion host

By default, there is no way to allow the bastion host to access the instances running in my private subnets. The way I've worked around this is to add an egress rule for port 22 to my private subnet CIDRs.

If there is another way to get around this, it would be helpful to know and document.
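The workaround described above, as a rough sketch (the module output name and the CIDR variable are assumptions on my side):

```hcl
# Hypothetical sketch: allow SSH egress from the bastion security group to
# the private subnet CIDRs so the bastion can reach instances there.
resource "aws_security_group_rule" "bastion_egress_ssh" {
  type              = "egress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = var.private_subnet_cidrs
  security_group_id = module.bastion.bastion_host_security_group
}
```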

Thanks.

CloudWatch Alarms?

Hello,
I am trying to figure out how to add CloudWatch Alarms for my bastion host(s), but I do not see an output in the module that I can feed to the CloudWatch Alarm Terraform code. Would it be possible to add this functionality or provide a workaround?

Additionally, it would be nice to include an option for CloudWatch alarms that trigger an autoscaling event on the bastion host ASG.
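As a rough sketch of what this would enable, assuming the module exported the ASG name as an output (it currently does not, which is the point of this request):

```hcl
# Hypothetical alarm on bastion CPU; the auto_scaling_group_name output
# is assumed, not an existing module output.
resource "aws_cloudwatch_metric_alarm" "bastion_cpu" {
  alarm_name          = "bastion-high-cpu"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 80
  period              = 300
  evaluation_periods  = 2

  dimensions = {
    AutoScalingGroupName = module.bastion.auto_scaling_group_name
  }
}
```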

Thanks!

instance_type

Hi,

Can we change instance_type?
Maybe it would be a good idea to expose instance_type as a variable.

Thanks

sync_users script not able to create users as the keys_retrieved_from_s3 file is malformed

Hi,
Started using this module as it was exactly what I was looking for. However, I noticed that none of the user accounts I had put SSH keys for in our S3 bucket were being created.

I can still SSH to the bastion because I have the deployer key, and upon logging in and checking the log file for the sync_users script, I saw that it was able to create one user but not the rest (I have 4 SSH public keys in the S3 bucket).

When I looked into it, the contents of ~/keys_retrieved_from_s3 were:

public-keys/user1.pub
public-keys/user2.pub	public-keys/user3.pub	public-keys/user4.pub

i.e. it looks like the sed -e "s/\t/\n/" at the end of the s3api list-objects call was not able to separate out the returned S3 object keys.

Sure enough, if I just run the s3api call on its own, its output shows a difference between the tab used between user1 and user2 and the ones used for the rest. (It's tricky to show here, but in my terminal I can visually see the difference.)

As a quick fix, using tr '\t' '\n' instead produces the output I wanted:

public-keys/user1.pub
public-keys/user2.pub
public-keys/user3.pub
public-keys/user4.pub

Happy to PR this back, but I was wondering if I am the only one hitting this problem.
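To illustrate the difference: sed's s/\t/\n/ replaces only the first tab on each line, while tr translates every tab (the input below is illustrative, not the actual S3 output):

```shell
# sed replaces only the FIRST tab per line (GNU sed shown; BSD sed does not
# even interpret \t in the pattern, so the result is platform-dependent):
printf 'a.pub\tb.pub\tc.pub\n' | sed -e 's/\t/\n/'

# tr translates EVERY tab to a newline:
printf 'a.pub\tb.pub\tc.pub\n' | tr '\t' '\n'
# a.pub
# b.pub
# c.pub
```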

hosted_zone_name is needed even with create_dns_record = false

I'm using version 1.0.4 and I got the following error when I tried to use this module with the create_dns_record option set to false:

Error: module.bastion.aws_route53_record.bastion_record_name: zone_id must not be empty

Checking the code, I saw that zone_id uses hosted_zone_name.
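A sketch of one possible fix, guarding both the zone lookup and the record with create_dns_record (resource and variable names are illustrative, not the module's exact code):

```hcl
# Hypothetical: skip the zone lookup and the record entirely when DNS is
# disabled, so hosted_zone_name can stay empty.
data "aws_route53_zone" "this" {
  count = var.create_dns_record ? 1 : 0
  name  = var.hosted_zone_name
}

resource "aws_route53_record" "bastion_record_name" {
  count   = var.create_dns_record ? 1 : 0
  zone_id = data.aws_route53_zone.this[0].zone_id
  name    = var.bastion_record_name
  type    = "CNAME"
  ttl     = 300
  records = [aws_lb.bastion_lb.dns_name]
}
```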

Reason for load balancer/access through it?

Hi, I am a complete newbie to AWS resources and provisioning. I was wondering if the network load balancer is necessary for the bastion host. I understand that the reason for using it would be optimal access to hosts in every availability zone, but if I were creating a small network, would it still be necessary?

Remove `AllowTcpForwarding` addition

I assume there's a specific reason this is here? Not sure. In my situation, though, it causes my SSH connections to fail.

Host bastion.myhost.com
    Hostname bastion.myhost.com
    User aaron
    ForwardAgent no
    IdentityFile ~/.ssh/id_rsa

Host 10.0.1.*
    User ubuntu
    IdentityFile ~/.ssh/my_key.pem
    ProxyCommand ssh -q -W %h:%p bastion.myhost.com

Everything I saw online said I need to remove it:

https://unix.stackexchange.com/a/58756
https://serverfault.com/questions/535412/how-to-solve-the-open-failed-administratively-prohibited-open-failed-when-us

aws_kms_alias breaks on s3-domain names

Hi,

it seems that 1.2.1 introduced the following resource, which now fails:

resource "aws_kms_alias" "alias" {
  name          = "alias/${var.bucket_name}"
  target_key_id = aws_kms_key.key.arn
}

Error: "name" must begin with 'alias/' and be comprised of only [a-zA-Z0-9:/_-]

  on ../modules/terraform-aws-bastion/main.tf line 15, in resource "aws_kms_alias" "alias":
  15: resource "aws_kms_alias" "alias" {

Because of this restriction, the module errors out when the bucket name contains dots, like 'bastion.vpc.company.net'.
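One possible fix is to sanitize the bucket name before using it in the alias, e.g. with replace() (a sketch, not the module's actual code):

```hcl
# Hypothetical: dots are not allowed in KMS alias names, so map them to
# hyphens before building the alias name.
resource "aws_kms_alias" "alias" {
  name          = "alias/${replace(var.bucket_name, ".", "-")}"
  target_key_id = aws_kms_key.key.arn
}
```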

[Issue/Enhancement] Wrong ami retrieved by the data source aws_ami

Hi,

The current filter on the data source aws_ami retrieves a dotnetcore linux AMI:
amzn2-ami-hvm-2.0.20180622.1-x86_64-gp2-dotnetcore-2019.04.09

The following data source should fix the issue:

data "aws_ami" "ami" {
  most_recent = true
  owners      = ["amazon"]
  name_regex  = "^amzn2-ami-hvm.*-ebs"

  filter {
    name   = "architecture"
    values = ["x86_64"]
  }
}

This way, the latest Amazon Linux 2 HVM EBS-backed image will be retrieved.

Thank you.

Jeremie

allow http egress from bastion hosts.

The main bastion security group doesn't allow HTTP egress, only HTTPS, which causes yum to fail on startup.

e.g., I see this in the cloud-init output log:

+ yum -y update --security
Loaded plugins: priorities, update-motd, upgrade-helper


 One of the configured repositories failed (Unknown),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Disable the repository, so yum won't use it by default. Yum will then
        just ignore the repository until you permanently enable it again or use
        --enablerepo for temporary usage:

            yum-config-manager --disable <repoid>

     4. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true

Cannot find a valid baseurl for repo: amzn-main/latest
Could not retrieve mirrorlist http://repo.us-east-1.amazonaws.com/latest/main/mirror.list error was
12: Timeout on http://repo.us-east-1.amazonaws.com/latest/main/mirror.list: (28, 'Connection timed out after 10001 milliseconds')

It shouldn't fail, but it does because the mirror list only uses HTTP, not HTTPS, and HTTP egress is blocked.
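My workaround, roughly (the module output name is an assumption; the bastion SG is not currently exposed for this):

```hcl
# Hypothetical: allow plain HTTP egress so yum can fetch the mirror list.
resource "aws_security_group_rule" "bastion_egress_http" {
  type              = "egress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = module.bastion.bastion_host_security_group
}
```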

Route53 type CNAME?

Since Route 53 is creating a record for the bastion ELB, maybe a type 'A' record with an alias would be more efficient and cheaper?
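What I have in mind, as a sketch (resource names are illustrative):

```hcl
# Hypothetical alias A record pointing at the load balancer; alias queries
# to AWS resources are not billed, unlike CNAME lookups.
resource "aws_route53_record" "bastion_record_name" {
  zone_id = data.aws_route53_zone.this.zone_id
  name    = var.bastion_record_name
  type    = "A"

  alias {
    name                   = aws_lb.bastion_lb.dns_name
    zone_id                = aws_lb.bastion_lb.zone_id
    evaluate_target_health = true
  }
}
```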

can't tunnel to the instances

Host bastion
    Hostname xxxxxx
    User xxxx

Host mine
    Hostname xxxx
    User xxx
    ProxyCommand ssh bastion nc %h %p

ssh bastion ==> works fine

But ssh mine ==> doesn't work

tag soon ?

Nice work.
Do you plan to tag soon ? I'd like to use the extra_user_data_content feature, but it seems module/source only fetches tagged version, and not latest.
Thanks.

NLB health checks fail when adding a CIDR to access the bastion

Hi,

When we replace the default CIDR block for accessing the bastion with a specific one, the Network Load Balancer health checks fail, resulting in unhealthy hosts.

To fix that, the CIDR blocks of the LB subnets have to be added to the ingress security group rule of the bastion, because LB health checks are performed from IP addresses in those subnets:

data "aws_subnet" "subnets" {
  count = "${length(var.elb_subnets)}"
  id    = "${var.elb_subnets[count.index]}"
}

resource "aws_security_group_rule" "ingress_bastion" {
  description = "Incoming traffic to bastion"
  type        = "ingress"
  from_port   = "${var.public_ssh_port}"
  to_port     = "${var.public_ssh_port}"
  protocol    = "TCP"
  cidr_blocks = ["${concat(data.aws_subnet.subnets.*.cidr_block, var.cidrs)}"]

  security_group_id = "${aws_security_group.bastion_host_security_group.id}"
}

Getting an error "Invalid argument name"

Terraform version - 0.12.8
module version - 1.1.2

Error: Invalid argument name

  on .terraform/modules/bastion/Guimove-terraform-aws-bastion-c1acebe/main.tf line 25, in resource "aws_s3_bucket" "bucket":
  25:       "rule"      = "log"

I'm not sure this module is compatible with the latest version of Terraform.
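For context, the failing line uses 0.11-style block syntax for tags; under Terraform 0.12 it has to be an argument assignment, roughly like this (a sketch of the fix, not the released code):

```hcl
# 0.12 syntax: tags is assigned with "=", and bare keys are fine.
# The 0.11 form `tags { "rule" = "log" }` triggers "Invalid argument name".
lifecycle_rule {
  id      = "log"
  enabled = true
  prefix  = "log/"

  tags = {
    rule      = "log"
    autoclean = var.log_auto_clean
  }
}
```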

Feature: Log to Cloudwatch?

Hi,

it would be nice to be able to write the bastion logs to CloudWatch. That way it would be easier to set up alerts/monitoring of what's going on.

There is a nice bootstrap example here.

It probably only needs:

  • the CloudWatch agent installed
  • an IAM role with permissions to the log group
  • a log group

regards,
strowi

Allow setting non-standard ssh port

To add extra security to the bastion host, it would be nice to be able to set the port that the LB uses to forward to the SSH port. Currently it's locked at 22, but according to my company's security policy we cannot have port 22 as the open port; we have to use a non-standard port.

Thanks.

Can't use bastion_host options on aws_instance with bastion created by module.

I'm trying to use this bastion to run provisioner commands on aws_instances created by Terraform, but I'm getting the error timeout - last error: ssh: rejected: administratively prohibited (open failed) when creating the instances. I assume this is somewhat related to #32, but I thought that was fixed by #33, so I'm not sure why this is still not working. Here's a snippet of my Terraform code:

module "bastion" {
  source                      = "Guimove/bastion/aws"
  bucket_name                 = "${var.bastion_bucket_name}"
  region                      = var.region
  vpc_id                      = aws_vpc.main.id
  is_lb_private               = false
  bastion_host_key_pair       = aws_key_pair.ec2key.id
  create_dns_record           = false
  elb_subnets                 = aws_subnet.pub_sn.*.id
  auto_scaling_group_subnets  = aws_subnet.pub_sn.*.id
  tags = {
    Name =  "${var.bastion_bucket_name}"
  }
}

resource "aws_s3_bucket_object" "bastion_key" {
  bucket  = module.bastion.bucket_name
  key     = "public-keys/${var.current_user}.pub"
  source  = var.public_key_path
}

resource "aws_instance" "web" {
  # Make one for each subnet. In the future we might want to do some sort of autoscaling
  # group, but I don't think the mgmt ui is customer facing so it may not be neccessary
  count = length(aws_subnet.priv_sn)

  # The connection block tells the provisioner how to communicate with the instance 
  connection {
    # The default username for our AMI
    user = "ubuntu"
    host = self.public_ip
    # The connection will use the local SSH agent for authentication.
    bastion_host = module.bastion.elb_ip
    bastion_user = var.current_user
    bastion_private_key = file(var.private_key_path)
  }

  instance_type = "t2.medium"

  # Lookup the correct AMI based on the region we specified
  ami = lookup(var.ec2amis, var.region)

  # The name of the SSH keypair we created above
  key_name = aws_key_pair.ec2key.id

  # Security group to allow HTTP and SSH access
  vpc_security_group_ids = [
    aws_security_group.web.id, 
    module.bastion.private_instances_security_group
  ]

  # We're going to define these guys in the "private" subnet, although
  # right now our configuration only has one set of subnets
  subnet_id = aws_subnet.priv_sn[count.index].id

  # Creates a folder with the code in it on the remote. 
  provisioner "file" {
    source = var.code_folder
    destination = "/home/ubuntu/code"
  }

  # We run a remote provisioner on the instance after creating it. By default, just
  # updates apt-get, but we can add whatever we want to it.
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get -y update",
    ]
  }

  tags = {
    Name = "web"
  }

  depends_on = [aws_s3_bucket_object.bastion_key]
}

I'm not seeing any logs for the connection, though that's to be expected I guess since tcp forwarding circumvents logging.

When I manually SSH into the bastion and then try to SSH into the instance (ssh -i ~/.ssh/mykey -A [email protected] followed by ssh [email protected]), I get a Permission denied (publickey) error as well, but that might just be because I haven't set up agent forwarding correctly.

Any idea why this wouldn't be working?

Not recording sessions created with ProxyCommand and ssh -W option

Connecting to servers using ProxyCommand and ssh -W $host:$port bypasses the recording.

Example ssh config:

Host myserver.mycompany.com
    ProxyCommand ssh bastion.mycompany.com -W %h:%p

Is there a way to record the actions executed in the remote servers if connecting this way?

Thanks for this great project!

Unsupported argument "hosted_zone_name" for module "bastion"

I am getting an odd issue where my terraform plan step is now failing due to the unsupported argument "hosted_zone_name" in the "bastion" module. The code below previously worked. It seems that "hosted_zone_name" has been replaced with "hosted_zone_id"; however, I have not found any documentation indicating this change. Moreover, the Terraform registry page for this repo shows "hosted_zone_name" as a valid input, while the repo itself no longer lists "hosted_zone_name" as a variable. Is the answer that the "hosted_zone_name" variable should now be "hosted_zone_id"? Thank you!

Code:

module "bastion" {
  source = "git::https://github.com/Guimove/terraform-aws-bastion.git?ref=master"
  bucket_name = module.bastion_logs_s3.bucket_id
  region = var.region
  vpc_id = module.vpc.vpc_id
  is_lb_private = "false"
  bastion_host_key_pair = var.stage
  hosted_zone_name = var.hosted_zone_id
  bastion_record_name = local.bastion_hostname
  elb_subnets = module.subnets.public_subnet_ids
  auto_scaling_group_subnets = module.subnets.public_subnet_ids
  create_dns_record = "true"
  tags = var.tags
}

Error:
(The error is shown in an attached screenshot, "Screen Shot 2020-03-19 at 7 49 35 PM".)

Switching from Mac to Windows forces recreation of Launch configuration and ASG

Hi

I came across this issue when following these steps :

  1. I create a bastion from a Mac with terraform apply
  2. I perform terraform plan from a Windows PC
    -> ASG and Launch configuration are planned to be recreated (edited output below)

Launch config is planned for recreation because user_data has "changed".
user_data has "changed" because the template file user_data.sh has different line endings on a Mac and on Windows.

I was able to reproduce this behaviour on a Mac by changing line ends in user_data.sh

I was able to fix this behaviour by adding lifecycle { ignore_changes = ["user_data"] } to resource "aws_launch_configuration" "bastion_launch_configuration". I can submit a PR with these changes.

Edited terraform plan output :

  # module.bastion.aws_autoscaling_group.bastion_auto_scaling_group must be replaced
+/- resource "aws_autoscaling_group" "bastion_auto_scaling_group" {
      ~ arn                       = "..." -> (known after apply)
      ~ availability_zones        = [ ... ] -> (known after apply)
        default_cooldown          = 180
        desired_capacity          = 1
      - enabled_metrics           = [] -> null
        force_delete              = false
        health_check_grace_period = 180
        health_check_type         = "EC2"
      ~ id                        = "..." -> (known after apply)
      ~ launch_configuration      = "..." -> (known after apply)
      ~ load_balancers            = [] -> (known after apply)
        max_size                  = 1
        metrics_granularity       = "1Minute"
        min_size                  = 1
      ~ name                      = "..." -> (known after apply) # forces replacement
        protect_from_scale_in     = false
      ~ service_linked_role_arn   = "..." -> (known after apply)
      - suspended_processes       = [] -> null
      ~ tags                      = [  ...  ] -> (known after apply)
        target_group_arns         = [
            "a...",
        ]
        termination_policies      = [
            "OldestLaunchConfiguration",
        ]
        vpc_zone_identifier       = [
            ...
        ]
        wait_for_capacity_timeout = "10m"
    }

  # module.bastion.aws_launch_configuration.bastion_launch_configuration must be replaced
+/- resource "aws_launch_configuration" "bastion_launch_configuration" {
        associate_public_ip_address      = true
      ~ ebs_optimized                    = false -> (known after apply)
        enable_monitoring                = true
        iam_instance_profile             = "..."
      ~ id                               = "..." -> (known after apply)
        image_id                         = "ami-010a7fa92957a6a71"
        instance_type                    = "t2.nano"
        key_name                         = "..."
      ~ name                             = "..." -> (known after apply)
        name_prefix                      = "..."
        security_groups                  = [ ... ]
      ~ user_data                        = "6a4294ecb9d7bcd7a1d6667c893e762a89cb637f" -> "bef2c6b2a35ff4b2641a209256fddb3bb0e0227f" # forces replacement
      - vpc_classic_link_security_groups = [] -> null

      + ebs_block_device { ... }

      + root_block_device { ... }
    }
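The workaround described above amounts to something like this (all argument values are illustrative, trimmed to a minimal set):

```hcl
# Hypothetical sketch: ignore user_data drift caused purely by line endings.
# Note this also hides intentional user_data changes.
resource "aws_launch_configuration" "bastion_launch_configuration" {
  name_prefix   = "bastion-lc-"
  image_id      = data.aws_ami.bastion.id
  instance_type = var.bastion_instance_type

  lifecycle {
    ignore_changes = ["user_data"]
  }
}
```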

1.1.0 Does Not Work - Wrong AMI Architecture

There appears to have been a PR #35 that was merged in, but no new release was created that actually allows the use of this module in the terraform registry.

I know this because I am currently trying to use this module and am getting an obscure error in the activity log of the ASG. This error led me to do some investigation and the AMI that was selected for the launch configuration is arm64 architecture -- which is not compatible with an instance type of t2.nano.

I believe all that needs to be done is to create a new GitHub release that includes the recently merged changes to resolve this, unless I am misunderstanding.

Issue with User Data script

Hey! Thanks for the module, it's super useful!

I ran into a little issue here: https://github.com/Guimove/terraform-aws-bastion/blob/master/user_data.sh#L116

The AWS CLI command can result in multiple objects being listed on the same line, which breaks the subsequent while read line loop.

I fixed it locally by changing it to:
aws s3api list-objects --bucket BUCKET --prefix public-keys/ --region REGION --output text --query 'Contents[?Size>0].Key' | sed -e 'y/\\t/\\n/' | tr '\t' '\n' > ~/keys_retrieved_from_s3

Note the replacement of tab characters with newlines: tr '\t' '\n'

I'd be happy to contribute a PR if it helps.

Add depends_on s3 bucket to ASG resource

In my case, I had given an S3 bucket name that was not available. However, the ASG still got created even though the S3 bucket did not, so the user data script never created /logs, and any keys dropped in /public-keys didn't work.

I would suggest adding a depends_on to the ASG so that the ASG is only created if the S3 bucket is successfully created. Let me know if you want a PR for this.
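A sketch of the suggestion (arguments trimmed to an illustrative minimum):

```hcl
# Hypothetical: fail ASG creation when the bucket cannot be created.
resource "aws_autoscaling_group" "bastion_auto_scaling_group" {
  name_prefix          = "bastion-asg-"
  min_size             = 1
  max_size             = 1
  vpc_zone_identifier  = var.auto_scaling_group_subnets
  launch_configuration = aws_launch_configuration.bastion_launch_configuration.name

  depends_on = [aws_s3_bucket.bucket]
}
```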

Support for Terraform 0.12

Hi!

we are using this awesome module, but we would like to migrate our Terraform code to 0.12. What is required should only be syntax changes; we could even help with a PR.

Unable to use `aws` CLI when behind proxy

Hello,

The aws CLI is unable to request S3 API when behind a HTTP Proxy.

To allow the aws CLI to work, I needed to manually declare the *_proxy env variables inside the EC2 instance:

  • http_proxy : http://my-proxy:3128/
  • https_proxy : http://my-proxy:3128/
  • no_proxy : localhost,127.0.0.1,169.254.169.254

Is there a way to provide those environment variables via the Terraform module ?

Thanks for your answer !
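One possible route is the module's existing extra_user_data_content input, used to export the variables at boot (a sketch; the proxy address is illustrative, and whether the cron-driven sync script inherits these variables is a separate question):

```hcl
module "bastion" {
  source      = "Guimove/bastion/aws"
  bucket_name = var.bucket_name
  region      = var.region
  vpc_id      = var.vpc_id
  # ... remaining required inputs ...

  extra_user_data_content = <<-EOT
    export http_proxy=http://my-proxy:3128/
    export https_proxy=http://my-proxy:3128/
    export no_proxy=localhost,127.0.0.1,169.254.169.254
  EOT
}
```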

Add log lifecycle for S3 bucket (tunable)

Something like this (or allow users to supply the bucket):

 lifecycle_rule {
    id      = "log"
    enabled = true

    prefix  = "log/"
    tags {
      "rule"      = "log"
      "autoclean" = "${var.log_auto_clean}"
    }

    transition {
      days          = "${var.log_standard_ia_days}"
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = "${var.log_glacier_days}"
      storage_class = "GLACIER"
    }

    expiration {
      days = "${var.log_expiry_days}"
    }
  }

Bastion security group egress is too restrictive

The current bastion SG only permits SSH (22), HTTP (80), and HTTPS (443). We have additional ports we need to access, but cannot override this configuration.

Suggest the bastion SG ID is also exported as an output so that people can customise it if required.
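Concretely, the suggestion is an output like this on the module side, which callers could then use to attach extra rules (names and the example port are illustrative):

```hcl
# Module side: expose the SG ID.
output "bastion_host_security_group" {
  value = aws_security_group.bastion_host_security_group.id
}

# Caller side: open an extra egress port (5432 here is illustrative).
resource "aws_security_group_rule" "bastion_extra_egress" {
  type              = "egress"
  from_port         = 5432
  to_port           = 5432
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = module.bastion.bastion_host_security_group
}
```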

Help me vpc -> bastion -> rds

I use this great Terraform module, but I don't know if my configuration is good.

  1. I created an AWS VPC in eu-west-2 with:
  • public subnets: 10.0.x.0/24, with x between 0-2
  • private subnets: 10.0.10x.0/24, with x between 0-2
  • db subnets: 10.0.20x.0/24, with x between 0-2
  2. I created the following security groups:
  • sg "from-internet"
    input: 22/80/443 - 0.0.0.0/0
    output: All - 0.0.0.0/0

  • sg "from-public-subnet"
    input: 22/80/443 - 10.0.x.0/24, with x between 0-2
    output: All - 0.0.0.0/0

  • sg "from-private"
    input: 22/80/443 - 10.0.10x.0/24, with x between 0-2
    output: All - 0.0.0.0/0

  • sg "from-private-to-db"
    input: 5432 - 10.0.10x.0/24, with x between 0-2
    output: All - 0.0.0.0/0

  3. I created an RDS Postgres instance in the db subnets,
    with security group "from-private-to-db".

I use the bastion module with:

  • ELB subnets in the public subnets, 10.0.x.0/24, with x between 0-2
  • bastion EC2 in the public subnets, 10.0.10x.0/24, with x between 0-2

I succeed in connecting to the bastion host with ssh -i ec2-user@, but I can't access RDS from there...

Do you have any idea?

[Enhancement] Add description on inbound and outbound rules of SG

Hi,

The aws_security_group resource does not allow adding a description to the inbound and outbound rules.

To fix this, we can add two aws_security_group_rule resources, each with a description, as follows:

resource "aws_security_group" "bastion_host_security_group" {
  description = "Enable SSH access to the bastion host from external via SSH port"
  vpc_id      = "${var.vpc_id}"

  tags = "${merge(var.tags)}"
}

resource "aws_security_group_rule" "ingress_bastion" {
  description = "Incoming traffic to bastion"
  type        = "ingress"
  from_port   = "${var.public_ssh_port}"
  to_port     = "${var.public_ssh_port}"
  protocol    = "TCP"
  cidr_blocks = "${var.cidrs}"

  security_group_id = "${aws_security_group.bastion_host_security_group.id}"
}

resource "aws_security_group_rule" "egress_bastion" {
  description = "Outgoing traffic from bastion to instances"
  type        = "egress"
  from_port   = "0"
  to_port     = "65535"
  protocol    = "-1"
  cidr_blocks = ["0.0.0.0/0"]

  security_group_id = "${aws_security_group.bastion_host_security_group.id}"
}

Thank you.

Jeremie
