
AWS VPC Module

This module can be used to deploy a pragmatic VPC with various subnet types across a configurable number of AZs. Common deployment examples can be found in examples/.

Note: For information regarding the 4.0 upgrade see our upgrade guide.

Usage

The example below builds a dual-stack VPC with public and private subnets in 3 AZs. Each subnet calculates an IPv4 CIDR based on the netmask argument passed, and an IPv6 CIDR with a /64 prefix length. The public subnets build NAT gateways in each AZ, but this can optionally be switched to single_az. An Egress-only Internet gateway is created by setting the variable vpc_egress_only_internet_gateway.

module "vpc" {
  source  = "aws-ia/vpc/aws"
  version = ">= 4.2.0"

  name                                 = "multi-az-vpc"
  cidr_block                           = "10.0.0.0/16"
  vpc_assign_generated_ipv6_cidr_block = true
  vpc_egress_only_internet_gateway     = true
  az_count                             = 3

  subnets = {
    # Dual-stack subnet
    public = {
      name_prefix               = "my_public" # omit to prefix with "public"
      netmask                   = 24
      assign_ipv6_cidr          = true
      nat_gateway_configuration = "all_azs" # options: "single_az", "none"
    }
    # IPv4 only subnet
    private = {
      # omitting name_prefix defaults value to "private"
      # name_prefix  = "private_with_egress"
      netmask      = 24
      connect_to_public_natgw = true
    }
    # IPv6-only subnet
    private_ipv6 = {
      ipv6_native      = true
      assign_ipv6_cidr = true
      connect_to_eigw  = true
    }
  }

  vpc_flow_logs = {
    log_destination_type = "cloud-watch-logs"
    retention_in_days    = 180
  }
}

Reserved Subnet Key Names

There are 3 reserved keys for subnet key names in var.subnets, corresponding to the types "public", "transit_gateway", and "core_network" (an AWS Cloud WAN feature). Any other custom subnet key name is valid, and those subnets will be created as private subnets.

subnets = {
  public = {
    name_prefix               = "my-public" # omit to prefix with "public"
    netmask                   = 24
    nat_gateway_configuration = "all_azs" # options: "single_az", "none"
  }

  # naming private is not required, can use any key
  private = {
    # omitting name_prefix defaults value to "private"
    # name_prefix  = "private"
    netmask      = 24
    connect_to_public_natgw = true
  }

  # can be any valid key name
  privatetwo = {
    # omitting name_prefix defaults value to "privatetwo"
    # name_prefix  = "private"
    netmask      = 24
  }
}

transit_gateway_id = <>
transit_gateway_routes = {
  private = "0.0.0.0/0"
  vpce    = "pl-123"
}
transit_gateway_ipv6_routes = {
  private = "::/0"
}

subnets = {
  private = {
    netmask          = 24
    assign_ipv6_cidr = true
  }
  vpce = { netmask = 24 }

  transit_gateway = {
    netmask                                         = 28
    assign_ipv6_cidr                                = true
    transit_gateway_default_route_table_association = true
    transit_gateway_default_route_table_propagation = true
    transit_gateway_appliance_mode_support          = "enable"
    transit_gateway_dns_support                     = "disable"

    tags = {
      subnet_type = "tgw"
    }
  }
}
core_network = {
  id  = <>
  arn = <>
}
core_network_routes = {
  workload = "pl-123"
}
core_network_ipv6_routes = {
  workload = "::/0"
}

subnets = {
  workload = {
    name_prefix      = "workload-private"
    netmask          = 24
    assign_ipv6_cidr = true
  }

  core_network = {
    netmask                = 28
    assign_ipv6_cidr       = true
    appliance_mode_support = false
    require_acceptance     = true
    accept_attachment      = true

    tags = {
      env = "prod"
    }
  }
}

Updating a VPC with new or removed subnets

If using netmask or assign_ipv6_cidr to calculate subnets and you wish to either add or remove subnets (ex: adding / removing an AZ), you may have to stop using netmask / assign_ipv6_cidr for some subnets and set explicit CIDRs instead. Subnets are calculated in lexicographical order, meaning the subnet named "private" is calculated before "public".

When changing to explicit CIDRs, subnets are always ordered by AZ: 0 -> a, 1 -> b, etc.

Example: Changing from 2 AZs to 3

Before:

cidr_block                           = "10.0.0.0/16"
vpc_assign_generated_ipv6_cidr_block = true
az_count                             = 2

subnets = {
  public = {
    netmask          = 24
    assign_ipv6_cidr = true
  }

  private = {
   netmask          = 24
   assign_ipv6_cidr = true
  }
}

After:

cidr_block                           = "10.0.0.0/16"
vpc_assign_generated_ipv6_cidr_block = true
az_count = 3

subnets = {
  public = {
    cidrs      = ["10.0.0.0/24", "10.0.1.0/24", "10.0.4.0/24"]
    ipv6_cidrs = ["2a05:d01c:bc3:b200::/64", "2a05:d01c:bc3:b201::/64", "2a05:d01c:bc3:b204::/64"]
  }

  private = {
    cidrs      = ["10.0.2.0/24", "10.0.3.0/24", "10.0.5.0/24"]
    ipv6_cidrs = ["2a05:d01c:bc3:b202::/64", "2a05:d01c:bc3:b203::/64", "2a05:d01c:bc3:b205::/64"]
  }
}

The change above results in only 2 new subnets being created, in AZ c of the region being used.

Output usage examples

The outputs in this module attempt to align to a methodology of outputting resource attributes in a reasonable collection. The benefit of this is that, most likely, the attributes you want access to are already present without having to create a new output {} for each possible attribute. The [potential] downside is that you will have to extract them yourself using HCL logic. Below are some common examples:

For more examples and explanation, see the [output docs](https://github.com/aws-ia/terraform-aws-vpc/blob/main/docs/how-to-use-module-outputs.md).

Extracting subnet IDs for private subnets

Example Configuration:

module "vpc" {
  source  = "aws-ia/vpc/aws"
  version = ">= 4.2.0"

  name       = "multi-az-vpc"
  cidr_block = "10.0.0.0/20"
  az_count   = 3

  subnets = {
    private = { netmask = 24 }
  }
}

Extracting subnet_ids to a list (using terraform console for example output):

> [ for _, value in module.vpc.private_subnet_attributes_by_az: value.id]
[
  "subnet-04a86315c4839b519",
  "subnet-02a7249c8652a7136",
  "subnet-09af79b5329b3681f",
]
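
The same map comprehension works for any other attribute exposed by aws_subnet, such as cidr_block. For example, extracting the IPv4 CIDRs (output values are illustrative):

```
> [ for _, value in module.vpc.private_subnet_attributes_by_az: value.cidr_block]
[
  "10.0.0.0/24",
  "10.0.1.0/24",
  "10.0.2.0/24",
]
```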

Alternatively, since these are maps, you can use the keys in another resource's for_each loop. The benefit here is that your dependent resource will have keys that match the AZ the subnet is in:

resource "aws_route53recoveryreadiness_cell" "cell_per_az" {
  for_each = module.vpc.private_subnet_attributes_by_az

  cell_name = "${each.key}-failover-cell-for-subnet-${each.value.id}"
}
...

Terraform Plan:

# aws_route53recoveryreadiness_cell.cell_per_az["us-east-1a"] will be created
+ resource "aws_route53recoveryreadiness_cell" "cell_per_az" {
    + cell_name               = "us-east-1a-failover-cell-for-subnet-subnet-070696086c5864da1"
    ...
  }

# aws_route53recoveryreadiness_cell.cell_per_az["us-east-1b"] will be created
...

Common Errors and their Fixes

Error creating routes to Core Network

Error:

error creating Route in Route Table (rtb-xxx) with destination (YYY): InvalidCoreNetworkArn.NotFound: The core network arn 'arn:aws:networkmanager::XXXX:core-network/core-network-YYYYY' does not exist.

This happens when the Core Network's VPC attachment requires acceptance, so it's not possible to create the routes in the VPC until the attachment is accepted. Check the following:

  • If the VPC attachment requires acceptance and you want the module to automatically accept it, configure require_acceptance and accept_attachment to true.
subnets = {
  core_network = {
    netmask            = 28
    assign_ipv6_cidr   = true
    require_acceptance = true
    accept_attachment  = true
  }
}
  • If the VPC attachment requires acceptance but you want to accept it outside the module, first configure require_acceptance to true and accept_attachment to false.
subnets = {
  core_network = {
    netmask            = 28
    assign_ipv6_cidr   = true
    require_acceptance = true
    accept_attachment  = false
  }
}

After you apply and the attachment is accepted (outside the module), change the subnet configuration with require_acceptance to false.

subnets = {
  core_network = {
    netmask            = 28
    assign_ipv6_cidr   = true
    require_acceptance = false
  }
}
  • Alternatively, you can also leave the subnet routes to the Core Network (var.core_network_routes) unconfigured until the attachment gets accepted.
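
A minimal sketch of that last option (the subnet key "workload" and the `<>` placeholders are illustrative): first apply with the attachment but no routes, then add the routes once the attachment is accepted and re-apply.

```hcl
# Phase 1: create the VPC and the Core Network attachment;
# leave var.core_network_routes unset until the attachment is accepted.
module "vpc" {
  source  = "aws-ia/vpc/aws"
  version = ">= 4.2.0"

  name       = "cwan-vpc"
  cidr_block = "10.0.0.0/16"
  az_count   = 2

  core_network = {
    id  = <>
    arn = <>
  }

  subnets = {
    workload = { netmask = 24 }

    core_network = {
      netmask            = 28
      require_acceptance = true
    }
  }
}

# Phase 2 (after the attachment is accepted): add the routes and re-apply.
# core_network_routes = {
#   workload = "0.0.0.0/0"
# }
```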

Contributing

Please see our developer documentation for guidance on contributing to this module.

Requirements

Name Version
terraform >= 1.3.0
aws >= 5.0.0

Providers

Name Version
aws >= 5.0.0

Modules

Name Source Version
calculate_subnets ./modules/calculate_subnets n/a
calculate_subnets_ipv6 ./modules/calculate_subnets_ipv6 n/a
flow_logs ./modules/flow_logs n/a
subnet_tags aws-ia/label/aws 0.0.5
tags aws-ia/label/aws 0.0.5
vpc_lattice_tags aws-ia/label/aws 0.0.5

Resources

Name Type
aws_ec2_transit_gateway_vpc_attachment.tgw resource
aws_egress_only_internet_gateway.eigw resource
aws_eip.nat resource
aws_internet_gateway.main resource
aws_nat_gateway.main resource
aws_networkmanager_attachment_accepter.cwan resource
aws_networkmanager_vpc_attachment.cwan resource
aws_route.cwan_to_nat resource
aws_route.ipv6_private_to_cwan resource
aws_route.ipv6_private_to_tgw resource
aws_route.ipv6_public_to_cwan resource
aws_route.ipv6_public_to_tgw resource
aws_route.private_to_cwan resource
aws_route.private_to_egress_only resource
aws_route.private_to_nat resource
aws_route.private_to_tgw resource
aws_route.public_ipv6_to_igw resource
aws_route.public_to_cwan resource
aws_route.public_to_igw resource
aws_route.public_to_tgw resource
aws_route.tgw_to_nat resource
aws_route_table.cwan resource
aws_route_table.private resource
aws_route_table.public resource
aws_route_table.tgw resource
aws_route_table_association.cwan resource
aws_route_table_association.private resource
aws_route_table_association.public resource
aws_route_table_association.tgw resource
aws_subnet.cwan resource
aws_subnet.private resource
aws_subnet.public resource
aws_subnet.tgw resource
aws_vpc.main resource
aws_vpc_ipv4_cidr_block_association.secondary resource
aws_vpclattice_service_network_vpc_association.vpc_lattice_service_network_association resource
aws_availability_zones.current data source
aws_vpc.main data source

Inputs

Name Description Type Default Required
az_count Searches region for number of AZs to use and takes a slice based on count. Assumes the slice is sorted a-z. number n/a yes
azs A list of availability zones names list(string) [] no
name Name to give VPC. Note: does not affect subnet names, which get assigned a name based on name_prefix. string n/a yes
subnets Configuration of subnets to build in VPC. 1 Subnet per AZ is created. Subnet types are defined as maps with the available keys: "private", "public", "transit_gateway", "core_network". Each Subnet type offers its own set of available arguments detailed below.

Attributes shared across subnet types:
- cidrs = (Optional|list(string)) Cannot set if netmask is set. List of IPv4 CIDRs to set to subnets. Count of CIDRs defined must match quantity of azs in az_count.
- netmask = (Optional|Int) Cannot set if cidrs is set. Netmask of the var.cidr_block to calculate for each subnet.
- assign_ipv6_cidr = (Optional|bool) Cannot set if ipv6_cidrs is set. If true, it will calculate a /64 block from the IPv6 VPC CIDR to set in the subnets.
- ipv6_cidrs = (Optional|list(string)) Cannot set if assign_ipv6_cidr is set. List of IPv6 CIDRs to set to subnets. The subnet size must use a /64 prefix length. Count of CIDRs defined must match quantity of azs in az_count.
- name_prefix = (Optional|String) A string prefix to use for the name of your subnet and associated resources. Subnet type key name is used if omitted (aka private, public, transit_gateway). Example name_prefix = "private" for var.subnets.private is redundant.
- tags = (Optional|map(string)) Tags to set on the subnet and associated resources.

Any private subnet type options:
- All shared keys above
- connect_to_public_natgw = (Optional|bool) Determines if routes to NAT Gateways should be created. Must also set var.subnets.public.nat_gateway_configuration in public subnets.
- ipv6_native = (Optional|bool) Indicates whether to create an IPv6-only subnet. Either var.assign_ipv6_cidr or var.ipv6_cidrs should be defined to allocate an IPv6 CIDR block.
- connect_to_eigw = (Optional|bool) Determines if routes to the Egress-only Internet gateway should be created. Must also set var.vpc_egress_only_internet_gateway.

public subnet type options:
- All shared keys above
- nat_gateway_configuration = (Optional|string) Determines if NAT Gateways should be created and in how many AZs. Valid values = "none", "single_az", "all_azs". Default = "none". Must also set var.subnets.private.connect_to_public_natgw = true.
- connect_to_igw = (Optional|bool) Determines if the default route (0.0.0.0/0 or ::/0) is created in the public subnets with destination the Internet gateway. Defaults to true.
- ipv6_native = (Optional|bool) Indicates whether to create an IPv6-only subnet. Either var.assign_ipv6_cidr or var.ipv6_cidrs should be defined to allocate an IPv6 CIDR block.
- map_public_ip_on_launch = (Optional|bool) Specify true to indicate that instances launched into the subnet should be assigned a public IP address. Defaults to false.

transit_gateway subnet type options:
- All shared keys above
- connect_to_public_natgw = (Optional|string) Determines if routes to NAT Gateways should be created. Specify the CIDR range or a prefix-list-id that you want routed to nat gateway. Usually 0.0.0.0/0. Must also set var.subnets.public.nat_gateway_configuration.
- transit_gateway_default_route_table_association = (Optional|bool) Boolean whether the VPC Attachment should be associated with the EC2 Transit Gateway association default route table. This cannot be configured or perform drift detection with Resource Access Manager shared EC2 Transit Gateways.
- transit_gateway_default_route_table_propagation = (Optional|bool) Boolean whether the VPC Attachment should propagate routes with the EC2 Transit Gateway propagation default route table. This cannot be configured or perform drift detection with Resource Access Manager shared EC2 Transit Gateways.
- transit_gateway_appliance_mode_support = (Optional|string) Whether Appliance Mode is enabled. If enabled, a traffic flow between a source and a destination uses the same Availability Zone for the VPC attachment for the lifetime of that flow. Valid values: disable (default) and enable.
- transit_gateway_dns_support = (Optional|string) DNS Support is used if you need the VPC to resolve public IPv4 DNS host names to private IPv4 addresses when queried from instances in another VPC attached to the transit gateway. Valid values: enable (default) and disable.

core_network subnet type options:
- All shared keys above
- connect_to_public_natgw = (Optional|string) Determines if routes to NAT Gateways should be created. Specify the CIDR range or a prefix-list-id that you want routed to nat gateway. Usually 0.0.0.0/0. Must also set var.subnets.public.nat_gateway_configuration.
- appliance_mode_support = (Optional|bool) Indicates whether appliance mode is supported. If enabled, traffic flow between a source and destination use the same Availability Zone for the VPC attachment for the lifetime of that flow. Defaults to false.
- require_acceptance = (Optional|bool) Boolean whether the core network VPC attachment to create requires acceptance or not. Defaults to false.
- accept_attachment = (Optional|bool) Boolean whether the core network VPC attachment is accepted or not in the segment. Only valid if require_acceptance is set to true. Defaults to true.

Example:
subnets = {
# Dual-stack subnet
public = {
netmask = 24
assign_ipv6_cidr = true
nat_gateway_configuration = "single_az"
}
# IPv4 only subnet
private = {
netmask = 24
connect_to_public_natgw = true
}
# IPv6 only subnet
ipv6 = {
ipv6_native = true
assign_ipv6_cidr = true
connect_to_eigw = true
}
# Transit gateway subnets (dual-stack)
transit_gateway = {
netmask = 24
assign_ipv6_cidr = true
connect_to_public_natgw = true
transit_gateway_default_route_table_association = true
transit_gateway_default_route_table_propagation = true
}
# Core Network subnets (dual-stack)
core_network = {
netmask = 24
assign_ipv6_cidr = true
connect_to_public_natgw = true
appliance_mode_support = true
require_acceptance = true
accept_attachment = true
}
}
any n/a yes
cidr_block IPv4 CIDR range to assign to VPC if creating VPC or to associate as a secondary IPv4 CIDR. Overridden by var.vpc_id output from data.aws_vpc. string null no
core_network AWS Cloud WAN's core network information - to create a VPC attachment. Required when a core_network subnet is defined. Two attributes are required: the id and arn of the resource.
object({
id = string
arn = string
})
{
"arn": null,
"id": null
}
no
core_network_ipv6_routes Configuration of IPv6 route(s) to AWS Cloud WAN's core network.
For each public and/or private subnets named in the subnets variable, optionally create routes from the subnet to the core network.
You can specify either a CIDR range or a prefix-list-id that you want routed to the core network.
Example:
core_network_ipv6_routes = {
public = "::/0"
private = "pl-123"
}
any {} no
core_network_routes Configuration of route(s) to AWS Cloud WAN's core network.
For each public and/or private subnets named in the subnets variable, optionally create routes from the subnet to the core network.
You can specify either a CIDR range or a prefix-list-id that you want routed to the core network.
Example:
core_network_routes = {
public = "10.0.0.0/8"
private = "pl-123"
}
any {} no
create_vpc Determines whether to create the VPC or not; defaults to enabling the creation. bool true no
tags Tags to apply to all resources. map(string) {} no
transit_gateway_id Transit gateway id to attach the VPC to. Required when transit_gateway subnet is defined. string null no
transit_gateway_ipv6_routes Configuration of IPv6 route(s) to transit gateway.
For each public and/or private subnets named in the subnets variable,
Optionally create routes from the subnet to transit gateway. Specify the CIDR range or a prefix-list-id that you want routed to the transit gateway.
Example:
transit_gateway_ipv6_routes = {
public = "::/0"
private = "pl-123"
}
any {} no
transit_gateway_routes Configuration of route(s) to transit gateway.
For each public and/or private subnets named in the subnets variable,
Optionally create routes from the subnet to transit gateway. Specify the CIDR range or a prefix-list-id that you want routed to the transit gateway.
Example:
transit_gateway_routes = {
public = "10.0.0.0/8"
private = "pl-123"
}
any {} no
vpc_assign_generated_ipv6_cidr_block Requests an Amazon-provided IPv6 CIDR block with a /56 prefix length. You cannot specify the range of IP addresses, or the size of the CIDR block. Conflicts with vpc_ipv6_ipam_pool_id. bool null no
vpc_egress_only_internet_gateway Set to use the Egress-only Internet gateway for all IPv6 traffic going to the Internet. bool false no
vpc_enable_dns_hostnames Indicates whether the instances launched in the VPC get DNS hostnames. If enabled, instances in the VPC get DNS hostnames; otherwise, they do not. Disabled by default for nondefault VPCs. bool true no
vpc_enable_dns_support Indicates whether the DNS resolution is supported for the VPC. If enabled, queries to the Amazon provided DNS server at the 169.254.169.253 IP address, or the reserved IP address at the base of the VPC network range "plus two" succeed. If disabled, the Amazon provided DNS service in the VPC that resolves public DNS hostnames to IP addresses is not enabled. Enabled by default. bool true no
vpc_flow_logs Whether or not to create VPC flow logs and which type. Options: "cloudwatch", "s3", "none". By default creates flow logs to cloudwatch. Variable overrides null value types for some keys, defined in defaults.tf.
object({
name_override = optional(string, "")
log_destination = optional(string)
iam_role_arn = optional(string)
kms_key_id = optional(string)

log_destination_type = string
retention_in_days = optional(number)
tags = optional(map(string))
traffic_type = optional(string, "ALL")
destination_options = optional(object({
file_format = optional(string, "plain-text")
hive_compatible_partitions = optional(bool, false)
per_hour_partition = optional(bool, false)
}))
})
{
"log_destination_type": "none"
}
no
vpc_id VPC ID to use if not creating VPC. string null no
vpc_instance_tenancy The allowed tenancy of instances launched into the VPC. string "default" no
vpc_ipv4_ipam_pool_id Set to use IPAM to get an IPv4 CIDR block. string null no
vpc_ipv4_netmask_length Set to use IPAM to get an IPv4 CIDR block using a specified netmask. Must be set with var.vpc_ipv4_ipam_pool_id. string null no
vpc_ipv6_cidr_block IPv6 CIDR range to assign to VPC if creating VPC. You need to use vpc_ipv6_ipam_pool_id and set explicitly the CIDR block to use, or derive it from IPAM using vpc_ipv6_netmask_length. string null no
vpc_ipv6_ipam_pool_id Set to use IPAM to get an IPv6 CIDR block. string null no
vpc_ipv6_netmask_length Set to use IPAM to get an IPv6 CIDR block using a specified netmask. Must be set with var.vpc_ipv6_ipam_pool_id. string null no
vpc_lattice Amazon VPC Lattice Service Network VPC association. You can only associate one Service Network to the VPC. This association also supports Security Groups (more than one).
This variable expects the following attributes:
- service_network_identifier = (Required|string) The ID or ARN of the Service Network to associate. You must use the ARN if the Service Network and VPC resources are in different AWS Accounts.
- security_group_ids = (Optional|list(string)) The IDs of the security groups to attach to the association.
- tags = (Optional|map(string)) Tags to set on the Lattice VPC association resource.
any {} no
vpc_secondary_cidr If true, the module will create an aws_vpc_ipv4_cidr_block_association and subnets for that secondary CIDR. If using IPAM for both primary and secondary CIDRs, you may only call this module serially (aka using -target, etc). bool false no
vpc_secondary_cidr_natgw If attaching a secondary IPv4 CIDR instead of creating a VPC, you can map private / tgw subnets to your public NAT GW with this argument. Simply pass the output nat_gateway_attributes_by_az, ex: vpc_secondary_cidr_natgw = module.vpc.natgw_id_per_az. If you did not build your primary with this module, you must construct a map { az : { id : nat-123asdb }} for each az. any {} no
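
A minimal sketch of this input, assuming the primary VPC was built with this module (the module name "secondary" and the CIDR values are illustrative; the hand-built map shape is shown in the comment):

```hcl
module "secondary" {
  source  = "aws-ia/vpc/aws"
  version = ">= 4.2.0"

  name               = "secondary-cidr"
  cidr_block         = "10.2.0.0/16"
  vpc_secondary_cidr = true
  vpc_id             = module.vpc.vpc_attributes.id
  az_count           = 2

  # Map private subnets in the secondary CIDR to the primary VPC's NAT gateways.
  vpc_secondary_cidr_natgw = module.vpc.natgw_id_per_az
  # If the primary VPC was not built with this module, construct the map yourself:
  # vpc_secondary_cidr_natgw = { "us-east-1a" = { id = "nat-123asdb" } }

  subnets = {
    private = {
      netmask                 = 24
      connect_to_public_natgw = true
    }
  }
}
```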

Outputs

Name Description
azs List of AZs where subnets are created.
core_network_attachment AWS Cloud WAN's core network attachment. Full output of aws_networkmanager_vpc_attachment.
core_network_subnet_attributes_by_az Map of all core_network subnets containing their attributes.

Example:
core_network_subnet_attributes_by_az = {
"us-east-1a" = {
"arn" = "arn:aws:ec2:us-east-1:<>:subnet/subnet-04a86315c4839b519"
"assign_ipv6_address_on_creation" = false
...
<all attributes of subnet: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/subnet#attributes-reference>
}
"us-east-1b" = {...)
}
egress_only_internet_gateway Egress-only Internet gateway attributes. Full output of aws_egress_only_internet_gateway.
flow_log_attributes Flow Log information.
internet_gateway Internet gateway attributes. Full output of aws_internet_gateway.
nat_gateway_attributes_by_az Map of nat gateway resource attributes by AZ.

Example:
nat_gateway_attributes_by_az = {
"us-east-1a" = {
"allocation_id" = "eipalloc-0e8b20303eea88b13"
"connectivity_type" = "public"
"id" = "nat-0fde39f9550f4abb5"
"network_interface_id" = "eni-0d422727088bf9a86"
"private_ip" = "10.0.3.40"
"public_ip" = <>
"subnet_id" = "subnet-0f11c92e439c8ab4a"
"tags" = tomap({
"Name" = "nat-my-public-us-east-1a"
})
"tags_all" = tomap({
"Name" = "nat-my-public-us-east-1a"
})
}
"us-east-1b" = { ... }
}
natgw_id_per_az Map of nat gateway IDs per AZ. Will contain duplicate IDs if var.subnets.public.nat_gateway_configuration = "single_az".

Example:
natgw_id_per_az = {
"us-east-1a" = {
"id" = "nat-0fde39f9550f4abb5"
}
"us-east-1b" = {
"id" = "nat-0fde39f9550f4abb5"
}
}
private_subnet_attributes_by_az Map of all private subnets containing their attributes.

Example:
private_subnet_attributes_by_az = {
"private/us-east-1a" = {
"arn" = "arn:aws:ec2:us-east-1:<>:subnet/subnet-04a86315c4839b519"
"assign_ipv6_address_on_creation" = false
...
<all attributes of subnet: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/subnet#attributes-reference>
}
"us-east-1b" = {...)
}
public_subnet_attributes_by_az Map of all public subnets containing their attributes.

Example:
public_subnet_attributes_by_az = {
"us-east-1a" = {
"arn" = "arn:aws:ec2:us-east-1:<>:subnet/subnet-04a86315c4839b519"
"assign_ipv6_address_on_creation" = false
...
<all attributes of subnet: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/subnet#attributes-reference>
}
"us-east-1b" = {...)
}
rt_attributes_by_type_by_az Map of route tables by type => az => route table attributes. Example usage: module.vpc.rt_attributes_by_type_by_az.private.id

Example:
rt_attributes_by_type_by_az = {
"private" = {
"us-east-1a" = {
"id" = "rtb-0e77040c0598df003"
"tags" = tolist([
{
"key" = "Name"
"value" = "private-us-east-1a"
},
])
"vpc_id" = "vpc-033e054f49409592a"
...
<all attributes of route: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route_table#attributes-reference>
}
"us-east-1b" = { ... }
"public" = { ... }
tgw_subnet_attributes_by_az Map of all tgw subnets containing their attributes.

Example:
tgw_subnet_attributes_by_az = {
"us-east-1a" = {
"arn" = "arn:aws:ec2:us-east-1:<>:subnet/subnet-04a86315c4839b519"
"assign_ipv6_address_on_creation" = false
...
<all attributes of subnet: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/subnet#attributes-reference>
}
"us-east-1b" = {...)
}
transit_gateway_attachment_id Transit gateway attachment id.
vpc_attributes VPC resource attributes. Full output of aws_vpc.
vpc_lattice_service_network_association VPC Lattice Service Network VPC association. Full output of aws_vpclattice_service_network_vpc_association.

terraform-aws-vpc's People

Contributors

adamtylerlynch, adrianbegg, adrianeib, andrew-glenn, censullo, darrenhorwitz1, dawright22, dralbert, drewmullen, fe-ax, herrerasm, jaymccon, maheshr-amzn, netdevautomate, pablo19sc, sshvans, tbulding, tlindsay42, tonynv, troy-ameigh, vivgoyal-aws


terraform-aws-vpc's Issues

Please define the scope and goals of the AWS-IA Modules vs Terraform AWS Modules

Hi AWS Integration and Automation 👋

There are several concerns over the modules being developed under AWS-IA vs Terraform AWS Modules

https://www.youtube.com/watch?v=h21sd7-hQoc&t=911s
https://registry.terraform.io/namespaces/terraform-aws-modules
https://twitter.com/andrewbrown/status/1442915716177350657

Due to:

  • a lack of understanding of the scope and goals of AWS-IA modules
  • the alpha state of the AWS-IA modules
  • the marketing rollout of the alpha modules
  • the replacement in AWS Samples and the purpose to serve them via AWS QuickStarts

This is drawing a lot of speculation in the AWS Terraform community, and there is fear of a negative impact on the community.

Define the Scope

My recommendation to AWS is to clearly define the goals of these modules and how they will be different from Terraform AWS Modules. Here are reasons why you want to develop your own modules:

  • The open-source Licensing of Terraform AWS Modules does not meet the legal expectations for AWS or its customers and serves to cause friction for the adoption of AWS and Terraform for specific organizations, and so these alternative modules are provided to alleviate that issue.
  • AWS wishes to provide AWS modules with more robust testing, and these modules are being written to adopt TerraTest end-to-end integration tests
  • AWS wants to provide simple sample modules, not recommended for enterprise production use cases but rather a base template that customers can fork and adapt to meet their organization's requirements. AWS wants to set a good baseline to put customers on the right path, but these AWS modules are limited compared to Terraform AWS Modules.
  • AWS wants to provide the benefit of technical support for a suite of AWS modules for their customers, and has created their own modules so they can control the speed at which they roll out bug fixes to meet SLAs. AWS-IA modules are limited compared to Terraform AWS Modules, but that's the trade-off.

Make Effort to Work with the Community

AWS does not have to, but it would be appreciated, to include community members who have grown the AWS Terraform space over the last 5 years in these OSS efforts, to see if there is room for inclusion or collaboration, or to at least acknowledge the existence of these other projects to date.

AWS's track record with OSS projects has been spotty; we (the community) understand AWS is a company with self-interests and sometimes it ends up eating other people's lunch in the course of business, no different than the whale sifting the sea of phytoplankton.

https://news.ycombinator.com/item?id=24802924

It's just how you go about it.

— Andrew Brown (AWS Community Hero)

TGW route back to tgw

    transit_gateway = {
      netmask                                         = 28
      transit_gateway_id                              = aws_ec2_transit_gateway.example.id
      route_to_nat                                    = false
      transit_gateway_default_route_table_association = true
      transit_gateway_default_route_table_propagation = true
      route_to_tgw                                    = [""]
    }

(association to rt)

Support to enable "appliance_mode_support" in the TGW attachment

For some types of VPCs (Inspection VPCs), appliance mode is needed so that packets are not dropped when the response traffic comes from a different AZ. By default, this value is "disable", so the ability to change it when defining the transit_gateway subnets would enable this configuration.
Proposed add-on in transit_gateway subnets:

subnets = {
  transit_gateway = {
    netmask                                         = 24
    transit_gateway_id                              = aws_ec2_transit_gateway.example.id
    route_to_nat                                    = false
    transit_gateway_default_route_table_association = true
    transit_gateway_default_route_table_propagation = true
    appliance_mode_support                          = "enable"
  }
}

support cloudwan core network subnets

module "vpc" {
  source  = "aws-ia/vpc/aws"
  version = ">= 1.0.0"

  name       = "tgw"
  cidr_block = "10.0.0.0/16"
  az_count   = 2

  subnets = {
    public = {
      netmask                   = 24
      nat_gateway_configuration = "single_az"
      route_to_core_network     = ["10.0.0.0/8"]
    }

    private = {
      netmask               = 24
      route_to_nat          = true
      route_to_core_network = ["10.0.0.0/8"]
    }

    core_network = {
      netmask         = 28
      core_network_id = awscc_networkmanager_core_network.example.id
      route_to_nat    = false
      ipv6_support    = true
    }
  }
}

validation bug

│ Error: Invalid variable validation result
│ 
│   on .terraform/modules/vpcs/variables.tf line 197, in variable "subnets":
│  197:     condition     = try(var.subnets.private.route_to_nat, false) ? try(var.subnets.private.route_to_transit_gateway[0] != "0.0.0.0/0", true) : null
│     ├────────────────
│     │ var.subnets.private is object with 3 attributes
│     │ var.subnets.private.route_to_nat is false
│ 
│ Validation condition expression must return either true or false, not null.

secondary cidr feature breaks if connect_to_public_natgw = true

If I create a secondary CIDR as below:

module "secondary" {
  source  = "aws-ia/vpc/aws"
  version = ">= 2.0.0"

  name       = "secondary-cidr"
  cidr_block = "10.2.0.0/16"

  vpc_secondary_cidr = true
  vpc_id             = module.vpc.vpc_attributes.id
  az_count           = 2

  subnets = {
    eks = {
      name_prefix             = "vpce"
      cidrs                   = ["10.2.0.0/18", "10.2.64.0/18"]
      connect_to_public_natgw = true
      #route_to_transit_gateway = "pl-"
      #tags = var.vpc_shared_svc_vpce_subnet_tags
    }
  }
}

it breaks with the following error:

│   on .terraform\modules\secondary\main.tf line 172, in resource "aws_route" "private_to_nat":
│  172:   nat_gateway_id = try(aws_nat_gateway.main[split("/", each.key)[1]].id, aws_nat_gateway.main[local.nat_configuration[0]].id)
│     ├────────────────
│     │ aws_nat_gateway.main is object with no attributes
│     │ each.key is "eks/us-east-1a"
│     │ local.nat_configuration is empty tuple
│
│ Call to function "try" failed: no expression succeeded:
│ - Invalid index (at .terraform\modules\secondary\main.tf:172,44-69)
│   The given key does not identify an element in this collection value.
│ - Invalid index (at .terraform\modules\secondary\main.tf:172,118-121)
│   The given key does not identify an element in this collection value: the collection has no elements.
│
│ At least one expression must produce a successful result.

This happens because aws_nat_gateway.main is not available in module.secondary.
Without connect_to_public_natgw = true it works, but the route to the NAT gateway is then missing from the secondary VPC's subnet route tables.

tags specified at the subnet level have no effect

I expect tags specified at each subnet level to be applied to those subnets, merged with the tags specified at the VPC level.
The actual outcome is that only VPC-level tags are applied; subnet-level tags are ignored.
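A minimal sketch of the expected merge behavior (the tag values here are illustrative, and the merge order is my assumption of what the module should do, not its current code):

```hcl
# Sketch: merge() lets later maps override earlier ones, so listing the
# subnet-level tags last gives them precedence over VPC-level tags.
locals {
  vpc_tags    = { managed_by = "terraform" } # VPC-level input
  subnet_tags = { tier = "private" }         # per-subnet input (currently ignored)

  effective_subnet_tags = merge(local.vpc_tags, local.subnet_tags)
}
```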

[Enhancement] Export route table associations

Problem statement:
I have a resource (RDS Custom for Oracle) in a private subnet that needs to communicate with S3 via a Gateway endpoint. During termination, the instance pulls scripts from S3 to execute prior to shutting down. When running terraform destroy, Terraform tears down the route table associations early in the lifecycle, and the RDS instance is then unable to reach the S3 Gateway endpoint.

To be able to explicitly add a depends_on to the route table association, I would like to request the route table associations be exported.

I also welcome other suggestions regarding the route table associations.
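A hypothetical usage sketch, assuming the module exported the associations under an output named route_table_associations (that output name is my invention, not the module's current API):

```hcl
# Hypothetical: delay teardown of the RDS instance until after the route
# table associations, so it can still reach S3 during its shutdown scripts.
resource "aws_db_instance" "oracle" {
  # ... instance configuration ...

  # "route_table_associations" is a hypothetical output name
  depends_on = [module.vpc.route_table_associations]
}
```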

Region not declared in deploy

│ Error: Reference to undeclared input variable

│ on main.tf line 14, in provider "aws":
│ 14: region = var.region

│ An input variable with the name "region" has not been declared. This
│ variable can be declared with a variable "region" {} block.
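A minimal fix sketch, assuming the deploy configuration simply never declared the variable its provider block references (the default region is an assumption):

```hcl
# Declare the missing input variable referenced by the provider block
variable "region" {
  description = "AWS region to deploy into"
  type        = string
  default     = "us-east-1" # assumed default; override via -var or a tfvars file
}

provider "aws" {
  region = var.region
}
```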

Module not rerunable with ipam

When I call the module to create a VPC with IPAM and then rerun Terraform, the plan changes, because the CIDR assigned by IPAM changes on every run. This appears to be caused by the data call:

data "aws_vpc_ipam_preview_next_cidr" "main" {
  count = var.vpc_ipv4_ipam_pool_id == null ? 0 : 1

  ipam_pool_id   = var.vpc_ipv4_ipam_pool_id
  netmask_length = var.vpc_ipv4_netmask_length
}

If you rerun the module after creating a VPC, it gets a new CIDR and tries to destroy the VPC and all resources because the VPC CIDR "changed".

example (run the ipam first and then uncomment the VPC module call and run it twice more)

variable "network_map" {
  description = "map of network modules, check variable details on modules/network"
  type        = map(any)
  default = {
    "ci" = {
      suffix         = "ci_vpc"
      ipam_region    = "us-west-2/non_prod"
      netmask_length = 22
    },
  }
}

module "create_ipam" {
  source = "github.com/aws-ia/terraform-aws-ipam"

  top_cidr = ["10.0.0.0/8"]
  top_name = "Global ipam"

  pool_configurations = {
    us-west-2 = {
      description = "2nd level, locale us-west-2 pool"
      cidr        = ["10.0.0.0/14"]
      locale      = "us-west-2"

      sub_pools = {
        non_prod = {
          name = "non_prod_ipam"
          cidr = ["10.0.0.0/16"]
        }

        prod = {
          name = "prod_ipam"
          cidr = ["10.1.0.0/16"]
        }
      }
    },
  }
}

module "vpc" {
  source  = "aws-ia/vpc/aws"
  version = ">= 1.4.1"

  for_each = var.network_map

  name     = each.value.suffix
  az_count = 2

  vpc_ipv4_ipam_pool_id   = module.create_ipam.pools_level_2[each.value.ipam_region].id
  vpc_ipv4_netmask_length = each.value.netmask_length

  subnets = {
    public = {
      netmask                   = 24
      nat_gateway_configuration = "all_azs"
    }
    private = {
      netmask      = 24
      route_to_nat = true
    }
  }
}

additional problematic behavior

Because the module uses a data source, if an IPAM pool is full even terraform destroy will fail: the data source tries to acquire a new CIDR and reports that there isn't enough room in the pool for the given netmask.

The data source also becomes an issue when you try to stand up IPAM and the VPC in the same module (for the multi-account setup I'm building, with the IPAM and VPC resources in a central "networking" account). It throws the following error:

│ Error: Invalid count argument
│
│   on .terraform/modules/vpc/data.tf line 41, in data "aws_vpc_ipam_preview_next_cidr" "main":
│   41:   count = var.vpc_ipv4_ipam_pool_id == null ? 0 : 1
│
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the count depends on.
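One possible direction (a sketch under my own assumptions, not necessarily the module's eventual fix): let aws_vpc allocate from the IPAM pool directly via its ipv4_ipam_pool_id argument, which avoids previewing a new CIDR on every plan:

```hcl
# Sketch: aws_vpc can allocate its CIDR from an IPAM pool itself, so no
# aws_vpc_ipam_preview_next_cidr data source is needed. The allocated CIDR
# is stored in state and stays stable across reruns.
resource "aws_vpc" "main" {
  ipv4_ipam_pool_id   = var.vpc_ipv4_ipam_pool_id
  ipv4_netmask_length = var.vpc_ipv4_netmask_length
}
```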

Explore implementing Local Zone support

It's possible we could allow users to pass a list of Local Zone IDs instead of an az_count. I can think of 2 primary considerations:

  1. updating local.azs to be a list of LZs if provided

Add VPC CREATE

Adding the following to main.tf, with the variable defined in variables.tf, will allow us to create the VPC conditionally, similar to when calling from the deploy directory.

create_vpc = var.create_vpc
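A hedged sketch of the conditional-creation pattern being proposed (variable, resource names, and the CIDR are illustrative):

```hcl
# Gate VPC creation on a boolean input, a common Terraform pattern
variable "create_vpc" {
  description = "Whether this module should create the VPC"
  type        = bool
  default     = true
}

resource "aws_vpc" "main" {
  count      = var.create_vpc ? 1 : 0
  cidr_block = "10.0.0.0/16" # illustrative

}

# Downstream references then index the conditional resource, e.g.:
# vpc_id = var.create_vpc ? aws_vpc.main[0].id : var.vpc_id
```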

Experiment "module_variable_optional_attrs" is no longer available.

Hello!
Getting the error below when trying to use the module from Terraform v1.3.2.

module "vpc" {
  source  = "aws-ia/vpc/aws"
  version = "3.0.0"

  name       = "vpc"
  cidr_block = "10.0.0.0/20"
  az_count   = 3

  subnets = {
    public = {
      netmask                   = 24
      nat_gateway_configuration = "all_azs" # options: "single_az", "none"
    }

    private = {
      netmask                 = 24
      connect_to_public_natgw = true
    }
  }

  vpc_flow_logs = {
    log_destination_type = "cloud-watch-logs"
    retention_in_days    = 180
    traffic_type         = "REJECT"
  }
}


aws_route involving TGW should wait until TGW is attached to VPC

I found an issue with the terraform-aws-vpc module.
When using a TGW, resource aws_route.public_to_tgw and resource aws_route.private_to_tgw may sometimes fail with an error that the TGW does not exist.

I suspect this happens because resource aws_ec2_transit_gateway_vpc_attachment.tgw and resource aws_ec2_transit_gateway_route_table_association.tgw are not yet complete.

I think these errors could be avoided if the aws_route resources in question waited for the TGW VPC attachment and TGW route table association.
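An illustrative sketch of the suggested fix (the resource addresses here mirror the issue text; the module's actual names and route table references may differ):

```hcl
# Sketch: make the TGW route wait for the VPC attachment, so the route is
# not created before the TGW is actually reachable from this VPC.
resource "aws_route" "private_to_tgw" {
  route_table_id         = aws_route_table.private.id
  destination_cidr_block = "10.0.0.0/8"
  transit_gateway_id     = aws_ec2_transit_gateway.example.id

  depends_on = [
    aws_ec2_transit_gateway_vpc_attachment.tgw,
    aws_ec2_transit_gateway_route_table_association.tgw,
  ]
}
```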

Optionally route to TGW from public/private subnets

The following updates have been requested:

  • route from public/private to tgw
  • allow tgw to route to nat

  subnets = {
    public = {
      netmask = 24
      # "0.0.0.0/0" is explicitly blocked for map to tgw
      route_to_transit_gateway = ["10.0.0.0/8"]
    }

    private = {
      netmask = 24
      # if route_to_nat = true
      # route_to_transit_gateway != 0.0.0.0/0
      route_to_nat = true
      route_to_transit_gateway = ["0.0.0.0/0"]
    }

    transit_gateway = {
      netmask            = 24
      transit_gateway_id = aws_ec2_transit_gateway.example.id
      route_to_nat       = true # default false
      transit_gateway_default_route_table_association = true
      transit_gateway_default_route_table_propagation = true
    }
  }

Replace awscc resources with aws resources

Replace these resources:

  • awscc_ec2_route_table_association
  • awscc_ec2_route_table

Make sure tags are still working

Determine if we should remove calls to the label module

Cannot route to internet through transit gateway

Attempting to set up a multi-account transit network where the spoke account can route through the transit gateway and get to the internet. I created two fresh AWS accounts, applied this Terraform, and let it set up all of the attachments, route tables, subnets, etc., but I can't seem to get to the internet from the spoke. An instance sitting in the spoke can ping things in the network account but can't reach the internet.

                  - Core Network Account -                                                              - Spoke Account -
<igw>--<publicsubnet>--<natgw>--<privatesubnet>--<tgw_attachment>--<tgw>--<tgw_attachment>--<privatesubnet>--<instance>

main.tf

resource "aws_ec2_transit_gateway" "network_tgw" {
  provider                       = aws.core_prod_network
  amazon_side_asn                = 65412
  auto_accept_shared_attachments = "enable"
  dns_support                    = "enable"
  description                    = "Core Prod Network Transit Gateway"
}

resource "aws_ram_resource_share" "core_prod_network" {
  provider                  = aws.core_prod_network
  name                      = "Transit Gateway Resource Share"
  allow_external_principals = true
  tags = {
    Name = "core-prod-network-tgw-resource-share"
  }
}

# Share the transit gateway...
resource "aws_ram_resource_association" "core_prod_network" {
  provider           = aws.core_prod_network
  resource_arn       = aws_ec2_transit_gateway.network_tgw.arn
  resource_share_arn = aws_ram_resource_share.core_prod_network.id
}

#### For every Spoke Account, add this to attach tgw
## ..with the core_prod_devops account
resource "aws_ram_principal_association" "core_prod_devops" {
  provider           = aws.core_prod_network
  principal          = var.vega_accounts.core_prod_devops.id
  resource_share_arn = aws_ram_resource_share.core_prod_network.id
}

## core_prod_network
module "core_prod_network_vpc" {
  providers = {
    aws   = aws.core_prod_network
    awscc = awscc.core_prod_network
  }
  source     = "aws-ia/vpc/aws"
  name       = "core-prod-network-vpc"
  cidr_block = "10.3.0.0/16"
  az_count   = 2

  subnets = {
    public = {
      name_prefix               = "core-prod-network-public"
      netmask                   = 24
      nat_gateway_configuration = "all_azs"
      route_to_transit_gateway  = ["10.0.0.0/8"]
    }

    private = {
      name_prefix              = "core-prod-network-private"
      netmask                  = 24
      route_to_nat             = true
      route_to_transit_gateway = ["10.0.0.0/8"]
    }

    transit_gateway = {
      name_prefix                                     = "core-prod-network-transit"
      netmask                                         = 28
      transit_gateway_id                              = aws_ec2_transit_gateway.network_tgw.id
      route_to_nat                                    = false
      transit_gateway_default_route_table_association = true
      transit_gateway_default_route_table_propagation = true
    }
  }
}


### Output Block we need after every VPC creation
output "core_prod_network_vpc_id" {
  value = module.core_prod_network_vpc.vpc_attributes.id
}

output "core_prod_network_vpc_info" {
  value = module.core_prod_network_vpc
}

## core_prod_devops
module "core_prod_devops_vpc" {
  providers = {
    aws   = aws.core_prod_devops
    awscc = awscc.core_prod_devops
  }
  source     = "aws-ia/vpc/aws"
  name       = "core-prod-devops-vpc"
  cidr_block = "10.7.0.0/16"
  az_count   = 2

  subnets = {
    private = {
      name_prefix              = "core-prod-devops-private"
      netmask                  = 24
      route_to_nat             = false
      route_to_transit_gateway = ["0.0.0.0/0"]
    }

    transit_gateway = {
      name_prefix                                     = "core-prod-devops-transit"
      netmask                                         = 28
      transit_gateway_id                              = aws_ec2_transit_gateway.network_tgw.id
      route_to_nat                                    = false
      transit_gateway_default_route_table_association = true
      transit_gateway_default_route_table_propagation = true
    }
  }
}

I don't know if this is something you have to add to the routing tables in the Terraform to account for this use case, but something is missing from the routing.

2.x Breaking Changes

Changes from 1.x to 2.x

Features & Enhancements

  • Ability to create arbitrary amounts of private subnets. Previously only 3 types were possible: public, private, and transit gateway. The terms public and transit_gateway are reserved keywords for those subnet types; all other keys used in var.subnets.<> are assumed to be of type private.
  • Many private-subnet-related resources had to be renamed. Most changes are accomplished programmatically using moved blocks, but some require manual terraform state mv commands. See below.
  • route_to_nat has been changed to connect_to_public_natgw to clarify that the NAT is in the public subnet, and to diverge from the route_to nomenclature, which expects a route-destination-like input.
  • Can pass a CIDR or prefix list ID to the route_to_transit_gateway argument. Previously it was a list of CIDRs that could only accept 1 item.
  • Many changes to the outputs available. Removed outputs marked as deprecated; separated grouped subnet attribute outputs into public_, tgw_, and private_. Since you can have several private subnet declarations, we group based on the name scheme <your_key_name>/az.

Bugs

  • Fixed a bug where VPCs that were built with a CIDR from IPAM were not idempotent between terraform runs

For help upgrading see our upgrading guide

For the public subnet nat_gateway_configuration = "none" is broken

For the public subnet, any of

nat_gateway_configuration = "none"
nat_gateway_configuration = null
#nat_gateway_configuration = "none"

gives the following error:

Error: Error in function call
│
│   on .terraform\modules\shared_services_vpc\data.tf line 53, in locals:
│   53:     { for az in local.azs : az => { id : try(aws_nat_gateway.main[az].id, aws_nat_gateway.main[local.nat_configuration[0]].id) } }
│     ├────────────────
│     │ aws_nat_gateway.main is object with no attributes
│     │ local.nat_configuration is empty tuple
│
│ Call to function "try" failed: no expression succeeded:
│ - Invalid index (at .terraform\modules\shared_services_vpc\data.tf:53,66-70)
│   The given key does not identify an element in this collection value.
│ - Invalid index (at .terraform\modules\shared_services_vpc\data.tf:53,119-122)
│   The given key does not identify an element in this collection value: the collection has no elements.
│
│ At least one expression must produce a successful result.

For the above conditions local.nat_configuration is [], therefore local.nat_configuration[0] fails.

[Bug]Enable Local zones breaks network topology

Issue Details:

I enabled the AKL Local Zone for the Sydney region; after re-running terraform plan, it wants to recreate the subnets and all resources in them.

Also, Local Zones support only a limited set of services: no TGW support and no NAT gateway support, so enabling them will break the desired network configuration and topology.

Further investigation:

The calculate_subnets module uses data sources to fetch AZs from AWS.

Before enabling Local Zones, data.aws_availability_zones.current returns:

    "ap-southeast-2a",
    "ap-southeast-2b",
    "ap-southeast-2c",

but after enabling Local Zones it returns:

    "ap-southeast-2-akl-1a",
    "ap-southeast-2a",
    "ap-southeast-2b",
    "ap-southeast-2c",

The current logic slices the first N entries (based on az_count), so it destroys and creates resources for these AZs:

    "ap-southeast-2-akl-1a",
    "ap-southeast-2a",
    "ap-southeast-2b",

Suggestion: allow aws-ia/vpc/aws to accept an explicit AZ configuration for the VPC:

module "vpc" {
  source  = "aws-ia/vpc/aws"
  version = "= 4.2.1"

  name       = "demo-vpc"
  cidr_block = "10.0.0.0/20"
  az_count   = 3
  azs = ["ap-southeast-2a", "ap-southeast-2b", "ap-southeast-2c"]

This would prevent similar issues in the future.

include some examples for data abstraction from outputs

It's not always immediately obvious how to get specific data out of the outputs. For example: how to get a list of subnet IDs.

example configuration:

module "vpc" {
  source  = "aws-ia/vpc/aws"
  version = ">= 1.0.0"

  name       = "multi-az-vpc"
  cidr_block = "10.0.0.0/20"
  az_count   = 3

  subnets = {
    private = { netmask = 24 }
  }
}

list of private subnet_ids

> [ for _, value in module.vpc.private_subnet_attributes_by_az: value.id]
[
  "subnet-04a86315c4839b519",
  "subnet-02a7249c8652a7136",
  "subnet-09af79b5329b3681f",
]

map of subnet_ids by az

> { for key, value in module.vpc.private_subnet_attributes_by_az: key => value.id}
{
  "us-east-1a" = "subnet-04a86315c4839b519"
  "us-east-1b" = "subnet-02a7249c8652a7136"
  "us-east-1c" = "subnet-09af79b5329b3681f"
}

AWS provider version constraints require bump to support newer resources

The current version constraint for the AWS provider in the module is set to >= 3.73.0; however, the module uses resources that do not exist in this version of the provider, specifically aws_networkmanager_vpc_attachment and aws_networkmanager_attachment_accepter. These resources were added in v4.27.0.

If the resolved/matched version of the provider in a child module is below v4.27 the following is thrown:

│ Error: Invalid resource type
│
│   on .terraform\modules\vpc\main.tf line 351, in resource "aws_networkmanager_vpc_attachment" "cwan":
│  351: resource "aws_networkmanager_vpc_attachment" "cwan" {
│
│ The provider hashicorp/aws does not support resource type "aws_networkmanager_vpc_attachment".
╵
╷
│ Error: Invalid resource type
│
│   on .terraform\modules\vpc\main.tf line 369, in resource "aws_networkmanager_attachment_accepter" "cwan":
│  369: resource "aws_networkmanager_attachment_accepter" "cwan" {
│
│ The provider hashicorp/aws does not support resource type "aws_networkmanager_attachment_accepter".

Further, the submodule modules/flow_logs/modules/s3_log_bucket uses the aws_s3_bucket_server_side_encryption_configuration and aws_s3_bucket_lifecycle_configuration resources introduced in v4.0.0; however, that version constraint is set to ">= 3.72.0".

 Error: Invalid resource type
│
│   on .terraform\modules\vpc\modules\flow_logs\modules\s3_log_bucket\main.tf line 17, in resource "aws_s3_bucket_server_side_encryption_configuration" "flow_logs":
│   17: resource "aws_s3_bucket_server_side_encryption_configuration" "flow_logs" {
│
│ The provider hashicorp/aws does not support resource type "aws_s3_bucket_server_side_encryption_configuration".
╵
╷
│ Error: Invalid resource type
│
│   on .terraform\modules\vpc\modules\flow_logs\modules\s3_log_bucket\main.tf line 27, in resource "aws_s3_bucket_lifecycle_configuration" "flow_logs":
│   27: resource "aws_s3_bucket_lifecycle_configuration" "flow_logs" {
│
│ The provider hashicorp/aws does not support resource type "aws_s3_bucket_lifecycle_configuration".
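A sketch of the corresponding fix in the module's version constraints (the exact floor is for the maintainers to decide; v4.27.0 is the version cited above for the Network Manager resources):

```hcl
# Pin the AWS provider high enough that the newer resource types exist
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.27.0" # aws_networkmanager_vpc_attachment added in v4.27.0
    }
  }
}
```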

Possible bug with tags module

We started seeing an error recently. Our use of the module looks like this:

module "vpc" {
  source  = "aws-ia/vpc/aws"
  version = ">= 1.0.0"
...

And we're seeing this now in the terraform output:

Error: Unsupported argument

  on .terraform/modules/vpc/data.tf line 68, in module "tags":
  68:   tags = var.tags

An argument named "tags" is not expected here.

Hmm. Any ideas? Thanks!

`aws_vpc_ipv4_cidr_block_association`

  • ability to create secondary cidr blocks
  • ability to create public/private subnets for the secondary cidrs
  • will only handle routing for igw & nat gateways

Unsupported Attribute in "nat_eip_#".

Error: Unsupported attribute

│ on outputs.tf line 22, in output "nat_eip_3":
│ 22: value = module.aws-ia_vpc.nat_eip_3
│ ├────────────────
│ │ module.aws-ia_vpc is a object, known only after apply

│ This object does not have an attribute named "nat_eip_3".

Subnets should be a generic map of subnets

Proposed idea: do not enforce naming conventions on subnet types (private, public, transit_gateway, etc.). This will allow users to create arbitrary numbers of subnets. For example, currently users can only create 1 grouping of private subnets.

Idea 1: create abstract module concepts for each and allow users to specify in the map itself:

Pros/Cons:

  • - breaking change
  • + allows for defining subnet types in modules that are easier to understand

subnets = {
  myprivate = {
    type         = "private"
    netmask      = 24
    route_to_nat = "publicsubnets"
  }

  publicsubnets = {
    type                      = "public"
    netmask                   = 24
    nat_gateway_configuration = "all_azs" # options: "single_az", "none"
  }
}

Idea 2: create generic subnet module and allow any variable to be passed:

Pros/Cons:

  • + likely non breaking change
  • - code inside the new subnet module would be complex

subnets = {
  myprivate = {
    netmask      = 24
    route_to_nat = "publicsubnets"
    routes = [{
       subnet  = "tgw"
       cidr    = "10.0.0.0/8"
    },
    {
       subnet = "nat"
       cidr   = "0.0.0.0/0"
    }]
  }

  publicsubnets = {
    type     = "public"
    netmask  = 24
    nat_gateway_configuration = "all_azs" # options: "single_az", "none"
    routes = [{
       subnet = "tgw"
       cidr   = "10.0.0.0/8"
    },
    {
       subnet = "igw"
       cidr   = "0.0.0.0/0"
    }]
  }
}

idea 2 open questions:

  • how to signify that a route should go to an appliance (igw, nat)?

Support for enabling vpc flow logs

We should have an optional argument to enable VPC flow logs. To enable flow logs we will need to take a log group name and iam role as inputs, or create them dynamically.

VPC not found when using ipam

I am trying to create a VPC with IPAM:

module "vpc" {
  source  = "aws-ia/vpc/aws"
  version = ">= 1.0.0"

  name     = var.network_map["example"].suffix
  az_count = 2

  vpc_ipv4_ipam_pool_id   = module.create_ipam.pools_level_2[var.network_map["example"].ipam_region].id
  vpc_ipv4_netmask_length = var.network_map["example"].netmask_length

  subnets = {
    public = {
      netmask                   = 24
      nat_gateway_configuration = "all_azs"
    }
    private = {
      netmask      = 24
      route_to_nat = true
    }
  }
}

And the first pass would fail with the following error:

│ Error: AWS SDK Go Service Operation Incomplete

│ with module.vpc.awscc_ec2_route_table.public["item"],
│ on .terraform/modules/vpc/main.tf line 47, in resource "awscc_ec2_route_table" "public":
│ 47: resource "awscc_ec2_route_table" "public" {
│ Waiting for Cloud Control API service CreateResource operation completion returned: waiter state transitioned to FAILED. StatusMessage: The
│ 'vpc-XXXXXXXXXXXXXXXXXXX' does not exist (Service: Ec2, Status Code: 400, Request ID: ....

When I tried to rerun it (making no changes), the plan would try to destroy the VPC, which would fail because of the dependencies. It was trying to destroy the VPC because the first pass had already created a VPC with an assigned CIDR range, and the 2nd pass would get a different CIDR range. I tried to resolve the initial failure by replacing all the vpc_id calls with the following, but it didn't do any good:

vpc_id             = local.create_vpc ? aws_vpc.main[0].id : local.vpc.id

I also tried putting depends_on statements on various resources, thinking it might just need time, but that didn't help; it keeps throwing the same error toward the end of execution. I even put a data source in to check that the VPC existed, and that works without issue. The VPC does exist when I look at it in the console, so I'm unsure how to resolve this.

For the private subnet connect_to_public_natgw = false or connect_to_public_natgw = null is treated as connect_to_public_natgw = true

For the private subnet, all of

connect_to_public_natgw = false
connect_to_public_natgw = null
connect_to_public_natgw = true

have the same result: the route to the NAT gateway is created.

The only way to prevent the route to the NAT gateway is to remove the key entirely:
#connect_to_public_natgw = false

The expectation is that with connect_to_public_natgw = false or connect_to_public_natgw = null, the route to the NAT gateway is not created.
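A sketch of what the fix likely looks like inside the module (the local name is illustrative, not the module's actual code): gate on the attribute's value rather than on the key's presence:

```hcl
# Sketch: filter subnets by the *value* of connect_to_public_natgw,
# so false/null behave like an absent key instead of like true.
locals {
  subnets_wanting_natgw = {
    for key, config in var.subnets : key => config
    if try(config.connect_to_public_natgw, false) == true
  }
}
```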

Tags for every resource show as needing updates every time I run apply

Even if I make no changes to the Terraform code and expect to see 0 diffs, I get this for every resource created by the module.
I must be doing something wrong, because I'm sure someone would have complained by now, but I can't seem to figure it out.
It looks like the key/value pairs just get rotated around in the list of tags.

Here is how I pass common tags in:

module "vpc" {
  source  = "registry.terraform.io/aws-ia/vpc/aws"
  version = "= 3.2.1"

  name = "${var.name}-vpc"
  tags = local.common_tags

  cidr_block = var.cidr_block
  az_count   = var.az_count
  ...
}

And here is how they are defined (merged with top level tags):

locals {
  common_tags = merge(var.common_tags, {
    environment = var.name
  })
}

and the top level module passes these tags in:

locals {
  common_tags = {
    managed_by        = "terraform"
    terraform_project = "core_infra"
  }
}

Any help is appreciated here.
