
terraform-provider-aws's People

Contributors

adamtylerlynch, albsilv-aws, angie44, bflad, breathingdust, brittandeyoung, bschaatsbergen, catsby, dependabot[bot], drfaust92, ewbankkit, gdavison, glennchia, grubernaut, jar-b, jen20, johnsonaj, justinretzolk, kamilturek, mitchellh, ninir, phinze, radeksimko, roberth-k, ryndaniels, saravanan30erd, stack72, teraken0509, yakdriver, zhelding


terraform-provider-aws's Issues

Support Metrics Configurations for S3 Buckets

(Migrated from hashicorp/terraform#13247)

AWS has recently announced some support for additional analytics and metrics for S3 buckets: https://aws.amazon.com/blogs/aws/s3-storage-management-update-analytics-object-tagging-inventory-and-metrics/

In order to support the above, the AWS API exposes methods such as GetBucketMetricsConfiguration and PutBucketMetricsConfiguration, which were added in aws-sdk-go 1.5.11.

Affected Resource(s)

  • aws_s3_bucket (but likely a new resource)

Expected Behavior

Terraform configuration should support enabling and managing AWS S3 metrics configurations.
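
For illustration, a minimal sketch of what such a configuration could look like. The resource name and arguments mirror the aws_s3_bucket_metric resource the provider later gained; treat the details as an assumption rather than part of the original proposal.

resource "aws_s3_bucket" "example" {
  bucket = "example-metrics-bucket"
}

# Request-metrics configuration scoped to a prefix; omit "filter" to cover
# the whole bucket.
resource "aws_s3_bucket_metric" "example" {
  bucket = aws_s3_bucket.example.id
  name   = "EntireDocuments"

  filter {
    prefix = "documents/"
  }
}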

References

AWS: plan always out of date when AMI includes an EBS device

This issue was originally opened by @dmaze as hashicorp/terraform#3725. It was migrated here as part of the provider split. The original body of the issue is below.


If you create an AMI that has an attached block device mapping that creates an EBS disk, and then create an instance that explicitly declares a block device mapping for that device, the plan will always be out-of-date unless you dig into the AMI data and explicitly declare a snapshot_id for the mapping. I'd guess this is related to the hash calculation for the block device mapping: since snapshot_id is empty Terraform always plugs in an empty string for it, but when the instance is created from the AMI it comes from a non-empty snapshot and the calculated hashes differ.

Example Packer file (based on a CentOS 6 public AMI):

{
    "min_packer_version": "0.8.5",
    "builders": [{
        "type": "amazon-ebs",
        "source_ami": "ami-c2a818aa",
        "ami_name": "second-device-in-ebs",
        "ssh_username": "root",

        "instance_type": "m3.medium",
        "region": "us-east-1",
        "availability_zone": "us-east-1d",

        "launch_block_device_mappings": [{
            "device_name": "/dev/sdb",
            "volume_type": "gp2",
            "volume_size": 60,
            "delete_on_termination": true
        }]
    }]
}

Terraform description (substitute the AMI key from the Packer output):

provider "aws" {
        region = "us-east-1"
}

resource "aws_instance" "instance" {
        ami = "ami-xxxxxxxx"
        instance_type = "m3.medium"

        root_block_device {
                volume_type = "gp2"
        }
        ebs_block_device {
                device_name = "/dev/sdb"
                volume_type = "gp2"
                volume_size = 60
        }
}

terraform plan output:

Refreshing Terraform state prior to plan...

aws_instance.instance: Refreshing state... (ID: i-2a4d5895)

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

-/+ aws_instance.instance
    ami:                                               "ami-0e92e064" => "ami-0e92e064"
    availability_zone:                                 "us-east-1c" => "<computed>"
    ebs_block_device.#:                                "1" => "1"
    ebs_block_device.2576023345.delete_on_termination: "" => "1" (forces new resource)
    ebs_block_device.2576023345.device_name:           "" => "/dev/sdb" (forces new resource)
    ebs_block_device.2576023345.encrypted:             "" => "<computed>" (forces new resource)
    ebs_block_device.2576023345.iops:                  "" => "<computed>" (forces new resource)
    ebs_block_device.2576023345.snapshot_id:           "" => "<computed>" (forces new resource)
    ebs_block_device.2576023345.volume_size:           "" => "60" (forces new resource)
    ebs_block_device.2576023345.volume_type:           "" => "gp2" (forces new resource)
    ebs_block_device.3425641721.delete_on_termination: "1" => "0"
    ebs_block_device.3425641721.device_name:           "/dev/sdb" => ""
    ephemeral_block_device.#:                          "0" => "<computed>"
    instance_type:                                     "m3.medium" => "m3.medium"
    key_name:                                          "" => "<computed>"
    placement_group:                                   "" => "<computed>"
    private_dns:                                       "ip-10-230-203-234.ec2.internal" => "<computed>"
    private_ip:                                        "10.230.203.234" => "<computed>"
    public_dns:                                        "ec2-54-145-167-177.compute-1.amazonaws.com" => "<computed>"
    public_ip:                                         "54.145.167.177" => "<computed>"
    root_block_device.#:                               "1" => "1"
    root_block_device.0.delete_on_termination:         "true" => "1"
    root_block_device.0.iops:                          "24" => "<computed>"
    root_block_device.0.volume_size:                   "8" => "<computed>"
    root_block_device.0.volume_type:                   "gp2" => "gp2"
    security_groups.#:                                 "1" => "<computed>"
    source_dest_check:                                 "true" => "1"
    subnet_id:                                         "" => "<computed>"
    tenancy:                                           "default" => "<computed>"
    vpc_security_group_ids.#:                          "0" => "<computed>"


Plan: 1 to add, 0 to change, 1 to destroy.

(I am actually trying to do two things. The CentOS 6 base AMI looks like it's difficult to resize its root image, so if you need more than 8 GB of storage you must use an external disk. Also, I have a side Ansible setup that allows the system to run either from an AMI with our software preinstalled or from a plain CentOS 6 image to install it later; otherwise it might make sense to not declare any block devices at all and just let the AMI do its thing.)
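
A minimal sketch of the workaround hinted at above: look up the AMI's own block device mapping and pin snapshot_id explicitly so the computed hash matches. This assumes a later Terraform/provider version where the aws_ami data source exposes block_device_mappings; the AMI ID is a placeholder.

data "aws_ami" "packer_image" {
  owners = ["self"]

  filter {
    name   = "image-id"
    values = ["ami-xxxxxxxx"]
  }
}

locals {
  # Snapshot backing /dev/sdb in the AMI's block device mappings.
  sdb_snapshot_id = one([
    for bdm in data.aws_ami.packer_image.block_device_mappings :
    bdm.ebs.snapshot_id if bdm.device_name == "/dev/sdb"
  ])
}

resource "aws_instance" "instance" {
  ami           = data.aws_ami.packer_image.id
  instance_type = "m3.medium"

  ebs_block_device {
    device_name = "/dev/sdb"
    volume_type = "gp2"
    volume_size = 60
    snapshot_id = local.sdb_snapshot_id
  }
}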

Changes to aws_elasticsearch_domain not applying

This issue was originally opened by @rafaelmagu as hashicorp/terraform#3539. It was migrated here as part of the provider split. The original body of the issue is below.


Now that #3381 has been released, I've tried adding a couple of ES domains to our plans. We already had ES domains created manually via the AWS GUI, so I expected TF to warn me about a name conflict. Instead, it just added the two ARNs to its state.

I've then proceeded to change a snapshot setting, and TF hangs while trying to apply modifications:

2015/10/19 11:23:03 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:03 [TRACE] Waiting 1.6s before next try
2015/10/19 11:23:03 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:03 [TRACE] Waiting 1.6s before next try
2015/10/19 11:23:04 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:05 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:05 [TRACE] Waiting 3.2s before next try
2015/10/19 11:23:05 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:05 [TRACE] Waiting 3.2s before next try
2015/10/19 11:23:08 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:09 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:09 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:09 [TRACE] Waiting 6.4s before next try
2015/10/19 11:23:09 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:09 [TRACE] Waiting 6.4s before next try
2015/10/19 11:23:13 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:14 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:15 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:15 [TRACE] Waiting 10s before next try
2015/10/19 11:23:15 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:15 [TRACE] Waiting 10s before next try
2015/10/19 11:23:18 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:19 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:23 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:24 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:26 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:26 [TRACE] Waiting 10s before next try
2015/10/19 11:23:26 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:26 [TRACE] Waiting 10s before next try
2015/10/19 11:23:28 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:29 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:33 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:34 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:36 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:36 [TRACE] Waiting 10s before next try
2015/10/19 11:23:36 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:36 [TRACE] Waiting 10s before next try
2015/10/19 11:23:38 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:39 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:43 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:44 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:46 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:46 [TRACE] Waiting 10s before next try
2015/10/19 11:23:46 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:46 [TRACE] Waiting 10s before next try
2015/10/19 11:23:48 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:49 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:53 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:54 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:57 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:57 [TRACE] Waiting 10s before next try
2015/10/19 11:23:57 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:57 [TRACE] Waiting 10s before next try
2015/10/19 11:23:58 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:59 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:03 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:04 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:07 [DEBUG] terraform-provider-aws: 2015/10/19 11:24:07 [TRACE] Waiting 10s before next try
2015/10/19 11:24:07 [DEBUG] terraform-provider-aws: 2015/10/19 11:24:07 [TRACE] Waiting 10s before next try
2015/10/19 11:24:08 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:09 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:13 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:14 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:17 [DEBUG] terraform-provider-aws: 2015/10/19 11:24:17 [TRACE] Waiting 10s before next try
2015/10/19 11:24:17 [DEBUG] terraform-provider-aws: 2015/10/19 11:24:17 [TRACE] Waiting 10s before next try
2015/10/19 11:24:18 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:19 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:23 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:24 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:27 [DEBUG] terraform-provider-aws: 2015/10/19 11:24:27 [TRACE] Waiting 10s before next try
2015/10/19 11:24:28 [DEBUG] terraform-provider-aws: 2015/10/19 11:24:28 [TRACE] Waiting 10s before next try
2015/10/19 11:24:28 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:29 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:33 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:34 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:38 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:38 [DEBUG] terraform-provider-aws: 2015/10/19 11:24:38 [TRACE] Waiting 10s before next try
2015/10/19 11:24:38 [DEBUG] terraform-provider-aws: 2015/10/19 11:24:38 [TRACE] Waiting 10s before next try
2015/10/19 11:24:39 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:43 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:44 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:48 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:48 [DEBUG] terraform-provider-aws: 2015/10/19 11:24:48 [TRACE] Waiting 10s before next try
2015/10/19 11:24:48 [DEBUG] terraform-provider-aws: 2015/10/19 11:24:48 [TRACE] Waiting 10s before next try
2015/10/19 11:24:49 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:53 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:54 [DEBUG] vertex root, waiting for: provider.aws (close)
(...)

It is also detecting a change in access policies applied to the domains even though the policies are the same. I am assuming the latter is because it cannot apply the changes correctly, so it isn't saving the policies either.

plan is in a loop changing my "main" route table

This issue was originally opened by @egarbi as hashicorp/terraform#2879. It was migrated here as part of the provider split. The original body of the issue is below.


$ terraform --version
 Terraform v0.6.1
# Main routing table private to nat
resource "aws_route_table" "main" {
   route {
       cidr_block = "0.0.0.0/0"
       network_interface_id = "eni-7aXXXX"
   }
   route {
       cidr_block = "${var.vpn_cidr}"
       instance_id = "${module.vpn.minion_id}"
   }
   route {
       cidr_block = "${lookup(var.tokio_vpc_cidr,var.env)}"
       instance_id = "${module.vpn-gw.id}"
   }
   route {
       cidr_block = "${lookup(var.usa_vpc_cidr,var.env)}"
       instance_id = "${module.gw-free.id}"
   }
   tags {
       Name = "${lookup(var.domain,var.env)}-vpc-to-nat"
   }
   vpc_id = "${aws_vpc.testing.id}"
}

terraform plan...

~ aws_route_table.main
    route.1714403534.cidr_block:                "10.32.0.0/16" => "10.32.0.0/16"
    route.1714403534.gateway_id:                "" => ""
    route.1714403534.instance_id:               "i-64XXXX" => "i-64XXXX"
    route.1714403534.network_interface_id:      "eni-2aXXXX" => ""
    route.1714403534.vpc_peering_connection_id: "" => ""
    route.2010688411.cidr_block:                "10.52.0.0/16" => "10.52.0.0/16"
    route.2010688411.gateway_id:                "" => ""
    route.2010688411.instance_id:               "i-36XXXX" => "i-36XXXX"
    route.2010688411.network_interface_id:      "eni-a7XXXX" => ""
    route.2010688411.vpc_peering_connection_id: "" => ""
    route.3093398733.cidr_block:                "192.168.240.0/24" => "192.168.240.0/24"
    route.3093398733.gateway_id:                "" => ""
    route.3093398733.instance_id:               "i-d5XXXX" => "i-d52XXXX"
    route.3093398733.network_interface_id:      "eni-69XXXX" => ""
    route.3093398733.vpc_peering_connection_id: "" => ""
    route.3891333365.cidr_block:                "0.0.0.0/0" => ""
    route.3891333365.gateway_id:                "" => ""
    route.3891333365.instance_id:               "i-0fXXXX" => ""
    route.3891333365.network_interface_id:      "eni-7aXXXX" => ""
    route.3891333365.vpc_peering_connection_id: "" => ""
    route.3991494531.cidr_block:                "" => "0.0.0.0/0"
    route.3991494531.gateway_id:                "" => ""
    route.3991494531.instance_id:               "" => ""
    route.3991494531.network_interface_id:      "" => "eni-7aXXXX"
    route.3991494531.vpc_peering_connection_id: "" => ""

aws_security_group_rule schema

This issue was originally opened by @jeffw-wherethebitsroam as hashicorp/terraform#4109. It was migrated here as part of the provider split. The original body of the issue is below.


Currently, source_security_group_id conflicts with cidr_blocks and self, and self conflicts with cidr_blocks. Also, source_security_group_id can only have a single value. It would be useful to be able to write:

resource "aws_security_group_rule" "tcp-80" {
  type = "ingress"
  security_group_id = "${aws_security_group.some_group.id}"
  from_port = 80
  to_port = 80
  protocol = "tcp"
  cidr_blocks = ["1.2.3.4/24", "5.6.7.8/32"]
  source_security_group_id = ["${aws_security_group.other_group.id}", "sg-abcdef"]
  self = true
}

It doesn't look like source_security_group_id needs to conflict with self - this looks like it is already supported in the code.

I'm not sure whether source_security_group_id and self need to conflict with cidr_blocks. The API reference (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_IpPermission.html) doesn't mention IpRanges and UserIdGroupPairs conflicting, but I don't know how this behaves in practice. In the worst case it would mean creating two separate ec2.IpPermission values.

Finally, the inline ingress block for aws_security_group already supports multiple source security groups in the security_groups argument. It would be good if source_security_group_id did the same (or if a source_security_group_ids argument could be added to take a list).

Happy to write a pull request if this is wanted.
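
Until something like that lands, the same effect can be approximated with one rule per source group, referencing the same aws_security_group.some_group as above. A minimal sketch; the group IDs in the variable default are placeholders.

variable "source_sg_ids" {
  type    = list(string)
  default = ["sg-abcdef", "sg-012345"]
}

resource "aws_security_group_rule" "tcp-80-from-sg" {
  count                    = length(var.source_sg_ids)
  type                     = "ingress"
  security_group_id        = aws_security_group.some_group.id
  from_port                = 80
  to_port                  = 80
  protocol                 = "tcp"
  source_security_group_id = var.source_sg_ids[count.index]
}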

aws_eip - automatically disassociate eip when changing instance

This issue was originally opened by @little-arhat as hashicorp/terraform#3429. It was migrated here as part of the provider split. The original body of the issue is below.


If you create an aws_eip in Terraform associated with some instance, you get the following error when trying to change the instance argument of the aws_eip:

* aws_eip.serv: Failure associating EIP: Resource.AlreadyAssociated: resource eipalloc-* is already associated with associate-id eipassoc-*
    status code: 400, request id: []

Shouldn't Terraform automatically disassociate eip in order to apply changes?

"terraform refresh" not picking up changes to policy text

This issue was originally opened by @jgross206 as hashicorp/terraform#3517. It was migrated here as part of the provider split. The original body of the issue is below.


If a Terraform-managed policy is modified via the web console, the changes are not picked up on terraform refresh, so they are not corrected on the next terraform apply.

Repro:

  • create a new configuration with the following in policies.tf:
provider "aws" {
  # access_key and secret_key should be set using "aws configure"
  region = "us-west-2"
}

resource "aws_iam_policy" "test" {
  name  = "test-policy"
  description = "A test policy"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "glacier:ListVaults",
        "glacier:DescribeVault",
        "glacier:GetVaultNotifications",
        "glacier:ListJobs",
        "glacier:DescribeJob",
        "glacier:GetJobOutput"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
EOF
}
  • run terraform plan then terraform apply. The policy should be created.
  • log in to the web console and modify the policy text, e.g. change "Allow" to "Deny"
  • go back to the terminal and run terraform refresh
  • Expected: the change should be noticed and updated into the state file. The next terraform apply should change the policy back to the one existing in our .tf file.
  • Actual: no changes are detected. Subsequent terraform plan and terraform apply runs succeed, but the policy in AWS does not match the policy in source.

Happy to provide any more information needed.

aws_instance: set windows_password attr on windows instances

This issue was originally opened by @phinze as hashicorp/terraform#3148. It was migrated here as part of the provider split. The original body of the issue is below.


On Read for AWS instances, if platform is Windows, call GetPasswordData and populate a windows_password attribute with its contents.

This should allow the remote_exec provisioner to be used on Windows instances like this:

resource "aws_instance" "foo" {
  # ...
  provisioner "remote-exec" {
    connection {
      type     = "winrm"
      password = "${self.windows_password}"
    }
  }
}

Consider getting AWS partition and account id from dedicated EC2 metadata endpoints

If you're using the EC2 metadata service for authentication credentials, Terraform calls the /latest/meta-data/iam/info endpoint and attempts to parse the AWS partition and account id from the InstanceProfileArn field by splitting the value into : separated fields.

Alternatives:

  • The /latest/meta-data/services/partition endpoint returns the AWS partition as an unformatted string.
  • The /latest/dynamic/instance-identity/document endpoint has accountId as a field.

Neither of those requires parsing the InstanceProfileArn field, and it opens the possibility of using the EC2 metadata service to learn the partition and account id you're operating in while using statically configured or entered credentials, so you could use Terraform on an EC2 instance that didn't have an instance profile attached, if someone had a use for that situation.

This is definitely P3 territory, but it's worth putting on the record in case of future refactorings in this area. It seems related to hashicorp/terraform#12704 as well.

Whitespace changes in heredoc force new resources

This issue was originally opened by @yissachar as hashicorp/terraform#4435. It was migrated here as part of the provider split. The original body of the issue is below.


I have the following ECS task:

resource "aws_ecs_task_definition" "web" {
  family = "web"
  container_definitions = <<EOF
[{
  "name": "web",
  "image": "imagelocation",
  "memory": 500,
  "essential": true,
  "portMappings": [
    {
      "containerPort": 80,
      "hostPort": 80
    }
  ]
}]
EOF
}

If I then add an extra space after one of the commas, this will force a new version of the task as Terraform thinks that it has changed.

For my particular situation this causes problems since we have developers on Mac and Windows. Since we have Git set to convert line-endings, this means the local file on Windows uses \r\n for line-endings, and Mac uses \n. Terraform sees that the container_definitions has some changes and forces an update of all the resources involved.

I understand that it might be out of scope for Terraform to handle something like this, but it would be nice if there were some way of dealing with the issue. For now we are forced to restrict Terraform plan and apply usage to one OS.
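
One way to sidestep whitespace and line-ending differences is to build the JSON with jsonencode() so Terraform always renders a canonical string. A minimal sketch of the same task definition, assuming a Terraform version with jsonencode():

resource "aws_ecs_task_definition" "web" {
  family = "web"

  # jsonencode() normalizes whitespace, so editor or OS differences no longer
  # show up as changes to container_definitions.
  container_definitions = jsonencode([
    {
      name      = "web"
      image     = "imagelocation"
      memory    = 500
      essential = true
      portMappings = [
        {
          containerPort = 80
          hostPort      = 80
        }
      ]
    }
  ])
}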

aws_spot_instance_request tags won't apply to instance

This issue was originally opened by @caarlos0 as hashicorp/terraform#3263. It was migrated here as part of the provider split. The original body of the issue is below.


I have an aws_spot_instance_request like this:

resource "aws_spot_instance_request" "seleniumgrid" {
    ami = "${var.amiPuppet}"
    key_name = "${var.key}"
    instance_type = "c3.4xlarge"
    subnet_id = "${var.subnet}"
    vpc_security_group_ids = [ "${var.securityGroup}" ]
    user_data = "${template_file.userdata.rendered}"
    wait_for_fulfillment = true
    spot_price = "${var.price}"
    availability_zone = "${var.zone}"
    instance_initiated_shutdown_behavior = "terminate"
    root_block_device {
      volume_size = 100
      volume_type = "gp2"
    }
    tags {
      Name = "${var.name}.${var.domain}"
      Provider = "Terraform"
      CA_TEAM = "${var.team}"
      CA_ROLE = "${var.role}"
      CA_SERVICE = "${var.service}"
    }
}

The tags are being applied only to the spot request itself, not to the underlying instance. Is this expected behavior? How can I change this?

Thanks!
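
For what it's worth, the tags block on aws_spot_instance_request only tags the request. A sketch of tagging the fulfilled instance separately, assuming the aws_ec2_tag resource available in later provider versions:

resource "aws_ec2_tag" "seleniumgrid_name" {
  # spot_instance_id is the ID of the instance launched to fulfill the request.
  resource_id = aws_spot_instance_request.seleniumgrid.spot_instance_id
  key         = "Name"
  value       = "${var.name}.${var.domain}"
}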

aws_cloudformation_stack parameters of type CommaDelimitedList cause stack updates to fail when list contains spaces

This issue was originally opened by @tangentspace as hashicorp/terraform#4561. It was migrated here as part of the provider split. The original body of the issue is below.


Passing a list containing spaces to a CommaDelimitedList parameter in an aws_cloudformation_stack causes stack updates to fail when the parameter has not been modified.

For example, the following stack is created fine on the first apply operation, but further attempts to run apply fail because terraform always thinks the ELBListener1 parameter has changed:

resource "aws_cloudformation_stack" "api" {
    name = "${var.environment}-api"
    template_url = "https://s3.amazonaws.com/${var.template_bucket}/${var.environment}/ApplicationStack.json"
    parameters {
        ApplicationName = "api"
        ELBListener1 = "HTTP, 443, HTTP, 8080"
    }
}

The error message is:

* aws_cloudformation_stack.api: ValidationError: No updates are to be performed.
    status code: 400, request id: 2f0d2349-b571-11e5-86b1-d56736d214fa

I believe what is happening is that the AWS API returns the string with the spaces removed when terraform queries it for the current value, so terraform always thinks the parameter has changed. For the time being I am able to work around this issue by simply removing the spaces from the list.
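
A sketch of that workaround expressed in configuration: building the parameter with join() keeps the value free of spaces so it matches what the API echoes back. This uses the newer map-style parameters syntax and is a sketch, not the original configuration.

resource "aws_cloudformation_stack" "api" {
  name         = "${var.environment}-api"
  template_url = "https://s3.amazonaws.com/${var.template_bucket}/${var.environment}/ApplicationStack.json"

  parameters = {
    ApplicationName = "api"
    # No spaces after the commas, matching the normalized value AWS returns.
    ELBListener1 = join(",", ["HTTP", "443", "HTTP", "8080"])
  }
}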

Can't interpolate multi resources for nested fields

This issue was originally opened by @calvinfo as hashicorp/terraform#3902. It was migrated here as part of the provider split. The original body of the issue is below.


We're attempting to create an elasticache cluster, but unfortunately the interpolation doesn't work for splats of nested fields.

resource "aws_elasticache_cluster" "cache" {
  cluster_id = "api-dedupe-memcached"
  engine = "memcached"
  node_type = "cache.r3.8xlarge"
  port = 11211
  num_cache_nodes = 2
  parameter_group_name = "default.memcached1.4"
  subnet_group_name = "${aws_elasticache_subnet_group.default.name}"
  security_group_ids = ["${aws_security_group.memcached.id}"]
}

# and then elsewhere...

value = "${join(",", aws_elasticache_cluster.cache.cache_nodes.*.address)}"

It could be fixed in the config/interpolation code by marking it as a multi, but that would require changing the field name behavior as well. Is that behavior you're willing to support? Otherwise it might be worth changing the cache resource.

RDS creation often timing out

This issue was originally opened by @thegedge as hashicorp/terraform#2886. It was migrated here as part of the provider split. The original body of the issue is below.


I've been frequently hitting the 40 minute timeout for aws_db_instance creation lately. I would consider bumping this timeout up to 60+ minutes, or maybe resetting the timeout counter back to zero if the state changes at all.

Another related note is that terraform doesn't consider the resource tainted or anything if there's a timeout. Not sure if this is by design, but it saves me a lot of hassle right now because the DB does eventually get created.
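
Later provider versions expose per-resource timeouts, which addresses the first suggestion directly. A minimal sketch, with the other required aws_db_instance arguments elided:

resource "aws_db_instance" "example" {
  # engine, instance_class, allocated_storage, credentials, etc. omitted

  timeouts {
    create = "80m"
  }
}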

Feature Request: Naming standard for AWS "tags"

This issue was originally opened by @xsmaster as hashicorp/terraform#3596. It was migrated here as part of the provider split. The original body of the issue is below.


I would just like to see if we could get a standard for "tag" or "tags".

Example:

https://www.terraform.io/docs/providers/aws/r/autoscaling_group.html
"tag" is used.

https://www.terraform.io/docs/providers/aws/r/elb.html

"tags" is used.

So I did not see this and was doing some clean-up and found this nice error message:

There are warnings and/or errors related to your configuration. Please
fix these before continuing.

Errors:

  • aws_elb.web-e_load_balancer: : invalid or unknown key: tag

I feel dumb now as the error message seems correct but it would be nice to have the same "tag" keyword used for all aws tags.

aws: Config* tests fail when ~/.aws/credentials is present

This issue was originally opened by @radeksimko as hashicorp/terraform#4409. It was migrated here as part of the provider split. The original body of the issue is below.


Try the following:

~/.aws/credentials

[default]
aws_region = us-west-2
aws_access_key_id = Example_KEY_ID
aws_secret_access_key = ExampleSecretAccessKey
$ make test TEST=./builtin/providers/aws TESTARGS='-run=AWSConfig -v'
==> Checking that code complies with gofmt requirements...
go generate ./...
TF_ACC= go test ./builtin/providers/aws -run=AWSConfig -v -timeout=30s -parallel=4
=== RUN   TestAWSConfig_shouldError
2015/12/22 13:09:42 [DEBUG] EC2 Metadata service not found, not adding EC2 Role Credential Provider
--- FAIL: TestAWSConfig_shouldError (0.00s)
    config_test.go:27: Expected an error with empty env, keys, and IAM in AWS Config, got credentials.Value{AccessKeyID:"Example_KEY_ID", SecretAccessKey:"ExampleSecretAccessKey", SessionToken:""}
=== RUN   TestAWSConfig_shouldBeStatic
2015/12/22 13:09:42 [DEBUG] EC2 Metadata service not found, not adding EC2 Role Credential Provider
2015/12/22 13:09:42 [DEBUG] EC2 Metadata service not found, not adding EC2 Role Credential Provider
--- PASS: TestAWSConfig_shouldBeStatic (0.00s)
=== RUN   TestAWSConfig_shouldIAM
2015/12/22 13:09:42 [DEBUG] EC2 Metadata service found, adding EC2 Role Credential Provider
--- FAIL: TestAWSConfig_shouldIAM (0.00s)
    config_test.go:97: AccessKeyID mismatch, expected: (somekey), got (Example_KEY_ID)
=== RUN   TestAWSConfig_shouldIgnoreIAM
2015/12/22 13:09:42 [DEBUG] EC2 Metadata service found, adding EC2 Role Credential Provider
2015/12/22 13:09:42 [DEBUG] EC2 Metadata service found, adding EC2 Role Credential Provider
--- PASS: TestAWSConfig_shouldIgnoreIAM (0.00s)
=== RUN   TestAWSConfig_shouldBeENV
2015/12/22 13:09:42 [DEBUG] EC2 Metadata service not found, not adding EC2 Role Credential Provider
--- PASS: TestAWSConfig_shouldBeENV (0.00s)
FAIL
exit status 1
FAIL    github.com/hashicorp/terraform/builtin/providers/aws    0.017s
make: *** [test] Error 1

Possible workaround is to temporarily rename the default profile name from default to something else (e.g. defaultx), so it's not loaded.

The real solution will differ based on whether we actually want to be testing the SharedCredentialsProvider (i.e. credentials provided by ~/.aws/credentials).

aws_autoscaling_notification tries to manage subscriptions not controlled by Terraform

This issue was originally opened by @johnrengelman as hashicorp/terraform#2758. It was migrated here as part of the provider split. The original body of the issue is below.


Trying to use Terraform to manage an ASG notification subscription to an SNS topic that is not managed by Terraform. The first apply works, but subsequent applies attempt to remove subscriptions that are not managed by Terraform (i.e. an ASG that isn't Terraform-managed and has a subscription to the same SNS topic).

This is the same problem that appears in other areas (such as hashicorp/terraform#2100) where Terraform is attempting to access far too much information about a particular resource...generally in the areas where the Terraform resource is an ephemeral mapping of concrete AWS relationships.

Non-descript error message when IAM permissions are too restrictive.

This issue was originally opened by @clstokes as hashicorp/terraform#2875. It was migrated here as part of the provider split. The original body of the issue is below.


When running a Terraform plan or apply using AWS keys that do not have sufficient permission, an error message occurs. The error message is sometimes descriptive enough to instruct the user of the permission that is needed, but in some cases it isn't.

When managing S3 buckets, if the IAM user does not have s3:GetBucket* permissions, a Terraform execution will result in a generic Access Denied message. Terraform should provide context of the provider API call when the API call fails so that the user knows which operation failed and can troubleshoot it - eg. Error while calling [S3 GetBucketLocation] - 403 Access Denied.

Terraform Output
$ terraform apply
aws_s3_bucket.bucket_test: Refreshing state... (ID: test_bucket_123321)
Error refreshing state: 1 error(s) occurred:

* 1 error(s) occurred:

* AccessDenied: Access Denied
    status code: 403, request id: []
$
Terraform Configuration
variable "access_key" {}
variable "secret_key" {}
variable "region" {}

provider "aws" {
    access_key = "${var.access_key}"
    secret_key = "${var.secret_key}"
    region = "${var.region}"
}

resource "aws_s3_bucket" "bucket_test" {
    bucket = "test_bucket_123321"
}

aws_elasticsearch_domain.es: InvalidTypeException: Error setting policy

Creating an AWS Elasticsearch domain and setting the access policy using a new IAM role throws an error.

Terraform Version

0.9.7

Affected Resource(s)

  • aws_elasticsearch_domain

Steps to reproduce issue

Using the aws CLI

  1. Create a file policy.json
    Contents of policy json:
	{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": "sts:AssumeRole",
          "Principal": {
            "Service": "ec2.amazonaws.com"
          },
          "Effect": "Allow",
          "Sid": ""
        }
      ]
    }
  2. Run command aws iam create-role and aws es create-elasticsearch-domain

aws iam create-role --role-name ed-test-1 --assume-role-policy-document file://policy.json && aws es create-elasticsearch-domain --region us-west-2 --domain-name "ed-test-1" --ebs-options 'EBSEnabled=true,VolumeSize=10' --access-policies '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::288840537196:role/ed-test-1"},"Action":"es:*","Resource":"arn:aws:es:*"}]}'

Expected: no error and resources to have been created
Actual:

An error occurred (InvalidTypeException) when calling the CreateElasticsearchDomain operation: Error setting policy: [{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::288840537196:role/ed-test-1"},"Action":"es:*","Resource":"arn:aws:es:*"}]}]

Using terraform

resource "aws_elasticsearch_domain" "example" {
  domain_name = "tf-test-1"
   ebs_options {
    ebs_enabled = true
    volume_size = 10
  }
  access_policies = <<CONFIG
  {
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
	"AWS": "${aws_iam_role.example_role.arn}"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:*"
    }
  ]
  }
CONFIG
}
resource "aws_iam_role" "example_role" {
  name = "es-domain-role-1"
  assume_role_policy = "${data.aws_iam_policy_document.instance-assume-role-policy.json}"
}
data "aws_iam_policy_document" "instance-assume-role-policy" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

Expected: no error and resources to have been created
Actual:

Error applying plan:

1 error(s) occurred:

* aws_elasticsearch_domain.elastic: 1 error(s) occurred:

* aws_elasticsearch_domain.elastic: InvalidTypeException: Error setting policy: [{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": ["arn:aws:iam::13884053156:role/es-domain-role-1"]
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:eu-west-1:13884053156:domain/tf-test-1/*"
    }
  ]
}
]
	status code: 409, request id: 4070cc55-49e2-11e7-a4d9-15d110862029

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
make: *** [apply] Error 1
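
If the root cause is IAM propagation delay (the freshly created role is not yet visible to the Elasticsearch service when the policy is validated) - which is an assumption on my part, not something this report confirms - one common mitigation is an explicit wait between creating the role and creating the domain. A sketch using the hashicorp/time provider's time_sleep resource:

resource "time_sleep" "wait_for_role" {
  depends_on      = [aws_iam_role.example_role]
  create_duration = "30s"
}

resource "aws_elasticsearch_domain" "example" {
  domain_name = "tf-test-1"
  depends_on  = [time_sleep.wait_for_role]

  # ebs_options and access_policies as in the configuration above
}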

References

Better logging on 403 of aws_iam_policy delete

This issue was originally opened by @gtmtech as hashicorp/terraform#3625. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform is much better at telling you the "Action" permission required in AWS in order to achieve things these days. If it can't complete an action because the Terraform credentials aren't associated with an IAM policy with sufficient actions, it tells you which action is lacking - this is very useful.

However, I spotted an omission when dealing with an aws_iam_policy resource. On trying to terraform plan -destroy, this is the error:

aws_iam_policy.my-iam-policy: Error reading IAM policy arn:aws:iam::###########:policy/my-iam-policy: &awserr.requestError{awsError:(*awserr.baseError)(0x#########), statusCode:403, requestID:"e8474b15-####-####-####-e754d2c48ddd"}

To keep with the rest of the helpful messages, it would be much more useful if the message said that it couldn't do its job because the iam:DeletePolicy action was missing from the policy attached to the Terraform credentials.

Missing EC2 dependency when destroying OpsWorks stack

When destroying an OpsWorks stack there seems to be a missing dependency: all OpsWorks objects should depend on the OpsWorks instances, so that when the stack is being destroyed, all instances are destroyed first and only then are any other resources destroyed. Otherwise all sorts of weird behavior can happen (such as OpsWorks not being able to destroy an instance because Terraform has already destroyed the IAM role that allowed OpsWorks to act on EC2 instances).

Terraform Version

Terraform v0.9.4

Affected Resource(s)

  • aws_opsworks_stack
  • aws_opsworks_instance
  • aws_opsworks_*

Terraform Configuration Files

Basic configuration for OpsWorks stack with some aws_opsworks_instance resources.

Expected Behavior

Terraform should wait for all EC2 instances to be destroyed before destroying any other aws_opsworks_* resources.

Actual Behavior

aws_opsworks_* resources are being deleted regardless of whether aws_opsworks_instance has been destroyed or not.

Steps to Reproduce

  1. Configure, build and apply OpsWorks stack without any aws_opsworks_instance resources
  2. Destroy the given stack - it's destroyed as expected
  3. Create same stack with one aws_opsworks_instance resource
  4. Destroy stack - behavior undefined (race condition in resources being destroyed)
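
A sketch of one way to force the ordering described above until the provider models the dependency itself: make each aws_opsworks_instance depend explicitly on the IAM resources OpsWorks needs during teardown, so Terraform destroys the instances before those resources. The layer, role, and profile names here are illustrative, not from the original report.

resource "aws_opsworks_instance" "example" {
  stack_id      = aws_opsworks_stack.example.id
  layer_ids     = [aws_opsworks_custom_layer.example.id]
  instance_type = "t2.micro"

  # Destroy ordering is the reverse of dependency order, so depending on the
  # IAM role and instance profile keeps them alive until this instance is gone.
  depends_on = [
    aws_iam_role.opsworks_service,
    aws_iam_instance_profile.opsworks_instances,
  ]
}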

Cannot terraform apply aws_instances into consistent state 0.6.5 (EBS)

This issue was originally opened by @gtmtech as hashicorp/terraform#3654. It was migrated here as part of the provider split. The original body of the issue is below.


Using terraform 0.6.5

I am unable to get terraform to apply into a consistent state - that is, an apply followed by a plan always wants to blow away all my instance resources and start over. Every time. Despite the fact terraform apply says everything was successful. Obviously this renders terraform an unusable tool for this use case.

It seems the offending part relates to each instance having an EBS block volume, forcing a new resource despite the fact that the pre and post values are the same. Note the "forces new resource" sections below when doing a subsequent plan after a successful apply:

-/+ aws_instance.foo_2
    ...
    associate_public_ip_address:                       "false" => "0"
    ...
    ebs_block_device.#:                                "2" => "1"
    ebs_block_device.158414664.delete_on_termination:  "1" => "0"
    ebs_block_device.158414664.device_name:            "/dev/xvda" => ""
    ebs_block_device.3905984573.delete_on_termination: "true" => "1" (forces new resource)
    ebs_block_device.3905984573.device_name:           "/dev/xvdb" => "/dev/xvdb" (forces new resource)
    ebs_block_device.3905984573.encrypted:             "false" => "<computed>"
    ebs_block_device.3905984573.iops:                  "30" => "<computed>"
    ebs_block_device.3905984573.snapshot_id:           "" => "<computed>"
    ebs_block_device.3905984573.volume_size:           "10" => "10" (forces new resource)
    ebs_block_device.3905984573.volume_type:           "gp2" => "gp2" (forces new resource)
    ephemeral_block_device.#:                          "0" => "<computed>"
   ...


As you can see from the above, it's pretty screwed, and terraform is now unusable.

This is my code for an instance (obviously a fair amount of variables in there)

resource "aws_instance" "foo_2" {
    ami                         = "${var.foo.ami_image_2}"
    availability_zone           = "${var.foo.availability_zone_2}"
    instance_type               = "${var.foo.instance_type_2}"
    key_name                    = "${var.foo.key_name_2}"
    vpc_security_group_ids      = ["${aws_security_group.base.id}",
                                   "${aws_security_group.foo.id}"]
    subnet_id                   = "${aws_subnet.foo_az2.id}"
    associate_public_ip_address = false
    source_dest_check           = true
    monitoring                  = true
    iam_instance_profile        = "${aws_iam_instance_profile.foo.name}"
    count                       = "${var.foo.large_cluster_size}"
    user_data                   = "${template_file.userdata_foo_2.rendered}"

    ebs_block_device {
        device_name = "/dev/xvdb"
        volume_type = "gp2"
        volume_size = "${var.foo.disk_size}"
        delete_on_termination = "${var.foo.disk_termination}"
    }

}

A couple of other things I've noticed -

You see the ebs block device id above, "ebs_block_device.3905984573"? Well, the ID is the same for all instance resources in the output, which strikes me as a bit odd.

After terraform applying all my instances, I commented out all the ebs_block_device settings in all the instance definitions, and then tried a plan. Now terraform plan says there are no changes.

It looks like EBS block devices are severely not working in terraform 0.6.5

provider/aws: Troubles destroying large ASGs

This issue was originally opened by @phinze as hashicorp/terraform#3495. It was migrated here as part of the provider split. The original body of the issue is below.


hashicorp/terraform#3485 attempted to fix force_delete, but @maratoid provided a test config that still yields errors.

Current status with his config is:

  • we skip the Terraform-managed drain properly
  • we specify ForceDelete on the API call
  • AWS still seems to need to churn through terminations of all nodes before the ASG is deleted
  • Terraform times out waiting for the ASG to show up as deleted
  • The internet gateway also times out... still haven't figured out if that's related or not.

In order to fix, I think we need to:

  • parameterize the timeout value for both ASGs and IGWs to allow users to crank that up (a sketch follows below)
  • potentially figure out the "not found" issue if it's a separate IGW bug

This is all a bit jargon-y, happy to clarify anything if someone else digs into this before I do.
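
A sketch of the parameterized-timeout idea from the list above, assuming the timeouts block that later provider versions support on aws_autoscaling_group; other arguments are elided.

resource "aws_autoscaling_group" "example" {
  # launch configuration, min/max size, subnets, etc. omitted
  force_delete = true

  timeouts {
    delete = "30m"
  }
}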

proposal: aws_iam_group_membership optionally only ensures membership, does not remove memberships

Terraform Version

% terraform -v          
Terraform v0.9.8

Affected Resource(s)

  • aws_iam_group_membership

Terraform Configuration Files

resource "aws_iam_group_membership" "release_manager_dynamo" {
  provider = "aws.release_manager_iam"
  name     = "release-manager-${var.spaceID}-dynamo-membership"
  group    = "${var.release_manager_iam_group}"
  users    = ["${aws_iam_user.release_manager.name}"]
}

Expected Behavior

Hi, here is where I will propose what I'd like:

In the example snippet above, with just the one user listed, membership for that user would be ensured but any other members of the group would be left alone. This could be achieved with a new only_present or similar boolean. This would make the resource behave more like, say, heroku_app when it manages config vars.

Actual Behavior

If the group at AWS has users A and B but applying the above config only lists B, A will be removed. If the config above then works out to list A (such as if using a different terraform env), A would be re-added and B would be removed.

Steps to Reproduce

  1. terraform apply

Important Factoids

We are using a trick (sort of described here) where, using the magic ${aws:username} variable in policies, you can create a user which can manage users named e.g. ${aws:username}-* and can only add them to a group named ${aws:username}. Elsewhere in our terraform config we have an aliased aws provider (release_manager_iam in the config above) using credentials which have this ability.
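
For reference, a sketch of the non-exclusive behavior being requested, using the aws_iam_user_group_membership resource that later provider versions added; it ensures this user's membership without touching other members of the group.

resource "aws_iam_user_group_membership" "release_manager_dynamo" {
  provider = aws.release_manager_iam
  user     = aws_iam_user.release_manager.name
  groups   = [var.release_manager_iam_group]
}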

Missing stage when apply aws_api_gateway_deployment

We are experiencing some issues at the time we deploy our API Gateway into AWS. It seems to be an error recently introduced since we had never seen it before and we haven't really modified our code base.

When we deploy the API for the first time, every resource seems to apply correctly. However, on the second and subsequent deployments, terraform deletes the old API stage without creating the new one. This is especially confusing since, when we check the output and the .tfstate file, the stage seems to have been created correctly. It's only when you either hit an end-point or double-check the console that you can see that the stage hasn't been created.

After some testing, we have found out that if we change the stage_name we can replicate the behaviour in subsequent deployments. Thus we can reproduce this behaviour:

  1. Run terraform apply with a "dev" stage_name. "dev" stage will be created.
  2. Run terraform apply with a "dev" stage_name. Old "dev" stage will be deleted without the new one being created.
  3. Run terraform apply with a "dev2" stage_name. Old "dev" stage will be deleted and new "dev2" won't be created.
  4. Run terraform apply with a "dev2" stage_name. Back to the missing stage, the old "dev2" stage will be deleted and new "dev2" won't be created.

Terraform Version

0.9.6

Affected Resource(s)

  • aws_api_gateway_deployment

Terraform Configuration Files

resource "aws_api_gateway_deployment" "football_api" {
  lifecycle {
    create_before_destroy = true
  }
  depends_on = [
    <List of aws_api_gateway_integration dependencies>
  ]
  rest_api_id = "${aws_api_gateway_rest_api.football.id}"
  stage_name = "${var.env}"
  stage_description = "${timestamp()}"
  description = "Deployed ${var.deployed}"
}

Debug Output

https://gist.github.com/martinezp/7662132b5f24463414ebac0a5c413128

Expected Behaviour

Old stage removal, and new stage creation

Actual Behaviour

The old stage is removed but no new stage is created, even though it apparently has been created according to the .tfstate.

Steps to Reproduce

  1. terraform apply
  2. terraform apply
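
A sketch of the pattern that avoids the disappearing stage: manage the stage with its own resource and let the deployment resource only create deployments. This assumes the aws_api_gateway_stage resource and is not the original configuration.

resource "aws_api_gateway_deployment" "football_api" {
  rest_api_id = aws_api_gateway_rest_api.football.id

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_api_gateway_stage" "football_api" {
  rest_api_id   = aws_api_gateway_rest_api.football.id
  stage_name    = var.env
  deployment_id = aws_api_gateway_deployment.football_api.id
}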

aws_network_interface can't find securitygroupID.

This issue was originally opened by @TheM0ng00se as hashicorp/terraform#3372. It was migrated here as part of the provider split. The original body of the issue is below.


In Terraform 0.6.3 I'm noticing that attaching a security group to an ENI isn't working as expected.

I've tried both with {$aws_security_group.csr_sg_inside.name} AND {$aws_security_group.csr_sg_inside.id} and both throw the error.

Error applying plan:

2 error(s) occurred:

* aws_network_interface.csr_inside.0: Error creating ENI: InvalidSecurityGroupID.NotFound: The securityGroup ID '{$aws_security_group.csr_sg_inside.name}' does not exist
    status code: 400, request id: []
* aws_network_interface.csr_inside.1: Error creating ENI: InvalidSecurityGroupID.NotFound: The securityGroup ID '{$aws_security_group.csr_sg_inside.name}' does not exist
    status code: 400, request id: []

Terraform does indeed create the security group and it exists. I had a brief look at the source...seems like I should be able to do this..

resource "aws_network_interface" "csr_inside" {
    count = "2"
    subnet_id = "${element(aws_subnet.tools.*.id, count.index)}"
    source_dest_check = "false"
    security_groups = [ "{$aws_security_group.csr_sg_inside.id}" ]
    attachment {
        instance = "${element(aws_instance.csr.*.id, count.index)}"
        device_index = "2"
    }
}
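
The error shows the literal string '{$aws_security_group.csr_sg_inside.name}' reaching the EC2 API, which suggests the interpolation markers are transposed: the expected form is "${...}", not "{$...}". A corrected sketch of the relevant line:

    security_groups = ["${aws_security_group.csr_sg_inside.id}"]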

aws_cloudformation_stack indefinitely reapplies changes in NoEcho parameters

This issue was originally opened by @mtb-xt as hashicorp/terraform#4335. It was migrated here as part of the provider split. The original body of the issue is below.


If a cloudformation template contains a parameter with NoEcho set to true (like a password), the parameter value is shown as asterisks "****".

Terraform 0.6.8 detects this as a parameter change and tries to reapply it on every 'plan' or 'apply':

aws_cloudformation_stack.redshift-prod: Modifying...
  parameters.MasterUserPassword: "****" => "lolmegapassword"
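
One possible workaround - an assumption on my part, not a documented fix for this report - is to tell Terraform to ignore the masked parameter so the "****" value echoed by CloudFormation stops producing a diff. This assumes a Terraform version whose ignore_changes accepts map keys.

resource "aws_cloudformation_stack" "redshift-prod" {
  # name, template, and other parameters omitted

  lifecycle {
    ignore_changes = [parameters["MasterUserPassword"]]
  }
}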

aws_network_acl with icmp rule always recreates network acl

This issue was originally opened by @slshen as hashicorp/terraform#4423. It was migrated here as part of the provider split. The original body of the issue is below.


A rule like:

egress {
    rule_no = "100"
    protocol = "icmp"
    from_port = -1
    to_port = -1
    icmp_type = -1
    icmp_code = -1
    cidr_block = "0.0.0.0/0"
    action = "allow"
}

Creates an icmp any rule, but terraform always thinks it's different. plan says:

egress.2826807916.action:     "allow" => ""
egress.2826807916.cidr_block: "0.0.0.0/0" => ""
egress.2826807916.from_port:  "0" => "0"
egress.2826807916.icmp_code:  "-1" => "0"
egress.2826807916.icmp_type:  "-1" => "0"
egress.2826807916.protocol:   "1" => ""
egress.2826807916.rule_no:    "100" => "0"
egress.2826807916.to_port:    "0" => "0"

Changing the egress rule so that from_port = 0 and to_port = 0 eliminates the diff.
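
The adjusted rule described above, spelled out: with from_port and to_port set to 0, the configuration hashes to the same set element that AWS reports back.

egress {
    rule_no    = "100"
    protocol   = "icmp"
    from_port  = 0
    to_port    = 0
    icmp_type  = -1
    icmp_code  = -1
    cidr_block = "0.0.0.0/0"
    action     = "allow"
}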

Sporadic issues installing Ubuntu packages after apt-get update on AWS

This issue was originally opened by @llevar as hashicorp/terraform#2995. It was migrated here as part of the provider split. The original body of the issue is below.


I'm trying to install Saltstack packages using apt-get and am getting very inconsistent behaviour. About 60% of my provisioning runs fail complaining about not being able to find the salt packages. Others finish with no problems. This is deploying to AWS US-East with a Ubuntu 14.04 AMI.

My .tf file:

https://gist.github.com/llevar/b9a2bd8823d8bd0f2993

A typical error looks like this:

aws_instance.salt_master (remote-exec): Connecting to remote host via SSH...
aws_instance.salt_master (remote-exec):   Host: 54.205.141.103
aws_instance.salt_master (remote-exec):   User: ubuntu
aws_instance.salt_master (remote-exec):   Password: false
aws_instance.salt_master (remote-exec):   Private key: true
aws_instance.salt_master (remote-exec):   SSH Agent: true
aws_instance.salt_master (remote-exec): Connected!
aws_instance.salt_master (remote-exec): Reading package lists... 0%
aws_instance.salt_master (remote-exec): Reading package lists... 100%
aws_instance.salt_master (remote-exec): Reading package lists... Done
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree
aws_instance.salt_master (remote-exec): Reading state information... 0%
aws_instance.salt_master (remote-exec): Reading state information... 8%
aws_instance.salt_master (remote-exec): Reading state information... Done
aws_instance.salt_master (remote-exec): Some packages could not be installed. This may mean that you have
aws_instance.salt_master (remote-exec): requested an impossible situation or if you are using the unstable
aws_instance.salt_master (remote-exec): distribution that some required packages have not yet been created
aws_instance.salt_master (remote-exec): or been moved out of Incoming.
aws_instance.salt_master (remote-exec): The following information may help to resolve the situation:

aws_instance.salt_master (remote-exec): The following packages have unmet dependencies:
aws_instance.salt_master (remote-exec):  salt-common : Depends: python-dateutil but it is not installable
aws_instance.salt_master (remote-exec):                Depends: python-jinja2 but it is not installable
aws_instance.salt_master (remote-exec):                Depends: python-croniter but it is not installable
aws_instance.salt_master (remote-exec): E: Unable to correct problems, you have held broken packages.
aws_instance.salt_master (remote-exec): Reading package lists... 0%
aws_instance.salt_master (remote-exec): Reading package lists... 100%
aws_instance.salt_master (remote-exec): Reading package lists... Done
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree
aws_instance.salt_master (remote-exec): Reading state information... 0%
aws_instance.salt_master (remote-exec): Reading state information... 8%
aws_instance.salt_master (remote-exec): Reading state information... Done
aws_instance.salt_master (remote-exec): Some packages could not be installed. This may mean that you have
aws_instance.salt_master (remote-exec): requested an impossible situation or if you are using the unstable
aws_instance.salt_master (remote-exec): distribution that some required packages have not yet been created
aws_instance.salt_master (remote-exec): or been moved out of Incoming.
aws_instance.salt_master (remote-exec): The following information may help to resolve the situation:

aws_instance.salt_master (remote-exec): The following packages have unmet dependencies:
aws_instance.salt_master (remote-exec):  salt-master : Depends: salt-common (= 2015.5.3+ds-1trusty1) but it is not going to be installed
aws_instance.salt_master (remote-exec):                Depends: python-m2crypto but it is not installable
aws_instance.salt_master (remote-exec):                Depends: python-crypto but it is not installable
aws_instance.salt_master (remote-exec):                Depends: python-msgpack but it is not installable
aws_instance.salt_master (remote-exec):                Depends: python-zmq (>= 13.1.0) but it is not installable
aws_instance.salt_master (remote-exec):                Recommends: python-git but it is not installable
aws_instance.salt_master (remote-exec): E: Unable to correct problems, you have held broken packages.
aws_instance.salt_master (remote-exec): Reading package lists... 0%
aws_instance.salt_master (remote-exec): Reading package lists... 100%
aws_instance.salt_master (remote-exec): Reading package lists... Done
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree
aws_instance.salt_master (remote-exec): Reading state information... 0%
aws_instance.salt_master (remote-exec): Reading state information... 8%
aws_instance.salt_master (remote-exec): Reading state information... Done
aws_instance.salt_master (remote-exec): Some packages could not be installed. This may mean that you have
aws_instance.salt_master (remote-exec): requested an impossible situation or if you are using the unstable
aws_instance.salt_master (remote-exec): distribution that some required packages have not yet been created
aws_instance.salt_master (remote-exec): or been moved out of Incoming.
aws_instance.salt_master (remote-exec): The following information may help to resolve the situation:

aws_instance.salt_master (remote-exec): The following packages have unmet dependencies:
aws_instance.salt_master (remote-exec):  salt-minion : Depends: salt-common (= 2015.5.3+ds-1trusty1) but it is not going to be installed
aws_instance.salt_master (remote-exec):                Depends: python-m2crypto but it is not installable
aws_instance.salt_master (remote-exec):                Depends: python-crypto but it is not installable
aws_instance.salt_master (remote-exec):                Depends: python-msgpack but it is not installable
aws_instance.salt_master (remote-exec):                Depends: python-zmq (>= 13.1.0) but it is not installable
aws_instance.salt_master (remote-exec):                Depends: dctrl-tools but it is not installable
aws_instance.salt_master (remote-exec):                Recommends: debconf-utils but it is not installable
aws_instance.salt_master (remote-exec): E: Unable to correct problems, you have held broken packages.
aws_instance.salt_master (remote-exec): Reading package lists... 0%
aws_instance.salt_master (remote-exec): Reading package lists... 100%
aws_instance.salt_master (remote-exec): Reading package lists... Done
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree
aws_instance.salt_master (remote-exec): Reading state information... 0%
aws_instance.salt_master (remote-exec): Reading state information... 8%
aws_instance.salt_master (remote-exec): Reading state information... Done
aws_instance.salt_master (remote-exec): Some packages could not be installed. This may mean that you have
aws_instance.salt_master (remote-exec): requested an impossible situation or if you are using the unstable
aws_instance.salt_master (remote-exec): distribution that some required packages have not yet been created
aws_instance.salt_master (remote-exec): or been moved out of Incoming.
aws_instance.salt_master (remote-exec): The following information may help to resolve the situation:

aws_instance.salt_master (remote-exec): The following packages have unmet dependencies:
aws_instance.salt_master (remote-exec):  salt-syndic : Depends: salt-master (= 2015.5.3+ds-1trusty1) but it is not going to be installed
aws_instance.salt_master (remote-exec): E: Unable to correct problems, you have held broken packages.
aws_instance.salt_master (remote-exec): /tmp/terraform_2019727887.sh: 6: /tmp/terraform_2019727887.sh: cannot create /etc/salt/minion: Directory nonexistent
aws_instance.salt_master (remote-exec): Reading package lists... 0%
aws_instance.salt_master (remote-exec): Reading package lists... 100%
aws_instance.salt_master (remote-exec): Reading package lists... Done
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree
aws_instance.salt_master (remote-exec): Reading state information... 0%
aws_instance.salt_master (remote-exec): Reading state information... 8%
aws_instance.salt_master (remote-exec): Reading state information... Done
aws_instance.salt_master (remote-exec): E: Unable to locate package python-pip
aws_instance.salt_master (remote-exec): Reading package lists... 0%
aws_instance.salt_master (remote-exec): Reading package lists... 100%
aws_instance.salt_master (remote-exec): Reading package lists... Done
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree
aws_instance.salt_master (remote-exec): Reading state information... 0%
aws_instance.salt_master (remote-exec): Reading state information... 8%
aws_instance.salt_master (remote-exec): Reading state information... Done
aws_instance.salt_master (remote-exec): Package python-git is not available, but is referred to by another package.
aws_instance.salt_master (remote-exec): This may mean that the package is missing, has been obsoleted, or
aws_instance.salt_master (remote-exec): is only available from another source

aws_instance.salt_master (remote-exec): E: Package 'python-git' has no installation candidate
aws_instance.salt_master (remote-exec): sudo: salt-master: command not found
aws_instance.salt_master (remote-exec): sudo: salt-minion: command not found
Error applying plan:

1 error(s) occurred:

* Script exited with non-zero exit status: 1

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Associating EIP to an "in-use" ENI causes errors on future runs

This issue was originally opened by @justnom as hashicorp/terraform#3672. It was migrated here as part of the provider split. The original body of the issue is below.


Once the following state is achieved:

resource "aws_eip" "nat" {
    vpc = true
    network_interface = "${aws_network_interface.nat.id}"
    depends_on = ["aws_network_interface.nat"]
    lifecycle {
        prevent_destroy = true
    }
}

If the ENI is "in-use" by an instance, then Terraform assumes that the state of the resource has drifted and, during a plan run, reports that the resource should be changed:

~ aws_eip.nat
    instance: "i-66f125bc" => ""

The aws_eip resource should either:

  1. Allow both the instance and network_interface to be set
  2. Ignore what instance the network_interface is attached to (a possible interim workaround is sketched below)
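
Until one of these lands, a minimal interim sketch (untested, and assuming the Terraform version in use supports lifecycle ignore_changes) is to tell Terraform to ignore the computed instance attribute:

resource "aws_eip" "nat" {
    vpc = true
    network_interface = "${aws_network_interface.nat.id}"
    depends_on = ["aws_network_interface.nat"]
    lifecycle {
        prevent_destroy = true
        # Assumption: ignore_changes accepts the computed attribute name here.
        # This only suppresses the spurious diff; it does not fix the root cause.
        ignore_changes = ["instance"]
    }
}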

RDS replicas do not "take" some parameters

This issue was originally opened by @jessemyers as hashicorp/terraform#3493. It was migrated here as part of the provider split. The original body of the issue is below.


If you create an RDS replica, the multi_az and backup_retention_period values both end up as zero, no matter what values you specify in the terraform resource declaration. This means that a plan or apply will see a difference between the current state (zero) and the specified state (if it is non-zero) and will spuriously attempt to apply a change. Applying the change will fail.

Either the aws_db_instance resource needs to be smarter about these values when replicate_source_db is set, or the documentation needs to be clearer that these parameters should be omitted for replicas.
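
As a point of reference, a replica declaration that avoids the spurious diff simply omits those attributes (a minimal sketch; identifiers and sizes are placeholders):

resource "aws_db_instance" "replica" {
    identifier          = "mydb-replica"
    replicate_source_db = "${aws_db_instance.primary.id}"
    instance_class      = "db.t2.micro"
    # multi_az and backup_retention_period are intentionally omitted for the
    # replica, since specifying non-zero values here produces a perpetual diff.
}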

aws_elasticsearch_domain Advanced options are not applied properly

This issue was originally opened by @robertfirek as hashicorp/terraform#3980. It was migrated here as part of the provider split. The original body of the issue is below.


When I tried to apply the following advanced option on aws_elasticsearch_domain

(...)
advanced_options {
    "rest.action.multi.allow_explicit_index" = true
}
(...)

it was not applied properly in Amazon (the rest.action.multi.allow_explicit_index option was set to false).

Changing this option doesn't change the plan (I've changed true to false), so it is impossible to control these options.

State file:

(...)
"advanced_options.#": "1",
"advanced_options.rest.action.multi.allow_explicit_index": "false", 
(...)

Terraform version:
v0.6.5
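
A workaround worth testing (not verified against v0.6.5; advanced_options is handled as a map of strings, so a bare boolean may be getting coerced) is to quote the value explicitly:

advanced_options {
    # Assumption: passing the value as a string rather than a bare boolean
    # avoids the silent coercion to "false"; this is not a confirmed fix.
    "rest.action.multi.allow_explicit_index" = "true"
}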

Name update of aws_vpn_connection_route resource caused unwanted effects

This issue was originally opened by @jwadolowski as hashicorp/terraform#4341. It was migrated here as part of the provider split. The original body of the issue is below.


Hey,

I've just renamed 2 aws_vpn_connection_route resources (all arguments stayed the same) and applied the changes, but it turned out my static routes ended up in a deleted state.

This is what the plan looked like:

+ aws_vpn_connection_route.cognifide-office-int-route
    destination_cidr_block: "" => "192.168.0.0/16"
    vpn_connection_id:      "" => "vpn-48bdca03"

- aws_vpn_connection_route.cognifide-office-route

+ aws_vpn_connection_route.cognifide-vpn-int-route
    destination_cidr_block: "" => "10.214.0.0/16"
    vpn_connection_id:      "" => "vpn-48bdca03"

- aws_vpn_connection_route.cognifide-vpn-route

and this was the output of the terraform apply command:

aws_vpn_connection_route.cognifide-office-route: Destroying...
aws_vpn_connection_route.cognifide-vpn-route: Destroying...
aws_vpn_connection_route.cognifide-office-int-route: Creating...
  destination_cidr_block: "" => "192.168.0.0/16"
  vpn_connection_id:      "" => "vpn-48bdca03"
aws_vpn_connection_route.cognifide-vpn-int-route: Creating...
  destination_cidr_block: "" => "10.214.0.0/16"
  vpn_connection_id:      "" => "vpn-48bdca03"
aws_vpn_connection_route.cognifide-office-route: Destruction complete
aws_vpn_connection_route.cognifide-vpn-route: Destruction complete
aws_vpn_connection_route.cognifide-vpn-int-route: Creation complete
aws_vpn_connection_route.cognifide-office-int-route: Creation complete

Apply complete! Resources: 2 added, 0 changed, 2 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: .terraform/terraform.tfstate

Immediately after this change I lost connectivity to my EC2 machines. Once I logged in to the AWS console I saw that both routes were in a deleted state. After a manual change, connectivity was restored.

This is where I saw these deleted states (unfortunately I didn't take a screenshot of that):
[screenshot: aws_vpn_connection_route_20151216]
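
For anyone hitting this: a rename changes the resource's address in the state, so Terraform plans a destroy/create. A possible way to rename without touching the real routes (a sketch, assuming a Terraform version that ships the terraform state mv subcommand) is to move the state entries before applying:

# Rename the resources in the .tf files first, then move the existing state
# entries to the new addresses; the next plan should show no changes.
terraform state mv aws_vpn_connection_route.cognifide-office-route aws_vpn_connection_route.cognifide-office-int-route
terraform state mv aws_vpn_connection_route.cognifide-vpn-route aws_vpn_connection_route.cognifide-vpn-int-route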

Terraform plan and apply gets hung when executed from VPN connection

This issue was originally opened by @saswatp as hashicorp/terraform#3736. It was migrated here as part of the provider split. The original body of the issue is below.


Since I upgraded to 0.6.5 a while back, I am not able to execute terraform plan and apply when working from home over the corporate VPN connection to the corporate network.

This was working on version 0.5.3, so I had to downgrade.
I don't see this issue when not using the VPN connection.

Tainted resources do not consider create_before_destroy in ordering

This issue was originally opened by @mwagg as hashicorp/terraform#2910. It was migrated here as part of the provider split. The original body of the issue is below.


I have an existing autoscaling launch configuration created by terraform for which I need to change the root block device volume size.

Contents of the .tf look like this.

resource "aws_launch_configuration" "some_launch_config" {
  image_id = "..."
  instance_type = "..."
  iam_instance_profile = "..."
  key_name = "..."
  security_groups = ["...","..."]

  lifecycle {
    create_before_destroy = true
  }

  root_block_device {
    volume_size = "${var.some_volume_size}"
  }
}

After changing the value of the some_volume_size variable, the change does not seem to be recognized when running either plan or apply.

I have also tried replacing the variable with a hard-coded value, with the same result.
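
If the volume_size change is not picked up, one way to force a replacement today is to taint the resource manually (a sketch; note that, per this issue's title, the tainted replacement may still not honour create_before_destroy ordering):

# Force the launch configuration to be recreated on the next apply.
terraform taint aws_launch_configuration.some_launch_config
terraform plan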

Confusing docs regarding aws elb ssl_certificate_id

This issue was originally opened by @aldarund as hashicorp/terraform#4612. It was migrated here as part of the provider split. The original body of the issue is below.


The example at https://www.terraform.io/docs/providers/aws/r/iam_server_certificate.html has:
instance_port = 8000
instance_protocol = "http"
lb_port = 443
lb_protocol = "https"

The instance_protocol is http and the lb_protocol is https, and the listener has an ssl_certificate_id attribute.
Meanwhile, https://www.terraform.io/docs/providers/aws/r/elb.html states that ssl_certificate_id is:
Only valid when instance_protocol and lb_protocol are either HTTPS or SSL

So on the first page only one of the protocols is https, while the second page says both should be https.
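
For comparison, a listener where ssl_certificate_id is valid under either reading of the docs (end-to-end HTTPS; the certificate reference is a placeholder) would look like:

listener {
    instance_port      = 443
    instance_protocol  = "https"
    lb_port            = 443
    lb_protocol        = "https"
    ssl_certificate_id = "${aws_iam_server_certificate.example.arn}"
}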

Support Analytics Configurations for S3 Buckets

(Migrated from hashicorp/terraform#13246)

AWS has recently announced some support for additional analytics and metrics for S3 buckets: https://aws.amazon.com/blogs/aws/s3-storage-management-update-analytics-object-tagging-inventory-and-metrics/

In order to support the above, the implementation in the AWS API is with methods like GetBucketAnalyticsConfiguration and PutBucketAnalyticsConfiguration which were added in go-aws-sdk 1.5.11.

New Resource

  • aws_s3_bucket_analytics

Expected Behavior

Terraform configuration should support enabling and managing AWS S3 analytics configurations.
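
A rough sketch of what such a configuration could look like (the resource name and every argument below are hypothetical until the resource is actually designed):

# Hypothetical shape only; aws_s3_bucket_analytics does not exist yet and the
# arguments shown are illustrative, not part of any implemented schema.
resource "aws_s3_bucket_analytics" "example" {
    bucket = "${aws_s3_bucket.example.id}"
    name   = "entire-bucket"

    storage_class_analysis {
        output_bucket = "${aws_s3_bucket.reports.arn}"
        output_prefix = "analytics/"
    }
}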

References

what's the proper way of dealing with an instance marked for retirement?

This issue was originally opened by @rgs1 as hashicorp/terraform#3915. It was migrated here as part of the provider split. The original body of the issue is below.


In AWS (surely there's something similar in other environments) an instance can be marked for retirement:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-retirement.html

From a quick git grep in the repo and search in the issues tracker, I couldn't see any logic to handle these instances. Has any thought been put into what Terraform should provide to deal with these instances (or maybe there is a way of doing this already)?
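
In the meantime, one manual approach (a sketch; the resource address is a placeholder) is to taint the affected instance so the next apply replaces it before the scheduled retirement:

# Mark the retiring instance for recreation on the next apply.
terraform taint aws_instance.retiring
terraform apply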

aws_elb with a security group does not always successfully destroy

This issue was originally opened by @bitglue as hashicorp/terraform#3758. It was migrated here as part of the provider split. The original body of the issue is below.


A configuration which creates a security group which is used by an ELB also created in the same configuration won't always destroy successfully.

When an ELB is destroyed, its network interfaces can take a while to go away (some minutes). And these network interfaces reference the security group, so the security group can't be deleted until the network interfaces are gone.

Sometimes this problem is masked since Terraform has started retrying anything that results in a DependencyViolation. But eventually this retry times out, and sometimes it times out before the ELB's network interfaces have gone away, and then you'll get an error like:

Error applying plan:

1 error(s) occurred:

* aws_security_group.www_elb: DependencyViolation: resource sg-1945f97f has a dependent object
        status code: 400, request id: 

Adding aws_eip to existing instance does not trigger update to other resources.

This issue was originally opened by @rbowlby as hashicorp/terraform#3216. It was migrated here as part of the provider split. The original body of the issue is below.


Adding an EIP to an existing resource does not cause other AWS terraform resources to update appropriately to reflect this change.

If I add an aws_eip to an existing EC2 instance, and that instance has a corresponding Route 53 CNAME that uses the instance's public_dns, Terraform will NOT update those Route 53 entries until a subsequent apply is performed.
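
A possible way to make the dependency explicit (a sketch; names are placeholders) is to point the Route 53 record at the EIP's attributes rather than the instance's, so the record is created and updated together with the EIP:

resource "aws_route53_record" "www" {
    zone_id = "${aws_route53_zone.main.zone_id}"
    name    = "www.example.com"
    type    = "A"
    ttl     = "300"
    # Referencing the EIP (not the instance) makes this record depend on it.
    records = ["${aws_eip.web.public_ip}"]
}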

TF silently fails to set AWS tags

This issue was originally opened by @feliksik as hashicorp/terraform#4163. It was migrated here as part of the provider split. The original body of the issue is below.


Using the AWS provider, it seems that setting a tag with a failed lookup() value prevents any tag from being set. I think we should receive an error here?

summary:

        # this would be good
        #Name = "${var.prefix} subnet ${ lookup(var.zones, count.index+1) }"

        # this silently fails to set the tags, because I forgot the .index in count
        Name = "${var.prefix} subnet ${ lookup(var.zones, count+1) }"
        Maintainer = "${var.maintainer}"

full tf file:

variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "prefix" {}
variable "maintainer" {}

variable "az_count" { default = 3 }

variable "zones" {
    default = {
        "1" = "eu-west-1a"
        "2" = "eu-west-1b"
        "3" = "eu-west-1c"
    }
}

output "vpc_id" { value = "${aws_vpc.main.id}" }

output "subnet_ids" { 
        value = "${ join(\",\", aws_subnet.sdnet.*.id) }" 
}

provider "aws" {
    access_key = "${var.aws_access_key}"
    secret_key = "${var.aws_secret_key}"
    region = "eu-west-1"
}

resource "aws_vpc" "main" {
    cidr_block = "10.10.0.0/16"
    enable_dns_support = true
    enable_dns_hostnames = true
    tags {
        Name = "${var.prefix} vpc"
        Maintainer = "${var.maintainer}"
    }
}

resource "aws_internet_gateway" "main" {
    vpc_id = "${aws_vpc.main.id}"
    tags {
        Name = "${var.prefix} gateway"
        Maintainer = "${var.maintainer}"
    }
}

resource "aws_route_table" "main" {
    vpc_id = "${aws_vpc.main.id}"
    route {
        cidr_block = "0.0.0.0/0"
        gateway_id = "${aws_internet_gateway.main.id}"
    }
    tags {
        Name = "${var.prefix} route table"
        Maintainer = "${var.maintainer}"
    }
}

resource "aws_route_table_association" "main" {
    subnet_id = "${element(aws_subnet.sdnet.*.id, count.index)}"
    route_table_id = "${aws_route_table.main.id}"
}

resource "aws_subnet" "sdnet" {
    count = "${var.az_count}"
    vpc_id = "${aws_vpc.main.id}"
    availability_zone = "${ lookup(var.zones, count.index+1) }"
    cidr_block = "10.10.${count.index+1}.0/24"
    map_public_ip_on_launch = true
    tags {

        # this would be good
        #Name = "${var.prefix} subnet ${ lookup(var.zones, count.index+1) }"

        # this silently fails to set the tags
        Name = "${var.prefix} subnet ${ lookup(var.zones, count+1) }"
        Maintainer = "${var.maintainer}"
    }
}

AWS route table and nat instance route modification

This issue was originally opened by @igoratencompass as hashicorp/terraform#4105. It was migrated here as part of the provider split. The original body of the issue is below.


Hi all,

I have a private route table resource in ec2 being created as:

resource "aws_route_table" "private-subnet" {
    vpc_id = "${aws_vpc.environment.id}"

    tags {
        Name        = "${var.vpc.tag}-private-subnet-route-table"
        Environment = "${var.vpc.tag}"
    }
}

resource "aws_route_table_association" "private-subnet" {
    count          = "${var.vpc.az_count}"
    subnet_id      = "${element(aws_subnet.private-subnets.*.id, count.index)}"
    route_table_id = "${aws_route_table.private-subnet.id}"
}

Later in the same run, a NAT instance gets launched that modifies the route table, adding itself as the default gateway via a user-data script. Now, whenever I run the plan again after adding a new resource, TF modifies the route table and removes the NAT instance route.

Since the changes are made by the user-data of another resource that TF itself manages, reverting them is the wrong thing to do.

I have asked about this in the TF Google group but never got a response explaining why this is happening; please see https://groups.google.com/forum/#!topic/terraform-tool/MxEXo9hhqHk

Thanks,
Igor
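
A possible way to avoid the conflict (a sketch, assuming the NAT instance is also managed by Terraform) is to declare the default route in Terraform instead of adding it from user-data, for example with the standalone aws_route resource:

resource "aws_route" "private_default" {
    route_table_id         = "${aws_route_table.private-subnet.id}"
    destination_cidr_block = "0.0.0.0/0"
    # Route through the NAT instance; "aws_instance.nat" is a placeholder name.
    instance_id            = "${aws_instance.nat.id}"
}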

aws_db_instance final_snapshot_identifier add flag (or some mechanism) to manage snapshots

This issue was originally opened by @geek876 as hashicorp/terraform#3450. It was migrated here as part of the provider split. The original body of the issue is below.


The final_snapshot_identifier and snapshot_identifier arguments within the aws_db_instance resource are quite useful; however, I don't believe we can directly manage snapshots via Terraform, as they don't exist as resources. It is also not possible to use something like 'date' directly within variable names in Terraform, which makes it difficult to manage a snapshot/create regime for RDS. For instance, one has to manually delete the old snapshot so that a new snapshot with the same name can be created.

Would it be more useful to have some or all of the following features included within the aws_db_instance resource?

  1. Have a flag that deletes the old snapshot if one exists with the same name as specified by final_snapshot_identifier at the time the RDS instance is deleted.
  2. Have an argument that allows you to control how many snapshots to keep, and housekeep them.
  3. For 2, have an argument that lets you control which snapshot to apply (e.g. latest)

If the above is implemented, I believe it would be possible to manage the RDS snapshot regime entirely via TF.
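
For reference, the current arguments look like this (a minimal sketch; values are placeholders) and illustrate point 1: destroy fails if a snapshot named mydb-final already exists, unless it is deleted manually first:

resource "aws_db_instance" "example" {
    identifier                = "mydb"
    engine                    = "mysql"
    instance_class            = "db.t2.micro"
    allocated_storage         = 10
    username                  = "admin"
    password                  = "change-me"
    # Restore from an existing snapshot (optional).
    snapshot_identifier       = "mydb-restore"
    # Snapshot taken on destroy; currently has to be unique or cleaned up manually.
    final_snapshot_identifier = "mydb-final"
}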
