hashicorp / terraform-provider-aws
The AWS Provider enables Terraform to manage AWS resources.
Home Page: https://registry.terraform.io/providers/hashicorp/aws
License: Mozilla Public License 2.0
(Migrated from hashicorp/terraform#13247)
AWS has recently announced some support for additional analytics and metrics for S3 buckets: https://aws.amazon.com/blogs/aws/s3-storage-management-update-analytics-object-tagging-inventory-and-metrics/
To support the above, the AWS API provides methods like GetBucketMetricsConfiguration and PutBucketMetricsConfiguration, which were added in aws-sdk-go 1.5.11.
Terraform configuration should support enabling and managing AWS S3 metrics configurations.
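As a sketch of what such a configuration could look like, here is the shape the provider eventually shipped as the aws_s3_bucket_metric resource; the bucket and metric names below are hypothetical:

```hcl
# Hypothetical bucket/name values; a metrics configuration for the
# whole bucket.
resource "aws_s3_bucket_metric" "example" {
  bucket = "example-bucket"
  name   = "EntireBucket"
}

# Metrics can also be scoped to a prefix (or tags) via a filter block.
resource "aws_s3_bucket_metric" "filtered" {
  bucket = "example-bucket"
  name   = "DocumentsPrefix"

  filter {
    prefix = "documents/"
  }
}
```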
This issue was originally opened by @dmaze as hashicorp/terraform#3725. It was migrated here as part of the provider split. The original body of the issue is below.
If you create an AMI that has an attached block device mapping that creates an EBS disk, and then create an instance that explicitly declares a block device mapping for that device, the plan will always be out-of-date unless you dig into the AMI data and explicitly declare a snapshot_id for the mapping. I'd guess this is related to the hash calculation for the block device mapping: since snapshot_id is empty, Terraform always plugs in an empty string for it, but when the instance is created from the AMI it comes from a non-empty snapshot, so the calculated hashes differ.
Example Packer file (based on a CentOS 6 public AMI):
{
  "min_packer_version": "0.8.5",
  "builders": [{
    "type": "amazon-ebs",
    "source_ami": "ami-c2a818aa",
    "ami_name": "second-device-in-ebs",
    "ssh_username": "root",
    "instance_type": "m3.medium",
    "region": "us-east-1",
    "availability_zone": "us-east-1d",
    "launch_block_device_mappings": [{
      "device_name": "/dev/sdb",
      "volume_type": "gp2",
      "volume_size": 60,
      "delete_on_termination": true
    }]
  }]
}
Terraform description (substitute the AMI key from the Packer output):
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "instance" {
  ami           = "ami-xxxxxxxx"
  instance_type = "m3.medium"

  root_block_device {
    volume_type = "gp2"
  }

  ebs_block_device {
    device_name = "/dev/sdb"
    volume_type = "gp2"
    volume_size = 60
  }
}
terraform plan output:
Refreshing Terraform state prior to plan...
aws_instance.instance: Refreshing state... (ID: i-2a4d5895)
The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed.
Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.
-/+ aws_instance.instance
ami: "ami-0e92e064" => "ami-0e92e064"
availability_zone: "us-east-1c" => "<computed>"
ebs_block_device.#: "1" => "1"
ebs_block_device.2576023345.delete_on_termination: "" => "1" (forces new resource)
ebs_block_device.2576023345.device_name: "" => "/dev/sdb" (forces new resource)
ebs_block_device.2576023345.encrypted: "" => "<computed>" (forces new resource)
ebs_block_device.2576023345.iops: "" => "<computed>" (forces new resource)
ebs_block_device.2576023345.snapshot_id: "" => "<computed>" (forces new resource)
ebs_block_device.2576023345.volume_size: "" => "60" (forces new resource)
ebs_block_device.2576023345.volume_type: "" => "gp2" (forces new resource)
ebs_block_device.3425641721.delete_on_termination: "1" => "0"
ebs_block_device.3425641721.device_name: "/dev/sdb" => ""
ephemeral_block_device.#: "0" => "<computed>"
instance_type: "m3.medium" => "m3.medium"
key_name: "" => "<computed>"
placement_group: "" => "<computed>"
private_dns: "ip-10-230-203-234.ec2.internal" => "<computed>"
private_ip: "10.230.203.234" => "<computed>"
public_dns: "ec2-54-145-167-177.compute-1.amazonaws.com" => "<computed>"
public_ip: "54.145.167.177" => "<computed>"
root_block_device.#: "1" => "1"
root_block_device.0.delete_on_termination: "true" => "1"
root_block_device.0.iops: "24" => "<computed>"
root_block_device.0.volume_size: "8" => "<computed>"
root_block_device.0.volume_type: "gp2" => "gp2"
security_groups.#: "1" => "<computed>"
source_dest_check: "true" => "1"
subnet_id: "" => "<computed>"
tenancy: "default" => "<computed>"
vpc_security_group_ids.#: "0" => "<computed>"
Plan: 1 to add, 0 to change, 1 to destroy.
(I am actually trying to do two things. The CentOS 6 base AMI looks like it's difficult to resize its root image, so if you need more than 8 GB of storage you must use an external disk. Also, I have a side Ansible setup that allows the system to run either from an AMI with our software preinstalled or from a plain CentOS 6 image to install it later; otherwise it might make sense to not declare any block devices at all and just let the AMI do its thing.)
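One way to "dig into the AMI data" without hard-coding anything is to look the image up with the aws_ami data source, whose block_device_mappings attribute exposes the per-device snapshot IDs; this is only a sketch, and the name filter is hypothetical:

```hcl
# Hypothetical name filter; adjust to match the Packer-built AMI.
data "aws_ami" "app" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["second-device-in-ebs*"]
  }
}

resource "aws_instance" "instance" {
  # The data source's block_device_mappings attribute carries the
  # ebs.snapshot_id for each device, so the mapping need not be
  # redeclared by hand.
  ami           = "${data.aws_ami.app.id}"
  instance_type = "m3.medium"
}
```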
This issue was originally opened by @rafaelmagu as hashicorp/terraform#3539. It was migrated here as part of the provider split. The original body of the issue is below.
Now that #3381 has been released, I've tried adding a couple of ES domains to our plans. We already had ES domains created manually via the AWS GUI, so I expected TF to warn me about a name conflict. Instead, it just added the two ARNs to its state.
I've then proceeded to change a snapshot setting, and TF hangs while trying to apply modifications:
2015/10/19 11:23:03 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:03 [TRACE] Waiting 1.6s before next try
2015/10/19 11:23:03 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:03 [TRACE] Waiting 1.6s before next try
2015/10/19 11:23:04 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:05 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:05 [TRACE] Waiting 3.2s before next try
2015/10/19 11:23:05 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:05 [TRACE] Waiting 3.2s before next try
2015/10/19 11:23:08 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:09 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:09 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:09 [TRACE] Waiting 6.4s before next try
2015/10/19 11:23:09 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:09 [TRACE] Waiting 6.4s before next try
2015/10/19 11:23:13 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:14 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:15 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:15 [TRACE] Waiting 10s before next try
2015/10/19 11:23:15 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:15 [TRACE] Waiting 10s before next try
2015/10/19 11:23:18 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:19 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:23 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:24 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:26 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:26 [TRACE] Waiting 10s before next try
2015/10/19 11:23:26 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:26 [TRACE] Waiting 10s before next try
2015/10/19 11:23:28 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:29 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:33 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:34 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:36 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:36 [TRACE] Waiting 10s before next try
2015/10/19 11:23:36 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:36 [TRACE] Waiting 10s before next try
2015/10/19 11:23:38 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:39 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:43 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:44 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:46 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:46 [TRACE] Waiting 10s before next try
2015/10/19 11:23:46 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:46 [TRACE] Waiting 10s before next try
2015/10/19 11:23:48 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:49 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:53 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:54 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:23:57 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:57 [TRACE] Waiting 10s before next try
2015/10/19 11:23:57 [DEBUG] terraform-provider-aws: 2015/10/19 11:23:57 [TRACE] Waiting 10s before next try
2015/10/19 11:23:58 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:23:59 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:03 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:04 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:07 [DEBUG] terraform-provider-aws: 2015/10/19 11:24:07 [TRACE] Waiting 10s before next try
2015/10/19 11:24:07 [DEBUG] terraform-provider-aws: 2015/10/19 11:24:07 [TRACE] Waiting 10s before next try
2015/10/19 11:24:08 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:09 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:13 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:14 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:17 [DEBUG] terraform-provider-aws: 2015/10/19 11:24:17 [TRACE] Waiting 10s before next try
2015/10/19 11:24:17 [DEBUG] terraform-provider-aws: 2015/10/19 11:24:17 [TRACE] Waiting 10s before next try
2015/10/19 11:24:18 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:19 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:23 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:24 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:27 [DEBUG] terraform-provider-aws: 2015/10/19 11:24:27 [TRACE] Waiting 10s before next try
2015/10/19 11:24:28 [DEBUG] terraform-provider-aws: 2015/10/19 11:24:28 [TRACE] Waiting 10s before next try
2015/10/19 11:24:28 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:29 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:33 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:34 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:38 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:38 [DEBUG] terraform-provider-aws: 2015/10/19 11:24:38 [TRACE] Waiting 10s before next try
2015/10/19 11:24:38 [DEBUG] terraform-provider-aws: 2015/10/19 11:24:38 [TRACE] Waiting 10s before next try
2015/10/19 11:24:39 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:43 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:44 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:48 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:48 [DEBUG] terraform-provider-aws: 2015/10/19 11:24:48 [TRACE] Waiting 10s before next try
2015/10/19 11:24:48 [DEBUG] terraform-provider-aws: 2015/10/19 11:24:48 [TRACE] Waiting 10s before next try
2015/10/19 11:24:49 [DEBUG] vertex root, waiting for: provider.aws (close)
2015/10/19 11:24:53 [DEBUG] vertex provider.aws (close), waiting for: aws_elasticsearch_domain.elastic-log
2015/10/19 11:24:54 [DEBUG] vertex root, waiting for: provider.aws (close)
(...)
It is also detecting a change in access policies applied to the domains even though the policies are the same. I am assuming the latter is because it cannot apply the changes correctly, so it isn't saving the policies either.
This issue was originally opened by @egarbi as hashicorp/terraform#2879. It was migrated here as part of the provider split. The original body of the issue is below.
$ terraform --version
Terraform v0.6.1
# Main routing table private to nat
resource "aws_route_table" "main" {
  route {
    cidr_block           = "0.0.0.0/0"
    network_interface_id = "eni-7aXXXX"
  }

  route {
    cidr_block  = "${var.vpn_cidr}"
    instance_id = "${module.vpn.minion_id}"
  }

  route {
    cidr_block  = "${lookup(var.tokio_vpc_cidr,var.env)}"
    instance_id = "${module.vpn-gw.id}"
  }

  route {
    cidr_block  = "${lookup(var.usa_vpc_cidr,var.env)}"
    instance_id = "${module.gw-free.id}"
  }

  tags {
    Name = "${lookup(var.domain,var.env)}-vpc-to-nat"
  }

  vpc_id = "${aws_vpc.testing.id}"
}
terraform plan...
~ aws_route_table.main
route.1714403534.cidr_block: "10.32.0.0/16" => "10.32.0.0/16"
route.1714403534.gateway_id: "" => ""
route.1714403534.instance_id: "i-64XXXX" => "i-64XXXX"
route.1714403534.network_interface_id: "eni-2aXXXX" => ""
route.1714403534.vpc_peering_connection_id: "" => ""
route.2010688411.cidr_block: "10.52.0.0/16" => "10.52.0.0/16"
route.2010688411.gateway_id: "" => ""
route.2010688411.instance_id: "i-36XXXX" => "i-36XXXX"
route.2010688411.network_interface_id: "eni-a7XXXX" => ""
route.2010688411.vpc_peering_connection_id: "" => ""
route.3093398733.cidr_block: "192.168.240.0/24" => "192.168.240.0/24"
route.3093398733.gateway_id: "" => ""
route.3093398733.instance_id: "i-d5XXXX" => "i-d52XXXX"
route.3093398733.network_interface_id: "eni-69XXXX" => ""
route.3093398733.vpc_peering_connection_id: "" => ""
route.3891333365.cidr_block: "0.0.0.0/0" => ""
route.3891333365.gateway_id: "" => ""
route.3891333365.instance_id: "i-0fXXXX" => ""
route.3891333365.network_interface_id: "eni-7aXXXX" => ""
route.3891333365.vpc_peering_connection_id: "" => ""
route.3991494531.cidr_block: "" => "0.0.0.0/0"
route.3991494531.gateway_id: "" => ""
route.3991494531.instance_id: "" => ""
route.3991494531.network_interface_id: "" => "eni-7aXXXX"
route.3991494531.vpc_peering_connection_id: "" => ""
This issue was originally opened by @jeffw-wherethebitsroam as hashicorp/terraform#4109. It was migrated here as part of the provider split. The original body of the issue is below.
Currently, source_security_group_id conflicts with cidr_blocks and self, self conflicts with cidr_blocks. Also, source_security_group_id can only have a single value. It would be useful to be able to write:
resource "aws_security_group_rule" "tcp-80" {
  type                     = "ingress"
  security_group_id        = "${aws_security_group.some_group.id}"
  from_port                = 80
  to_port                  = 80
  protocol                 = "tcp"
  cidr_blocks              = ["1.2.3.4/24", "5.6.7.8/32"]
  source_security_group_id = ["${aws_security_group.other_group.id}", "sg-abcdef"]
  self                     = true
}
It doesn't look like source_security_group_id needs to conflict with self; this looks like it is already supported in the code.
I'm not sure whether source_security_group_id and self need to conflict with cidr_blocks. The API reference (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_IpPermission.html) doesn't mention IpRanges and UserIdGroupPairs conflicting, but I don't know how this works in practice. In the worst case it would mean creating two separate ec2.IpPermission values.
Finally, the inline ingress block for aws_security_group already supports multiple source security groups in its security_groups argument. It would be good if source_security_group_id did the same (or if a source_security_group_ids list could be added).
Happy to write a pull request if this is wanted.
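Until something like a source_security_group_ids list exists, the usual workaround is one aws_security_group_rule per source; a sketch with hypothetical group names:

```hcl
# One rule for the CIDR sources.
resource "aws_security_group_rule" "tcp-80-cidrs" {
  type              = "ingress"
  security_group_id = "${aws_security_group.some_group.id}"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["1.2.3.4/24", "5.6.7.8/32"]
}

# A separate rule per source group, since source_security_group_id
# accepts a single ID.
resource "aws_security_group_rule" "tcp-80-other-group" {
  type                     = "ingress"
  security_group_id        = "${aws_security_group.some_group.id}"
  from_port                = 80
  to_port                  = 80
  protocol                 = "tcp"
  source_security_group_id = "${aws_security_group.other_group.id}"
}
```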
This issue was originally opened by @little-arhat as hashicorp/terraform#3429. It was migrated here as part of the provider split. The original body of the issue is below.
If you create an aws_eip in Terraform with some instance, you get the following error when trying to change the instance argument of aws_eip:
* aws_eip.serv: Failure associating EIP: Resource.AlreadyAssociated: resource eipalloc-* is already associated with associate-id eipassoc-*
status code: 400, request id: []
Shouldn't Terraform automatically disassociate eip in order to apply changes?
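A workaround that sidesteps the in-place change is to manage the association as its own resource with aws_eip_association, so Terraform can destroy and recreate the association without touching the allocation; a minimal sketch:

```hcl
resource "aws_eip" "serv" {
  vpc = true
}

# Changing instance_id here replaces only the association,
# not the EIP allocation itself, so no AlreadyAssociated error.
resource "aws_eip_association" "serv" {
  instance_id   = "${aws_instance.serv.id}"
  allocation_id = "${aws_eip.serv.id}"
}
```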
This issue was originally opened by @jgross206 as hashicorp/terraform#3517. It was migrated here as part of the provider split. The original body of the issue is below.
If a Terraform-managed policy is modified via the web console, the changes are not picked up on terraform refresh, so they are not corrected on the next terraform apply.
Repro:
provider "aws" {
  # access_key and secret_key should be set using "aws configure"
  region = "us-west-2"
}

resource "aws_iam_policy" "test" {
  name        = "test-policy"
  description = "A test policy"
  policy      = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "glacier:ListVaults",
        "glacier:DescribeVault",
        "glacier:GetVaultNotifications",
        "glacier:ListJobs",
        "glacier:DescribeJob",
        "glacier:GetJobOutput"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
EOF
}
1. Run terraform plan, then terraform apply. The policy should be created.
2. Modify the policy via the web console.
3. Run terraform refresh, then terraform apply; this should change the policy back to the one existing in our .tf file.
Instead, terraform plan and terraform apply succeed, but the policy in AWS does not match the policy in source. Happy to provide any more information needed.
This issue was originally opened by @phinze as hashicorp/terraform#3148. It was migrated here as part of the provider split. The original body of the issue is below.
On Read for AWS instances, if platform is Windows, call GetPasswordData and populate a windows_password attribute with its contents.
This should allow the remote-exec provisioner to be used on Windows instances like this:
resource "aws_instance" "foo" {
  # ...

  provisioner "remote-exec" {
    connection {
      type     = "winrm"
      password = "${self.windows_password}"
    }
  }
}
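For the record, the provider later shipped this feature under slightly different names: a get_password_data argument plus a password_data attribute, decrypted with the rsadecrypt() function. A sketch (the key path is hypothetical):

```hcl
resource "aws_instance" "foo" {
  # ...
  get_password_data = true

  provisioner "remote-exec" {
    connection {
      type = "winrm"
      # password_data is the encrypted blob from GetPasswordData;
      # key.pem is the hypothetical private key of the key pair
      # the instance was launched with.
      password = "${rsadecrypt(self.password_data, file("key.pem"))}"
    }
  }
}
```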
If you're using the EC2 metadata service for authentication credentials, Terraform calls the /latest/meta-data/iam/info endpoint and attempts to parse the AWS partition and account id from the InstanceProfileArn field by splitting the value into colon-separated fields.
Alternatives:
The /latest/meta-data/services/partition endpoint returns the AWS partition as an unformatted string.
The /latest/dynamic/instance-identity/document endpoint has accountId as a field.
Neither of those requires parsing the InstanceProfileArn field, and using them opens the possibility of learning the partition and account id from the EC2 metadata service even with statically configured or entered credentials, so you could use Terraform on an EC2 instance that didn't have an instance profile, if someone had a use for that situation.
This is definitely P3 territory, but it's worth putting on the record in case of future refactorings in this area. It seems related to hashicorp/terraform#12704 as well.
This issue was originally opened by @yissachar as hashicorp/terraform#4435. It was migrated here as part of the provider split. The original body of the issue is below.
I have the following ECS task:
resource "aws_ecs_task_definition" "web" {
  family                = "web"
  container_definitions = <<EOF
[{
  "name": "web",
  "image": "imagelocation",
  "memory": 500,
  "essential": true,
  "portMappings": [
    {
      "containerPort": 80,
      "hostPort": 80
    }
  ]
}]
EOF
}
If I then add an extra space after one of the commas, this will force a new version of the task as Terraform thinks that it has changed.
For my particular situation this causes problems since we have developers on Mac and Windows. Since we have Git set to convert line endings, the local file on Windows uses \r\n for line endings while Mac uses \n. Terraform sees that the container_definitions has changed and forces an update of all the resources involved.
I understand that it might be out of scope for Terraform to handle something like this, but it would be nice if there were some way of dealing with the issue. For now we are forced to restrict terraform plan and apply usage to one OS.
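With Terraform 0.12+ syntax, one way to sidestep whitespace and line-ending diffs is to build the JSON with the jsonencode() function, which always emits canonical JSON; a sketch:

```hcl
resource "aws_ecs_task_definition" "web" {
  family = "web"

  # jsonencode produces the same byte-for-byte JSON on every OS,
  # so editor whitespace and \r\n vs \n no longer cause diffs.
  container_definitions = jsonencode([
    {
      name      = "web"
      image     = "imagelocation"
      memory    = 500
      essential = true
      portMappings = [
        {
          containerPort = 80
          hostPort      = 80
        }
      ]
    }
  ])
}
```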
This issue was originally opened by @caarlos0 as hashicorp/terraform#3263. It was migrated here as part of the provider split. The original body of the issue is below.
I have an aws_spot_instance_request like this:
resource "aws_spot_instance_request" "seleniumgrid" {
  ami                                  = "${var.amiPuppet}"
  key_name                             = "${var.key}"
  instance_type                        = "c3.4xlarge"
  subnet_id                            = "${var.subnet}"
  vpc_security_group_ids               = ["${var.securityGroup}"]
  user_data                            = "${template_file.userdata.rendered}"
  wait_for_fulfillment                 = true
  spot_price                           = "${var.price}"
  availability_zone                    = "${var.zone}"
  instance_initiated_shutdown_behavior = "terminate"

  root_block_device {
    volume_size = 100
    volume_type = "gp2"
  }

  tags {
    Name       = "${var.name}.${var.domain}"
    Provider   = "Terraform"
    CA_TEAM    = "${var.team}"
    CA_ROLE    = "${var.role}"
    CA_SERVICE = "${var.service}"
  }
}
The tags are being applied only to the spot request itself, not to the underlying instance. Is this expected behavior? How can I change it?
Thanks!
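The request resource exposes the launched instance's ID as spot_instance_id, so in newer provider versions the instance can be tagged separately with the aws_ec2_tag resource; a sketch:

```hcl
# Tags the underlying instance rather than the spot request itself;
# one aws_ec2_tag resource per tag key.
resource "aws_ec2_tag" "seleniumgrid_name" {
  resource_id = "${aws_spot_instance_request.seleniumgrid.spot_instance_id}"
  key         = "Name"
  value       = "${var.name}.${var.domain}"
}
```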
This issue was originally opened by @tangentspace as hashicorp/terraform#4561. It was migrated here as part of the provider split. The original body of the issue is below.
Passing a list containing spaces to a CommaDelimitedList parameter in an aws_cloudformation_stack causes stack updates to fail even when the parameter has not been modified.
For example, the following stack is created fine on the first apply operation, but further attempts to run apply fail because terraform always thinks the ELBListener1 parameter has changed:
resource "aws_cloudformation_stack" "api" {
  name         = "${var.environment}-api"
  template_url = "https://s3.amazonaws.com/${var.template_bucket}/${var.environment}/ApplicationStack.json"

  parameters {
    ApplicationName = "api"
    ELBListener1    = "HTTP, 443, HTTP, 8080"
  }
}
The error message is:
* aws_cloudformation_stack.api: ValidationError: No updates are to be performed.
status code: 400, request id: 2f0d2349-b571-11e5-86b1-d56736d214fa
I believe what is happening is that the AWS API returns the string with the spaces removed when terraform queries it for the current value, so terraform always thinks the parameter has changed. For the time being I am able to work around this issue by simply removing the spaces from the list.
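Since the API appears to normalize the value by stripping spaces, keeping the configured value space-free avoids the perpetual diff; if the readable, spaced form is preferred in source, the replace() interpolation function can strip the spaces at plan time. A sketch:

```hcl
resource "aws_cloudformation_stack" "api" {
  name         = "${var.environment}-api"
  template_url = "https://s3.amazonaws.com/${var.template_bucket}/${var.environment}/ApplicationStack.json"

  parameters {
    ApplicationName = "api"
    # Strip spaces so the stored value matches what the API returns.
    ELBListener1 = "${replace("HTTP, 443, HTTP, 8080", " ", "")}"
  }
}
```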
This issue was originally opened by @calvinfo as hashicorp/terraform#3902. It was migrated here as part of the provider split. The original body of the issue is below.
We're attempting to create an elasticache cluster, but unfortunately the interpolation doesn't work for splats of nested fields.
resource "aws_elasticache_cluster" "cache" {
  cluster_id           = "api-dedupe-memcached"
  engine               = "memcached"
  node_type            = "cache.r3.8xlarge"
  port                 = 11211
  num_cache_nodes      = 2
  parameter_group_name = "default.memcached1.4"
  subnet_group_name    = "${aws_elasticache_subnet_group.default.name}"
  security_group_ids   = ["${aws_security_group.memcached.id}"]
}

# and then elsewhere...
value = "${join(",", aws_elasticache_cluster.cache.cache_nodes.*.address)}"
It could be fixed in the config/interpolation layer by marking it as a multi-valued field, but that would require changing the field-name behavior as well. Is that behavior you're willing to support? Otherwise it might be worth changing the cache resource.
This issue was originally opened by @stevendborrelli as hashicorp/terraform#3891. It was migrated here as part of the provider split. The original body of the issue is below.
When an ELB resource uses an IAM certificate created in another module, the dependency is not created. This means that terraform apply will often fail due to ordering issues.
Below is a picture of the graph:
This issue was originally opened by @thegedge as hashicorp/terraform#2886. It was migrated here as part of the provider split. The original body of the issue is below.
I've been frequently hitting the 40 minute timeout for aws_db_instance creation lately. I would consider bumping this timeout up to 60+ minutes, or maybe resetting the timeout counter back to zero whenever the state changes at all.
Another related note is that terraform doesn't consider the resource tainted or anything if there's a timeout. Not sure if this is by design, but it saves me a lot of hassle right now because the DB does eventually get created.
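The provider later gained per-resource operation timeouts, so the default can be raised in configuration rather than in code; a sketch:

```hcl
resource "aws_db_instance" "example" {
  # ...

  # Raise the create timeout beyond the default for slow engines;
  # update and delete can be set the same way.
  timeouts {
    create = "80m"
  }
}
```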
This issue was originally opened by @jwaldrip as hashicorp/terraform#1887. It was migrated here as part of the provider split. The original body of the issue is below.
When updating user_data, it's not necessary to destroy the instance. In fact, doing so can be devastating to an etcd cluster. What would be acceptable is for the machines to be stopped, the user data updated, and then started again.
This issue was originally opened by @st-isidore-de-seville as hashicorp/terraform#3475. It was migrated here as part of the provider split. The original body of the issue is below.
Is there a way that encryption can be specified for aws_s3_bucket_object?
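This was eventually supported via the server_side_encryption (and, for SSE-KMS, kms_key_id) arguments on aws_s3_bucket_object; a sketch with hypothetical bucket and key names:

```hcl
resource "aws_s3_bucket_object" "example" {
  bucket = "example-bucket"
  key    = "backups/db.dump"
  source = "db.dump"

  # "AES256" for SSE-S3, or "aws:kms" together with kms_key_id
  # for SSE-KMS.
  server_side_encryption = "AES256"
}
```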
This issue was originally opened by @xsmaster as hashicorp/terraform#3596. It was migrated here as part of the provider split. The original body of the issue is below.
I would just like to see if we could get a standard for "tag" or "tags".
Example:
https://www.terraform.io/docs/providers/aws/r/autoscaling_group.html
"tag" is used.
https://www.terraform.io/docs/providers/aws/r/elb.html
"tags" is used.
So I did not see this and was doing some cleanup, and found this nice error message:
There are warnings and/or errors related to your configuration. Please
fix these before continuing.
Errors:
I feel dumb now as the error message seems correct but it would be nice to have the same "tag" keyword used for all aws tags.
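For reference, the two shapes in question: aws_autoscaling_group uses repeated tag blocks (each carrying a propagate_at_launch flag, which is why it differs), while most resources such as aws_elb take a single tags map:

```hcl
resource "aws_autoscaling_group" "example" {
  # ...
  tag {
    key                 = "Name"
    value               = "example"
    propagate_at_launch = true
  }
}

resource "aws_elb" "example" {
  # ...
  tags {
    Name = "example"
  }
}
```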
This issue was originally opened by @radeksimko as hashicorp/terraform#4409. It was migrated here as part of the provider split. The original body of the issue is below.
Try the following:
~/.aws/credentials
[default]
aws_region = us-west-2
aws_access_key_id = Example_KEY_ID
aws_secret_access_key = ExampleSecretAccessKey
$ make test TEST=./builtin/providers/aws TESTARGS='-run=AWSConfig -v'
==> Checking that code complies with gofmt requirements...
go generate ./...
TF_ACC= go test ./builtin/providers/aws -run=AWSConfig -v -timeout=30s -parallel=4
=== RUN TestAWSConfig_shouldError
2015/12/22 13:09:42 [DEBUG] EC2 Metadata service not found, not adding EC2 Role Credential Provider
--- FAIL: TestAWSConfig_shouldError (0.00s)
config_test.go:27: Expected an error with empty env, keys, and IAM in AWS Config, got credentials.Value{AccessKeyID:"Example_KEY_ID", SecretAccessKey:"ExampleSecretAccessKey", SessionToken:""}
=== RUN TestAWSConfig_shouldBeStatic
2015/12/22 13:09:42 [DEBUG] EC2 Metadata service not found, not adding EC2 Role Credential Provider
2015/12/22 13:09:42 [DEBUG] EC2 Metadata service not found, not adding EC2 Role Credential Provider
--- PASS: TestAWSConfig_shouldBeStatic (0.00s)
=== RUN TestAWSConfig_shouldIAM
2015/12/22 13:09:42 [DEBUG] EC2 Metadata service found, adding EC2 Role Credential Provider
--- FAIL: TestAWSConfig_shouldIAM (0.00s)
config_test.go:97: AccessKeyID mismatch, expected: (somekey), got (Example_KEY_ID)
=== RUN TestAWSConfig_shouldIgnoreIAM
2015/12/22 13:09:42 [DEBUG] EC2 Metadata service found, adding EC2 Role Credential Provider
2015/12/22 13:09:42 [DEBUG] EC2 Metadata service found, adding EC2 Role Credential Provider
--- PASS: TestAWSConfig_shouldIgnoreIAM (0.00s)
=== RUN TestAWSConfig_shouldBeENV
2015/12/22 13:09:42 [DEBUG] EC2 Metadata service not found, not adding EC2 Role Credential Provider
--- PASS: TestAWSConfig_shouldBeENV (0.00s)
FAIL
exit status 1
FAIL github.com/hashicorp/terraform/builtin/providers/aws 0.017s
make: *** [test] Error 1
A possible workaround is to temporarily rename the default profile from default to something else (e.g. defaultx) so it's not loaded.
The real solution will differ based on whether we actually want to be testing the SharedCredentialsProvider (i.e. credentials provided by ~/.aws/credentials).
This issue was originally opened by @johnrengelman as hashicorp/terraform#2758. It was migrated here as part of the provider split. The original body of the issue is below.
Trying to use Terraform to manage an ASG Notification subscription to an SNS topic that is not managed by Terraform. First apply works but subsequent applies attempt to remove subscriptions that are not managed by terraform (i.e. an ASG that isn't terraform managed and has a subscription to the same SNS topic)
This is the same problem that appears in other areas (such as hashicorp/terraform#2100) where Terraform is attempting to access far too much information about a particular resource...generally in the areas where the Terraform resource is an ephemeral mapping of concrete AWS relationships.
This issue was originally opened by @clstokes as hashicorp/terraform#2875. It was migrated here as part of the provider split. The original body of the issue is below.
When running a Terraform plan or apply using AWS keys that do not have sufficient permission, an error message occurs. The error message is sometimes descriptive enough to instruct the user of the permission that is needed, but in some cases it isn't.
When managing S3 buckets, if the IAM user does not have s3:GetBucket* permissions, a Terraform execution will result in a generic Access Denied message. Terraform should provide the context of the provider API call when the call fails, so that the user knows which operation failed and can troubleshoot it, e.g. Error while calling [S3 GetBucketLocation] - 403 Access Denied.
$ terraform apply
aws_s3_bucket.bucket_test: Refreshing state... (ID: test_bucket_123321)
Error refreshing state: 1 error(s) occurred:
* 1 error(s) occurred:
* AccessDenied: Access Denied
status code: 403, request id: []
$
variable "access_key" {}
variable "secret_key" {}
variable "region" {}

provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.region}"
}

resource "aws_s3_bucket" "bucket_test" {
  bucket = "test_bucket_123321"
}
Creating an AWS Elasticsearch domain and setting the access policy using a new IAM role throws an error.
Terraform version: 0.9.7
policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
Reproduction with the AWS CLI, using aws iam create-role and aws es create-elasticsearch-domain:
aws iam create-role --role-name ed-test-1 --assume-role-policy-document file://policy.json && aws es create-elasticsearch-domain --region us-west-2 --domain-name "ed-test-1" --ebs-options 'EBSEnabled=true,VolumeSize=10' --access-policies '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::288840537196:role/ed-test-1"},"Action":"es:*","Resource":"arn:aws:es:*"}]}'
Expected: no error and resources to have been created
Actual:
An error occurred (InvalidTypeException) when calling the CreateElasticsearchDomain operation: Error setting policy: [{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::288840537196:role/ed-test-1"},"Action":"es:*","Resource":"arn:aws:es:*"}]}]
resource "aws_elasticsearch_domain" "example" {
  domain_name = "tf-test-1"

  ebs_options {
    ebs_enabled = true
    volume_size = 10
  }

  access_policies = <<CONFIG
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "${aws_iam_role.example_role.arn}"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:*"
    }
  ]
}
CONFIG
}

resource "aws_iam_role" "example_role" {
  name               = "es-domain-role-1"
  assume_role_policy = "${data.aws_iam_policy_document.instance-assume-role-policy.json}"
}

data "aws_iam_policy_document" "instance-assume-role-policy" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}
Expected: no error and resources to have been created
Actual:
Error applying plan:
1 error(s) occurred:
* aws_elasticsearch_domain.elastic: 1 error(s) occurred:
* aws_elasticsearch_domain.elastic: InvalidTypeException: Error setting policy: [{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": ["arn:aws:iam::13884053156:role/es-domain-role-1"]
},
"Action": "es:*",
"Resource": "arn:aws:es:eu-west-1:13884053156:domain/tf-test-1/*"
}
]
}
]
status code: 409, request id: 4070cc55-49e2-11e7-a4d9-15d110862029
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
make: *** [apply] Error 1
This issue was originally opened by @gtmtech as hashicorp/terraform#3625. It was migrated here as part of the provider split. The original body of the issue is below.
Terraform is much better these days at telling you the "Action" permission required in AWS in order to achieve things. If it can't complete an action because the Terraform credentials aren't associated with an IAM policy with sufficient actions, it tells you which action is lacking - this is very useful.
However, I spotted an omission when dealing with an aws_iam_policy resource. On running terraform plan -destroy, this is the error:
aws_iam_policy.my-iam-policy: Error reading IAM policy arn:aws:iam::###########:policy/my-iam-policy: &awserr.requestError{awsError:(*awserr.baseError)(0x#########), statusCode:403, requestID:"e8474b15-####-####-####-e754d2c48ddd"}
To keep with the rest of the helpful messages, it would be much more useful if the message said that it couldn't do its job because the iam:DeletePolicy action was missing from the policy attached to the Terraform credentials.
When destroying an OpsWorks cluster there seems to be a missing dependency: all OpsWorks objects should depend on the OpsWorks instances, so that when a stack is destroyed, all instances are destroyed first and only then any other resources. Otherwise all sorts of weird behavior can happen (such as OpsWorks being unable to destroy an instance because Terraform already destroyed the IAM role that allowed OpsWorks to act on EC2 instances).
Terraform v0.9.4
Basic configuration for OpsWorks stack with some aws_opsworks_instance resources.
Terraform should wait for all EC2 instances to be destroyed before destroying any other aws_opsworks_* resources.
aws_opsworks_* resources are being deleted regardless of whether aws_opsworks_instance has been destroyed or not.
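The "basic configuration" mentioned above can be sketched as follows. All names and values are hypothetical; the point is that the service role and the instances live in the same configuration with no ordering constraint on destroy:

```hcl
# Minimal repro sketch (names and values hypothetical): an OpsWorks stack
# whose service role is managed in the same configuration.
resource "aws_iam_role" "opsworks_service" {
  name               = "opsworks-service-role"
  assume_role_policy = "${data.aws_iam_policy_document.opsworks_assume.json}"
}

resource "aws_opsworks_stack" "main" {
  name                         = "example-stack"
  region                       = "us-west-2"
  service_role_arn             = "${aws_iam_role.opsworks_service.arn}"
  default_instance_profile_arn = "${aws_iam_instance_profile.opsworks.arn}"
}

resource "aws_opsworks_custom_layer" "web" {
  name       = "web"
  short_name = "web"
  stack_id   = "${aws_opsworks_stack.main.id}"
}

resource "aws_opsworks_instance" "web" {
  stack_id      = "${aws_opsworks_stack.main.id}"
  layer_ids     = ["${aws_opsworks_custom_layer.web.id}"]
  instance_type = "t2.micro"
}

# On destroy, nothing forces the aws_opsworks_instance resources to go first,
# so the IAM role can be deleted while OpsWorks still needs it to act on EC2.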
This issue was originally opened by @gtmtech as hashicorp/terraform#3654. It was migrated here as part of the provider split. The original body of the issue is below.
Using terraform 0.6.5
I am unable to get terraform to apply into a consistent state - that is, an apply followed by a plan always wants to blow away all my instance resources and start over. Every time. Despite the fact that terraform apply says everything was successful. Obviously this renders terraform unusable for this use case.
It seems the offending part relates to each instance having an EBS block volume, forcing a new resource despite the fact that the pre and post values are the same. Note the "forces new resource" sections below when doing a subsequent plan, after a successful apply:
-/+ aws_instance.foo_2
...
associate_public_ip_address: "false" => "0"
...
ebs_block_device.#: "2" => "1"
ebs_block_device.158414664.delete_on_termination: "1" => "0"
ebs_block_device.158414664.device_name: "/dev/xvda" => ""
ebs_block_device.3905984573.delete_on_termination: "true" => "1" (forces new resource)
ebs_block_device.3905984573.device_name: "/dev/xvdb" => "/dev/xvdb" (forces new resource)
ebs_block_device.3905984573.encrypted: "false" => "<computed>"
ebs_block_device.3905984573.iops: "30" => "<computed>"
ebs_block_device.3905984573.snapshot_id: "" => "<computed>"
ebs_block_device.3905984573.volume_size: "10" => "10" (forces new resource)
ebs_block_device.3905984573.volume_type: "gp2" => "gp2" (forces new resource)
ephemeral_block_device.#: "0" => "<computed>"
...
As you can see from above, it's pretty screwed, and terraform is now unusable.
This is my code for an instance (obviously a fair amount of variables in there)
resource "aws_instance" "foo_2" {
ami = "${var.foo.ami_image_2}"
availability_zone = "${var.foo.availability_zone_2}"
instance_type = "${var.foo.instance_type_2}"
key_name = "${var.foo.key_name_2}"
vpc_security_group_ids = ["${aws_security_group.base.id}",
"${aws_security_group.foo.id}"]
subnet_id = "${aws_subnet.foo_az2.id}"
associate_public_ip_address = false
source_dest_check = true
monitoring = true
iam_instance_profile = "${aws_iam_instance_profile.foo.name}"
count = "${var.foo.large_cluster_size}"
user_data = "${template_file.userdata_foo_2.rendered}"
ebs_block_device {
device_name = "/dev/xvdb"
volume_type = "gp2"
volume_size = "${var.foo.disk_size}"
delete_on_termination = "${var.foo.disk_termination}"
}
}
A couple of other things I've noticed -
You see the EBS block device ID above, "ebs_block_device.3905984573"? Well, the ID is the same for all instance resources in the output, which strikes me as a bit odd.
After terraform applying all my instances, I commented out all the ebs_block_device settings in all the instance definitions, and then tried a plan. Now terraform plan says there are no changes.
It looks like EBS block devices are severely not working in terraform 0.6.5
This issue was originally opened by @phinze as hashicorp/terraform#3495. It was migrated here as part of the provider split. The original body of the issue is below.
hashicorp/terraform#3485 attempted to fix force_delete, but @maratoid provided a test config that still yields errors.
Current status with his config is:
In order to fix, I think we need to:
This is all a bit jargon-y, happy to clarify anything if someone else digs into this before I do.
% terraform -v
Terraform v0.9.8
resource "aws_iam_group_membership" "release_manager_dynamo" {
provider = "aws.release_manager_iam"
name = "release-manager-${var.spaceID}-dynamo-membership"
group = "${var.release_manager_iam_group}"
users = ["${aws_iam_user.release_manager.name}"]
}
Hi, here is where I will propose what I'd like:
In the example snippet above, with just the one user listed, membership for that user would be ensured but any other members of the group would be left alone. This could be achieved with a new only_present or similar boolean. This would make the resource behave more like, say, heroku_app when it manages config vars.
If the group at AWS has users A and B but the applied config only lists B, A will be removed. If the config later works out to list A instead (such as when using a different terraform env), A would be re-added and B would be removed.
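The proposal could look like the following. The only_present argument is the issue's suggested name, not an existing provider feature:

```hcl
resource "aws_iam_group_membership" "release_manager_dynamo" {
  provider     = "aws.release_manager_iam"
  name         = "release-manager-${var.spaceID}-dynamo-membership"
  group        = "${var.release_manager_iam_group}"
  users        = ["${aws_iam_user.release_manager.name}"]
  only_present = true   # proposed argument, hypothetical: ensure membership
                        # for listed users without removing other members
}
```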
terraform apply
We are using a trick sort of described here, where using the magic ${aws:username} variable in policies you can create a user which can manage users named e.g. ${aws:username}-* and can only add them to a group named ${aws:username}. Elsewhere in our terraform config we have an aliased aws provider (release_manager_iam in the config above) using credentials which have this ability.
We are experiencing some issues at the time we deploy our API Gateway into AWS. It seems to be an error recently introduced since we had never seen it before and we haven't really modified our code base.
When we deploy the API for the first time, every resource seems to be applied correctly. However, on the second and subsequent deployments, terraform deletes the old API stage without creating the new one. This is especially confusing since when we check the output and .tfstate file the stage seems to have been created correctly. It's only when you either hit an endpoint or double-check the console that you can see the stage hasn't been created.
After some testing, we have found out that if we change the stage_name we can replicate the behaviour in subsequent deployments. Thus we can reproduce this behaviour:
0.9.6
resource "aws_api_gateway_deployment" "football_api" {
lifecycle {
create_before_destroy = true
}
depends_on = [
<List of aws_api_gateway_integration dependencies>
]
rest_api_id = "${aws_api_gateway_rest_api.football.id}"
stage_name = "${var.env}"
stage_description = "${timestamp()}"
description = "Deployed ${var.deployed}"
}
https://gist.github.com/martinezp/7662132b5f24463414ebac0a5c413128
Old stage removal, and new stage creation
Old stage removed but no new stage created even though it's apparently been created if we check the .tfstate
terraform apply
terraform apply
This issue was originally opened by @TheM0ng00se as hashicorp/terraform#3372. It was migrated here as part of the provider split. The original body of the issue is below.
In Terraform 0.6.3 I'm noticing that attaching a security group to an ENI isn't working as expected.
I've tried both with {$aws_security_group.csr_sg_inside.name} AND {$aws_security_group.csr_sg_inside.id}, and both throw the error.
Error applying plan:
2 error(s) occurred:
* aws_network_interface.csr_inside.0: Error creating ENI: InvalidSecurityGroupID.NotFound: The securityGroup ID '{$aws_security_group.csr_sg_inside.name}' does not exist
status code: 400, request id: []
* aws_network_interface.csr_inside.1: Error creating ENI: InvalidSecurityGroupID.NotFound: The securityGroup ID '{$aws_security_group.csr_sg_inside.name}' does not exist
status code: 400, request id: []
Terraform does indeed create the security group, and it exists. I had a brief look at the source... it seems like I should be able to do this:
resource "aws_network_interface" "csr_inside" {
count = "2"
subnet_id = "${element(aws_subnet.tools.*.id, count.index)}"
source_dest_check = "false"
security_groups = [ "{$aws_security_group.csr_sg_inside.id}" ]
attachment {
instance = "${element(aws_instance.csr.*.id, count.index)}"
device_index = "2"
}
}
This issue was originally opened by @mtb-xt as hashicorp/terraform#4335. It was migrated here as part of the provider split. The original body of the issue is below.
If a cloudformation template contains a parameter with NoEcho set to true (like a password), the parameter value is shown as asterisks "****".
Terraform 0.6.8 detects this as a parameter change and tries to reapply it on every 'plan' or 'apply':
aws_cloudformation_stack.redshift-prod: Modifying...
parameters.MasterUserPassword: "****" => "lolmegapassword"
This issue was originally opened by @slshen as hashicorp/terraform#4423. It was migrated here as part of the provider split. The original body of the issue is below.
A rule like:
egress {
rule_no = "100"
protocol = "icmp"
from_port = -1
to_port = -1
icmp_type = -1
icmp_code = -1
cidr_block = "0.0.0.0/0"
action = "allow"
}
Creates an ICMP "any" rule, but terraform always thinks it's different. plan says:
egress.2826807916.action: "allow" => ""
egress.2826807916.cidr_block: "0.0.0.0/0" => ""
egress.2826807916.from_port: "0" => "0"
egress.2826807916.icmp_code: "-1" => "0"
egress.2826807916.icmp_type: "-1" => "0"
egress.2826807916.protocol: "1" => ""
egress.2826807916.rule_no: "100" => "0"
egress.2826807916.to_port: "0" => "0"
Changing the egress rule so that from_port = 0 and to_port = 0 eliminates the diff.
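Applying that workaround from the report, the rule becomes:

```hcl
# Workaround from the report: with protocol = "icmp", setting from_port and
# to_port to 0 (instead of -1) matches what the API returns and removes the
# perpetual diff.
egress {
  rule_no    = "100"
  protocol   = "icmp"
  from_port  = 0
  to_port    = 0
  icmp_type  = -1
  icmp_code  = -1
  cidr_block = "0.0.0.0/0"
  action     = "allow"
}
```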
This issue was originally opened by @llevar as hashicorp/terraform#2995. It was migrated here as part of the provider split. The original body of the issue is below.
I'm trying to install Saltstack packages using apt-get and am getting very inconsistent behaviour. About 60% of my provisioning runs fail complaining about not being able to find the salt packages. Others finish with no problems. This is deploying to AWS US-East with a Ubuntu 14.04 AMI.
My .tf file:
https://gist.github.com/llevar/b9a2bd8823d8bd0f2993
A typical error looks like this:
aws_instance.salt_master (remote-exec): Connecting to remote host via SSH...
aws_instance.salt_master (remote-exec): Host: 54.205.141.103
aws_instance.salt_master (remote-exec): User: ubuntu
aws_instance.salt_master (remote-exec): Password: false
aws_instance.salt_master (remote-exec): Private key: true
aws_instance.salt_master (remote-exec): SSH Agent: true
aws_instance.salt_master (remote-exec): Connected!
aws_instance.salt_master (remote-exec): Reading package lists... 0%
aws_instance.salt_master (remote-exec): Reading package lists... 100%
aws_instance.salt_master (remote-exec): Reading package lists... Done
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree
aws_instance.salt_master (remote-exec): Reading state information... 0%
aws_instance.salt_master (remote-exec): Reading state information... 8%
aws_instance.salt_master (remote-exec): Reading state information... Done
aws_instance.salt_master (remote-exec): Some packages could not be installed. This may mean that you have
aws_instance.salt_master (remote-exec): requested an impossible situation or if you are using the unstable
aws_instance.salt_master (remote-exec): distribution that some required packages have not yet been created
aws_instance.salt_master (remote-exec): or been moved out of Incoming.
aws_instance.salt_master (remote-exec): The following information may help to resolve the situation:
aws_instance.salt_master (remote-exec): The following packages have unmet dependencies:
aws_instance.salt_master (remote-exec): salt-common : Depends: python-dateutil but it is not installable
aws_instance.salt_master (remote-exec): Depends: python-jinja2 but it is not installable
aws_instance.salt_master (remote-exec): Depends: python-croniter but it is not installable
aws_instance.salt_master (remote-exec): E: Unable to correct problems, you have held broken packages.
aws_instance.salt_master (remote-exec): Reading package lists... 0%
aws_instance.salt_master (remote-exec): Reading package lists... 100%
aws_instance.salt_master (remote-exec): Reading package lists... Done
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree
aws_instance.salt_master (remote-exec): Reading state information... 0%
aws_instance.salt_master (remote-exec): Reading state information... 8%
aws_instance.salt_master (remote-exec): Reading state information... Done
aws_instance.salt_master (remote-exec): Some packages could not be installed. This may mean that you have
aws_instance.salt_master (remote-exec): requested an impossible situation or if you are using the unstable
aws_instance.salt_master (remote-exec): distribution that some required packages have not yet been created
aws_instance.salt_master (remote-exec): or been moved out of Incoming.
aws_instance.salt_master (remote-exec): The following information may help to resolve the situation:
aws_instance.salt_master (remote-exec): The following packages have unmet dependencies:
aws_instance.salt_master (remote-exec): salt-master : Depends: salt-common (= 2015.5.3+ds-1trusty1) but it is not going to be installed
aws_instance.salt_master (remote-exec): Depends: python-m2crypto but it is not installable
aws_instance.salt_master (remote-exec): Depends: python-crypto but it is not installable
aws_instance.salt_master (remote-exec): Depends: python-msgpack but it is not installable
aws_instance.salt_master (remote-exec): Depends: python-zmq (>= 13.1.0) but it is not installable
aws_instance.salt_master (remote-exec): Recommends: python-git but it is not installable
aws_instance.salt_master (remote-exec): E: Unable to correct problems, you have held broken packages.
aws_instance.salt_master (remote-exec): Reading package lists... 0%
aws_instance.salt_master (remote-exec): Reading package lists... 100%
aws_instance.salt_master (remote-exec): Reading package lists... Done
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree
aws_instance.salt_master (remote-exec): Reading state information... 0%
aws_instance.salt_master (remote-exec): Reading state information... 8%
aws_instance.salt_master (remote-exec): Reading state information... Done
aws_instance.salt_master (remote-exec): Some packages could not be installed. This may mean that you have
aws_instance.salt_master (remote-exec): requested an impossible situation or if you are using the unstable
aws_instance.salt_master (remote-exec): distribution that some required packages have not yet been created
aws_instance.salt_master (remote-exec): or been moved out of Incoming.
aws_instance.salt_master (remote-exec): The following information may help to resolve the situation:
aws_instance.salt_master (remote-exec): The following packages have unmet dependencies:
aws_instance.salt_master (remote-exec): salt-minion : Depends: salt-common (= 2015.5.3+ds-1trusty1) but it is not going to be installed
aws_instance.salt_master (remote-exec): Depends: python-m2crypto but it is not installable
aws_instance.salt_master (remote-exec): Depends: python-crypto but it is not installable
aws_instance.salt_master (remote-exec): Depends: python-msgpack but it is not installable
aws_instance.salt_master (remote-exec): Depends: python-zmq (>= 13.1.0) but it is not installable
aws_instance.salt_master (remote-exec): Depends: dctrl-tools but it is not installable
aws_instance.salt_master (remote-exec): Recommends: debconf-utils but it is not installable
aws_instance.salt_master (remote-exec): E: Unable to correct problems, you have held broken packages.
aws_instance.salt_master (remote-exec): Reading package lists... 0%
aws_instance.salt_master (remote-exec): Reading package lists... 100%
aws_instance.salt_master (remote-exec): Reading package lists... Done
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree
aws_instance.salt_master (remote-exec): Reading state information... 0%
aws_instance.salt_master (remote-exec): Reading state information... 8%
aws_instance.salt_master (remote-exec): Reading state information... Done
aws_instance.salt_master (remote-exec): Some packages could not be installed. This may mean that you have
aws_instance.salt_master (remote-exec): requested an impossible situation or if you are using the unstable
aws_instance.salt_master (remote-exec): distribution that some required packages have not yet been created
aws_instance.salt_master (remote-exec): or been moved out of Incoming.
aws_instance.salt_master (remote-exec): The following information may help to resolve the situation:
aws_instance.salt_master (remote-exec): The following packages have unmet dependencies:
aws_instance.salt_master (remote-exec): salt-syndic : Depends: salt-master (= 2015.5.3+ds-1trusty1) but it is not going to be installed
aws_instance.salt_master (remote-exec): E: Unable to correct problems, you have held broken packages.
aws_instance.salt_master (remote-exec): /tmp/terraform_2019727887.sh: 6: /tmp/terraform_2019727887.sh: cannot create /etc/salt/minion: Directory nonexistent
aws_instance.salt_master (remote-exec): Reading package lists... 0%
aws_instance.salt_master (remote-exec): Reading package lists... 100%
aws_instance.salt_master (remote-exec): Reading package lists... Done
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree
aws_instance.salt_master (remote-exec): Reading state information... 0%
aws_instance.salt_master (remote-exec): Reading state information... 8%
aws_instance.salt_master (remote-exec): Reading state information... Done
aws_instance.salt_master (remote-exec): E: Unable to locate package python-pip
aws_instance.salt_master (remote-exec): Reading package lists... 0%
aws_instance.salt_master (remote-exec): Reading package lists... 100%
aws_instance.salt_master (remote-exec): Reading package lists... Done
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 0%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree... 50%
aws_instance.salt_master (remote-exec): Building dependency tree
aws_instance.salt_master (remote-exec): Reading state information... 0%
aws_instance.salt_master (remote-exec): Reading state information... 8%
aws_instance.salt_master (remote-exec): Reading state information... Done
aws_instance.salt_master (remote-exec): Package python-git is not available, but is referred to by another package.
aws_instance.salt_master (remote-exec): This may mean that the package is missing, has been obsoleted, or
aws_instance.salt_master (remote-exec): is only available from another source
aws_instance.salt_master (remote-exec): E: Package 'python-git' has no installation candidate
aws_instance.salt_master (remote-exec): sudo: salt-master: command not found
aws_instance.salt_master (remote-exec): sudo: salt-minion: command not found
Error applying plan:
1 error(s) occurred:
* Script exited with non-zero exit status: 1
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
This issue was originally opened by @justnom as hashicorp/terraform#3672. It was migrated here as part of the provider split. The original body of the issue is below.
Once the following state is achieved:
resource "aws_eip" "nat" {
vpc = true
network_interface = "${aws_network_interface.nat.id}"
depends_on = ["aws_network_interface.nat"]
lifecycle {
prevent_destroy = true
}
}
If the ENI is "in-use" by an instance, Terraform assumes that the state of the resource has drifted, and during a plan run says that the instance should be changed:
~ aws_eip.nat
instance: "i-66f125bc" => ""
The aws_eip resource should either not allow both instance and network_interface to be set, or should resolve the instance that the network_interface is attached to.
This issue was originally opened by @jessemyers as hashicorp/terraform#3493. It was migrated here as part of the provider split. The original body of the issue is below.
If you create an RDS replica, the multi_az and backup_retention_period both end up as zero, no matter what attribute you specify in the terraform resource declaration. This means that a plan or apply will see a difference between the current state (zero) and the specified state (if it is non-zero) and will spuriously attempt to apply a change. Applying the change will fail.
Either the aws_db_instance resource needs to be smarter about these values if replicate_source_db is set, or the documentation needs to be clearer that these parameters should be omitted for replicas.
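The documentation suggestion would amount to a replica declaration like this sketch (identifiers and values are hypothetical):

```hcl
# Sketch: when replicate_source_db is set, omit multi_az and
# backup_retention_period entirely, since the API reports them as zero for
# replicas regardless of what is requested.
resource "aws_db_instance" "replica" {
  identifier          = "example-replica"
  replicate_source_db = "${aws_db_instance.primary.id}"
  instance_class      = "db.t2.micro"

  # multi_az and backup_retention_period deliberately omitted
}
```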
This issue was originally opened by @robertfirek as hashicorp/terraform#3980. It was migrated here as part of the provider split. The original body of the issue is below.
When trying to apply the following advanced option on aws_elasticsearch_domain
(...)
advanced_options {
"rest.action.multi.allow_explicit_index" = true
}
(...)
it was not applied properly in AWS (the rest.action.multi.allow_explicit_index option was set to false).
Changing this option doesn't change the plan (I've changed true to false), making it impossible to control these options.
State file:
(...)
"advanced_options.#": "1",
"advanced_options.rest.action.multi.allow_explicit_index": "false",
(...)
Terraform version:
v0.6.5
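A possible mitigation, untested against this version and offered only as an assumption about how advanced_options values are serialized, is to pass the value as a string rather than a bare boolean:

```hcl
# Assumption: advanced_options values are passed through as strings, so
# quoting the boolean may avoid it being coerced to "false".
advanced_options {
  "rest.action.multi.allow_explicit_index" = "true"
}
```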
This issue was originally opened by @jwadolowski as hashicorp/terraform#4341. It was migrated here as part of the provider split. The original body of the issue is below.
Hey,
I've just renamed 2 aws_vpn_connection_route resources (all arguments stayed the same) and applied the changes, but it turned out my static routes ended up in a deleted state.
This is how the plan looked:
+ aws_vpn_connection_route.cognifide-office-int-route
destination_cidr_block: "" => "192.168.0.0/16"
vpn_connection_id: "" => "vpn-48bdca03"
- aws_vpn_connection_route.cognifide-office-route
+ aws_vpn_connection_route.cognifide-vpn-int-route
destination_cidr_block: "" => "10.214.0.0/16"
vpn_connection_id: "" => "vpn-48bdca03"
- aws_vpn_connection_route.cognifide-vpn-route
and this was the output of the terraform apply command:
aws_vpn_connection_route.cognifide-office-route: Destroying...
aws_vpn_connection_route.cognifide-vpn-route: Destroying...
aws_vpn_connection_route.cognifide-office-int-route: Creating...
destination_cidr_block: "" => "192.168.0.0/16"
vpn_connection_id: "" => "vpn-48bdca03"
aws_vpn_connection_route.cognifide-vpn-int-route: Creating...
destination_cidr_block: "" => "10.214.0.0/16"
vpn_connection_id: "" => "vpn-48bdca03"
aws_vpn_connection_route.cognifide-office-route: Destruction complete
aws_vpn_connection_route.cognifide-vpn-route: Destruction complete
aws_vpn_connection_route.cognifide-vpn-int-route: Creation complete
aws_vpn_connection_route.cognifide-office-int-route: Creation complete
Apply complete! Resources: 2 added, 0 changed, 2 destroyed.
The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.
State path: .terraform/terraform.tfstate
Immediately after this change I lost connectivity to my EC2 machines. Once I logged in to the AWS console I saw both routes were in a deleted state. After a manual change connectivity was restored.
This is where I saw these deleted states (unfortunately I didn't take a screenshot of that):
This issue was originally opened by @saswatp as hashicorp/terraform#3736. It was migrated here as part of the provider split. The original body of the issue is below.
After I upgraded to 0.6.5 a while back, I am not able to execute terraform plan and apply when working from home over the corporate VPN connection to the corporate network.
This was working on version 0.5.3, so I had to downgrade.
I don't see this issue when not using the VPN connection.
This issue was originally opened by @mwagg as hashicorp/terraform#2910. It was migrated here as part of the provider split. The original body of the issue is below.
I have an existing autoscaling launch configuration created by terraform for which I need to change the root block device volume size.
Contents of the .tf look like this.
resource "aws_launch_configuration" "some_launch_config" {
image_id = "..."
instance_type = "..."
iam_instance_profile = "..."
key_name = "..."
security_groups = ["...","..."]
lifecycle {
create_before_destroy = true
}
root_block_device {
volume_size = "${var.some_volume_size}"
}
}
After changing the value of the some_volume_size variable, the change does not seem to be recognized when running either plan or apply.
I have also tried replacing the variable with a hard coded value with the same result.
This issue was originally opened by @johnjelinek as hashicorp/terraform#1579. It was migrated here as part of the provider split. The original body of the issue is below.
How can I specify the state of my instances to be stopped?
This issue was originally opened by @aldarund as hashicorp/terraform#4612. It was migrated here as part of the provider split. The original body of the issue is below.
https://www.terraform.io/docs/providers/aws/r/iam_server_certificate.html
The example there has:
instance_port = 8000
instance_protocol = "http"
lb_port = 443
lb_protocol = "https"
The instance protocol is http and the lb_protocol is https, and the example has an ssl_certificate_id attribute.
While in
https://www.terraform.io/docs/providers/aws/r/elb.html
it is stated that ssl_certificate_id is:
Only valid when instance_protocol and lb_protocol are either HTTPS or SSL
So on the first page only one protocol is https, while the second page says both should be https.
(Migrating from hashicorp/terraform#13246)
AWS has recently announced some support for additional analytics and metrics for S3 buckets: https://aws.amazon.com/blogs/aws/s3-storage-management-update-analytics-object-tagging-inventory-and-metrics/
In order to support the above, the implementation in the AWS API is with methods like GetBucketAnalyticsConfiguration
and PutBucketAnalyticsConfiguration
which were added in go-aws-sdk 1.5.11.
Terraform configuration should support enabling and managing AWS S3 analytics configurations.
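One hypothetical shape for the requested resource, mirroring the Put/GetBucketAnalyticsConfiguration API (the resource name and arguments below are illustrative, not an existing schema):

```hcl
resource "aws_s3_bucket_analytics_configuration" "example" {
  bucket = "${aws_s3_bucket.example.bucket}"
  name   = "EntireBucket"

  storage_class_analysis {
    data_export {
      destination {
        s3_bucket_destination {
          bucket_arn = "${aws_s3_bucket.analytics.arn}"
        }
      }
    }
  }
}
```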
This issue was originally opened by @rgs1 as hashicorp/terraform#3915. It was migrated here as part of the provider split. The original body of the issue is below.
In AWS (surely there's something similar in other environments) an instance can be marked for retirement:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-retirement.html
From a quick git grep in the repo and search in the issues tracker, I couldn't see any logic to handle these instances. Has any thought been put into what Terraform should provide to deal with these instances (or maybe there is a way of doing this already)?
This issue was originally opened by @bitglue as hashicorp/terraform#3758. It was migrated here as part of the provider split. The original body of the issue is below.
A configuration which creates a security group which is used by an ELB also created in the same configuration won't always destroy successfully.
When an ELB is destroyed, its network interfaces can take a while to go away (some minutes). And these network interfaces reference the security group, so the security group can't be deleted until the network interfaces are gone.
Sometimes this problem is masked since Terraform has started retrying anything that results in a DependencyViolation. But eventually this retry times out, and sometimes it times out before the ELB's network interfaces have gone away, and then you'll get an error like:
Error applying plan:
1 error(s) occurred:
* aws_security_group.www_elb: DependencyViolation: resource sg-1945f97f has a dependent object
status code: 400, request id:
This issue was originally opened by @rbowlby as hashicorp/terraform#3216. It was migrated here as part of the provider split. The original body of the issue is below.
Adding an EIP to an existing resource does not cause other AWS terraform resources to update appropriately to reflect this change.
If I add an aws_eip to an existing EC2 instance, and that instance has a corresponding route53 CNAME that uses the EC2 public_dns, it will NOT update those route53 entries until a subsequent apply is performed.
This issue was originally opened by @feliksik as hashicorp/terraform#4163. It was migrated here as part of the provider split. The original body of the issue is below.
Using the AWS provider, it seems that setting a tag with a failed lookup() value prevents any tag from being set. I think we should receive an error here instead?
summary:
# this would be good
#Name = "${var.prefix} subnet ${ lookup(var.zones, count.index+1) }"
# this silently fails to set the tags, because I forgot the .index in count
Name = "${var.prefix} subnet ${ lookup(var.zones, count+1) }"
Maintainer = "${var.maintainer}"
full tf file:
variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "prefix" {}
variable "maintainer" {}
variable "az_count" { default = 3 }
variable "zones" {
  default = {
    "1" = "eu-west-1a"
    "2" = "eu-west-1b"
    "3" = "eu-west-1c"
  }
}

output "vpc_id" { value = "${aws_vpc.main.id}" }
output "subnet_ids" {
  value = "${ join(\",\", aws_subnet.sdnet.*.id) }"
}

provider "aws" {
  access_key = "${var.aws_access_key}"
  secret_key = "${var.aws_secret_key}"
  region     = "eu-west-1"
}

resource "aws_vpc" "main" {
  cidr_block           = "10.10.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags {
    Name       = "${var.prefix} vpc"
    Maintainer = "${var.maintainer}"
  }
}

resource "aws_internet_gateway" "main" {
  vpc_id = "${aws_vpc.main.id}"
  tags {
    Name       = "${var.prefix} gateway"
    Maintainer = "${var.maintainer}"
  }
}

resource "aws_route_table" "main" {
  vpc_id = "${aws_vpc.main.id}"
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.main.id}"
  }
  tags {
    Name       = "${var.prefix} route table"
    Maintainer = "${var.maintainer}"
  }
}

resource "aws_route_table_association" "main" {
  subnet_id      = "${element(aws_subnet.sdnet.*.id, count.index)}"
  route_table_id = "${aws_route_table.main.id}"
}

resource "aws_subnet" "sdnet" {
  count                   = "${var.az_count}"
  vpc_id                  = "${aws_vpc.main.id}"
  availability_zone       = "${ lookup(var.zones, count.index+1) }"
  cidr_block              = "10.10.${count.index+1}.0/24"
  map_public_ip_on_launch = true
  tags {
    # this would be good
    #Name = "${var.prefix} subnet ${ lookup(var.zones, count.index+1) }"
    # this silently fails to set the tags
    Name       = "${var.prefix} subnet ${ lookup(var.zones, count+1) }"
    Maintainer = "${var.maintainer}"
  }
}
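One way to surface this kind of mistake earlier is the three-argument form of lookup(), which supplies an explicit default; a bad key then shows up in the rendered tag value instead of the tags silently failing to apply. A hedged sketch (the "MISSING-ZONE" sentinel is illustrative):

```hcl
# Sketch: make a bad lookup key visible rather than silent.
resource "aws_subnet" "sdnet" {
  count                   = "${var.az_count}"
  vpc_id                  = "${aws_vpc.main.id}"
  availability_zone       = "${lookup(var.zones, count.index + 1)}"
  cidr_block              = "10.10.${count.index + 1}.0/24"
  map_public_ip_on_launch = true
  tags {
    # the third argument is an explicit default; a typo in the key now
    # renders as "MISSING-ZONE" instead of silently dropping all tags
    Name       = "${var.prefix} subnet ${lookup(var.zones, count.index + 1, "MISSING-ZONE")}"
    Maintainer = "${var.maintainer}"
  }
}
```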
This issue was originally opened by @igoratencompass as hashicorp/terraform#4105. It was migrated here as part of the provider split. The original body of the issue is below.
Hi all,
I have a private route table resource in ec2 being created as:
resource "aws_route_table" "private-subnet" {
  vpc_id = "${aws_vpc.environment.id}"
  tags {
    Name        = "${var.vpc.tag}-private-subnet-route-table"
    Environment = "${var.vpc.tag}"
  }
}

resource "aws_route_table_association" "private-subnet" {
  count          = "${var.vpc.az_count}"
  subnet_id      = "${element(aws_subnet.private-subnets.*.id, count.index)}"
  route_table_id = "${aws_route_table.private-subnet.id}"
}
Then, later in the same run, a NAT instance is launched that modifies the route table, adding itself as the default gateway via a user-data script. Now, whenever I run a plan again after adding a new resource, say, TF modifies the route table, removing the NAT instance's route.
Since TF is aware that these changes were made by the user-data of another resource, reverting those changes is the wrong thing to do.
I have asked about this in TF google group but never got any response back explaining why is this happening, please see https://groups.google.com/forum/#!topic/terraform-tool/MxEXo9hhqHk
Thanks,
Igor
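The usual way to keep Terraform and the NAT instance in agreement is to declare the default route in Terraform itself, via the standalone aws_route resource, instead of adding it from user-data; Terraform then owns the route and will not see it as drift. A hedged sketch (the "nat" instance name is hypothetical):

```hcl
# Sketch: declare the NAT default route in Terraform so it is not
# reverted on the next apply. "aws_instance.nat" is illustrative.
resource "aws_route" "private_default" {
  route_table_id         = "${aws_route_table.private-subnet.id}"
  destination_cidr_block = "0.0.0.0/0"
  instance_id            = "${aws_instance.nat.id}"
}
```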
This issue was originally opened by @johnjelinek as hashicorp/terraform#1579. It was migrated here as part of the provider split. The original body of the issue is below.
How can I specify the state of my instances to be stopped?
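For what it's worth, recent versions of the AWS provider ship an aws_ec2_instance_state resource for exactly this. A hedged sketch, assuming a provider version that includes it (the "web" names are illustrative):

```hcl
# Sketch: keep an existing instance in the "stopped" state.
resource "aws_ec2_instance_state" "web" {
  instance_id = aws_instance.web.id
  state       = "stopped"
}
```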
This issue was originally opened by @geek876 as hashicorp/terraform#3450. It was migrated here as part of the provider split. The original body of the issue is below.
The final_snapshot_identifier and snapshot_identifier arguments within the aws_db_instance resource are quite useful; however, I don't believe we can directly manage snapshots via Terraform, as they don't exist as resources. It is also not possible to use something like 'date' directly within variable names in Terraform, and this makes it difficult to manage a snapshot-creation regime for RDS. For instance, one has to manually delete an old snapshot so that a new snapshot with the same name can be created.
Would it be more useful to have some or all of these features included within the aws_db_instance resource?
If the above is implemented, I believe it would then be possible to manage an RDS snapshot regime entirely via TF.
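Later provider versions did gain an aws_db_snapshot resource that makes manual RDS snapshots first-class. A hedged sketch, assuming a provider version that includes it (the resource and identifier names are illustrative):

```hcl
# Sketch: manage a manual RDS snapshot as its own resource.
resource "aws_db_snapshot" "example" {
  db_instance_identifier = aws_db_instance.main.identifier
  db_snapshot_identifier = "example-snapshot"
}
```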