
terraform-provider-opc's People

Contributors

aareet, adormanau, appilon, grubernaut, juliosueiras, katbyte, mbfrahry, pwelch, radeksimko, scross01, stack72, tombuildsstuff


terraform-provider-opc's Issues

Creating additional opc compute instances will destroy the existing compute instances

Hi there,

I am running into an issue where I can create multiple opc compute instances just fine, but when I want to add additional compute instances, the execution plan destroys all previously created machines and redeploys the new instances.

Terraform Version

[deniz@lem]$ terraform -v
Terraform v0.9.11

Affected Resource(s)

Please list the resources as a list, for example:

  • opc_instance
  • opc_storage_volume
  • I think this is also resource-independent, because during new instance creation everything else gets destroyed

Terraform Configuration Files

I have divided the configuration into smaller files in my environment.

main.tf
data "template_file" "userdata" {
  template = <<JSON
{
  "userdata": {
    "pre-bootstrap": {
      "script": [
        "( echo n;echo p;echo 1;echo; echo; echo w;) | sudo fdisk /dev/xvdb",
        "sudo mkfs -t ext3 /dev/xvdb",
        "sudo mkdir /mnt/store",
        "sudo mount /dev/xvdb /mnt/store",
        "sudo chown -R opc:opc /mnt/store/",
        "echo '/dev/xvdb /mnt/store ext3 defaults 1 2' | sudo tee -a /etc/fstab"
      ]
    }
  }
}
JSON
}

resource "opc_compute_ip_reservation" "ipreservation" {
  count       = "${var.instance_count}"
  parent_pool = "/oracle/public/ippool"
  permanent   = true
}


resource "opc_compute_storage_volume" "data" {
  count = "${var.instance_count}"
  name  = "data-${count.index}"
  size  = 10
}

resource "opc_compute_instance" "test" {
  count      = "${var.instance_count}"
  name       = "deniz-test-instance-${count.index}"
  label      = "Terraform Provisioned Instance"
  shape      = "oc3"
  image_list = "/oracle/public/OL_6.8_UEKR3_x86_64"

  ssh_keys            = ["${opc_compute_ssh_key.test.name}"]
  instance_attributes = "${data.template_file.userdata.rendered}"

  storage {
    volume = "${element(opc_compute_storage_volume.data.*.id, count.index)}"
    index  = 1
  }

  networking_info {
    index          = 0
    shared_network = true
    nat            = ["${element(opc_compute_ip_reservation.ipreservation.*.id, count.index)}"]
  }
}

output "public_ip" {
  value = "${opc_compute_ip_reservation.ipreservation.*.ip}"
}
secrule.tf

#secrules, seclists, sec associations

resource "opc_compute_sec_rule" "ssh-vm-secrule" {
  name             = "ssh-vm-secrule"
  source_list      = "seciplist:${opc_compute_security_ip_list.public_internet.name}"
  destination_list = "seclist:${opc_compute_security_list.allow-ssh-access.name}"
  action           = "permit"
  application      = "${opc_compute_security_application.all.name}"
}

resource "opc_compute_security_application" "all" {
  name     = "all"
  protocol = "tcp"
  dport    = "22"
}

resource "opc_compute_security_ip_list" "public_internet" {
  name       = "public_internet"
  ip_entries = ["0.0.0.0/0"]
}

resource "opc_compute_security_list" "allow-ssh-access" {
  name                 = "allow-ssh-access"
  policy               = "deny"
  outbound_cidr_policy = "permit"
}

resource "opc_compute_security_association" "associate_SSH" {
  name    = "${format("associate_SSH%1d", count.index + 1)}"
  count   = "${var.instance_count}"
  vcable  = "${element(opc_compute_instance.test.*.vcable, count.index)}"
  seclist = "${element(opc_compute_security_list.allow-ssh-access.*.name, count.index)}"
}
ssh.tf

resource "opc_compute_ssh_key" "test" {
  name    = "test-key"
  key     = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQD05YI38Nwd79pWwuEEUGwcnVPzP2/EodeAkzQtmPa9S4KnDS2bAib2GkRkE8TVA/vZWkavfySaTab+TpBsZvf5r/YfqsK2gsRqyPxi+0G/bD92daBuHpHUlKZ55/qJsI5cSEbSmi5j1PAMiomW9DQzddJ8WX6TNdO1KDfRdQBYTN+br0sbl9K9QvxqqkQ0xKI5VwpEJPKSrx6onjQkUIcPUjfPQsCJMts1sW9pBpFdjoTf8Xm/49MPbcru1Sxs/8srsfEt8AHZcZxmKT6pqfdc4Sb+lIRPiba7J8/QGef1uD00BxOKjNtBMxWGilyv5ogZEwki8BsLGUWz6nttEbIzGkPio1ZqX+cCx5TItYoBHcf7DZD1Foa6T4tR2TH13791cWQbKjDc1vthVDu/fZ7BbVGsyeHEHEmh9uTQ7t6Bq8UHeTWKTSIh7RWanFKBeLMZHajZ/hcxS3tSdmDethjE0OyxBtZn8r8zTKIufjQEHaoqRbKDN/ezRfoi0/ocxenldSOjt3PlE1QvbIuSLtMYRxU4eWVRt/w55+dYhUDQ6uVRITLM3gKnI5o98DUO0bKrargADWpOu4vNZA3uhbN5z5GbLxdMNAKMcVophFYb1cXk6DktW+NA4KDQ0cf7Sp4McyClOhBVjSflPrRu1Dq/tWPPxjIYhBzIfCMgAabuyw== [email protected]"
  enabled = true
}

Debug Output

There are no errors per se, but I am pointing to the plan and apply gists for the record.

terraform plan and apply - initial two instances
https://gist.github.com/ubersol/1ea39805757f43d611c87948cb77c638

terraform plan and apply - additional three instances
https://gist.github.com/ubersol/3fb8210cb167acf6dce39e2a8396f86e

Panic Output

No panic

Expected Behavior

After executing the plan for the additional three instances, I should end up with a total of 5 machines.

Actual Behavior

Terraform is able to create my initial two instances without any problem; however, after re-executing the plan for three more instances, Terraform deleted all of my machines and their associated resources.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

First Plan couple machines

  1. terraform plan -var instance_count=2
    Then execute the plan
  2. terraform apply -var instance_count=2
    This gets executed fine in a fresh environment, no problem. Then proceed to plan and execute for three more machines
  3. terraform plan -var instance_count=3
  4. terraform apply -var instance_count=3

Steps 3 & 4 force Terraform to destroy the machines created in steps 1 & 2. The output can be seen in the gists mentioned above.

Important Factoids

I am running this on OPC (Oracle Public Cloud) using the terraform-provider-opc provider.

References

Not sure!

snapshot restoration timeout error

I am creating OPC instances, and for that I created a single main.tf file that does three things: a) restore a boot disk from a snapshot, b) restore a data disk from a snapshot, and c) create an instance from those snapshot-restored disks. During restoration I see a timeout error: restoration runs for only 600 seconds and then fails with a snapshot error. Only the boot disk is restored.

a) Can we increase the timeout parameter from 600 to some x seconds?
b) Or can we add a sleep between the snapshot restore resource and the opc instance resource in Terraform, so that instance creation starts only once the snapshot has been restored successfully? I tried the sleep command below after the snapshot restore, but it didn't help.
provisioner "local-exec" {
  command = "sleep 1000"
}
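As an alternative to a sleep, the ordering can sometimes be expressed declaratively with depends_on. A minimal sketch (the resource and variable names here are hypothetical, not from the original config), assuming the instance should simply wait until the restored volume resource has finished creating:

resource "opc_compute_storage_volume" "restored" {
  name        = "restored-data"
  size        = 10
  snapshot_id = "${opc_compute_storage_volume_snapshot.base.snapshot_id}"
}

resource "opc_compute_instance" "app" {
  name  = "app-instance"
  shape = "oc3"

  storage {
    volume = "${opc_compute_storage_volume.restored.name}"
    index  = 2
  }

  # depends_on makes Terraform finish creating the volume before starting
  # the instance; note it does NOT extend the volume's own 600-second
  # creation timeout, which is the underlying problem here.
  depends_on = ["opc_compute_storage_volume.restored"]
}

Note that referencing the volume's attributes in the storage block already creates an implicit dependency, so this only helps when no such reference exists; it cannot work around the provider-side timeout itself.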

Below is the error I got:

  • module.snapshot-restore.opc_compute_storage_volume.demo-final-dt: 1 error(s) occurred:

  • opc_compute_storage_volume.demo-final-dt: Error creating storage volume demo-dev-001-data: Timeout waiting for storage volume demo-dev-001-data to become available.

Can anybody please help with this quickly? I am fully stuck because of this.

How to use JSON with a Terraform script

Hi there,

Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.terraform.io/community.html.

Terraform Version

Terraform v0.10.4.

Affected Resource(s)

Please list the resources as a list, for example:

  • opc_instance
  • opc_storage_volume
  • Terraform

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

Terraform Configuration Files

# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key.

Debug Output

N/A

Panic Output

N/A

Expected Behavior

Need details for instance creation and network configuration using JSON and Terraform.

Actual Behavior

N/A

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. JSON template to create an instance and storage
  2. Terraform script to create the full configuration (instance/storage using the JSON template) plus the network configuration
  3. terraform apply

Important Factoids

Is there anything atypical about your account that we should know? For example: running in EC2 Classic? Custom version of OpenStack? Tight ACLs?

References

Are there any other GitHub issues (open or closed) or Pull Requests that should be linked here? For example:

  • GH-1234

Storage snapshot restoration error

Hi there,

I am trying to restore a couple of snapshots and got a strange error. Is anybody aware of this error? When I commented out “collocated = true” in my scripts, the snapshots were restored successfully.

2 error(s) occurred:

  • opc_compute_storage_volume.vol-data: 1 error(s) occurred:

  • opc_compute_storage_volume.vol-data: Error creating storage volume demo4d-st-data01: 400: {"message": "Volume properties incompatible with snapshot properties"}

  • opc_compute_storage_volume.vol-boot: 1 error(s) occurred:

  • opc_compute_storage_volume.vol-boot: Error creating storage volume demo4d-st-boot01: 400: {"message": "Volume properties incompatible with snapshot properties"}

    “collocated = true” worked last week. After I commented out “collocated = true”, the snapshots were restored successfully.

These are my Terraform scripts:

# snapshot
resource "opc_compute_storage_volume_snapshot" "base" {
  name        = "${var.snap_boot_name}"
  collocated  = true
  description = "Demo"
  tags        = "${var.snp_boot_tags}"
  volume_name = "${var.src_vol_boot}"
}

# restore
resource "opc_compute_storage_volume" "vol-boot" {
  name             = "${var.restore_boot_vol_name}"
  bootable         = true
  image_list       = "/oracle/public/OL_7.2_UEKR4_x86_64"
  description      = "Demo"
  size             = 12
  tags             = "${var.snp_boot_tags}"
  snapshot_id      = "${opc_compute_storage_volume_snapshot.base.snapshot_id}"
  image_list_entry = 1
}

Import doesn't always update image_list value

Terraform Version

Terraform v0.9.11

Affected Resource(s)

  • opc_compute_instance

Terraform Configuration Files

variable user {}
variable password {}
variable domain {}
variable endpoint {}


provider "opc" {
  user = "${var.user}"
  password = "${var.password}"
  identity_domain = "${var.domain}"
  endpoint = "${var.endpoint}"
}

Debug Output

NA

Panic Output

NA

Expected Behavior

image_list in tfstate file should be populated after import.

Actual Behavior

image_list was populated for instances created by the terraform apply command. For instances created manually in the OPC cloud console, the image_list value in the tfstate file is empty ("image_list": "") after import.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform import

Important Factoids

There is nothing atypical in the account used

References

NA

Unable to create storage_volume from a snapshot created by another user.

Hi there,


Terraform Version

Terraform v0.9.8

Affected Resource(s)

  • opc_storage_volume

Terraform Configuration Files

resource "opc_compute_storage_volume" "test4" {
  name        = "storageVolumekd4"
  description = "Description for the Storage Volume"
  size        = 30
  tags        = ["bar", "foo"]
  snapshot    = "KD_Test_Snap"
  snapshot_account = "Compute-a463859/username"

}

Debug Output

https://gist.github.com/kdvlr/855134fcd3923cbef03b7809a5dc4649

Panic Output

Expected Behavior

A new storage volume should have been created from the existing storage snapshot.

Actual Behavior

Unable to find the storage snapshot even though it exists.

Steps to Reproduce

  1. terraform apply

Important Factoids

None. Plain vanilla OPC

References

None.

Enable creation of resource under non-default container path

Terraform Version

terraform v0.10.0
opc provider v0.1.2

Affected Resource(s)

All resources

The opc provider currently creates all resources under the user's specific container, e.g. for a user with ID [email protected] all resources will be created under /Compute-mydomain/[email protected]. This matches the Compute console UI behavior; however, using the APIs it is possible to create resources under a custom container. The Terraform provider also needs to support this behavior.

E.g. explicitly setting the name of the resource to name = "/Compute-mydomain/mycontainer/myresource" should create the resource under the specifically named container path. Omitting the fully qualified container path would revert to using the default user-ID-based container.

Example config:

provider "opc" {
  user            = "${var.user}"
  password        = "${var.password}"
  identity_domain = "${var.domain}"
  endpoint        = "${var.endpoint}"
}

resource "opc_compute_storage_volume" "volume1" {
  size             = "12"
  description      = "Example bootable storage volume"
  name             = "/Compute-${var.domain}/mycontainer/boot-from-storage-example"
  bootable         = true
  image_list       = "/oracle/public/OL_6.8_UEKR3_x86_64"
  image_list_entry = 3
}

resource "opc_compute_instance" "instance1" {
  name  = "/Compute-${var.domain}/mycontainer/boot-from-storage-instance1"
  label = "Example instance with bootable storage"
  shape = "oc3"

  storage {
    index  = 1
    volume = "${opc_compute_storage_volume.volume1.name}"
  }

  boot_order = [1]
}

volume snapshot wrongly destroyed/recreated

Hi there,


Terraform Version

Terraform v0.10.8

Affected Resource(s)


  • opc_compute_storage_volume_snapshot

Terraform Configuration Files


# Create a storage volume

resource "opc_compute_storage_volume" "node1" {
  name             = "node1"
  description      = "Node 1 (persistent boot)"
  size             = 100
  bootable         = true
  storage_type     = "/oracle/public/storage/default"
  image_list       = "/oracle/public/OL_6.4_UEKR3_x86_64"
  image_list_entry = 6
}

# Create a non-collocated volume snapshot of this volume

resource "opc_compute_storage_volume_snapshot" "node1" {
  name        = "backup_node1"
  description = "backup_node1"
  tags        = ["backup"]
  collocated  = false
  volume_name = "${opc_compute_storage_volume.node1.name}"
}

Debug Output

Please provide a link to a GitHub Gist containing the complete debug output: https://www.terraform.io/docs/internals/debugging.html. Please do NOT paste the debug output in the issue; just paste a link to the Gist.
https://gist.github.com/cpauliat/15ac3450a7ac7401cd82dbc2c27c1d71

Panic Output

If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the crash.log.

Expected Behavior

What should have happened?

After the successful creation of the snapshot volume with terraform apply, the next run of terraform apply should make no modification.

Actual Behavior

What actually happened?
Instead, the snapshot volume is destroyed and recreated.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply --> creates the volume and the volume snapshot
  2. terraform plan --> should show no modification, but instead shows that the snapshot will be destroyed/recreated
  3. terraform apply --> should make no modification, but instead the snapshot is destroyed/recreated
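Until the spurious diff is fixed in the provider, one possible workaround is to tell Terraform to ignore the attribute that keeps showing a change. A hedged sketch (the attribute to ignore must be read off the terraform plan output; collocated here is only an assumption):

resource "opc_compute_storage_volume_snapshot" "node1" {
  name        = "backup_node1"
  description = "backup_node1"
  tags        = ["backup"]
  collocated  = false
  volume_name = "${opc_compute_storage_volume.node1.name}"

  # Ignore the attribute(s) that terraform plan reports as changed,
  # so the snapshot is not destroyed/recreated on every apply.
  lifecycle {
    ignore_changes = ["collocated"]
  }
}

Whether ignoring that attribute is acceptable depends on whether you ever intend to change it through Terraform.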

Important Factoids


References


OPC compute instance create based on docs fails with "Error creating instance"

Terraform Version

v0.9.11

Affected Resource(s)

Please list the resources as a list, for example:

  • opc_compute_instance

Terraform Configuration Files

# Configure the Oracle Public Cloud
provider "opc" {
  user            = "cloud.admin"
  password        = "OMITTED"
  identity_domain = "OMITTED"
  endpoint        = "https://api-Z53.compute.us6.oraclecloud.com/"
}


resource "opc_compute_ip_network" "test" {
  name                = "internal-network"
  description         = "Terraform Provisioned Internal Network"
  ip_address_prefix   = "10.0.1.0/24"
  public_napt_enabled = false
}

resource "opc_compute_storage_volume" "test" {
  name = "internal"
  size = 100
}

resource "opc_compute_instance" "test" {
  name       = "instance1"
  label      = "Terraform Provisioned Instance"
  shape      = "oc3"
  image_list = "/oracle/public/oel_6.7_apaas_16.4.5_1610211300"

  storage {
    volume = "${opc_compute_storage_volume.test.name}"
    index  = 1
  }

  networking_info {
    index          = 0
    nat            = ["ippool:/oracle/public/ippool"]
    shared_network = true
  }
}

Debug Output

https://gist.github.com/craigbarrau/9aeafbb9e261f207ebb14bd0a2cc348c

Panic Output

N/A

Expected Behavior

I would expect it to work, as it is copied and pasted from the official documentation example.

Actual Behavior

It failed with an error:

* opc_compute_instance.test: Error creating instance /Compute-OMITTED/cloud.admin/instance1: 400: {"message": "seclist /Compute-OMITTED/default/default not found"}

Note: identity domain is omitted from the above output
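The error suggests the account is missing the default seclist (/Compute-OMITTED/default/default) that shared-network instances are placed into when none is specified. A possible workaround (an untested sketch; sec_lists support on shared-network interfaces may vary by provider version) is to create a security list explicitly and attach it in networking_info:

resource "opc_compute_security_list" "instance_seclist" {
  name                 = "instance-seclist"
  policy               = "permit"
  outbound_cidr_policy = "permit"
}

resource "opc_compute_instance" "test" {
  name       = "instance1"
  label      = "Terraform Provisioned Instance"
  shape      = "oc3"
  image_list = "/oracle/public/oel_6.7_apaas_16.4.5_1610211300"

  networking_info {
    index          = 0
    nat            = ["ippool:/oracle/public/ippool"]
    shared_network = true

    # Attach an explicit seclist instead of relying on the
    # account's default seclist, which does not exist here.
    sec_lists = ["${opc_compute_security_list.instance_seclist.name}"]
  }
}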

Steps to Reproduce

  1. create tf file to match the provided example
  2. replace OMITTED with details for a real opc instance
  3. terraform apply

Important Factoids

Nothing of significance

Import doesn't use tfvars file

Hi there,


Terraform Version

Terraform v0.9.11

Affected Resource(s)

  • opc_instance
  • opc_compute_image_list
  • opc_compute_ssh_key

I have not tested others, as I had the same issue for all three

Terraform Configuration Files

There is no configuration file needed for import


Debug Output

NA

Panic Output

NA

Expected Behavior

I expect that Terraform does not prompt for variable values when a terraform.tfvars file is available. This works as expected for other commands, but import prompts for the variables.

Actual Behavior

Prompts for the variable values

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform import opc_compute_instance.compute_instance1 compute_instance1/6abcfdcb-0a19-45fc-ac52-815615dc59af

Important Factoids

NA

References

NA

Terraform snapshot restoration

This issue was originally opened by @tsgmanh as hashicorp/terraform#15560. It was migrated here as a result of the provider split. The original body of the issue is below.


Hi There,

I am trying to restore a storage volume in opc cloud and see a strange error. can anybody please help on this?

  • module.snapshot-restore.opc_compute_storage_volume.demo-final-bt: 1 error(s) occurred:

  • opc_compute_storage_volume.demo-final-bt: Error creating storage volume snap-000-boot: 400: {"message": "Volume properties incompatible with snapshot properties"}

  • module.snapshot-restore.opc_compute_storage_volume.demo-final-dt: 1 error(s) occurred:

  • opc_compute_storage_volume.demo-final-dt: Error creating storage volume snap-000-data: 400: {"message": "Volume properties incompatible with snapshot properties"}

security list updated on subsequent applies

Terraform Version

0.9.8

Affected Resource(s)

opc_compute_security_list

Terraform Configuration Files

resource "opc_compute_security_list" "sec_list1" {
  name                 = "sec-list-1"
  policy               = "permit"
  outbound_cidr_policy = "deny"
}

Behavior

After the first apply, the state shows the policy and outbound_cidr_policy values in upper case, but the config is in lower case, so on subsequent applies Terraform sees a difference and updates the resource each time.

Steps to Reproduce

$ terraform apply
$ terraform plan

~ opc_compute_security_list.SSH_ALLOW
    outbound_cidr_policy: "PERMIT" => "permit"
    policy:               "DENY" => "deny"

Plan: 0 to add, 1 to change, 0 to destroy.

Workaround

Explicitly set policy and outbound_cidr_policy to PERMIT or DENY (i.e. use upper case).
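The same configuration with the workaround applied, using the upper-case values the API returns so that subsequent plans show no diff:

resource "opc_compute_security_list" "sec_list1" {
  name                 = "sec-list-1"
  policy               = "PERMIT"
  outbound_cidr_policy = "DENY"
}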

Add Timeouts to OPC resources

The OPC provider doesn't yet support timeouts. Can we add timeouts to the resources? Storage resources in particular take a long time and don't work with the default timeouts.
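For reference, this is what the request would look like in a config, using Terraform's standard customizable-timeouts syntax (a sketch of the requested feature, not something the OPC provider supports yet; the resource name and durations are arbitrary examples):

resource "opc_compute_storage_volume" "data" {
  name = "data-volume"
  size = 2048

  # Requested: allow overriding the default operation timeouts per resource.
  timeouts {
    create = "45m"
    delete = "30m"
  }
}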

Import resources from OPC

Hi,

question 1

Is there a way to import OPC resources (for example instances, storage, etc.) into Terraform? That is, resources that were not created by Terraform.

question 2

Is it possible to use the JSON file of an OPC orchestration as Terraform input to create resources? This would be a workaround for question 1.

Thanks
Carlo

opc_storage_container allowed_origins not set correctly

Terraform Version

terraform v0.10.0
opc provider v0.1.2

Affected Resource(s)

opc_storage_container

Terraform Configuration Files

provider "opc" {
  user            = "${var.user}"
  password        = "${var.password}"
  identity_domain = "${var.domain}"
  endpoint        = "${var.endpoint}"
  storage_endpoint = "https://${var.domain}.storage.oraclecloud.com/"
}

resource "opc_storage_container" "default" {
  name        = "storage-container-1"
  allowed_origins = [ "cloud.oracle.com", "oraclecloud.com" ]
}

Expected Behavior

The access-control-allow-origin attribute shown in the Storage Cloud console should be set with the provided values

Actual Behavior

The access-control-expose-headers attribute shown in the Storage Cloud console is set instead

REST API call appears to be setting X-Container-Meta-Access-Control-Expose-Headers instead of X-Container-Meta-Access-Control-Allow-Origin.

expose_headers should be added as a separate resource attribute (api docs)

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply

Terraform OPC provider uses launchplan rather than orchestration, making server shape changes almost impossible

This issue was originally opened by @pranay-oracle as hashicorp/terraform#14502. It was migrated here as part of the provider split. The original body of the issue is below.


Hi there,

Terraform Version

Terraform 0.9 is the one I checked.

Affected Resource(s)

Please list the resources as a list, for example:

  • OPC provider


As Terraform uses a launchplan-style mechanism, it is impossible for us to change the shape of an instance or to attach new volumes that remain persistent across orchestration starts.

Customers choose the orchestration stop/start mechanism in pay-per-use environments.

Use case:
Shut down all non-prod instances on the weekend to reduce cost, and bring them up on Monday once the developers are back.

Provisioner remote-exec using in modules

I am creating a few OPC instances using the module concept so that I can reuse my module at any time. After the instance creation has completed, I want to run a few remote-exec provisioners to copy files and install the application. Below is a snippet of my code; unfortunately it is not working. Can anybody please help?

main.tf

module "instance" {
---
---
}

resource "null_resource" "cluster" {
  connection {
    user        = "user"
    host        = "${element(var.inst_ips, count.index)}"
    type        = "ssh"
    private_key = "${file("id_rsa")}"
    timeout     = "10m"
  }

  provisioner "file" {
    source      = "${path.module}/scripts/"
    destination = "/src/"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /src/setup.sh",
      "/src/setup.sh",
    ]
  }
}

module/instance/main.tf

resource "opc_compute_instance" "opc-inst" {
  count = "${length(var.inst_name)}"
  name  = "web-${format("%03d", count.index+1)}-${element(var.inst_name, count.index)}"
  label = "${var.label}"
  shape = "${var.shape}"
  networking_info {
    index      = 0
    ip_network = "${var.ip_network}"
    ip_address = "${element(var.inst_ips, count.index)}"
    vnic_sets  = ["${var.vnic_sets}"]
    vnic       = "${var.vnic}"
    nat = [ "${var.nat}" ]
    name_servers   = "${var.rfs_ns}"
    search_domains = "${var.search_domains}"
  }

  ssh_keys = ["sshkey"]

  storage {
    volume = "${var.boot_disk}"
    index  = 1
  }

  storage {
    volume = "${var.data_disk}"
    index  = 2
  }

  boot_order = [1]
}

add opc_compute_storage_volume_snapshot as a data source

The opc_compute_storage_volume_snapshot should be available as a data source as well as a resource. This will make it possible to reference existing snapshots when creating new storage volumes.

data "opc_compute_storage_volume_snapshot" "snapshot1" {
  name = "snapshot1"
}

resource "opc_compute_storage_volume" "storage-from-snapshot1" {
  name = "storage1"
  bootable = true
  snapshot_id = "${data.opc_compute_storage_volume_snapshot.snapshot1.snapshot_id}"
  size = "${data.opc_compute_storage_volume_snapshot.snapshot1.size}"	
}

Storage vol addition in running OPC instance.

Hi there,


Terraform Version

  • Terraform v0.10.4

Affected Resource(s)

Please list the resources as a list, for example:

  • opc_instance
  • opc_storage_volume

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

  • While adding a new storage volume, the instance is recreated with the new storage volume.
  • Unable to see the orchestration for the instance.

Terraform Configuration Files

provider "opc" {
  user            = ""
  password        = ""
  identity_domain = "******"
  endpoint        = "https://compute.*******.oraclecloud.com/"
}

resource "opc_compute_storage_volume" "volume1" {
  size             = "13"
  description      = "Example bootable storage volume"
  name             = "boot-from-storage-example"
  bootable         = true
  image_list       = "/oracle/public/OL_6.4_UEKR4_x86_64"
  image_list_entry = 3
}

resource "opc_compute_storage_volume" "volume2" {
  size        = "4"
  description = "Example persistent storage volume"
  name        = "persistent-storage-example"
}

resource "opc_compute_storage_volume" "volume3" {
  size        = "4"
  description = "Example persistent storage volume"
  name        = "persistent-storage-example-vol3"
}

# Newly added
resource "opc_compute_storage_volume" "volume4" {
  size        = "2"
  description = "Example persistent storage volume"
  name        = "persistent-storage-example-vol4"
}

resource "opc_compute_instance" "instance1" {
  name  = "boot-from-storage-instance1"
  label = "Example instance with bootable storage"
  shape = "oc3"

  storage {
    index  = 1
    volume = "${opc_compute_storage_volume.volume1.name}"
  }

  storage {
    index  = 2
    volume = "${opc_compute_storage_volume.volume2.name}"
  }

  storage {
    index  = 3
    volume = "${opc_compute_storage_volume.volume3.name}"
  }

  # Newly added
  storage {
    index  = 4
    volume = "${opc_compute_storage_volume.volume4.name}"
  }

  boot_order = [1]
}

Debug Output

NA

Panic Output

Instance recreated

Expected Behavior

The new volume was expected to be added to the existing instance without recreating it.

Actual Behavior

Instance recreated with the new volume

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:
  1. terraform plan
  2. terraform apply
  3. Update the .tf file with the new volume
  4. terraform apply

rename opc_compute_ip_network fails with 404 error

Terraform Version

terraform v0.10.2
opc_provider v0.1.2

Affected Resource(s)

  • opc_compute_ip_network

Terraform Configuration Files

resource "opc_compute_ip_network" "example" {
  name = "example-subnet3"
  ip_address_prefix = "192.168.0.0/24"
}

Steps to Reproduce

terraform apply, rename the resource to example-subnet4, and terraform apply again

Expected Behavior

The IP Network resource should be destroyed and then recreated with the new name

Actual Behavior

The provider tries to rename the resource, but fails:

$ terraform plan
...
  ~ module.opc_ip_network.opc_compute_ip_network.network[2]
      name: "example-subnet3" => "example-subnet4"
$ terraform apply
...
module.opc_ip_network.opc_compute_ip_network.network.2: Modifying... (ID: example-subnet3)
  name: "example-subnet3" => "example-subnet4"
Error applying plan:

1 error(s) occurred:

* module.opc_ip_network.opc_compute_ip_network.network[2]: 1 error(s) occurred:

* opc_compute_ip_network.network.2: Error updating IP Network 'example-subnet4': 404: {"message": "no IpNetwork object named /Compute-usorclptsc53098/[email protected]/example-subnet4 found"}

OPC Error: Error running plan: 1 error(s) occurred: * provider.opc: 401: {"message": "Incorrect username

Hi team,
I am trying to connect to an Oracle Compute Classic instance using OPC and Terraform.
Terraform version: 0.10.8
OPC provider: 0.1.3.x

Error: Error running plan: 1 error(s) occurred: * provider.opc: 401: {"message": "Incorrect username

But with the same login I am able to log in to the Oracle Compute Classic console and create instances.

  • provider.opc: 401: {"message": "Incorrect username or password"}

Please check on this.
Thanks
Ravi

code:
provider "opc" {
identity_domain = "rg193860"
endpoint = "https://compute.gbcom-south-1.oraclecloud.com/"
user = "@"
password = "****"
}

resource "opc_compute_ip_reservation" "production" {
parent_pool = "/oracle/public/ippool"
permanent = true

Debug Output

https://gist.github.com/rg193860/33cb0222ced52e068647dfca81746717

Terraform Version

D:\terraform_0.10.8_windows_amd64>terraform -v
2017/11/09 16:36:06 [INFO] Terraform version: 0.10.8 44110772d9ffd0ec3589943c64c93c24a5fff06
2017/11/09 16:36:06 [INFO] Go runtime version: go1.9
2017/11/09 16:36:06 [INFO] CLI args: []string{"D:\terraform_0.10.8_windows_amd64\terraform.exe", "-v"}
2017/11/09 16:36:06 [DEBUG] Attempting to open CLI config file: C:\Users\rgudur\AppData\Roaming\terraform.rc
2017/11/09 16:36:06 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2017/11/09 16:36:06 [DEBUG] CLI config is &main.Config{Providers:map[string]string{}, Provisioners:map[string]string{}, DisableCheckpoint:false, DisableCheckpointSignature:false, PluginCacheDir:"", Credentials:map[string]map[string]interface {}(nil), CredentialsHelpers:map[string]*main.ConfigCredentialsHelper(nil)}
2017/11/09 16:36:06 [INFO] CLI command args: []string{"version", "-v"}
2017/11/09 16:36:06 [DEBUG] plugin: waiting for all plugin processes to complete...
Terraform v0.10.8

Affected Resource(s)

OPC provider. for Oracle cloud compute classic.

Debug Output

https://gist.github.com/rg193860/33cb0222ced52e068647dfca81746717

Error in instance JSON, unable to use two NAT IP within shared network configuration

Hi there,

Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.terraform.io/community.html.

Terraform Version

Affected Resource(s)

Please list the resources as a list, for example:

  • opc_instance
  • OPC Network Config

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

Terraform Configuration Files

JSON-

{
  "account" : "/Compute-/default",
  "description" : "RG-NAT",
  "tags" : [ ],
  "name" : "/Compute-/[email protected]/RG-NAT",
  "objects" : [ {
    "account" : "/Compute-/default",
    "desired_state" : "inherit",
    "description" : "",
    "persistent" : true,
    "template" : {
      "managed" : true,
      "name" : "/Compute-/[email protected]/RG-NAT_storage",
      "bootable" : true,
      "shared" : false,
      "imagelist" : "/oracle/public/OL_6.4_UEKR3_x86_64",
      "properties" : [ "/oracle/public/storage/default" ],
      "size" : "12G"
    },
    "label" : "RG-NAT_storage_1",
    "orchestration" : "/Compute-/[email protected]/RG-NAT",
    "type" : "StorageVolume",
    "name" : "/Compute-/[email protected]/RG-NAT/RG-NAT_storage"
  }, {
    "account" : "/Compute-/default",
    "desired_state" : "inherit",
    "description" : "RG-NAT",
    "persistent" : true,
    "template" : {
      "networking" : {
        "eth1" : {
          "vnic" : "/Compute-/[email protected]/RG-NAT_eth1",
          "ip" : "10.0.0.20",
          "ipnetwork" : "/Compute-/[email protected]/Gateway_OEL6IPN",
          "nat" : [ ],
          "vnicsets" : [ "/Compute-/[email protected]/RG-NAT" ]
        },
        "eth0" : {
          "seclists" : [ "/Compute-/[email protected]/dnsntpseclist" ],
          "nat" : "ipreservation:/Compute-/[email protected]/DNS_NTP_TEST","/Compute-/[email protected]/NAT-IP-TEST",
          "dns" : [ "RG-NAT" ]
        }
      },
      "name" : "/Compute-/[email protected]/RG-NAT",
      "storage_attachments" : [ {
        "volume" : "{{RG-NAT_storage_1:name}}",
        "index" : 1
      } ],
      "boot_order" : [ 1 ],
      "hostname" : "RG-NAT",
      "label" : "RG-NAT",
      "shape" : "oc3",
      "imagelist" : "/oracle/public/OL_6.4_UEKR3_x86_64",
      "sshkeys" : [ "/Compute-/[email protected]/DR_SSH" ]
    },
    "label" : "RG-NAT_instance",
    "orchestration" : "/Compute-/[email protected]/RG-NAT",
    "type" : "Instance",
    "name" : "/Compute-*******/[email protected]/RG-NAT/instance"
  } ],
  "desired_state" : "active"
}

Debug Output

Error while uploading

Panic Output

NA

Expected Behavior

What should have happened?

Actual Behavior

Error while uploading
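
For what it's worth, multiple values for a JSON attribute have to be expressed as an array; the `"nat"` line in the configuration above mixes a string and a bare second value, which is not valid JSON and would by itself explain the upload failure. A sketch of the syntactically valid form (whether Compute Classic actually accepts two NAT reservations on one shared-network vNIC is a separate question this sketch does not answer):

```json
"eth0" : {
  "seclists" : [ "/Compute-/[email protected]/dnsntpseclist" ],
  "nat" : [
    "ipreservation:/Compute-/[email protected]/DNS_NTP_TEST",
    "ipreservation:/Compute-/[email protected]/NAT-IP-TEST"
  ],
  "dns" : [ "RG-NAT" ]
}
```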

Handle HTTP 404 Error Response Codes from OPC

Terraform Version

Terraform v0.9.11

Affected Resource(s)

opc_compute_image_list_entry

Terraform Configuration Files

resource "opc_compute_image_list" "images" {
name = "imageList2"
description = "Description for the Image List"
}

resource "opc_compute_image_list_entry" "images" {
name = "${opc_compute_image_list.images.name}"
machine_images = [ "/oracle/public/OL_7.2_UEKR3", "/Compute-orcdevtest1/[email protected]/myImage"]
version = 1
}

resource "opc_compute_storage_volume" "dns" {
name = "storageVolume2"
description = "Description for the Bootable Storage Volume"
size = 500
bootable = true
image_list = "${opc_compute_image_list.images.name}"
image_list_entry = "1"
}
Debug Output

2017/07/20 15:44:27 [DEBUG] plugin: terraform-provider-opc: 2017/07/20 15:44:27 [DEBUG] No meta timeoutkey found in Apply()
2017/07/20 15:44:27 [DEBUG] plugin: terraform-provider-opc: 2017/07/20 15:44:27 [DEBUG] [go-oracle-terraform]: HTTP POST Req (/storage/volume/):
2017/07/20 15:44:27 [DEBUG] plugin: terraform-provider-opc: 0xc4201023b0
2017/07/20 15:44:27 [DEBUG] plugin: terraform-provider-opc: 2017/07/20 15:44:27 [DEBUG] No meta timeoutkey found in Apply()
2017/07/20 15:44:27 [DEBUG] plugin: terraform-provider-opc: 2017/07/20 15:44:27 [DEBUG] [go-oracle-terraform]: HTTP POST Req (/imagelist/Compute-orcdevtest1/[email protected]/imageList3/entry/):
2017/07/20 15:44:27 [DEBUG] plugin: terraform-provider-opc: 0xc420102550
2017/07/20 15:44:31 [DEBUG] dag/walk: vertex "meta.count-boundary (count boundary fixup)", waiting for: "opc_compute_image_list_entry.images"
2017/07/20 15:44:31 [DEBUG] dag/walk: vertex "root", waiting for: "meta.count-boundary (count boundary fixup)"
2017/07/20 15:44:31 [DEBUG] dag/walk: vertex "provider.opc (close)", waiting for: "opc_compute_image_list_entry.images"
2017/07/20 15:44:32 [DEBUG] plugin: terraform-provider-opc: 2017/07/20 15:44:32 [DEBUG] [go-oracle-terraform]: Encountered HTTP (404) Error: {"message": "MachineImage does not exist"}
2017/07/20 15:44:32 [DEBUG] plugin: terraform-provider-opc: 2017/07/20 15:44:32 [DEBUG] [go-oracle-terraform]: 1/3 retries left
2017/07/20 15:44:33 [DEBUG] plugin: terraform-provider-opc: 2017/07/20 15:44:33 [DEBUG] [go-oracle-terraform]: Encountered HTTP (404) Error: {"message": "Image list /Compute-orcdevtest1/[email protected]/imageList3 or image list entry 1 does not exist to create boot volume /Compute-orcdevtest1/[email protected]/storageVolume3."}
2017/07/20 15:44:33 [DEBUG] plugin: terraform-provider-opc: 2017/07/20 15:44:33 [DEBUG] [go-oracle-terraform]: 1/3 retries left
2017/07/20 15:44:36 [DEBUG] dag/walk: vertex "meta.count-boundary (count boundary fixup)", waiting for: "opc_compute_image_list_entry.images"
2017/07/20 15:44:36 [DEBUG] dag/walk: vertex "root", waiting for: "meta.count-boundary (count boundary fixup)"
2017/07/20 15:44:36 [DEBUG] dag/walk: vertex "provider.opc (close)", waiting for: "opc_compute_image_list_entry.images"
2017/07/20 15:44:37 [DEBUG] root: eval: *terraform.EvalWriteState
2017/07/20 15:44:37 [DEBUG] root: eval: *terraform.EvalApplyProvisioners
2017/07/20 15:44:37 [DEBUG] root: eval: *terraform.EvalIf
2017/07/20 15:44:37 [DEBUG] root: eval: *terraform.EvalWriteState
2017/07/20 15:44:37 [DEBUG] root: eval: *terraform.EvalWriteDiff
2017/07/20 15:44:37 [DEBUG] root: eval: *terraform.EvalApplyPost
2017/07/20 15:44:37 [ERROR] root: eval: *terraform.EvalApplyPost, err: 1 error(s) occurred:

* opc_compute_storage_volume.dns: Error creating storage volume storageVolume3: Post https://compute.uscom-central-1.oraclecloud.com/storage/volume/: http: ContentLength=306 with Body length 0
2017/07/20 15:44:37 [ERROR] root: eval: *terraform.EvalSequence, err: 1 error(s) occurred:

* opc_compute_storage_volume.dns: Error creating storage volume storageVolume3: Post https://compute.uscom-central-1.oraclecloud.com/storage/volume/: http: ContentLength=306 with Body length 0
2017/07/20 15:44:37 [TRACE] [walkApply] Exiting eval tree: opc_compute_storage_volume.dns
2017/07/20 15:44:37 [DEBUG] root: eval: *terraform.EvalWriteState
2017/07/20 15:44:37 [DEBUG] root: eval: *terraform.EvalApplyProvisioners
2017/07/20 15:44:37 [DEBUG] root: eval: *terraform.EvalIf
2017/07/20 15:44:37 [DEBUG] root: eval: *terraform.EvalWriteState
2017/07/20 15:44:37 [DEBUG] root: eval: *terraform.EvalWriteDiff
2017/07/20 15:44:37 [DEBUG] root: eval: *terraform.EvalApplyPost
2017/07/20 15:44:37 [ERROR] root: eval: *terraform.EvalApplyPost, err: 1 error(s) occurred:

* opc_compute_image_list_entry.images: Post https://compute.uscom-central-1.oraclecloud.com/imagelist/Compute-orcdevtest1/[email protected]/imageList3/entry/: http: ContentLength=163 with Body length 0
2017/07/20 15:44:37 [ERROR] root: eval: *terraform.EvalSequence, err: 1 error(s) occurred:

* opc_compute_image_list_entry.images: Post https://compute.uscom-central-1.oraclecloud.com/imagelist/Compute-orcdevtest1/[email protected]/imageList3/entry/: http: ContentLength=163 with Body length 0
2017/07/20 15:44:37 [TRACE] [walkApply] Exiting eval tree: opc_compute_image_list_entry.images
2017/07/20 15:44:37 [DEBUG] dag/walk: upstream errored, not walking "meta.count-boundary (count boundary fixup)"
2017/07/20 15:44:37 [DEBUG] dag/walk: upstream errored, not walking "provider.opc (close)"
2017/07/20 15:44:37 [DEBUG] dag/walk: upstream errored, not walking "root"

Expected Behavior

Standard output should print the correct reason why the resource failed to be created.

E.g. "opc_compute_image_list_entry.images: 1 error(s) occurred: machine_image does not exist"

What should have happened?

Actual Behavior

"
Error applying plan:

2 error(s) occurred:

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

"

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. Create your plan and set /oracle/public/OL_7.1_UEKR3_x86_64 as the machine_image value:

resource "opc_compute_image_list_entry" "images" {
  name           = "${opc_compute_image_list.images.name}"
  machine_images = [ "/oracle/public/OL_7.1_UEKR3_x86_64" ]
  version        = 1
}

  2. terraform plan
  3. terraform apply

Important Factoids

References

Importing Private Image crashes terraform

Hi there,

Terraform Version

0.9.11

Affected Resource(s)

Please list the resources as a list, for example:

  • opc_compute_image_list_entry

Terraform Configuration Files

none. Just the credentials and then a test to import the image resource.


Debug Output

https://gist.github.com/nagulako/dd89c2360f00b4503c92ec1e2beb1d0d

Panic Output

https://gist.github.com/nagulako/316d8c90bd3847fd27b503d453e4b986

Expected Behavior

Should have created a resource inside Terraform, i.e. imported the existing resource.

Actual Behavior

Crash

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. Create a private image from the console (in this case, it was SUSE Linux 11.4).
  2. Use terraform import to import that image entry. The import crashes.

userdata template script- opc instance

Hi, I need some help.

I am trying to pass some userdata to my instances during creation and getting an error when running it. I am creating multiple instances and I need to run this userdata for each instance.

data "template_file" "userdata" {
  template = "bootstrap.sh"
  vars {
    private_ips = "${element(var.inst_ips, count.index)}"
  }
}
(the private IP is passed as a variable input to my bootstrap.sh script)
resource "opc_compute_instance" "opc-inst" {
  count = "${length(var.inst_name)}"
  name  = "web-${format("%03d", count.index+1)}-${element(var.inst_name, count.index)}"
  label = "${var.label}"
  shape = "${var.shape}"
 instance_attributes = "${data.template_file.userdata.rendered}"

  networking_info {
    index          = 0
    ip_network     = "${var.ip_network}"
    ip_address     = "${element(var.inst_ips, count.index)}"
    vnic_sets      = ["${var.vnic_sets}"]
    vnic           = "${var.vnic}"
    nat            = ["${var.nat}"]
    name_servers   = "${var.rfs_ns}"
    search_domains = "${var.search_domains}"
  }

Below is the error I got. Can anybody please help?

* module.instance.opc_compute_instance.opc-inst[2]: "instance_attributes" contains an invalid JSON: invalid character 'b' looking for beginning of value
* module.instance.opc_compute_instance.opc-inst[0]: "instance_attributes" contains an invalid JSON: invalid character 'b' looking for beginning of value
* module.instance.opc_compute_instance.opc-inst[1]: "instance_attributes" contains an invalid JSON: invalid character 'b' looking for beginning of value
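
A likely cause (my reading of the error, not confirmed in the thread): `template` expects the template contents, not a file name, so the literal string `bootstrap.sh` is rendered as-is and then rejected because `instance_attributes` must be valid JSON. A sketch, assuming `bootstrap.sh` sits next to the configuration:

```hcl
data "template_file" "userdata" {
  count = "${length(var.inst_name)}"

  # Load the script body instead of passing the file name literally.
  template = "${file("${path.module}/bootstrap.sh")}"

  vars {
    private_ips = "${element(var.inst_ips, count.index)}"
  }
}
```

With count on the data source, each instance would reference `${element(data.template_file.userdata.*.rendered, count.index)}`. Note that the rendered result still has to be a JSON document (for example a `{"userdata": {...}}` structure wrapping the script lines), since the provider validates `instance_attributes` as JSON before sending it.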

add opc_compute_orchestration resource

Add support for the managing native Orchestrations for Oracle Compute Cloud

Usage would be similar to the way the aws provider handles cloud formation templates e.g. (for illustration purposes only)

resource "opc_compute_orchestration" "instance-orchestration" {
  name = "instance-orchestration"
  description = "my instance creation orchestration"
  schedule {
    start_time = "2017-06-20T12:00:00Z"
    stop_time = "2017-06-26T12:00:00Z"
  }
  
  vars {
    context = "/Compute-${var.domain}/${var.user}"
  }

  object_plan = <<OPLAN
{
  "obj_type": "launchplan",
  "ha_policy": "active",
  "label": "instance1",
  "objects": [
    {
      "instances": [
        {
          "shape": "oc3",
          "label": "instance1",
          "imagelist": "/oracle/public/compute_oel_6.4_2GB",
          "name": "$${content}/primary_webserver"
        }
      ]
    }
  ]
}
OPLAN
}

import opc_compute_storage_volume_snapshot crashes

This issue was originally opened by @nagulako as hashicorp/terraform-provider-opc#8. It was migrated here as part of the provider split. The original body of the issue is below.


Hi there,

Terraform Version

root@nvmk-sol-wls1:~/terraform_opc/opc/e18# terraform -v
Terraform v0.9.5

Affected Resource(s)

Please list the resources as a list, for example:
opc_compute_storage_volume_snapshot

Terraform Configuration Files

Single command - no config file here.
terraform import opc_compute_storage_volume_snapshot.tf-nvmk-corente-sol1-snapshot tf-nvmk-corente-sol1-snapshot

Panic Output

https://gist.github.com/nagulako/a10e5ee928c8fef0ea8544cac2516ff2

Expected Behavior

Imported the Storage snapshot.

Actual Behavior

Crash

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:
terraform import opc_compute_storage_volume_snapshot.tf-nvmk-corente-sol1-snapshot tf-nvmk-corente-sol1-snapshot

Important Factoids

Oracle Public Cloud.

References

Are there any other GitHub issues (open or closed) or Pull Requests that should be linked here? For example:
None

storage volume from snapshot recreated on second apply

Terraform Version

Terraform: v0.10.0
OPC Provider: v0.1.2

Affected Resource(s)

Please list the resources as a list, for example:

  • opc_compute_storage_volume

Terraform Configuration Files

data "opc_compute_storage_volume_snapshot" "my-snapshot" {
  name = "storage_snapshot_test_storage/my-storage-volume-snapshot"
}

resource "opc_compute_storage_volume" "volume1" {
  name             = "boot-from-storage-snapshot"
  description      = "bootable storage volume from storage snapshot"
  snapshot_id      = "${data.opc_compute_storage_volume_snapshot.my-snapshot.snapshot_id}"
  size             = "${data.opc_compute_storage_volume_snapshot.my-snapshot.size}"
  bootable         = true
}

Expected Behavior

When the configuration is applied a second time without changing the config, no resource update should be required

Actual Behavior

Terraform apply destroys and recreates the storage volume due to a delta in the snapshot_account parameter

-/+ opc_compute_storage_volume.volume1 (new resource required)
      bootable:         "true" => "true"
      description:      "bootable storage volume from storage snapshot" => "bootable storage volume from storage snapshot"
      hypervisor:       "" => "<computed>"
      image_list_entry: "-1" => "-1"
      machine_image:    "/oracle/public/OL_7.2_UEKR4_x86_64-17.2.2-20170405-211209" => "<computed>"
      managed:          "true" => "<computed>"
      name:             "boot-from-storage-snapshot" => "boot-from-storage-snapshot"
      platform:         "linux" => "<computed>"
      readonly:         "false" => "<computed>"
      size:             "12" => "12"
      snapshot:         "storage_snapshot_test_storage/my-storage-volume-snapshot" => "<computed>"
      snapshot_account: "/Compute-usorclptsc53098/cloud_storage" => "" (forces new resource)
      snapshot_id:      "cbf10aaaa49536f30637601ce4e80c4519d9ee8d3d509764026e9eb4cc914d4d-us6" => "cbf10aaaa49536f30637601ce4e80c4519d9ee8d3d509764026e9eb4cc914d4d-us6"
      status:           "Online" => "<computed>"
      storage_pool:     "/compute-us6-z27/lpeis01nas58-v2_vnic6/storagepool/iscsi/latency_1" => "<computed>"
      storage_type:     "/oracle/public/storage/default" => "/oracle/public/storage/default"
      uri:              "https://api-z27.compute.us6.oraclecloud.com/storage/volume/Compute-usorclptsc53098/stephen.cross%40oracle.com/boot-from-storage-snapshot" => "<computed>"

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply
  2. terraform plan

Important Factoids

The snapshot_account parameter is not required when using the snapshot_id option
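
Until the provider persists that attribute correctly, a possible workaround (an assumption on my part, not part of the original report) is to suppress the spurious diff with `ignore_changes`:

```hcl
resource "opc_compute_storage_volume" "volume1" {
  name        = "boot-from-storage-snapshot"
  description = "bootable storage volume from storage snapshot"
  snapshot_id = "${data.opc_compute_storage_volume_snapshot.my-snapshot.snapshot_id}"
  size        = "${data.opc_compute_storage_volume_snapshot.my-snapshot.size}"
  bootable    = true

  # Workaround sketch: ignore the attribute the provider fails to
  # read back, so the plan stops forcing a new resource.
  lifecycle {
    ignore_changes = ["snapshot_account"]
  }
}
```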

OPC - Error 500 instance creation

This issue was originally opened by @AudreyBramy as hashicorp/terraform#15113. It was migrated here as part of the provider split. The original body of the issue is below.


Hi,

I am working with the OPC terraform provider and I get an error 500 when I try to create an instance.

Terraform Version

0.9.6

Affected Resource(s)

opc_compute_instance

Terraform Configuration Files

https://drive.google.com/drive/folders/0BzAU2-jlx__JQWpQX1A4ekhhV3c?usp=sharing

Debug Output

https://gist.github.com/AudreyBramy/b0230fde6bcfac7bfbd63c734196206e

Expected Behavior

Create an OPC instance attached to an IPNetwork.

Actual Behavior

Error 500 for the OPC instance creation:

  • opc_compute_instance.instance_app: Error creating instance instance_app: 500: {"message": "An internal error occurred.", "reference": "A4ENPIQBOKOP"}

Steps to Reproduce

  1. Open the configuration files : https://drive.google.com/drive/folders/0BzAU2-jlx__JQWpQX1A4ekhhV3c?usp=sharing
  2. complete the variables : user, password, identity_domain, endpoint, ssh_keys
  3. execute terraform apply

Important Factoids

I had two accounts, one in the us2 datacenter and the second in the em2 datacenter.
This error occurs only in the em2 datacenter and when I am attaching the instance to an IpNetwork.
With no ipnetwork, the instance is created.

Thanks,

Error initializing instance: Error during setup or launch

Hi there,

Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.terraform.io/community.html.

Terraform Version

Run terraform -v to show the version. If you are not running the latest version of Terraform, please upgrade because your issue may have already been fixed.

terraform -v
Terraform v0.9.8

Affected Resource(s)

Please list the resources as a list, for example:

  • opc_instance = terraform_instance

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

Terraform Configuration Files

# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key.

provider "opc" {
user = "${var.user}"
password = "${var.password}"
identity_domain = "${var.domain}"
endpoint = "${var.endpoint}"
}

resource "opc_compute_instance" "terraform_instance" {
count = 3
name = "${format("instance%1d", count.index + 1)}"
label = "${format("instance%1d", count.index + 1)}"
shape = "oc3"
image_list = "/Compute-${var.domain}/${var.user}/Ubuntu.16.04-LTS.amd64.20170330"
ssh_keys = ["${opc_compute_ssh_key.terraformKey.name}"]

storage {
index = 1
volume = "${opc_compute_storage_volume.terraform_bootable_storage.name}"
}
}

resource "opc_compute_ssh_key" "terraformKey" {
name = "terraformKey"
key = "${file(var.ssh_publicKey_file)}"
enabled = true
}

resource "opc_compute_ip_reservation" "reserve_ip" {
count = 3
parent_pool = "/oracle/public/ippool"
permanent = true
tags = []
}

resource "opc_compute_ip_association" "instance_accociateIP" {
count = 3
vcable = "${element(opc_compute_instance.terraform_instance.*.vcable,count.index)}"
parent_pool = "ipreservation:${element(opc_compute_instance.terraform_instance.*.name,count.index)}"
}

/*
resource "opc_compute_storage_volume" "storage_space" {
name = "terraform_storage_space"
description = "creates a storage volume to attache separtly"
size = 10
}
*/

resource "opc_compute_storage_volume" "terraform_bootable_storage" {
#count = 3
name = "terraform_bootable_storage"
description = "persistent bootable storage that is attached to this instance"
size = 10
}

#security stuff below

resource "opc_compute_sec_rule" "SSH_ACCESS" {
name = "SSH_ACCESS"
source_list = "seciplist:${opc_compute_security_ip_list.public_internet.name}"
destination_list = "seclist:${opc_compute_security_list.SSH_ALLOW.name}"
action = "permit"
application = "${opc_compute_security_application.ssh_port.name}"
}

resource "opc_compute_security_application" "ssh_port" {
name = "ssh_port"
protocol = "tcp"
dport = "22"
}

resource "opc_compute_security_association" "associate_SSH" {
count = 3
name = "associate_SSH"
vcable = "${element(opc_compute_instance.terraform_instance.*.vcable,count.index)}"
seclist = "${opc_compute_security_list.SSH_ALLOW.name}"
}

resource "opc_compute_security_ip_list" "public_internet" {
name = "public_internet"
ip_entries = ["0.0.0.0/0"]
}

resource "opc_compute_security_list" "SSH_ALLOW" {
name = "SSH_ALLOW"
policy = "deny"
outbound_cidr_policy = "permit"
}

Debug Output

Please provider a link to a GitHub Gist containing the complete debug output: https://www.terraform.io/docs/internals/debugging.html. Please do NOT paste the debug output in the issue; just paste a link to the Gist.

https://gist.github.com/oakzaid/82312e9d06101b6a98e015decc3856e0

Panic Output

If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the crash.log.

Expected Behavior

What should have happened?

all three resources should be created

Actual Behavior

What actually happened?

I am trying to create multiple copies of the same instance, but it gives me this error after creating one of them.
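
One plausible reading of the config above (an assumption; the error reference does not say): all three instances attach the same single `terraform_bootable_storage` volume, and a volume that is not shared can only be attached to one instance at a time, so the second and third launches fail. A sketch giving each instance its own volume:

```hcl
resource "opc_compute_storage_volume" "terraform_bootable_storage" {
  count       = 3
  name        = "terraform_bootable_storage-${count.index}"
  description = "persistent storage, one volume per instance"
  size        = 10
}
```

The instance's storage block would then select the matching volume, e.g. `volume = "${element(opc_compute_storage_volume.terraform_bootable_storage.*.name, count.index)}"`.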

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply or TF_LOG=DEBUG terraform apply

Important Factoids

Are there anything atypical about your accounts that we should know? For example: Running in EC2 Classic? Custom version of OpenStack? Tight ACLs?

I am running this in OPC.

References

Are there any other GitHub issues (open or closed) or Pull Requests that should be linked here? For example:

  • GH-1234

bootable disk snapshot

Original issue raised at oracle/terraform-examples#12 by @tsgmanh :

I am working on Oracle Public Cloud and I used the Terraform resource "opc_compute_storage_volume_snapshot" to create a snapshot of an existing bootable disk. That went well. I then used "opc_compute_storage_volume" to restore the snapshot using the snapshot ID, and there I have a few problems:
a) even though I took a snapshot of a boot disk, the restore does not give a bootable disk
b) during restoration of the snapshot, if I set bootable=true then Terraform asks for an image list and image list entry, and later Terraform does not use the snapshot but uses that image to create the volume.
Can anybody please help me with this?

Please find below my terraform scripts,

# boot-snapshot
resource "opc_compute_storage_volume_snapshot" "demo-snap-bt" {
  name                   = "${var.st_snp_bt_name}"
  description            = "${var.project}_${var.st_snp_bt_name}"
  tags                   = ["${var.st_snp_bt_tags}"]
  parent_volume_bootable = true
  volume_name            = "my volume-disk"


# restore boot
resource "opc_compute_storage_volume" "demo-rst-bt" {
  name        = "${var.st_rst_bt_name}"
  description = "${var.project}_${var.st_rst_bt_name}"
  size        = 20
  tags        = ["${var.st_snp_bt_tags}"]
  snapshot_id = "${opc_compute_storage_volume_snapshot.demo-snap-bt.snapshot_id}"
}

Error when renaming an ip network resource

Terraform Version

terraform v0.10.0
opc provider v0.1.2

Affected Resource(s)

  • opc_compute_ip_network

Steps

  1. Create an IP network resource
resource "opc_compute_ip_network" "network" {
  name = "test"
  ip_address_prefix = "192.168.1.0/24"
}

terraform apply

  2. Modify the name of the IP network resource
resource "opc_compute_ip_network" "network" {
  name = "test-updated"
  ip_address_prefix = "192.168.1.0/24"
}

terraform apply

opc_compute_ip_network.network: Modifying... (ID: test)
  name: "test" => "test-updated"
Error applying plan:

1 error(s) occurred:

* opc_compute_ip_network.network: 1 error(s) occurred:

* opc_compute_ip_network.network: Error updating IP Network 'test-updated': 404: {"message": "no IpNetwork object named /Compute-usorclptsc53098/[email protected]/test-updated found"}

add `is_default_gateway` attribute for ip network interfaces

Terraform Version

terraform v0.10.0
opc provider v0.1.2

Affected Resource(s)

  • opc_compute_instance

Need to support the is_default_gateway in the networking_info block for IP network interfaces:

https://docs.oracle.com/en/cloud/iaas/compute-iaas-cloud/stcsg/creating-instances-using-launch-plans.html

is_default_gateway: (Optional) If you want to specify the interface to be used as the default gateway for all traffic, set this to true. The default is false. Only one interface on an instance can be specified as the default gateway. If the instance has an interface on the shared network, that interface is always used as the default gateway. You can specify an interface on an IP network as the default gateway only when the instance doesn’t have an interface on the shared network

e.g.

resource "opc_compute_instance" "bastion" {
... 
  networking_info {
    index      = 0
    ip_network = "${opc_compute_ip_network.ipnetwork.name}"
    is_default_gateway = true
  }
}

Removing a specific OPC compute instance is causing the security list association to be removed from other instances

Hello, I was hoping that you guys might be able to shed some light to the following issue I am running into.

Currently I am working on developing a deployment infrastructure with terraform-provider-opc. One of the problems we are seeing with our current TF config is that when we want to remove a specific resource, in this case a compute instance, some other dependencies also get deleted from other instances; in this case it is the opc_compute_security_association of the other instances too.

For example, I have two instances, and I try to remove second one with the following

$ terraform plan -destroy -target=opc_compute_instance.test[1]

The plan execution then shows me the following:

$ terraform plan -destroy -target=opc_compute_instance.test[1]
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.template_file.userdata: Refreshing state...
opc_compute_ssh_key.test: Refreshing state... (ID: test-key)
opc_compute_ip_reservation.ipreservation.0: Refreshing state... (ID: 9725578a-3484-46f9-885a-675e1f62daec)
opc_compute_ip_reservation.ipreservation.1: Refreshing state... (ID: 0eb1407e-bc6e-49e8-86ef-d5a4cd10ad93)
opc_compute_storage_volume.data.0: Refreshing state... (ID: data-0)
opc_compute_storage_volume.data.1: Refreshing state... (ID: data-1)
opc_compute_instance.test.1: Refreshing state... (ID: d49a2366-7b00-4c86-84b4-fc72d21841f5)
The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

- opc_compute_instance.test.1

- opc_compute_security_association.associate_SSH.0

- opc_compute_security_association.associate_SSH.1

As the output shows, TF is going to attempt to remove opc_compute_security_association.associate_SSH.0.

How do I prevent opc_compute_security_association.associate_SSH.0 being removed? That has to stay with the first instance ( opc_compute_instance.test.0 ) or any other undeleted instances. If I execute this, indeed the opc_compute_security_association.associate_SSH.0 gets deleted:

$ terraform  destroy -target=opc_compute_instance.test[1]
Do you really want to destroy?
  Terraform will delete the following infrastructure:
  	opc_compute_instance.test[1]
  There is no undo. Only 'yes' will be accepted to confirm

  Enter a value: yes

data.template_file.userdata: Refreshing state...
opc_compute_storage_volume.data.1: Refreshing state... (ID: data-1)
opc_compute_storage_volume.data.0: Refreshing state... (ID: data-0)
opc_compute_ssh_key.test: Refreshing state... (ID: test-key)
opc_compute_ip_reservation.ipreservation.1: Refreshing state... (ID: 0eb1407e-bc6e-49e8-86ef-d5a4cd10ad93)
opc_compute_ip_reservation.ipreservation.0: Refreshing state... (ID: 9725578a-3484-46f9-885a-675e1f62daec)
opc_compute_instance.test.1: Refreshing state... (ID: d49a2366-7b00-4c86-84b4-fc72d21841f5)
opc_compute_security_association.associate_SSH.1: Destroying... (ID: associate_SSH2)
opc_compute_security_association.associate_SSH.0: Destroying... (ID: associate_SSH1)
opc_compute_security_association.associate_SSH.1: Destruction complete
opc_compute_security_association.associate_SSH.0: Destruction complete
opc_compute_instance.test.1: Destroying... (ID: d49a2366-7b00-4c86-84b4-fc72d21841f5)
opc_compute_instance.test.1: Still destroying... (ID: d49a2366-7b00-4c86-84b4-fc72d21841f5, 10s elapsed)
opc_compute_instance.test.1: Still destroying... (ID: d49a2366-7b00-4c86-84b4-fc72d21841f5, 20s elapsed)
opc_compute_instance.test.1: Still destroying... (ID: d49a2366-7b00-4c86-84b4-fc72d21841f5, 30s elapsed)
opc_compute_instance.test.1: Still destroying... (ID: d49a2366-7b00-4c86-84b4-fc72d21841f5, 40s elapsed)
opc_compute_instance.test.1: Still destroying... (ID: d49a2366-7b00-4c86-84b4-fc72d21841f5, 50s elapsed)
opc_compute_instance.test.1: Still destroying... (ID: d49a2366-7b00-4c86-84b4-fc72d21841f5, 1m0s elapsed)
opc_compute_instance.test.1: Still destroying... (ID: d49a2366-7b00-4c86-84b4-fc72d21841f5, 1m10s elapsed)
opc_compute_instance.test.1: Still destroying... (ID: d49a2366-7b00-4c86-84b4-fc72d21841f5, 1m20s elapsed)
opc_compute_instance.test.1: Destruction complete

Destroy complete! Resources: 3 destroyed.

So since opc_compute_security_association.associate_SSH.0 gets removed, this breaks my SSH access to the first instance.

Moreover, when I go back to the compute UI, I also see from the Storage tab that "data-1" storage volume is not deleted but only its association with the instance is removed. This is as opposed to terraform destroy default behaviour where it really deletes every single resource and removes all of the associations.
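
A plausible explanation (an assumption about Terraform 0.9's dependency graph, not something confirmed in this thread): when the association's `vcable` references the instances through a splat (`opc_compute_instance.test.*.vcable`), every association depends on every instance, so targeting one instance for destroy pulls all associations into the plan. A sketch that at least makes the per-index intent explicit; note that older Terraform versions may still record the dependency against the whole splat, in which case this is a core limitation rather than a config fix:

```hcl
resource "opc_compute_security_association" "associate_SSH" {
  count = "${var.instance_count}"
  name  = "associate_SSH-${count.index}"

  # Tie association N to instance N only (resource names here are
  # hypothetical, based loosely on the config shown above).
  vcable  = "${opc_compute_instance.test.*.vcable[count.index]}"
  seclist = "${opc_compute_security_list.allow-ssh-access.name}"
}
```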

Terraform Version

terraform -v
Terraform v0.9.11

Affected Resource(s)

Please list the resources as a list, for example:

  • opc_instance
  • opc_storage_volume
  • opc_compute_security_association

Terraform Configuration Files

main.tf 
# The user data here defines what opc-init should be doing after the images are installed
# In this case first we are creating/formatting the storage volume and then we are installing chef-client/chef-solo into
# each image that is being created.
data "template_file" "userdata" {
  template = <<JSON
{
  "userdata": {
    "pre-bootstrap": {
      "script": [
   ...skipped as not needed
      ]
    }
  }
}
JSON
}

# Reserve a public IP
resource "opc_compute_ip_reservation" "ipreservation" {
  count       = "${var.instance_count}"
  parent_pool = "/oracle/public/ippool"
  permanent   = true
}

# Creates a storage volume.  It turns out that the name must be unique so adding
# the index in the name
resource "opc_compute_storage_volume" "data" {
  count = "${var.instance_count}"
  name  = "data-${count.index}"
  size  = 10
}

# Creates an instance, attaches a storage volume, associates a public IP
# The storage volume and chef client is installed through instance_attributes where it uses opc-init in "userdata" defined in
# the data section above
resource "opc_compute_instance" "test" {
  count      = "${var.instance_count}"
  name       = "deniz-test-instance-${count.index}"
  label      = "Terraform Provisioned Instance"
  shape      = "oc3"
  image_list = "/oracle/public/OL_6.8_UEKR3_x86_64"
  ssh_keys            = ["${opc_compute_ssh_key.test.name}"]
  instance_attributes = "${data.template_file.userdata.rendered}"
# Attach the previously created storage volume
  storage {
    #volume = "${element(opc_compute_storage_volume.data.*.id, count.index)}"
    volume = "${opc_compute_storage_volume.data.*.id[count.index]}"
    index  = 1
  }
# Sets up the network and attaches the previously created IP
  networking_info {
    index          = 0
    shared_network = true
    #nat            = ["${element(opc_compute_ip_reservation.ipreservation.*.id, count.index)}"]
    nat            = ["${opc_compute_ip_reservation.ipreservation.*.id[count.index]}"]
  }
}
# outputs are printed after an apply is finished.  It is useful to pass on certain
# details to the operator. In this case Public IP of the instances are being printed out
output "public_ip" {
  value = "${opc_compute_ip_reservation.ipreservation.*.ip}"
}
secrule.tf
#secrules, seclists, sec associations

resource "opc_compute_sec_rule" "ssh-vm-secrule" {
  name             = "ssh-vm-secrule"
  source_list      = "seciplist:${opc_compute_security_ip_list.public_internet.name}"
  destination_list = "seclist:${opc_compute_security_list.allow-ssh-access.name}"
  action      = "permit"
  application = "${opc_compute_security_application.all.name}"
}

resource "opc_compute_security_application" "all" {
  name     = "all"
  protocol = "tcp"
  dport    = "22"
}

resource "opc_compute_security_ip_list" "public_internet" {
  name       = "public_internet"
  ip_entries = ["0.0.0.0/0"]
}

resource "opc_compute_security_list" "allow-ssh-access" {
  name                 = "allow-ssh-access"
  policy               = "DENY"
  outbound_cidr_policy = "PERMIT"
}

resource "opc_compute_security_association" "associate_SSH" {
  name = "${format("associate_SSH%1d", count.index + 1)}"
  count   = "${var.instance_count}"
  #vcable  = "${element(opc_compute_instance.test.*.vcable,count.index)}"
  vcable  = "${opc_compute_instance.test.*.vcable[count.index]}"
  seclist = "${element(opc_compute_security_list.allow-ssh-access.*.name,count.index)}"
  #seclist = "${opc_compute_security_list.allow-ssh-access.*.name[count.index]}"
}
variables.tf
variable instance_count {
  description = "Number of instances you want"
  default     = "3"
}

Debug Output

https://gist.github.com/ubersol/7b338ddf00457e2b3b04fbfc8603d7a8

Panic Output

No panic

Expected Behavior

The second instance should have been deleted and its security association removed, without touching the other instances' associations. The storage volume associated with this instance should have been destroyed.

Actual Behavior

The second instance indeed gets deleted, but in the process the security associations of the other, unrelated instances are removed as well, which breaks SSH access. The storage volume does not get deleted and is still shown as online; however, its instance association is removed.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

First create couple instances

  1. terraform apply -var instance_count=2

Then remove the second instance

  1. terraform destroy -target=opc_compute_instance.test[1]

Important Factoids

Is there anything atypical about your account that we should know? For example: Running in EC2 Classic? Custom version of OpenStack? Tight ACLs?

References

There are a couple of similar bugs below, but honestly the solution for the instance deletion is not clear to me:
hashicorp/terraform#10952
The issue above is then merged into this following one:
hashicorp/terraform#3449

Can't create storage volume with SSD storage

Terraform Version

0.9.11

Affected Resource(s)

  • opc_compute_storage_volume

Terraform Configuration Files

resource "opc_compute_storage_volume" "test" {
  name         = "storageVolume1"
  size         = 10
  storage_type = "/oracle/public/storage/ssd/gpl"
}

The storage_type attribute has a validation that only allows the Default (/oracle/public/storage/default) and Latency (/oracle/public/storage/latency) storage types.

/oracle/public/storage/ssd/gpl needs to be added as an additional storage type.

Error initializing instance: Error during setup or launch

This issue was originally opened by @oakzaid as hashicorp/terraform#15384. It was migrated here as part of the provider split. The original body of the issue is below.


PROVIDER ISSUES

PLEASE NOTE: Terraform has split out the builtin Providers into their own repositories. For any Provider issues, please open all issues and pull requests in the corresponding repository. An index of supported Providers can be found here:

All other issues (that appear to affect multiple or all providers) may be an issue with Terraform's core, and should be opened here.


Hi there,

Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.terraform.io/community.html.

Terraform Version

Run terraform -v to show the version. If you are not running the latest version of Terraform, please upgrade because your issue may have already been fixed.

Terraform v0.9.8

Terraform Configuration Files

# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key.

provider "opc" {
user = "${var.user}"
password = "${var.password}"
identity_domain = "${var.domain}"
endpoint = "${var.endpoint}"
}

resource "opc_compute_instance" "terraform_instance" {
count = 3
name = "${format("instance%1d", count.index + 1)}"
label = "${format("instance%1d", count.index + 1)}"
shape = "oc3"
image_list = "/Compute-${var.domain}/${var.user}/Ubuntu.16.04-LTS.amd64.20170330"
ssh_keys = ["${opc_compute_ssh_key.terraformKey.name}"]

storage {
index = 1
volume = "${opc_compute_storage_volume.terraform_bootable_storage.name}"
}
}

resource "opc_compute_ssh_key" "terraformKey" {
name = "terraformKey"
key = "${file(var.ssh_publicKey_file)}"
enabled = true
}

resource "opc_compute_ip_reservation" "reserve_ip" {
count = 3
parent_pool = "/oracle/public/ippool"
permanent = true
tags = []
}

resource "opc_compute_ip_association" "instance_accociateIP" {
  count       = 3
  vcable      = "${element(opc_compute_instance.terraform_instance.*.vcable, count.index)}"
  parent_pool = "ipreservation:${element(opc_compute_instance.terraform_instance.*.name, count.index)}"
}

resource "opc_compute_storage_volume" "terraform_bootable_storage" {
#count = 3
name = "terraform_bootable_storage"
description = "persistent bootable storage that is attached to this instance"
size = 10
}
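One possible cause — an assumption on my part, not confirmed by the debug output — is that all three instances reference the same single storage volume, and a volume can only be attached to one instance at a time. A sketch with one volume per instance:

```hcl
# Sketch only: one bootable volume per instance (names are illustrative).
resource "opc_compute_storage_volume" "terraform_bootable_storage" {
  count       = 3
  name        = "${format("terraform_bootable_storage%1d", count.index + 1)}"
  description = "persistent bootable storage attached to one instance"
  size        = 10
}
```

The instance's storage block would then reference `${element(opc_compute_storage_volume.terraform_bootable_storage.*.name, count.index)}` so each instance gets its own volume.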

#security stuff below

resource "opc_compute_sec_rule" "SSH_ACCESS" {
name = "SSH_ACCESS"
source_list = "seciplist:${opc_compute_security_ip_list.public_internet.name}"
destination_list = "seclist:${opc_compute_security_list.SSH_ALLOW.name}"
action = "permit"
application = "${opc_compute_security_application.ssh_port.name}"
}

resource "opc_compute_security_application" "ssh_port" {
name = "ssh_port"
protocol = "tcp"
dport = "22"
}

resource "opc_compute_security_association" "associate_SSH" {
count = 3
name = "associate_SSH"
vcable = "${element(opc_compute_instance.terraform_instance.*.vcable,count.index)}"
seclist = "${opc_compute_security_list.SSH_ALLOW.name}"
}

resource "opc_compute_security_ip_list" "public_internet" {
name = "public_internet"
ip_entries = ["0.0.0.0/0"]
}

resource "opc_compute_security_list" "SSH_ALLOW" {
name = "SSH_ALLOW"
policy = "deny"
outbound_cidr_policy = "permit"
}

Debug Output

Please provide a link to a GitHub Gist containing the complete debug output: https://www.terraform.io/docs/internals/debugging.html. Please do NOT paste the debug output in the issue; just paste a link to the Gist.

https://gist.github.com/oakzaid/82312e9d06101b6a98e015decc3856e0

Panic Output

If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the crash.log.

Expected Behavior

What should have happened?

all three instances should be created

Actual Behavior

What actually happened?

it fails, saying Error initializing instance: Error during setup or launch

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply or TF_LOG=DEBUG terraform apply

Important Factoids

Is there anything atypical about your account that we should know? For example: Running in EC2 Classic? Custom version of OpenStack? Tight ACLs?

References

Are there any other GitHub issues (open or closed) or Pull Requests that should be linked here? For example:

  • GH-1234

Bootable Storage Volumes

This issue was originally opened by @tsgmanh as hashicorp/terraform#15414. It was migrated here as part of the provider split. The original body of the issue is below.


I am working on Oracle Public Cloud and used the Terraform resource "opc_compute_storage_volume_snapshot" to create a snapshot of an existing bootable disk, which went well. I then used "opc_compute_storage_volume" to restore the snapshot using the snapshot id, and there I have a few problems:

a) even though I took a snapshot of a boot disk, the restored volume is not bootable
b) during restoration of the snapshot, if I set bootable=true then Terraform asks for an image list and image list entry, and later uses the image rather than the snapshot to create the volume.

Can anybody please help me with this?
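For reference, a minimal sketch of the intended configuration. The attribute names (snapshot_id, bootable) are taken from the provider documentation as I understand it; treat this as an assumption, not a verified working config:

```hcl
# Sketch only: restore a snapshot into a (hopefully bootable) volume.
resource "opc_compute_storage_volume_snapshot" "boot" {
  name   = "boot-snapshot"
  volume = "existing-boot-volume"
}

resource "opc_compute_storage_volume" "restored" {
  name        = "restored-boot"
  size        = 10
  bootable    = true
  snapshot_id = "${opc_compute_storage_volume_snapshot.boot.snapshot_id}"
}
```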

Security lists do not get added or validated to OPC Instance

This issue was originally opened by @nagulako as hashicorp/terraform-provider-opc#11. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

0.9.5

Affected Resource(s)

  • opc_instance

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

Terraform Configuration Files

resource "opc_compute_instance" "tf-sol1-test-1" {
  name       = "tf-sol1-test-1"
  label      = "Terraform Provisioned Solaris Instance"
  shape      = "oc3"
  ssh_keys   = ["murali-opc-public-key"]
  image_list = "/oracle/public/Oracle_Solaris_11.3"
  hostname   = "tf-sol1-test-1.compute-usoraocips16001.oraclecloud.internal."

  storage {
    volume = "${opc_compute_storage_volume.tf-sol1-test-1.name}"
    index  = 1
  }
  networking_info {
    index          = 0
    shared_network = true
#    sec_lists     = ["${opc_compute_security_list.nvmk-sec-list.name}"]
#    sec_lists      = ["nvmk-sec-list"]
    nat            = ["ippool:/oracle/public/ippool"]
  }
  networking_info {
    index          = 1
    ip_network  = "terraform-network"
    ip_address  = "192.168.111.11"
    shared_network = false
    vnic        = "sol1-test-1-eth1"
  }
}

Expected Behavior

When a sec_list is added to the instance - it should be added to the Instance.

Actual Behavior

It does not. Terraform does not validate if the Sec List already exists on the instance or if it is added.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. Create a resource as above (You can use Linux or Solaris) - without the sec_list.
  2. Add the sec_list to the instance and run terraform plan - nothing happens.

Important Factoids

This is running on OPC IaaS compute

References

No

Handle of HTTP 409 error messages

Terraform Version

Terraform v0.9.11

Affected Resource(s)

opc_compute_image_list.images

Terraform Configuration Files

resource "opc_compute_image_list" "images" {
  name        = "imageList2"
  description = "Description for the Image List"
}

resource "opc_compute_image_list_entry" "images" {
  name           = "${opc_compute_image_list.images.name}"
  machine_images = [ "/oracle/public/OL_7.2_UEKR3_x86_64-17.2.2-20170405-232026", "/Compute-orcdevtest1/[email protected]/myImage"]
  version        = 1
}

resource "opc_compute_storage_volume" "dns" {
  name             = "storageVolume2"
  description      = "Description for the Bootable Storage Volume"
  size             = 500
  bootable         = true
  image_list       = "${opc_compute_image_list.images.name}"
  image_list_entry = "1"
}

Debug Output

2017/07/20 15:24:19 [DEBUG] dag/walk: vertex "provider.opc (close)", waiting for: "opc_compute_storage_volume.dns"
2017/07/20 15:24:20 [DEBUG] plugin: terraform-provider-opc: 2017/07/20 15:24:20 [DEBUG] [go-oracle-terraform]: Encountered HTTP (409) Error: {"message": "Conflict occurred attempting to store object"}
2017/07/20 15:24:20 [DEBUG] plugin: terraform-provider-opc: 2017/07/20 15:24:20 [DEBUG] [go-oracle-terraform]: 1/3 retries left
2017/07/20 15:24:24 [DEBUG] dag/walk: vertex "opc_compute_storage_volume.dns", waiting for: "opc_compute_image_list.images"
2017/07/20 15:24:24 [DEBUG] dag/walk: vertex "opc_compute_image_list_entry.images", waiting for: "opc_compute_image_list.images"
2017/07/20 15:24:24 [DEBUG] dag/walk: vertex "meta.count-boundary (count boundary fixup)", waiting for: "opc_compute_storage_volume.dns"
2017/07/20 15:24:24 [DEBUG] dag/walk: vertex "provider.opc (close)", waiting for: "opc_compute_storage_volume.dns"
2017/07/20 15:24:24 [DEBUG] dag/walk: vertex "root", waiting for: "provider.opc (close)"
2017/07/20 15:24:25 [DEBUG] root: eval: *terraform.EvalWriteState
2017/07/20 15:24:25 [DEBUG] root: eval: *terraform.EvalApplyProvisioners
2017/07/20 15:24:25 [DEBUG] root: eval: *terraform.EvalIf
2017/07/20 15:24:25 [DEBUG] root: eval: *terraform.EvalWriteState
2017/07/20 15:24:25 [DEBUG] root: eval: *terraform.EvalWriteDiff
2017/07/20 15:24:25 [DEBUG] root: eval: *terraform.EvalApplyPost
2017/07/20 15:24:25 [ERROR] root: eval: *terraform.EvalApplyPost, err: 1 error(s) occurred:

  • opc_compute_image_list.images: Post https://compute.uscom-central-1.oraclecloud.com/imagelist/: http: ContentLength=125 with Body length 0
    2017/07/20 15:24:25 [ERROR] root: eval: *terraform.EvalSequence, err: 1 error(s) occurred:

  • opc_compute_image_list.images: Post https://compute.uscom-central-1.oraclecloud.com/imagelist/: http: ContentLength=125 with Body length 0
    2017/07/20 15:24:25 [TRACE] [walkApply] Exiting eval tree: opc_compute_image_list.images
    2017/07/20 15:24:25 [DEBUG] dag/walk: upstream errored, not walking "opc_compute_storage_volume.dns"
    2017/07/20 15:24:25 [DEBUG] dag/walk: upstream errored, not walking "opc_compute_image_list_entry.images"
    2017/07/20 15:24:25 [DEBUG] dag/walk: upstream errored, not walking "meta.count-boundary (count boundary fixup)"
    2017/07/20 15:24:25 [DEBUG] dag/walk: upstream errored, not walking "provider.opc (close)"

Expected Behavior

Handle the HTTP 409 error and report clearly that the resource already exists, instead of the opaque `ContentLength=125 with Body length 0` error shown above.

Actual Behavior

"
Error applying plan:

1 error(s) occurred:

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
"

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply
  2. delete terraform.tfstate*
  3. Run terraform apply

Important Factoids

The reason for deleting terraform.tfstates is to reproduce messages that are not clear to the end user as of what could have caused a terraform plan NOT to be applied

References

modify tags on ip_network_exchange forces new resource

Terraform Version

terraform v0.10.2
opc_provider v0.1.2

Affected Resource(s)

  • opc_compute_ip_network_exchange

Terraform Configuration Files

resource "opc_compute_ip_network_exchange" "exchange" {
  name = "example"
  tags = [ "tag1" ]
}

Steps to Reproduce

  1. terraform apply
  2. modify tags to [ "tag1", "tag2" ]
  3. terraform plan

Expected Behavior

Tags should be updated without destroy and recreate

Actual Behavior

$ terraform plan
...
-/+ module.opc_ip_network.opc_compute_ip_network_exchange.exchange (new resource required)
      name:   "example" => "example"
      tags.#: "1" => "2" (forces new resource)
      tags.0: "tag1" => "tag1"
      tags.1: "" => "tag2" (forces new resource)
      uri:    "https://api-z27.compute.us6.oraclecloud.com:443/network/v1/ipnetworkexchange/Compute-usorclptsc53098/[email protected]/example" => "<computed>"
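Until the provider supports in-place tag updates for this resource, one workaround sketch — assuming it is acceptable for tag changes to be ignored entirely — is lifecycle ignore_changes:

```hcl
resource "opc_compute_ip_network_exchange" "exchange" {
  name = "example"
  tags = ["tag1", "tag2"]

  # Suppresses the forced replacement, at the cost of tag drift.
  lifecycle {
    ignore_changes = ["tags"]
  }
}
```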

Clarification on the default Networking setting when networking_info block is not specified

Terraform Version

Terraform v0.10.2

Affected Resource(s)

  • opc_instance

Expected Behavior

According to the opc_compute_instance documentation, the networking_info argument is Optional:

networking_info - (Optional) Information pertaining to an individual network interface to be created and attached to the instance. See Networking Info below for more information.

I clearly understand that we have the shared_network attribute within the networking_info block to specify the network setting. However, when we do not specify a networking_info block in the configuration template, Terraform assumes that the Shared Network should be set up. If this is the expected behavior, the documentation should explicitly make that clear, because through the OPC Console you always have to choose which network setting you want to use: Shared Network, IP Networks, or both.

I believe we should make that clear, at least in the documentation, to better enforce the semantics of the network configuration.
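For reference, an explicit shared-network interface block (the same syntax used in other issues in this tracker) looks like the following; whether the implicit default is exactly equivalent to it is my assumption and is precisely the detail the docs should state:

```hcl
networking_info {
  index          = 0
  shared_network = true
}
```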

Terraform Configuration Files

The template below shows exactly that issue.

provider "opc" {
  user            = "${var.user}"
  password        = "${var.password}"
  identity_domain = "${var.domain}"
  endpoint        = "${var.endpoint}"
}

resource "opc_compute_instance" "instance1" {
  name       = "example-instance1"
  label      = "My Oracle Linux 7.2 UEK3 Server"
  shape      = "oc3"
  image_list = "/oracle/public/OL_7.2_UEKR3_x86_64"
  ssh_keys   = ["${opc_compute_ssh_key.sshkey1.name}"]
}

resource "opc_compute_ssh_key" "sshkey1" {
  name    = "example-sshkey1"
  key     = "${file(var.ssh_public_key_file)}"
  enabled = true
}

resource "opc_compute_ip_reservation" "ipreservation1" {
  parent_pool = "/oracle/public/ippool"
  permanent   = true
}

resource "opc_compute_ip_association" "instance1-ipreservation" {
  vcable      = "${opc_compute_instance.instance1.vcable}"
  parent_pool = "ipreservation:${opc_compute_ip_reservation.ipreservation1.name}"
}

References

Network_info: https://www.terraform.io/docs/providers/opc/r/opc_compute_instance.html#shared_network

terraform - Authorization token invalid error

Hi,
I am getting the following error when I run my Terraform script. The error appears after about 30 minutes, in the middle of restoring the snapshot.
Terraform version - 0.10.0
Provider version - 0.1.2
My script does three things:
a) creates a snapshot of an existing disk
b) uses the snapshot to create a storage volume
c) uses the storage volume to create an instance.
Error applying plan:
1 error(s) occurred:
module.snapshot-restore.opc_compute_storage_volume.opc-restore-data[0]: 1 error(s) occurred:
opc_compute_storage_volume.opc-restore-data.0: Error creating storage volume sp-demo-data: 401: {"message": "Authorization token is invalid"}

Start/Stop instance - Desired State

Hi there,

Terraform Version

Terraform 0.10

Affected Resource(s)

Please list the resources as a list, for example:

  • opc_instance

Hi Could you please give an example on how is it possible to:

  • Start, stop instance with Terraform

Thanks

carlo

Oracle cloud terraform - bootstrap

This issue was originally opened by @tsgmanh as hashicorp/terraform#15531. It was migrated here as a result of the provider split. The original body of the issue is below.


All,

I am trying to spin up instances on OPC using Terraform and I want to run a couple of scripts before the instance comes online (like setting the hostname, updating the /etc/hosts file, etc.). I didn't see any user_data option for an OPC instance, so I thought of using provisioner "remote-exec". Below is my Terraform script, but it is not working. Can anybody please help?

resource "null_resource" "cluster" {
  provisioner "remote-exec" {
    inline = [
      "bootstrap.sh", # this is my script
    ]
    vars {
      hostname = "${element(var.inst_name, count.index)}"
    }
  }
}
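Two likely issues — assumptions on my part, since the original config is incomplete: remote-exec needs a connection block to reach the host, and vars is an argument of the template_file data source, not of a provisioner. A sketch under those assumptions (variable and resource names are illustrative):

```hcl
resource "null_resource" "bootstrap" {
  count = "${length(var.inst_name)}"

  # remote-exec needs to know how to reach the instance.
  connection {
    type        = "ssh"
    user        = "opc"
    host        = "${element(opc_compute_ip_reservation.ipreservation.*.ip, count.index)}"
    private_key = "${file(var.ssh_private_key_file)}"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo hostnamectl set-hostname ${element(var.inst_name, count.index)}",
    ]
  }
}
```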

Feature Request - Provisioning of VPNendpointV2

Hi there. We're attempting to provision VPN connections to OPC via terraform, and the feature is unsupported. I've noticed that the OPC REST api has the functionality enabled (See below URL)

VPN Endpoint REST api

It would help us a great deal to have this feature via Terraform.

Thanks, and keep up the good work!
George Crossley

add opc_compute_image_list_entry as a data source

To reference existing public, marketplace, and private images, opc_compute_image_list and opc_compute_image_list_entry need to be available as data sources.

Target configuration could be something like (for illustration only):

data "opc_compute_image_list" "ubuntu-1604" {
  name = "/Compute-${var.domain}/${var.user}/Ubuntu.16.04-LTS.amd64.20160721"
}

data "opc_compute_image_list_entry" "ubuntu-1604-1" {
  image_list = "${data.opc_compute_image_list.ubuntu-1604.name}"
  entry      = 1
}

resource "opc_compute_storage_volume" "bootdisk" {
  name             = "BootStorageVolume"
  bootable         = true
  image_list       = "${data.opc_compute_image_list_entry.ubuntu-1604-1.image_list}"
  image_list_entry = "${data.opc_compute_image_list_entry.ubuntu-1604-1.entry}"
  size             = "${data.opc_compute_image_list_entry.ubuntu-1604-1.size}"
}

Where /Compute-${var.domain}/${var.user}/Ubuntu.16.04-LTS.amd64.20160721 is a pre-existing image in the domain

Note: size is not a native attribute of the image list entry, but it would be useful if it were calculated from the underlying referenced machine image, so it can be used to provide the required size for the storage volume. Or maybe the right approach would be to add a separate opc_compute_machine_image data source.

vnic removed from vnicset on second apply

Terraform Version

v0.9.11

Affected Resource(s)

opc_compute_vnic_set

Terraform Configuration Files

resource "opc_compute_ip_network" "my-ip-network" {
  name                = "my-ip-network"
  ip_address_prefix   = "192.168.1.0/24"
}

resource "opc_compute_acl" "my-acl" {
  name        = "my-acl"
}

resource "opc_compute_vnic_set" "my-vnic-set" {
  name         = "my-vnic-set"
  applied_acls = ["${opc_compute_acl.my-acl.name}"]
}

resource "opc_compute_instance" "my-instance" {
  name = "my-instance"
  hostname = "my-instance"
  label = "my-instance"
  shape = "oc3"
  image_list = "/oracle/public/OL_7.2_UEKR4_x86_64"
  networking_info {
    index = 0
    ip_network = "${opc_compute_ip_network.my-ip-network.name}"
    ip_address = "192.168.1.100"
    vnic = "my-instance_eth0"
    vnic_sets = [ "${opc_compute_vnic_set.my-vnic-set.name}"]
  }
}

Expected Behavior

Running terraform apply a second time with the above config, without making any config changes, should not modify the vnic set.

Actual Behavior

A terraform plan shows that the vnic set will be updated to remove the vnic entry from the set.

~ opc_compute_vnic_set.my-vnic-set
    virtual_nics.#: "1" => "0"
    virtual_nics.0: "my-instance_eth0" => ""


Plan: 0 to add, 1 to change, 0 to destroy.

Steps to Reproduce

  1. terraform apply
  2. terraform plan

Important Factoids

The vnic entry in the vnic set is automatically added by Compute Cloud during instance launch, based on the networking_info.vnic_sets attribute. Removing the entry from the vnic set means the instance interface loses its association with the ACL that is used to configure network security.
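Until the provider treats the server-populated virtual_nics attribute as computed, a workaround sketch — assuming drift in this attribute is acceptable to ignore — is lifecycle ignore_changes:

```hcl
resource "opc_compute_vnic_set" "my-vnic-set" {
  name         = "my-vnic-set"
  applied_acls = ["${opc_compute_acl.my-acl.name}"]

  # Ignore entries that Compute Cloud adds at instance launch.
  lifecycle {
    ignore_changes = ["virtual_nics"]
  }
}
```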
