
terraform-provider-bigip's People

Contributors

aburnett, aliallomani, appilon, catsby, cgriggs01, chandrajr, dependabot[bot], diaconud007, gaddamsaketha, gliptak, ivey, jakauppila, jlosito, kavitha-f5, lesamis, nirving, papineni87, pdxfixit, pteiber, ramaniprateek, ravinderreddyf5, robloxrob, scshitole, shaggy245, smerrell, stack72, stobias123, urohit011, vtrippel, wojtek0806


terraform-provider-bigip's Issues

Allow defining status (enabled or disabled) in pool attachment

Currently, there is no way to manage pool member status through Terraform.

I set the status to disabled when I am migrating nodes and want the connections to gracefully drain before removing them from the pool.

I am assuming the place to do this would be the pool attachment resource.

Error checking is not needed on primitive types

Throughout the code there are many Read method implementations that perform error checking on primitive types (TypeBool, TypeInt, TypeFloat, TypeString).

However, this is not needed, since only aggregate types require error checking.

Example where this is not needed:
https://github.com/terraform-providers/terraform-provider-bigip/blob/0ce45fa06c32a7a173a91bc436ff92345dc6c363/bigip/resource_bigip_cm_device.go#L117-L127

All three of the attributes above are of type TypeString and don't require error checking.

Reference: https://www.terraform.io/docs/extend/best-practices/detecting-drift.html#error-checking-aggregate-types
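For illustration, here is a minimal sketch of the convention (attribute and variable names are placeholders, not the provider's actual schema): ignore the error returned by d.Set for primitive attributes and check it only for aggregate ones.

package bigip

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

func resourceExampleRead(d *schema.ResourceData, meta interface{}) error {
	// Values would normally come from the BIG-IP API; hard-coded here.
	name := "example"
	monitors := []string{"/Common/http"}

	// Primitive types (TypeString, TypeInt, TypeBool, TypeFloat):
	// the error returned by d.Set can safely be ignored.
	d.Set("name", name)

	// Aggregate types (TypeList, TypeSet, TypeMap): check the error,
	// since converting the underlying value can fail.
	if err := d.Set("monitors", monitors); err != nil {
		return fmt.Errorf("error setting monitors for %s: %v", d.Id(), err)
	}
	return nil
}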

@scshitole

Bug: partition is never set on 'Create TCP profile'

The partition attribute of the resource bigip_ltm_profile_tcp is never set, or at least is never recorded in Terraform state.

I work with the following example:

tcp_profile         = {
        "name" = "sanjose-tcp-lan-profile-test"
        "partition" = "Common"
        "idle_timeout" = 2000
        "close_wait_timeout" = 5
        "finwait_2timeout" = 5
        "finwait_timeout" = 300
        "keepalive_interval" = 1700
        "deferred_accept" = "disabled"
        "fast_open" = "disabled"
}

If I do a terraform plan or terraform apply, the resource shows this (which is normal, it knows which partition to create the resource on):

$ terraform apply -var-file=variables.tfvars
data.null_data_source.tcp_profiles: Refreshing state...

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + bigip_ltm_profile_tcp.tcp_profiles
      id:                 <computed>
      close_wait_timeout: "5"
      defaults_from:      "/Common/tcp"
      deferred_accept:    "disabled"
      fast_open:          "disabled"
      finwait_2timeout:   "5"
      finwait_timeout:    "300"
      idle_timeout:       "2000"
      keepalive_interval: "1700"
      name:               "sanjose-tcp-lan-profile-test"
      partition:          "Common"


Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

bigip_ltm_profile_tcp.tcp_profiles: Creating...
  close_wait_timeout: "" => "5"
  defaults_from:      "" => "/Common/tcp"
  deferred_accept:    "" => "disabled"
  fast_open:          "" => "disabled"
  finwait_2timeout:   "" => "5"
  finwait_timeout:    "" => "300"
  idle_timeout:       "" => "2000"
  keepalive_interval: "" => "1700"
  name:               "" => "sanjose-tcp-lan-profile-test"
  partition:          "" => "Common"
bigip_ltm_profile_tcp.tcp_profiles: Creation complete after 1s (ID: sanjose-tcp-lan-profile-test)

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

But if I do a terraform plan just after without changing any variable, I get this:

$ terraform plan -var-file=variables.tfvars
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.null_data_source.tcp_profiles: Refreshing state...
bigip_ltm_profile_tcp.tcp_profiles: Refreshing state... (ID: sanjose-tcp-lan-profile-test)

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ bigip_ltm_profile_tcp.tcp_profiles
      partition: "" => "Common"


Plan: 0 to add, 1 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

The terraform show command also shows that the partition is unknown to terraform:

$ terraform show
bigip_ltm_profile_tcp.tcp_profiles:
  id = sanjose-tcp-lan-profile-test
  close_wait_timeout = 5
  defaults_from = /Common/tcp
  deferred_accept = disabled
  fast_open = disabled
  finwait_2timeout = 5
  finwait_timeout = 300
  idle_timeout = 2000
  keepalive_interval = 1700
  name = sanjose-tcp-lan-profile-test
  partition = 

Feature Request: GTM/Wide IP DNS support

I know the front page states "A Terraform provider for F5 BigIP LTM." and the docs website says "Resources are currently available for LTM." However, is the plan to eventually include GTM in this provider as well?

stuck on "Acquiring state lock"

Hello,

I'm trying to do a "terraform plan" on our F5 (basically I'm just trying to create a node), but Terraform is stuck on "Acquiring state lock". No additional logs are produced even if I set TF_LOG to TRACE:

$ sudo TF_LOG=TRACE CHECKPOINT_DISABLE=1 terraform plan
2018/12/06 17:34:59 [INFO] Terraform version: 0.11.9 4e44b41c8bc1b533d14f9939690adf09e3d2a2be
2018/12/06 17:34:59 [INFO] Go runtime version: go1.11.1
2018/12/06 17:34:59 [INFO] CLI args: []string{"/usr/local/bin/terraform", "plan"}
2018/12/06 17:34:59 [DEBUG] Attempting to open CLI config file: /root/.terraformrc
2018/12/06 17:34:59 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2018/12/06 17:34:59 [INFO] CLI command args: []string{"plan"}
2018/12/06 17:34:59 [INFO] command: empty terraform config, returning nil
2018/12/06 17:34:59 [DEBUG] command: no data state file found for backend config
2018/12/06 17:34:59 [DEBUG] New state was assigned lineage "768aaff6-cbf2-fb81-1a60-f2937de8db52"
2018/12/06 17:34:59 [INFO] command: backend initialized:
2018/12/06 17:34:59 [DEBUG] checking for provider in "."
2018/12/06 17:34:59 [DEBUG] checking for provider in "/usr/local/bin"
2018/12/06 17:34:59 [DEBUG] checking for provider in ".terraform/plugins/linux_amd64"
2018/12/06 17:34:59 [DEBUG] found provider "terraform-provider-bigip_v0.12.0_x4"
2018/12/06 17:34:59 [DEBUG] found valid plugin: "bigip", "0.12.0", "/home/[...]/.terraform/plugins/linux_amd64/terraform-provider-bigip_v0.12.0_x4"
2018/12/06 17:34:59 [DEBUG] checking for provisioner in "."
2018/12/06 17:34:59 [DEBUG] checking for provisioner in "/usr/local/bin"
2018/12/06 17:34:59 [DEBUG] checking for provisioner in ".terraform/plugins/linux_amd64"
2018/12/06 17:34:59 [INFO] command: backend is not enhanced, wrapping in local
2018/12/06 17:34:59 [INFO] backend/local: starting Plan operation
Acquiring state lock. This may take a few moments...
I tried to wait for 20 minutes without any change.

Thanks in advance for your help.

Exists functions are modifying the Resource Data when they shouldn't

I've noticed that in several places where the Exists method is implemented, it is modifying the resource data under certain conditions (e.g. when the resource is not found).

Examples:

https://github.com/terraform-providers/terraform-provider-bigip/blob/3736641c61a3b5969a988fe1aabdf73dbb6a58a8/bigip/resource_bigip_ltm_monitor.go#L201-L205

https://github.com/terraform-providers/terraform-provider-bigip/blob/3736641c61a3b5969a988fe1aabdf73dbb6a58a8/bigip/resource_bigip_ltm_irule.go#L92-L96

https://github.com/terraform-providers/terraform-provider-bigip/blob/3736641c61a3b5969a988fe1aabdf73dbb6a58a8/bigip/resource_bigip_ltm_virtual_server.go#L311-L313

It is my understanding that Exists should not modify the resource data, as indicated in the documentation; using SetId("") should be reserved for the Read and Delete methods:

    // Exists is a function that is called to check if a resource still
    // exists. If this returns false, then this will affect the diff
    // accordingly. If this function isn't set, it will not be called. It
    // is highly recommended to set it. The *ResourceData passed to Exists
    // should _not_ be modified.

reference: https://godoc.org/github.com/hashicorp/terraform/helper/schema#Resource
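For illustration, a minimal sketch of an Exists implementation that follows the documented contract (the client interface and lookup call are hypothetical): it only reports whether the object is present and never touches the ResourceData, leaving d.SetId("") to the Read method.

package bigip

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

// objectGetter stands in for the go-bigip client; the method is hypothetical.
type objectGetter interface {
	GetObject(name string) (interface{}, error)
}

func resourceExampleExists(d *schema.ResourceData, meta interface{}) (bool, error) {
	client := meta.(objectGetter)

	obj, err := client.GetObject(d.Id())
	if err != nil {
		return false, fmt.Errorf("error retrieving %s: %v", d.Id(), err)
	}
	// Do NOT call d.SetId("") here: just report whether the object exists
	// and let Read clear the ID when the resource is gone.
	return obj != nil, nil
}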

Unable to retrieve route domain to properly display state for virtual server and nodes

This is my first issue, so I apologize if I've missed something.

In the function resourceBigipLtmNodeRead (in resource_bigip_ltm_node.go) there is logic to check for the route domain. However, the regex used does not properly capture the route domain information, so the state file only shows the IP address.

This causes future plans/applies for nodes or virtual servers with route domains to try to destroy and recreate or update the object. Any build process then reports a failure, since the run does not exit cleanly, even though the objects are provisioned as expected.
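For context, BIG-IP addresses with route domains use the address%rd form (for example 10.1.1.1%2). A hedged sketch of a regex that captures both parts, purely to illustrate the capture that appears to be missing (this is not the provider's current code):

package main

import (
	"fmt"
	"regexp"
)

// Group 1 is the address, group 2 the optional route domain.
var nodeAddrRe = regexp.MustCompile(`^([^%]+)(?:%(\d+))?$`)

func main() {
	for _, addr := range []string{"10.1.1.1", "10.1.1.1%2"} {
		m := nodeAddrRe.FindStringSubmatch(addr)
		fmt.Printf("address=%q routeDomain=%q\n", m[1], m[2])
	}
}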

bigip_ltm_snat missing functionality

Within the web interface, there is the ability not only to enable/disable SNAT on ALL VLANs, but also to specify which VLAN/tunnel.
I have a very old outbound rule that I'm trying to replicate.
My original rule looks like this inside an SCF file:

ltm snat /Common/outbound {
    origins {
        0.0.0.0/0 { }
    }
    snatpool /Common/out
    vlans {
        /Common/internal
    }
    vlans-enabled
}

But the closest I can get in this provider is

resource "bigip_ltm_snat" "outbound" {
name = "outbound"
origins = {name = "0.0.0.0"}
snatpool = "/Common/out"
vlansdisabled = false
}

Any chance on the additional functionality?

Determine which is active in HA pair

One of the challenges we face for any automation is that we run our F5s as an active/passive pair. It would be good if there were a way, using a data source, to determine which unit is active. The information is available through:

curl -s -k -u user:passwd https://F5-name/mgmt/tm/shared/bigip-failover-state

returns:

{"isEnabled":true,"pollCyclePeriodMillis":3600000,"nextPollTime":"2019-02-26T09:57:08.067+0000","networkFailoverDeviceGroup":"device-group-failover","failoverState":"active","generation":0,"lastUpdateMicros":0}

or:

{"isEnabled":true,"pollCyclePeriodMillis":3600000,"nextPollTime":"2019-02-26T09:56:24.724+0000","networkFailoverDeviceGroup":"device-group-failover","failoverState":"standby","generation":0,"lastUpdateMicros":0}

This could then be used to drive the provider choice for subsequent activities.
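As a rough sketch of what such a data source could do under the hood (host and credentials are placeholders, and TLS verification is disabled only to mirror curl's -k flag), the endpoint above can be queried and the failoverState field extracted like this:

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

// Only the field we care about from the JSON shown above.
type failoverStatus struct {
	FailoverState string `json:"failoverState"`
}

func main() {
	req, _ := http.NewRequest("GET", "https://F5-name/mgmt/tm/shared/bigip-failover-state", nil)
	req.SetBasicAuth("user", "passwd")

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // equivalent of curl -k
	}}
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var status failoverStatus
	if err := json.NewDecoder(resp.Body).Decode(&status); err != nil {
		panic(err)
	}
	fmt.Println(status.FailoverState) // "active" or "standby"
}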

Provider fails to create a node

I am trying to upgrade our terraform-provider-bigip from 0.3b to 0.12.0. I'm using:

Terraform v0.11.8
 provider.bigip v0.12.0

The bigip version I'm running against has this version: BIG-IP 11.6.3 Build 0.0.3 Final, which I understand is not the official version that you've tested with.

I have the following code in a module (I tried to simplify from the real code a bit). Also note that I tried with the interval field being "5" as well as the int value of 5

locals {
  pool_name = "/Common/${var.env_type}_${var.application}_${var.component}"
}

resource "bigip_ltm_monitor" "monitor" {
  name     = "/Common/${var.env_type}_${var.application}_${var.component}"
  parent   = "/Common/http"
  send     = "GET ${var.healthcheck_uri} HTTP/1.0\r\n"
  receive  = "${var.healthcheck_return}"
  timeout  = "16"
  interval = 5
}

resource "bigip_ltm_node" "node" {
  count   = "5"
  name    = "/Common/${element(var.instances_dns, count.index)}"
  address = "${element(var.instance_private_ips, count.index)}"
}

resource "bigip_ltm_pool" "pool" {
  count               = "1"
  name                = "${local.pool_name}"
  load_balancing_mode = "least-connections-member"
  monitors            = ["${var.tcp_monitor ? "/Common/tcp" : bigip_ltm_monitor.monitor.name}"]
  allow_snat          = "yes"
  allow_nat           = "yes"
}

resource "bigip_ltm_pool_attachment" "pool_node_attachment" {
  count = "5"
  pool  = "${local.pool_name}"
  node  = "${element(bigip_ltm_node.node.*.name, count.index)}"
}

It appears the monitor, pool and nodes get created ok, as evidenced below, but it looks like right when it is about to perform the pool attachment, the apply fails.

module.test_f5_pool.bigip_ltm_node.node[4]: Creating...
  address:          "" => "8.8.8.5"
  connection_limit: "" => "0"
  dynamic_ratio:    "" => "0"
  name:             "" => "/Common/test_instance_5"
  state:            "" => "user-up"
module.test_f5_pool.bigip_ltm_monitor.monitor: Creating...
  destination:   "" => "*:*"
  interval:      "" => "5"
  ip_dscp:       "" => "0"
  manual_resume: "" => "disabled"
  name:          "" => "/Common/QA_tf_test-f5-pool-v2-with-nodes"
  parent:        "" => "/Common/http"
  receive:       "" => "HTTP/1.1 200"
  reverse:       "" => "disabled"
  send:          "" => "GET /health HTTP/1.0\\r\\n"
  time_until_up: "" => "0"
  timeout:       "" => "16"
  transparent:   "" => "disabled"
module.test_f5_pool.bigip_ltm_node.node[3]: Creating...
  address:          "" => "8.8.8.4"
  connection_limit: "" => "0"
  dynamic_ratio:    "" => "0"
  name:             "" => "/Common/test_instance_4"
  state:            "" => "user-up"
module.test_f5_pool.bigip_ltm_node.node[1]: Creating...
  address:          "" => "8.8.8.2"
  connection_limit: "" => "0"
  dynamic_ratio:    "" => "0"
  name:             "" => "/Common/test_instance_2"
  state:            "" => "user-up"
module.test_f5_pool.bigip_ltm_node.node[0]: Creating...
  address:          "" => "8.8.8.1"
  connection_limit: "" => "0"
  dynamic_ratio:    "" => "0"
  name:             "" => "/Common/test_instance_1"
  state:            "" => "user-up"
module.test_f5_pool.bigip_ltm_node.node[2]: Creating...
  address:          "" => "8.8.8.3"
  connection_limit: "" => "0"
  dynamic_ratio:    "" => "0"
  name:             "" => "/Common/test_instance_3"
  state:            "" => "user-up"
module.test_f5_pool.bigip_ltm_monitor.monitor: Creation complete after 3s (ID: /Common/QA_tf_test-f5-pool-v2-with-nodes)
module.test_f5_pool.bigip_ltm_pool.pool: Creating...
  allow_nat:           "" => "yes"
  allow_snat:          "" => "yes"
  load_balancing_mode: "" => "least-connections-member"
  monitors.#:          "" => "1"
  monitors.2320741212: "" => "/Common/QA_tf_test-f5-pool-v2-with-nodes"
  name:                "" => "/Common/QA_tf_test-f5-pool-v2-with-nodes"
  reselect_tries:      "" => "0"
  service_down_action: "" => "none"
  slow_ramp_time:      "" => "10"
module.test_f5_pool.bigip_ltm_pool.pool: Creation complete after 0s (ID: /Common/QA_tf_test-f5-pool-v2-with-nodes)

Error: Error applying plan:

5 error(s) occurred:

* module.test_f5_pool.bigip_ltm_node.node[3]: 1 error(s) occurred:

* bigip_ltm_node.node.3: json: cannot unmarshal number into Go struct field .interval of type string
* module.test_f5_pool.bigip_ltm_node.node[2]: 1 error(s) occurred:

* bigip_ltm_node.node.2: json: cannot unmarshal number into Go struct field .interval of type string
* module.test_f5_pool.bigip_ltm_node.node[1]: 1 error(s) occurred:

* bigip_ltm_node.node.1: json: cannot unmarshal number into Go struct field .interval of type string
* module.test_f5_pool.bigip_ltm_node.node[4]: 1 error(s) occurred:

* bigip_ltm_node.node.4: json: cannot unmarshal number into Go struct field .interval of type string
* module.test_f5_pool.bigip_ltm_node.node[0]: 1 error(s) occurred:

* bigip_ltm_node.node.0: json: cannot unmarshal number into Go struct field .interval of type string

It appears the provider allows me to create the monitor objects with either an integer or a string value type, but then complains when it attempts the pool attachment. This was not a problem when we used 0.3b. Can you recommend a workaround or a solution of any kind for this problem?
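The error message suggests the monitor's interval comes back from the iControl REST API as a JSON number while the corresponding Go struct field is declared as a string. A minimal reproduction of that failure mode, with illustrative struct names (not the library's actual types):

package main

import (
	"encoding/json"
	"fmt"
)

type monitorAsString struct {
	Interval string `json:"interval"`
}

type monitorAsNumber struct {
	Interval json.Number `json:"interval"`
}

func main() {
	payload := []byte(`{"interval": 5}`)

	// Fails: a JSON number cannot be decoded into a plain string field.
	var s monitorAsString
	fmt.Println(json.Unmarshal(payload, &s))

	// Works: json.Number accepts the numeric value.
	var n monitorAsNumber
	if err := json.Unmarshal(payload, &n); err == nil {
		fmt.Println("interval:", n.Interval)
	}
}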

VLAN and SelfIP resources Read method doesn't refresh all the attributes

Woot woot! First issue 😄

We discovered yesterday that the VLAN and SelfIP resources are not refreshing all of their attributes in their respective Read methods, and a few other minor things are missing as well.

I've prepared a PR for that in the old repo (https://github.com/f5devcentral/terraform-provider-bigip/pull/126), I'll resubmit the PR here shortly.

Very excited to have this provider in the official repo 😀

Bug: 'Update TCP profile' destroys the Terraform state of the resource

I found a bug that happens on terraform plan whenever I have just updated the resource bigip_ltm_profile_tcp.

I create my resource with the following configuration:

tcp_profile         = {
        "name" = "sanjose-tcp-lan-profile-test"
        "idle_timeout" = 2000
        "close_wait_timeout" = 5
        "finwait_2timeout" = 5
        "finwait_timeout" = 300
        "keepalive_interval" = 1700
        "deferred_accept" = "disabled"
        "fast_open" = "disabled"
}

The command terraform show shows the following:

$ terraform show
bigip_ltm_profile_tcp.tcp_profiles:
  id = sanjose-tcp-lan-profile-test
  close_wait_timeout = 5
  defaults_from = /Common/tcp
  deferred_accept = disabled
  fast_open = disabled
  finwait_2timeout = 5
  finwait_timeout = 300
  idle_timeout = 200
  keepalive_interval = 1700
  name = sanjose-tcp-lan-profile-test
  partition = 

If I do a terraform plan, it shows that there is nothing to be changed.
If I then update a value with a terraform apply:

$ terraform apply -var-file=variables.tfvars
data.null_data_source.tcp_profiles: Refreshing state...
bigip_ltm_profile_tcp.tcp_profiles: Refreshing state... (ID: sanjose-tcp-lan-profile-test)

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ bigip_ltm_profile_tcp.tcp_profiles
      idle_timeout: "200" => "2000"


Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

bigip_ltm_profile_tcp.tcp_profiles: Modifying... (ID: sanjose-tcp-lan-profile-test)
  idle_timeout: "200" => "2000"
bigip_ltm_profile_tcp.tcp_profiles: Modifications complete after 1s

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

And then, if I do a terraform plan just after (without modifying anything), I get this even though it already exists:

$ terraform plan -var-file=variables.tfvars
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.null_data_source.tcp_profiles: Refreshing state...

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + bigip_ltm_profile_tcp.tcp_profiles
      id:                 <computed>
      close_wait_timeout: "5"
      defaults_from:      "/Common/tcp"
      deferred_accept:    "disabled"
      fast_open:          "disabled"
      finwait_2timeout:   "5"
      finwait_timeout:    "300"
      idle_timeout:       "2000"
      keepalive_interval: "1700"
      name:               "sanjose-tcp-lan-profile-test"
      partition:          "Common"


Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

The terraform show command shows me an empty state (apart from my other variables): as far as Terraform is concerned, the update destroyed the resource. On the F5 device, the resource does still exist.

Docs: How to import pool_attachment

I am trying to import a pool attachment. The code seems to indicate <pool_name>-<node_name>, but I haven't been able to get it to work.

Also, applying the plan with the un-imported pool attachment fails (expected).

Terraform will perform the following actions:

  + bigip_ltm_pool_attachment.datahub_na-memo_8080--hub-na-memo-tst-01-50v-awstst-pason-com
      id:   <computed>
      node: "/Common/hub-na-memo-tst-01-50v.awstst.pason.com:8080"
      pool: "/Common/datahub_na-memo_8080"

...

* bigip_ltm_pool_attachment.datahub_na-memo_8080--hub-na-memo-tst-01-50v-awstst-pason-com: Failure adding node /Common/hub-na-memo-tst-01-50v.awstst.pason.com:8080 to pool /Common/datahub_na-memo_8080: 01020066:3: The requested Pool Member (/Common/datahub_na-memo_8080 /Common/hub-na-memo-tst-01-50v.awstst.pason.com 8080) already exists in partition Common.

I have tried:

# Based on the error output: <pool_name> <node_name> <port>
$ terraform import bigip_ltm_pool_attachment.datahub_na-memo_8080--hub-na-memo-tst-01-50v-awstst-pason-com '/Common/datahub_na-memo_8080 /Common/hub-na-memo-tst-01-50v.awstst.pason.com 8080'

# Based on the code <pool_name>-<node_name>
$ terraform import bigip_ltm_pool_attachment.datahub_na-memo_8080--hub-na-memo-tst-01-50v-awstst-pason-com '/Common/datahub_na-memo_8080-/Common/hub-na-memo-tst-01-50v.awstst.pason.com'

# <pool_name>-<node_name>:<port>
$ terraform import bigip_ltm_pool_attachment.datahub_na-memo_8080--hub-na-memo-tst-01-50v-awstst-pason-com '/Common/datahub_na-memo_8080-/Common/hub-na-memo-tst-01-50v.awstst.pason.com:8080'

# Other:
# <pool_name>-<node_name>-<port>
# <pool_name>-<no_partition_node_name>-<port>
# <pool_name>-<no_partition_node_name>
# <no_partition_pool_name>-<no_partition_node_name>:<port>
# <no_partition_pool_name>-<no_partition_node_name>


Terraform detected a resource with this ID doesn't
exist. Please verify the ID is correct. You cannot import non-existent

How do you find the correct ID for the attachment? The docs for the bigip provider don't cover the import command: https://www.terraform.io/docs/providers/bigip/r/bigip_ltm_pool_attachment.html

Creating FTP monitors

Are there any plans to allow FTP monitors to be created through the provider?

Similar question to #45, but it probably needs a bit more work since the current monitor resource is geared towards monitors that work on a send/receive string basis.

Unable to set Alias Service Port on HTTPS monitor

Hi,

With the following monitor resource configuration:

resource "bigip_ltm_monitor" "MON_HTTPS_DP_EVAL_CHILD" {
  name              = "/Common/MON_HTTPS_DP_EVAL_CHILD"
  parent            = "/Common/https"
  interval          = 5
  time_until_up     = 0
  timeout           = 16
  send              = "GET /\r\n"
  destination       = "*:8008"
}

...I get the following error:

bigip_ltm_monitor.MON_HTTPS_DP_EVAL_CHILD: "compatibility" invalid value "none", expected one of the following: "disabled" "enabled"

If I remove the 'destination' argument then I don't get the error, and the monitor creates successfully.

If I use the same configuration but with '/Common/http' as the parent then it creates successfully and the Alias Service Port is set correctly.

So it seems that when creating an HTTPS monitor with an Alias Service Port definition, the 'compatibility' attribute must be set.

Regards,

Dan

Request: Cut a new release

The last release is from September; it would be great if you could cut a new release so users don't have to build the plugin themselves to get the latest changes.

bigip_ltm_policy with additional option

ISSUE TYPE
  • Feature Idea
COMPONENT NAME
  • bigip_ltm_policy
TERRAFORM VERSION
Terraform v0.11.14
+ provider.bigip v0.12.2
BIGIP VERSION
Main Package
  Product     BIG-IP
  Version     12.1.3.7
SUMMARY

Hi !

It would be nice to be able to add HTTP-to-HTTPS request redirection for virtual servers listening on port 80 whose traffic must instead be served on HTTPS ports. A CLI example follows:

TMSH EXAMPLE
create ltm policy http2https
  strategy all-match
  requires add { http }
  controls add { forwarding }
  rules add {
    <RuleName> {
      actions replace-all-with {
        0 {
          http-reply redirect
          location ""https://[getfield [HTTP::host] \"":\"" 1][HTTP::uri]""
        }
      }
    }
  }

EXPECTED RESULTS
ltm policy http2https {
    controls { forwarding }
    requires { http }
    rules {
        http2https {
            actions {
                0 {
                    http-reply
                    redirect
                    location "tcl:https://[getfield [HTTP::host] \":\" 1][HTTP::uri]"
                }
            }
        }
    }
    status published
    strategy all-match
}
QUESTIONS

Considering that the bigip_ltm_policy resource already exists, how can I assign a policy to a given bigip_ltm_virtual_server resource?

Thank you in advance.

Resource to create a profile with http as the parent profile is missing

A resource to create a profile with http as the parent profile is missing from the resource group.

Something like:

resource "bigip_ltm_profile_http" "test_http" {
  name                              = "/Common/test_http"
  defaults_from                     = "/Common/http"
  concurrent_streams_per_connection = 10
  connection_idle_timeout           = 30
}

Crash when running terraform apply.

Hi there, when I run terraform apply, it shows TERRAFORM CRASH.

Here is my terraform file content:

resource "bigip_ltm_policy" "policy_frontend-api_ip-limit" {
    name = "policy_frontend-api_ip-limit"
    strategy = "first-match"
    requires = [ "http" ]
    controls = [ "forwarding" ]
    rule = [
        {
            name = "deny-when-xff-header-missing"
            condition {
                all = true
                case_insensitive = true
                equals = true
                external = true
                http_header = true
                tm_name = "X-Forwared-For"
                present = true
                remote = true
                request = true
                values = [
                    ""
                ]
            }
            action {
                forward = true
                request = true
                reset = true
            }
        }
    ]
}

And this is crash.log:

crash.log

Please help to check, thanks!

Feature request: support Node.FQDN.Interval

I would like to create virtual servers using the vSphere provider, and then create a node and add it to an LTM pool by hostname.

Unfortunately, there is a race where the new server's hostname might not be resolvable when I create these objects. When this happens, I need to wait for 3600 seconds until the LTM retries DNS. The suggestion from F5 is to adjust the FQDN Interval field (see https://support.f5.com/csp/article/K47726919).

This field is available in the Go API (https://github.com/f5devcentral/go-bigip/blob/master/ltm.go), so would it be possible to make it available in the Terraform provider?

Thanks for your help.

Terraform BIG IP module out of sync

We have an issue where, if the pool and VIPs already exist on the F5 load balancer, Terraform always fails. I've tried refreshing, but the only way to reconcile this is to delete the pools or VIPs on the F5 and then execute the Terraform script.

Is there a better way to work around this?

Script

resource "bigip_ltm_pool" "pool" {

name = "${var.partition}${var.pool}"
load_balancing_mode = "round-robin"
monitors = ["tcp_half_open"]
allow_snat = "yes"
allow_nat = "yes"
}

resource "bigip_ltm_pool_attachment" "attach_node" {
count = "${length(var.node_names)}"

pool = "${var.partition}${var.pool}"

node = "${var.partition}${element(var.node_names, count.index)}:9443"

depends_on = ["bigip_ltm_pool.pool"]
}

resource "bigip_ltm_virtual_server" "https" {

name = "${var.partition}${var.vip_name}"
destination = "${var.virtual_ip}"
port = "${var.vip_port}"
pool = "${var.partition}${var.pool}"
profiles = "${var.profiles}"
source_address_translation = "${var.source_address_translation}"
translate_port = "${var.translate_port}"
vlans_enabled = "${var.vlans_enabled}"
vlans = ["${var.vlan}"]

depends_on = ["bigip_ltm_pool.pool"]
}

ERROR
5 error(s) occurred:

  • module.app01-c4.module.app01-c4.bigip_ltm_pool_attachment.attach_node[1]: 1 error(s) occurred:

  • bigip_ltm_pool_attachment.attach_node.1: Failure adding node /Common/AT4D-LVKC4N02-10.155.28.152:9443 to pool /Common/APP01-C4: 01020066:3: The requested Pool Member (/Common/APP01-C4 /Common/AT4D-LVKC4N02-10.155.28.152 9443) already exists in partition Common.

  • module.app01-c4.module.app01-c4.bigip_ltm_virtual_server.https: 1 error(s) occurred:

  • bigip_ltm_virtual_server.https: 01020066:3: The requested Virtual Server (/Common/APP01-C4.AT4D.ACME.COM) already exists in partition Common.

  • module.app01-c4.module.app01-c4.bigip_ltm_pool_attachment.attach_node[0]: 1 error(s) occurred:

  • bigip_ltm_pool_attachment.attach_node.0: Failure adding node /Common/AT4D-LVKC4N01-10.155.28.151:9443 to pool /Common/APP01-C4: 01020066:3: The requested Pool Member (/Common/APP01-C4 /Common/AT4D-LVKC4N01-10.155.28.151 9443) already exists in partition Common.

  • module.mqgateway-c1.module.mqgateway-c1.bigip_ltm_virtual_server.https: 1 error(s) occurred:

  • bigip_ltm_virtual_server.https: 01020066:3: The requested Virtual Server (/Common/AT4D-VPMQG-443) already exists in partition Common.

  • module.app01-c4.module.app01-c4.bigip_ltm_pool_attachment.attach_node[2]: 1 error(s) occurred:

  • bigip_ltm_pool_attachment.attach_node.2: Failure adding node /Common/AT4D-LVKC4N03-10.155.28.153:9443 to pool /Common/APP01-C4: 01020066:3: The requested Pool Member (/Common/APP01-C4 /Common/AT4D-LVKC4N03-10.155.28.153 9443) already exists in partition Common.

F5 Policy resource doesn't work

It seems nothing involving policies wants to function. The resource will not create policies in anything but draft mode, nor will virtual servers maintain references to an applied policy. I can create a config for a simple virtual with a pre-existing policy associated, and TF never sees it in the state. Each TF plan results in wanting to add the policy. Even after importing a VS into my state, it will not show that any policies are applied.

Has anyone been able to get this to work? I am on version 12.1 of the F5s.

Bug on 'Read FastHTTP profile'

Whenever I create a resource bigip_ltm_profile_fasthttp, it is correctly created on the F5 device, but the values in terraform show are incorrect, so the command terraform plan also always tries to update the resource.

Here is my test configuration:

fasthttp_profile = {
        name                         = "sjfasthttpprofile"
        defaults_from                = "fasthttp"
        idle_timeout                 = 300
        connpoolidle_timeoutoverride = 0
        connpool_maxreuse            = 2
        connpool_maxsize             = 2048
        connpool_minsize             = 0
        connpool_replenish           = "enabled"
        connpool_step                = 4
        forcehttp_10response         = "disabled"
        maxheader_size               = 32768
}

The creation goes smoothly and here is the result returned:

$ terraform apply -var-file=variables.tfvars
data.null_data_source.fasthttp_profiles: Refreshing state...

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + bigip_ltm_profile_fasthttp.fasthttp_profiles
      id:                           <computed>
      connpool_maxreuse:            "2"
      connpool_maxsize:             "2048"
      connpool_minsize:             "0"
      connpool_replenish:           "enabled"
      connpool_step:                "4"
      connpoolidle_timeoutoverride: "0"
      defaults_from:                "/Common/fasthttp"
      forcehttp_10response:         "disabled"
      idle_timeout:                 "300"
      maxheader_size:               "32768"
      name:                         "sjfasthttpprofile"


Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

bigip_ltm_profile_fasthttp.fasthttp_profiles: Creating...
  connpool_maxreuse:            "" => "2"
  connpool_maxsize:             "" => "2048"
  connpool_minsize:             "" => "0"
  connpool_replenish:           "" => "enabled"
  connpool_step:                "" => "4"
  connpoolidle_timeoutoverride: "" => "0"
  defaults_from:                "" => "/Common/fasthttp"
  forcehttp_10response:         "" => "disabled"
  idle_timeout:                 "" => "300"
  maxheader_size:               "" => "32768"
  name:                         "" => "sjfasthttpprofile"
bigip_ltm_profile_fasthttp.fasthttp_profiles: Creation complete after 1s (ID: sjfasthttpprofile)

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

But, the command terraform show returns this:

bigip_ltm_profile_fasthttp.fasthttp_profiles:
id = sjfasthttpprofile
connpool_maxreuse = 0
connpool_maxsize = 0
connpool_minsize = 0
connpool_replenish = 
connpool_step = 0
connpoolidle_timeoutoverride = 0
defaults_from = 
forcehttp_10response = 
idle_timeout = 0
maxheader_size = 0
name = sjfasthttpprofile

The command terraform plan without changing any of the variables returns this:

$ terraform plan -var-file=variables.tfvars
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.null_data_source.fasthttp_profiles: Refreshing state...
bigip_ltm_profile_fasthttp.fasthttp_profiles: Refreshing state... (ID: sjfasthttpprofile)

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ bigip_ltm_profile_fasthttp.fasthttp_profiles
      connpool_maxreuse:    "0" => "2"
      connpool_maxsize:     "0" => "2048"
      connpool_replenish:   "" => "enabled"
      connpool_step:        "0" => "4"
      defaults_from:        "" => "/Common/fasthttp"
      forcehttp_10response: "" => "disabled"
      idle_timeout:         "0" => "300"
      maxheader_size:       "0" => "32768"


Plan: 0 to add, 1 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Request: hard fork and cut releases of the go-bigip library

This is not directly related to this repo, but I can't open an issue in the go-bigip repo.

This provider is built on top of https://github.com/f5devcentral/go-bigip, which is a fork of https://github.com/scottdware/go-bigip (which is no longer in active development).

However the fork has seriously diverged over time from the original code.

go-bigip doesn't have any releases and it's not possible to open issues, so it's very tricky to track changes over time or identify issues introduced by changes that are pushed straight to master; it also makes vendoring more difficult.

Proposal:

  1. Hard fork https://github.com/f5devcentral/go-bigip and allow issues to be created in that repo
  2. Cut a release and vendor it here; future changes should go through a proper release cycle

/cc @scshitole

F5 ASM

Kindly let us know which field we have to edit in the policywaf.json file in order to create a new ASM policy for the existing VIP.

Error handling and log messages are not unified/aligned

I've been going over our error handling practices and some of the log messages and I think we need to have some alignment, especially with Terraform's best practices:

  1. Error checking should be done on aggregate types only (schema.TypeList, schema.TypeSet, and schema.TypeMap)
    reference: https://www.terraform.io/docs/extend/best-practices/detecting-drift.html#error-checking-aggregate-types

  2. In case of an error in a Set operation or any other action, we need to return fmt.Errorf() with a short description of the error that occurred, the affected resource, and the error content, for example:

return fmt.Errorf("error getting resource %s: %v", d.Id(), err)

Currently, we use log.Printf() and return err, which is not a very good way to handle this, since the log message is suppressed and the returned err lacks context.

  3. Debug messages we wish to add throughout the code should use log.Printf() (if formatting is required) or log.Println() (if there's no formatting) and be tagged as [DEBUG]; this should not be used in error contexts.
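To make the contrast concrete, here is a minimal sketch of the proposed pattern alongside a debug log line (the lookup function is a placeholder):

package bigip

import (
	"fmt"
	"log"

	"github.com/hashicorp/terraform/helper/schema"
)

func readWithContext(d *schema.ResourceData) error {
	// Debug logging: tagged [DEBUG], never used to report errors.
	log.Printf("[DEBUG] reading resource %s", d.Id())

	if err := fetchResource(d.Id()); err != nil {
		// Instead of log.Printf(...) followed by "return err", wrap the
		// error with a short description and the affected resource.
		return fmt.Errorf("error getting resource %s: %v", d.Id(), err)
	}
	return nil
}

// fetchResource is a placeholder for the actual API call.
func fetchResource(id string) error { return nil }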

I will try to work on a PR to align with these practices.

/cc @scshitole

Inconsistent state behavior with pool attachment resource

Hello
We are observing some inconsistent state behavior with the pool attachment resource.
Pools are successfully built and nodes attached.
However, any subsequent plan/apply with unchanged code results in attempts to re-attach nodes, even though they already exist.
We see similar behavior when attempting to remove a single node. Apply attempts to rebuild the remaining nodes, errors out because they already exist, and fails to remove the node.
Abbreviated relevant example output below:

  • provider.bigip: version = "~> 0.12"

$ terraform apply <- Run Apply. Adds 12 objects, including 4 pool members.

Plan: 12 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value: yes

bigip_ltm_pool_attachment.attach_node[3]: Creation complete after 1s (ID: /Common/wtc_retpoc_redir_tdg_dev_pool-xwtcvd1retpoca02.corp.dom:80)
bigip_ltm_pool_attachment.attach_node[2]: Creation complete after 2s (ID: /Common/wtc_retpoc_tdg_dev_pool-xwtcvd1retpoca02.corp.dom:443)
bigip_ltm_pool_attachment.attach_node[1]: Creation complete after 2s (ID: /Common/wtc_retpoc_redir_tdg_dev_pool-xwtcvd1retpoca01.corp.dom:80)
bigip_ltm_pool_attachment.attach_node[0]: Creation complete after 2s (ID: /Common/wtc_retpoc_tdg_dev_pool-xwtcvd1retpoca01.corp.dom:443)

Apply complete! Resources: 12 added, 0 changed, 0 destroyed.

$ terraform plan <- Immediately run Plan - code unchanged. 4 Pool members identified in state refresh. Yet it still wants to re-add them.

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

bigip_ltm_node.node[0]: Refreshing state... (ID: /Common/xwtcvd1retpoca01.corp.dom)
bigip_ltm_node.node[1]: Refreshing state... (ID: /Common/xwtcvd1retpoca02.corp.dom)
bigip_ltm_monitor.monitor[0]: Refreshing state... (ID: /Common/wtc_retpoc_tdg_dev_mon)
bigip_ltm_monitor.monitor[1]: Refreshing state... (ID: /Common/wtc_retpoc_redir_tdg_dev_mon)
bigip_ltm_pool.pool[0]: Refreshing state... (ID: /Common/wtc_retpoc_tdg_dev_pool)
bigip_ltm_pool.pool[1]: Refreshing state... (ID: /Common/wtc_retpoc_redir_tdg_dev_pool)
bigip_ltm_virtual_server.server[0]: Refreshing state... (ID: /Common/wtc_retpoc_tdg_dev_vs)
bigip_ltm_virtual_server.server[1]: Refreshing state... (ID: /Common/wtc_retpoc_redir_tdg_dev_vs)
bigip_ltm_pool_attachment.attach_node[2]: Refreshing state... (ID: /Common/wtc_retpoc_tdg_dev_pool-xwtcvd1retpoca02.corp.dom:443)
bigip_ltm_pool_attachment.attach_node[3]: Refreshing state... (ID: /Common/wtc_retpoc_redir_tdg_dev_pool-xwtcvd1retpoca02.corp.dom:80)
bigip_ltm_pool_attachment.attach_node[0]: Refreshing state... (ID: /Common/wtc_retpoc_tdg_dev_pool-xwtcvd1retpoca01.corp.dom:443)
bigip_ltm_pool_attachment.attach_node[1]: Refreshing state... (ID: /Common/wtc_retpoc_redir_tdg_dev_pool-xwtcvd1retpoca01.corp.dom:80)


An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:

  + create

Terraform will perform the following actions:

  + bigip_ltm_pool_attachment.attach_node[0]
    id:
    node: "xwtcvd1retpoca01.corp.dom:443"
    pool: "/Common/wtc_retpoc_tdg_dev_pool"

  + bigip_ltm_pool_attachment.attach_node[1]
    id:
    node: "xwtcvd1retpoca01.corp.dom:80"
    pool: "/Common/wtc_retpoc_redir_tdg_dev_pool"

  + bigip_ltm_pool_attachment.attach_node[2]
    id:
    node: "xwtcvd1retpoca02.corp.dom:443"
    pool: "/Common/wtc_retpoc_tdg_dev_pool"

  + bigip_ltm_pool_attachment.attach_node[3]
    id:
    node: "xwtcvd1retpoca02.corp.dom:80"
    pool: "/Common/wtc_retpoc_redir_tdg_dev_pool"

Plan: 4 to add, 0 to change, 0 to destroy

recreating a node doesn't recreate the pool attachment that depends on it

Starting from a basic pool:

resource "bigip_ltm_pool" "dgamba-test-pool" {
  name = "/Common/dgamba-test-pool"

  load_balancing_mode = "least-connections-member"

  monitors   = [""]
  allow_snat = "yes"
  allow_nat  = "yes"
}

resource "bigip_ltm_pool_attachment" "dgamba-test-pool--dgamba-test-node-1" {
  pool = "${bigip_ltm_pool.dgamba-test-pool.name}"
  node = "${bigip_ltm_node.dgamba-test-node-1.name}:8080"
}

resource "bigip_ltm_node" "dgamba-test-node-1" {
  name = "/Common/dgamba-test-node-1"

  address = "10.169.20.244"

  connection_limit = "0"
  dynamic_ratio    = "1"
  monitor          = "default"
  rate_limit       = "disabled"
  state            = "user-up"  # "user-down"

  fqdn {
    address_family = "ipv4"
  }
}

Then change the IP of the node or change from IP to DNS entry:

Terraform will perform the following actions:

-/+ bigip_ltm_node.dgamba-test-node-1 (new resource required)
      id:                    "/Common/dgamba-test-node-1" => <computed> (forces new resource)
      address:               "10.169.20.244" => "dgamba-test-node-1.example.com" (forces new resource)
      connection_limit:      "0" => "0"
      dynamic_ratio:         "1" => "1"
      fqdn.#:                "1" => "1"
      fqdn.0.address_family: "ipv4" => "ipv4"
      monitor:               "default" => "default"
      name:                  "/Common/dgamba-test-node-1" => "/Common/dgamba-test-node-1"
      rate_limit:            "disabled" => "disabled"
      state:                 "user-up" => "user-up"


Plan: 1 to add, 0 to change, 1 to destroy.

Only the node is changing; the pool attachment is not affected in the plan. Then apply:

bigip_ltm_node.dgamba-test-node-1: Destroying... (ID: /Common/dgamba-test-node-1)

Error: Error applying plan:

1 error(s) occurred:

* bigip_ltm_node.dgamba-test-node-1 (destroy): 1 error(s) occurred:

* bigip_ltm_node.dgamba-test-node-1: 01070110:3: Node address '/Common/dgamba-test-node-1' is referenced by a member of pool '/Common/dgamba-test-pool'.

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Trying to add depends_on on the pool attachment didn't work.
Changes on the node that require a destroy/create should taint the pool attachment.

Default values for optional attributes being ignored

Hi,

I just started testing the F5 BIG-IP Terraform provider and I have quickly run into what appears to be an issue, unless I am misinterpreting the behaviour.

I have used the following Terraform config to deploy a TCP profile:

resource "bigip_ltm_profile_tcp" "PFL_DP_EVAL_TCP" {
  name                  = "/Common/PFL_DP_EVAL_TCP"
  defaults_from         = "/Common/tcp-lan-optimized"
  partition             = "Common"
}

This applies correctly and sets up a TCP profile with the default values.

However when I run a Terraform plan without changing any of the configuration, I get the following:

Terraform will perform the following actions:

  ~ bigip_ltm_profile_tcp.PFL_DP_EVAL_TCP
      close_wait_timeout: "5" => "0"
      deferred_accept:    "disabled" => ""
      fast_open:          "disabled" => ""
      finwait_2timeout:   "300" => "0"
      finwait_timeout:    "5" => "0"
      idle_timeout:       "300" => "0"
      keepalive_interval: "1800" => "0"
Plan: 0 to add, 1 to change, 0 to destroy.

It appears to want to make an update to reset all the optional attributes to either 0 or "". I think it does not recognise that I am using the default values. I would expect that running 'terraform plan' would report 'no changes'. Is there an issue here?

Regards,

Dan

bigip_ltm_policy rule condition not working

I am trying to create a policy that forwards to a pool if the URI contains my domain.
This is my current policy resource:

resource "bigip_ltm_policy" "policy" {
  name = "${var.poolName}-policy"
  strategy = "first-match"
  requires = ["http"]
  published_copy = "Drafts/${var.poolName}-policy"
  controls = ["forwarding"]
  rule {
    name = "route-${var.poolName}"
    condition {
      http_uri = true
      host = true
      contains = true
      values = ["<mydomain>"]
    }
    action {
      forward = true
      pool = "${bigip_ltm_pool.pool.name}"
    }
 }
  depends_on = ["bigip_ltm_pool.pool"]
}

When I run terraform apply, it always shows this:

Error: 01071706:3: Policy '/Common/Drafts/<poolname>-policy', rule 'route-<poolname>'; missing operand.

  on main.tf line 121, in resource "bigip_ltm_policy" "policy":
 121: resource "bigip_ltm_policy" "policy" {

If I comment out the condition, it creates the policy successfully:

resource "bigip_ltm_policy" "policy" {
  name = "${var.poolName}-policy"
  strategy = "first-match"
  requires = ["http"]
  published_copy = "Drafts/${var.poolName}-policy"
  controls = ["forwarding"]
  rule {
    name = "route-${var.poolName}"
    #condition {
    #  http_uri = true
    #  host = true
    #  contains = true
    #  values = ["<mydomain>"]
    #}
    action {
      forward = true
      pool = "${bigip_ltm_pool.pool.name}"
    }
 }
  depends_on = ["bigip_ltm_pool.pool"]
}

More info

my current provider settings

provider "bigip" {
  version = "=0.12.3"
  address = "${var.url}"
  username = "${var.username}"
  password = "${var.password}"
}

@scshitole Stop pushing to master

@scshitole That's not the proper way to provide a stable release. It doesn't make any sense and looks very unprofessional. As an example:

hashicorp@d8db259

I'd encourage you to create feature branches that you merge into a development/integration branch when a PR is approved.
It would also be desirable to freeze that branch at some point and create release branches (v0.0.1, 2, 3, ...) that will only contain bug fixes for those versions.

X-F5-Auth-Token expired. Requesting that a property be added to the provider that can specify a token timeout value.

Resources that are provisioned prior to the BigIP resources take longer than 20 minutes, and by then the F5 token has already expired.

Error: Error applying plan:

2 error(s) occurred:

  • module.web-farm.bigip_ltm_node.app_nodes[1]: 1 error(s) occurred:

  • bigip_ltm_node.app_nodes.1: X-F5-Auth-Token has expired.

  • module.web-farm.bigip_ltm_node.app_nodes[0]: 1 error(s) occurred:

  • bigip_ltm_node.app_nodes.0: X-F5-Auth-Token has expired.

Unable to set a custom parent for a monitor

Hi,

It seems I am unable to set a custom parent for a monitor. I get the following error:

bigip_ltm_monitor.MON_HTTP_DP_EVAL_CHILD: parent must be one of /Common/http, /Common/https, /Common/icmp, /Common/gateway-icmp, /Common/tcp-half-open,  or /Common/tcp

Can this restriction be relaxed?

Regards,

Dan

[PROPOSAL] Switch to Go Modules

As part of the preparation for Terraform v0.12, we would like to migrate all providers to use Go Modules. We plan to continue checking dependencies into vendor/ to remain compatible with existing tooling/CI for a period of time, however go modules will be used for management. Go Modules is the official solution for the Go programming language. We understand some providers might not want this change yet, however we encourage providers to begin looking towards the switch as this is how we will be managing all Go projects in the future.

Would maintainers please react with 👍 for support, or 👎 if you wish to have this provider omitted from the first wave of pull requests. If your provider is in support, we would ask that you avoid merging any pull requests that mutate the dependencies while the Go Modules PR is open (in fact a total code freeze would be even more helpful), otherwise we will need to close that PR and re-run go mod init. Once merged, dependencies can be added or updated as follows:

$ GO111MODULE=on go get github.com/some/module@master
$ GO111MODULE=on go mod tidy
$ GO111MODULE=on go mod vendor

GO111MODULE=on might be unnecessary depending on your environment; this example will fetch a module @ master and record it in your project's go.mod and go.sum files. It's a good idea to tidy up afterward and then copy the dependencies into vendor/. To remove dependencies from your project, simply remove all usage from your codebase and run:

$ GO111MODULE=on go mod tidy
$ GO111MODULE=on go mod vendor

Thank you sincerely for all your time, contributions, and cooperation!

Tests failing for persistence_profile_cookie, persistence_profile_dstaddr & persistence_profile_srcaddr

Tests for the following resources are failing on BigIP 12.1.1 due to this commit --> hashicorp@344b1fe

  • bigip_ltm_persistence_profile_cookie
  • bigip_ltm_persistence_profile_srcaddr
  • bigip_ltm_persistence_profile_dstaddr
=== RUN   TestAccBigipLtmPersistenceProfileCookieCreate
--- FAIL: TestAccBigipLtmPersistenceProfileCookieCreate (3.99s)
        testing.go:527: Step 0 error: After applying this step, the plan was not empty:

                DIFF:

                UPDATE: bigip_ltm_persistence_profile_cookie.test_ppcookie
                  app_service: "" => "none"

                STATE:

                bigip_ltm_persistence_profile_cookie.test_ppcookie:
                  ID = /Common/test-ppcookie
                  provider = provider.bigip
                  always_send = enabled
                  app_service =
                  cookie_encryption = disabled
                  cookie_encryption_passphrase =
                  cookie_name = ham
                  defaults_from = /Common/cookie
                  expiration = 1:0:0
                  hash_length = 0
                  hash_offset = 0
                  httponly = disabled
                  match_across_pools = enabled
                  match_across_services = enabled
                  match_across_virtuals = enabled
                  mirror = disabled
                  name = /Common/test-ppcookie
                  override_conn_limit = enabled
                  timeout = 3600
=== RUN   TestAccBigipLtmPersistenceProfileCookieImport
--- FAIL: TestAccBigipLtmPersistenceProfileCookieImport (4.10s)
        testing.go:527: Step 0 error: After applying this step, the plan was not empty:

                DIFF:

                UPDATE: bigip_ltm_persistence_profile_cookie.test_ppcookie
                  app_service: "" => "none"

                STATE:

                bigip_ltm_persistence_profile_cookie.test_ppcookie:
                  ID = /Common/test-ppcookie
                  provider = provider.bigip
                  always_send = enabled
                  app_service =
                  cookie_encryption = disabled
                  cookie_encryption_passphrase =
                  cookie_name = ham
                  defaults_from = /Common/cookie
                  expiration = 1:0:0
                  hash_length = 0
                  hash_offset = 0
                  httponly = disabled
                  match_across_pools = enabled
                  match_across_services = enabled
                  match_across_virtuals = enabled
                  mirror = disabled
                  name = /Common/test-ppcookie
                  override_conn_limit = enabled
                  timeout = 3600
=== RUN   TestAccBigipLtmPersistenceProfileDstAddrCreate
--- FAIL: TestAccBigipLtmPersistenceProfileDstAddrCreate (4.62s)
        testing.go:527: Step 0 error: After applying this step, the plan was not empty:

                DIFF:

                UPDATE: bigip_ltm_persistence_profile_dstaddr.test_ppdstaddr
                  app_service: "" => "none"

                STATE:

                bigip_ltm_persistence_profile_dstaddr.test_ppdstaddr:
                  ID = /Common/test-ppdstaddr
                  provider = provider.bigip
                  app_service =
                  defaults_from = /Common/dest_addr
                  hash_algorithm = carp
                  mask = 255.255.255.255
                  match_across_pools = enabled
                  match_across_services = enabled
                  match_across_virtuals = enabled
                  mirror = enabled
                  name = /Common/test-ppdstaddr
                  override_conn_limit = enabled
                  timeout = 3600
=== RUN   TestAccBigipLtmPersistenceProfileDstAddrImport
--- FAIL: TestAccBigipLtmPersistenceProfileDstAddrImport (4.02s)
        testing.go:527: Step 0 error: After applying this step, the plan was not empty:

                DIFF:

                UPDATE: bigip_ltm_persistence_profile_dstaddr.test_ppdstaddr
                  app_service: "" => "none"

                STATE:

                bigip_ltm_persistence_profile_dstaddr.test_ppdstaddr:
                  ID = /Common/test-ppdstaddr
                  provider = provider.bigip
                  app_service =
                  defaults_from = /Common/dest_addr
                  hash_algorithm = carp
                  mask = 255.255.255.255
                  match_across_pools = enabled
                  match_across_services = enabled
                  match_across_virtuals = enabled
                  mirror = enabled
                  name = /Common/test-ppdstaddr
                  override_conn_limit = enabled
                  timeout = 3600
=== RUN   TestAccBigipLtmPersistenceProfileSrcAddrCreate
--- FAIL: TestAccBigipLtmPersistenceProfileSrcAddrCreate (4.13s)
        testing.go:527: Step 0 error: After applying this step, the plan was not empty:

                DIFF:

                UPDATE: bigip_ltm_persistence_profile_srcaddr.test_ppsrcaddr
                  app_service: "" => "none"

                STATE:

                bigip_ltm_persistence_profile_srcaddr.test_ppsrcaddr:
                  ID = /Common/test-ppsrcaddr
                  provider = provider.bigip
                  app_service =
                  defaults_from = /Common/source_addr
                  hash_algorithm = carp
                  map_proxies = enabled
                  mask = 255.255.255.255
                  match_across_pools = enabled
                  match_across_services = enabled
                  match_across_virtuals = enabled
                  mirror = enabled
                  name = /Common/test-ppsrcaddr
                  override_conn_limit = enabled
                  timeout = 3600
=== RUN   TestAccBigipLtmPersistenceProfileSrcAddrImport
--- FAIL: TestAccBigipLtmPersistenceProfileSrcAddrImport (4.17s)
        testing.go:527: Step 0 error: After applying this step, the plan was not empty:

                DIFF:

                UPDATE: bigip_ltm_persistence_profile_srcaddr.test_ppsrcaddr
                  app_service: "" => "none"

                STATE:

                bigip_ltm_persistence_profile_srcaddr.test_ppsrcaddr:
                  ID = /Common/test-ppsrcaddr
                  provider = provider.bigip
                  app_service =
                  defaults_from = /Common/source_addr
                  hash_algorithm = carp
                  map_proxies = enabled
                  mask = 255.255.255.255
                  match_across_pools = enabled
                  match_across_services = enabled
                  match_across_virtuals = enabled
                  mirror = enabled
                  name = /Common/test-ppsrcaddr
                  override_conn_limit = enabled
                  timeout = 3600
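Every diff above follows the same pattern: the configured or default value is "none" while the BIG-IP hands the attribute back empty, so each plan wants an update. A hedged sketch of one common way SDK-based providers suppress that kind of cosmetic difference, via a DiffSuppressFunc on the attribute named in the diffs (an illustration, not necessarily the fix that was applied):

package bigip

import "github.com/hashicorp/terraform/helper/schema"

// Treat "" and "none" as equivalent so a refreshed empty value does not
// produce a perpetual diff against the configured/default "none".
var appServiceSchema = map[string]*schema.Schema{
	"app_service": {
		Type:     schema.TypeString,
		Optional: true,
		Default:  "none",
		DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool {
			return (old == "" && new == "none") || (old == "none" && new == "")
		},
	},
}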

bigip_ltm_policy resource seems broken

can also reference https://github.com/f5devcentral/terraform-provider-bigip/issues/112

Using a very slightly modified version of the example code for the provider I get the following error when I try to apply.
bigip_ltm_policy.test-policy: 0107186c:3: Policy '/Common/Drafts/my_policy', rule 'rule6'; missing or invalid target.

Here's the Terraform policy I'm trying:

name = "my_policy"
strategy = "first-match"
 requires = ["http"]
published_copy = "Drafts/my_policy"
 controls = ["forwarding"]
 rule  {
 name = "rule6"

  action = {
    tm_name = "20"
    redirect = true
     location = "https://www.auctionsniper.com/help/support/"
  }
 }
# depends_on = ["bigip_ltm_pool.mypool"]
}```
