
terraform-provider-tanzu-mission-control's Introduction

Terraform VMware Tanzu Mission Control Provider

This is the repository for the Terraform Tanzu Mission Control Provider, which can be used with VMware Tanzu Mission Control.

For general information about Terraform, visit the official website and the GitHub project page.

Using the Provider

The latest version of this provider requires Terraform v0.12 or higher to run.

Note that you need to run terraform init to fetch the provider before deploying.

Controlling the provider version

Note that you can also control the provider version. This requires a required_providers block inside the terraform block of your configuration if you have not added one already.

The syntax is as follows:

terraform {
  required_providers {
    tanzu-mission-control = {
      source = "vmware/tanzu-mission-control"
      version = "1.0.0"
    }
  }
}

provider "tanzu-mission-control" {
  # Configuration options
}

The example above pins an exact version. To lock to a release series instead, use a pessimistic constraint such as ~> 1.0, which allows any release in the 1.x series at or after 1.0.0. Read more on provider version constraints in the Terraform documentation.
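A minimal example of such a constraint:

terraform {
  required_providers {
    tanzu-mission-control = {
      source  = "vmware/tanzu-mission-control"
      version = "~> 1.0" # any 1.x release, 1.0.0 or newer
    }
  }
}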

Manual Installation

Cloning the Project

First, you will want to clone the repository to github.com/vmware/terraform-provider-tanzu-mission-control:

mkdir -p $GOPATH/src/github.com/vmware
cd $GOPATH/src/github.com/vmware
git clone git@github.com:vmware/terraform-provider-tanzu-mission-control.git

Building and Installing the Provider

Go 1.14 or later is recommended. After the clone has completed, enter the provider directory and build the provider.

cd terraform-provider-tanzu-mission-control
make

After the build is complete, copy the provider executable terraform-provider-tanzu into the location specified in your provider installation configuration. Make sure to delete any provider lock files (.terraform.lock.hcl) left in your working directory from prior provider usage, then run terraform init. For development, consider using a dev overrides configuration; note that terraform init should not be used when dev overrides are in effect.
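A minimal dev overrides sketch for the Terraform CLI configuration (the file location and install path below are assumptions; adjust them to your environment):

# ~/.terraformrc (assumed location; on Windows this is terraform.rc in %APPDATA%)
provider_installation {
  dev_overrides {
    # Assumed path: point this at the directory containing the locally built binary.
    "vmware/tanzu-mission-control" = "/path/to/your/gopath/bin"
  }

  # Use the registry as usual for all other providers.
  direct {}
}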

Developing the Provider

NOTE: Before you start work on a feature, please make sure to check the issue tracker and existing pull requests to ensure that work is not being duplicated. For further clarification, you can also ask in a new issue.

If you wish to work on the provider, you'll first need Go installed on your machine (version 1.14+ is recommended). You'll also need to correctly set up a GOPATH and add $GOPATH/bin to your $PATH.

See Manual Installation for details on building the provider.

Testing the Provider

Flattening and Helper Tests

Run the command:

$ make test

Acceptance Tests

NOTE: This block is applicable only for Tanzu Mission Control SaaS offering.

Configuring Environment Variables:

Set the environment variables in your IDE configurations or Terminal. Environment variables that are required to be set universally are TMC_ENDPOINT, VMW_CLOUD_ENDPOINT and VMW_CLOUD_API_TOKEN.

Example:

$ export TMC_ENDPOINT="my-org.tmc.cloud.vmware.com"
$ export VMW_CLOUD_ENDPOINT="console.cloud.vmware.com"
$ export VMW_CLOUD_API_TOKEN="<your-api-token>"

Environment variables specific to particular resources:

  • Attach Cluster with Kubeconfig and Namespace Resource - KUBECONFIG
  • Tanzu Kubernetes Grid Service for vSphere workload cluster - MANAGEMENT_CLUSTER, PROVISIONER_NAME, VERSION and STORAGE_CLASS.
  • Tanzu Kubernetes Grid workload cluster - MANAGEMENT_CLUSTER and CONTROL_PLANE_ENDPOINT.

Running the Test:

Run the command:

$ make acc-test

To run the acceptance tests for a specific resource, make use of build tags. The build tag name matches the corresponding resource name.

Running the acceptance tests without explicitly setting BUILD_TAGS runs all the acceptance tests by default. To run the acceptance tests for a specific resource, set BUILD_TAGS to the corresponding resource name.

For instance, to run the acceptance tests for the cluster group and namespace resources:

$ export BUILD_TAGS="clustergroup namespace"
$ make acc-test

Test provider changes locally

Build the provider with your changes using the unique path provided in the Makefile, and use that same path in the source address of your configuration when testing the local changes.

terraform {
  required_providers {
    tanzu-mission-control = {
      source = "vmware/dev/tanzu-mission-control"
    }
  }
}

provider "tanzu-mission-control" {
  # Configuration options
}

Debugging Provider

Set the environment variable TF_LOG to one of the log levels TRACE, DEBUG, INFO, WARN, or ERROR to capture the logs. See the Terraform plugin debugging documentation linked at the end of this section for details.

Set the environment variable TMC_MODE to DEV to capture more granular logs.

Connect the VSCode debugger

  1. Create ./.vscode/launch.json
    {
          "version": "0.2.0",
          "configurations": [
              {
                  "name": "Debug Terraform Provider",
                  "type": "go",
                  "request": "launch",
                  "mode": "debug",
                  // this assumes your workspace is the root of the repo
                  "program": "${workspaceFolder}",
                  "env": {},
                  "args": [
                      "-debug",
                  ]
              }
          ]
      }
  2. Click the "Run and Debug" option in VSCode. This opens a panel on the left side of the editor with a list of configurations for debugging different languages and tools. Find the one named "Debug Terraform Provider" and click it. This launches the debugger and attaches it to your provider process; you can now set breakpoints, inspect variables, and step through your code as usual.
  3. Check the "DEBUG CONSOLE" tab; there you will find the value of TF_REATTACH_PROVIDERS, a special environment variable that tells Terraform how to connect to the provider's plugin process. Set this variable in your shell before running any Terraform commands, for example with the export command shown below:
    # Set TF_REATTACH_PROVIDERS as environment variable.
    export TF_REATTACH_PROVIDERS='{"vmware/dev/tanzu-mission-control":{"Protocol":"grpc","ProtocolVersion":5,"Pid":1338,"Test":true,"Addr":{"Network":"unix","String":"/var/folders/r9/h_0mgps9053g3tft7t8xh6rh0000gq/T/plugin2483048401"}}}'
    
    # Run TF command
    terraform plan

https://developer.hashicorp.com/terraform/plugin/debugging#visual-studio-code

Provider Documentation

Tanzu Mission Control Terraform provider documentation is autogenerated using tfplugindocs.

Use the tfplugindocs tool to generate documentation for your provider in the format required by the Terraform Registry. The tool reads the descriptions and schema of each resource and data source in your provider and generates the relevant Markdown files for you.

Using Tanzu Mission Control Provider

Refer to the examples folder for CRUD operations on the various resources supported by the Tanzu Mission Control provider.

Troubleshooting

Execution of a different version of the provider

Terraform always looks for the latest version of the provider and uses it, even if you have just built an older version. Terraform caches all known builds/versions in ~/.terraform.d.

Delete the ~/.terraform.d/plugins/vmware folder to remove all cached versions of the plugin.

Support

The Tanzu Mission Control Terraform provider is VMware supported as well as community supported. For bugs and feature requests, please open a GitHub issue and label it appropriately, or contact VMware support.

License

Copyright © 2015-2022 VMware, Inc. All Rights Reserved.

The Tanzu Mission Control Terraform provider is available under the MPL 2.0 license.


terraform-provider-tanzu-mission-control's Issues

Unable to update or destroy a Terraform configuration when utilizing execution_cmd.

Describe the bug

 Error: Missing map element
│
│   on main.tf line 4, in module "ubuntu_node":
│    4:   execution_cmd=tanzu-mission-control_cluster.attach_cluster_without_apply.status.execution_cmd
│     ├────────────────
│     │ tanzu-mission-control_cluster.attach_cluster_without_apply.status is map of string with 9 elements
│
│ This map does not have an element with the key "execution_cmd".
This is the configuration that's producing the error:
module "ubuntu_node" {
  source = "../ubuntu_node"
  hostname="tap-run-1"
  execution_cmd=tanzu-mission-control_cluster.attach_cluster_without_apply.status.execution_cmd
}
and in the module
  provisioner "remote-exec" {
      inline = [
        "/bin/bash -c \"timeout 300 sed '/finished-user-data/q' <(tail -f /var/log/cloud-init-output.log)\"",
        "export KUBECONFIG=/home/ubuntu/.kube/config",
        "${var.execution_cmd}"
      ]
}

Reproduction steps

  1. Create a terraform script that provisions an attached cluster
  2. output the execution_cmd to the remote exec provisioner.
  3. Works first time,
  4. Terraform UPDATE or DELETE does not work.
    ...

Expected behavior

Rerunning Terraform should not result in an error due to the missing execution_cmd.

The status map returned when looking up the resource does not include this value after the cluster is attached.

The workaround for me was to use

lookup(tanzu-mission-control_cluster.attach_cluster_without_apply.status,"execution_cmd","unkown")

If this is working as designed, please provide a working example of attaching a cluster after provisioning it.
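For context, a minimal sketch of that workaround applied to the module call from the report above (resource and module names are taken from the report):

module "ubuntu_node" {
  source   = "../ubuntu_node"
  hostname = "tap-run-1"

  # Fall back to a placeholder when the status map no longer contains execution_cmd.
  execution_cmd = lookup(
    tanzu-mission-control_cluster.attach_cluster_without_apply.status,
    "execution_cmd",
    "echo 'execution_cmd not available'"
  )
}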

Additional context

No response

Unable to upgrade EKS cluster

Describe the bug

I created an EKS cluster on Kubernetes 1.23 with the TMC Terraform provider. I'm attempting to change the kubernetes_version from 1.23 to 1.24 in my Terraform config and applying it. terraform apply seems to imply that a change will be made, but nothing happens, and when I check the state, it still reports the old version. Using TMC Terraform provider 1.1.7.

Reproduction steps

  1. Create an EKS cluster on Kubernetes 1.23 using the TMC terraform provider
  2. Change kubernetes_version from 1.23 to 1.24
  3. terraform apply
# terraform apply
tanzu-mission-control_cluster_group.sandbox: Refreshing state... [id=cg:01H12NCXQC92PFPD572RYXY4PZ]
data.tanzu-mission-control_ekscluster.tf_eks_cluster: Reading...
tanzu-mission-control_ekscluster.tf_eks_cluster: Refreshing state... [id=c:01H154EQFQFABTYF7HYNPQXM88]
data.tanzu-mission-control_ekscluster.tf_eks_cluster: Read complete after 1s [id=c:01H154EQFQFABTYF7HYNPQXM88]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # tanzu-mission-control_ekscluster.tf_eks_cluster will be updated in-place
  ~ resource "tanzu-mission-control_ekscluster" "tf_eks_cluster" {
        id                 = "c:01H154EQFQFABTYF7HYNPQXM88"
        name               = "eks-sandbox"
        # (4 unchanged attributes hidden)

      ~ spec {
            # (1 unchanged attribute hidden)

          ~ config {
              ~ kubernetes_version = "1.23" -> "1.24"
                tags               = {
                    "owner"                               = "xxx"
                    "tmc.cloud.vmware.com/tmc-creator"    = "xxx"
                    "tmc.cloud.vmware.com/tmc-credential" = "xxx
                    "tmc.cloud.vmware.com/tmc-managed"    = "true"
                    "tmc.cloud.vmware.com/tmc-org"        = "xxx"
                }
                # (1 unchanged attribute hidden)

                # (3 unchanged blocks hidden)
            }

            # (2 unchanged blocks hidden)
        }

        # (1 unchanged block hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

tanzu-mission-control_ekscluster.tf_eks_cluster: Modifying... [id=c:01H154EQFQFABTYF7HYNPQXM88]
tanzu-mission-control_ekscluster.tf_eks_cluster: Modifications complete after 2s [id=c:01H154EQFQFABTYF7HYNPQXM88]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

But no upgrade ever occurs:

# terraform state show tanzu-mission-control_ekscluster.tf_eks_cluster | grep kubernetes_version
            kubernetes_version = "1.23"

Expected behavior

Cluster upgrade to Kubernetes 1.24 occurs.

Additional context

No response

Cluster creation does not wait for TMC components to be healthy

Describe the bug

During cluster creation the terraform provider will return successfully once the TMC cluster is shown as READY. The issue is that it should wait for the cluster to become HEALTHY. Not waiting for the cluster to be healthy means that interacting with the cluster after it is created does not work since we cannot pull a kubeconfig until pinniped is up. This then requires writing a bash wait script to actually wait for the cluster to be ready.

Reproduction steps

  1. Create an EKS or AKS cluster.
  2. Try to pull the kubeconfig after creation completes.
  3. This will error out.

Expected behavior

The provider should wait until the cluster is healthy so that it does not error out when trying to access the cluster afterwards.

Additional context

No response

terraform-provider-tanzu-mission-control_v1.1.2.exe plugin crashed

Describe the bug

│ The plugin encountered an error, and failed to respond to the
│ plugin.(*GRPCProvider).ReadDataSource call. The plugin logs may contain more details.

Stack trace from the terraform-provider-tanzu-mission-control_v1.1.2.exe plugin:

panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xc0000005 code=0x0 addr=0x10 pc=0x222b450]

goroutine 51 [running]:
github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/cluster.dataSourceTMCClusterRead({0x27eddb8, 0xc0001925a0}, 0xc00043
5c80, {0x24562e0?, 0xc0001d31c0?})
github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/cluster/data_source_cluster.go:136 +0xdb0
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).read(0xc000203ce0, {0x27eddf0, 0xc0005fe450}, 0xd?, {0x24562e0, 0xc0001d31c
0})
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:359 +0x12e
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).ReadDataApply(0xc000203ce0, {0x27eddf0, 0xc0005fe450}, 0xc000435c00, {0x245
62e0, 0xc0001d31c0})
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:578 +0x145
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ReadDataSource(0xc0002cf710, {0x27edd48?, 0xc000363740?}, 0xc0002
ad780)
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:1179 +0x357
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ReadDataSource(0xc000269360, {0x27eddf0?, 0xc000483980?}, 0xc000080f00)
github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:657 +0x409
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ReadDataSource_Handler({0x24b2520?, 0xc000269360}, {0x27eddf0, 0xc
000483980}, 0xc00007be00, 0x0)
github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:421 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0001b5a40, {0x27f14c0, 0xc00008a1a0}, 0xc0005f4ea0, 0xc0003c9c80, 0x333c0b0, 0x0)
google.golang.org/[email protected]/server.go:1282 +0xccf
google.golang.org/grpc.(*Server).handleStream(0xc0001b5a40, {0x27f14c0, 0xc00008a1a0}, 0xc0005f4ea0, 0x0)
google.golang.org/[email protected]/server.go:1619 +0xa1b
google.golang.org/grpc.(*Server).serveStreams.func1.2()
google.golang.org/[email protected]/server.go:921 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
google.golang.org/[email protected]/server.go:919 +0x28a

Error: The terraform-provider-tanzu-mission-control_v1.1.2.exe plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

Reproduction steps

  1. terraform plan

Expected behavior

..

Additional context

No response

Unable to use K8s Service account as role binding subject

Describe the bug

When creating an IAM policy, the K8S_SERVICEACCOUNT type is not supported. The code is currently set up to use SERVICEACCOUNT as a type, but the API expects K8S_SERVICEACCOUNT.

Here the code does not allow the K8S_SERVICEACCOUNT type. Additionally, the enum is incorrect, and there appears to be a bug that causes the _ in the type to be removed because _ is used as a delimiter.

After updating the code to allow K8S_SERVICEACCOUNT as a type, the initial create works. However, on patch this error occurs due to the delimiter issue:

Error: unable to update Role Binding for workspace: PATCH request failed with status : 400 Bad Request, response: {"error":"unknown value \"\\\"K8S\\\"\" for enum vmware.tanzu.core.v1alpha1.policy.Subject.Kind","code":3,"message":"unknown value \"\\\"K8S\\\"\" for enum vmware.tanzu.core.v1alpha1.policy.Subject.Kind"}
│ 
│   with tanzu-mission-control_iam_policy.iris-blue-cluster-admin-policy,
│   on iris-blue.tf line 18, in resource "tanzu-mission-control_iam_policy" "iris-blue-cluster-admin-policy":
│   18: resource "tanzu-mission-control_iam_policy" "iris-blue-cluster-admin-policy" {

Reproduction steps

attempt to use a K8S_SERVICEACCOUNT with the IAM resource


resource "tanzu-mission-control_iam_policy" "iris-blue-cluster-admin-policy" {
  scope {
    workspace {
      name = tanzu-mission-control_workspace.create_workspace.name
    }
  }

  role_bindings {
    role = "cluster-admin-equiv"
    subjects {
      name = "iris-blue:tenant-flux-reconcile"
      kind = "K8S_SERVICEACCOUNT"
    }
  }
}

Expected behavior

it should work.

Additional context

No response

Incorrectly identifying cluster state change

Describe the bug

I created a cluster where I did not specify the following within my node pool.

        cloud_label = {}
        node_label  = {}

The cluster created fine; however, each time the cluster state is refreshed for successive applies, Terraform keeps saying that there is a difference between my state and the remote implementation because of this.

I don't think this should happen.
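An untested workaround sketch, inferred from the report: declare the labels explicitly as empty maps in the node pool spec (the surrounding block mirrors the node_pools spec used in the TKGS example later on this page):

node_pools {
  spec {
    worker_node_count = "1"

    # Untested: declaring these explicitly may keep the plan clean.
    cloud_label = {}
    node_label  = {}
  }
}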

Reproduction steps

1. Create cluster without the labels and apply
2. Run apply again, and you will see the diff

Expected behavior

I did not change the definition, so I did not expect the state change.

Additional context

No response

Wrong import path found in multiple packages

Describe the bug

vmware-tanzu is the wrong import path, and it is used extensively today in nearly all the packages. We need to update it.

The module itself has the wrong import path -

module github.com/vmware-tanzu/terraform-provider-tanzu-mission-control

Reproduction steps

Search for `github.com/vmware-tanzu/` in the project.

Expected behavior

github.com/vmware-tanzu/ should be updated to point to the correct path.

Additional context

Found while committing changes for a new feature - security-policy

Ref: #53 (comment)

Adding volumes to nodepools on TKGS cluster results in 0 capacity

Describe the bug

I'm attempting to add a volume to my default node pool. See capacity = 50 below. However, the TKC that is created in TKGS has a capacity value of 0, which the WCPMachine rejects.

TKC

    ...
    nodePools:
    - name: default-nodepool
      replicas: 1
      storageClass: vc01cl01-t0compute
      tkr:
        reference:
          name: v1.21.6---vmware.1-tkg.1.b3d708a
      vmClass: custom-worker-standard
      volumes:
      - capacity:
          storage: "0" # <= SEE BAD VALUE
        mountPath: /var/lib/containerd
        name: containerd
        storageClass: vc01cl01-t0compute
     ...

Terraform Resource

resource "tanzu-mission-control_cluster" "sandbox_1" {

  # only provision if sandbox_1 is true in tfvars
  count = var.sandbox_1 ? 1 : 0

  management_cluster_name = var.management_cluster
  provisioner_name        = "sandbox-vns"
  name                    = "sandbox-1"

  meta {
    labels = { "infra" : var.infra_name }
  }

  spec {
    cluster_group = tanzu-mission-control_cluster_group.sandbox.name
    tkg_service_vsphere {
      settings {
        network {
          pods {
            cidr_blocks = [
              "172.20.0.0/16", # pods cidr block by default has the value `172.20.0.0/16`
            ]
          }
          services {
            cidr_blocks = [
              "10.96.0.0/16", # services cidr block by default has the value `10.96.0.0/16`
            ]
          }
        }
        storage {
          classes = [
            var.storage_class,
          ]
          default_class = var.storage_class
        }
      }

      distribution {
        version = var.k8s_version
      }

      topology {
        control_plane {
          class             = "custom-cp-guaranteed"
          storage_class     = var.storage_class
          high_availability = false
        }
        node_pools {
          spec {
            worker_node_count = "1"

            cloud_label = {}
            node_label  = {}

            tkg_service_vsphere {
              class         = "custom-worker-standard"
              storage_class = var.storage_class
              volumes {
                capacity          = 50
                mount_path        = "/var/lib/containerd"
                name              = "containerd"
                pvc_storage_class = var.storage_class
              }
            }
          }
          info {
            name = "default-nodepool"
          }
        }
      }
    }
  }

  ready_wait_timeout = "15m"

}

Reproduction steps

Simply terraform apply the resource above

Expected behavior

The TKC should have the capacity of 50 applied to the volume, not 0

Additional context

No response

Data inventory fields are mismatched in policy template

Describe the bug

The version and kind fields are not mapped correctly within the custom policy template resource. This causes them to be submitted to the API incorrectly:

VersionKey: tfModelConverterHelper.BuildDefaultModelPath("spec", dataInventoryArrayField, "kind"),
KindKey: tfModelConverterHelper.BuildDefaultModelPath("spec", dataInventoryArrayField, "version"),

Reproduction steps

  1. create a tanzu-mission-control_custom_policy_template
  2. add data inventory
  3. Check the template in TMC; you will see that the fields are flipped.

Expected behavior

Kind and version are set correctly.

Additional context

No response

Ability to wait until the cluster is healthy

Is your feature request related to a problem? Please describe.

Not related to a problem.

Describe the solution you'd like

I applied a Terraform module that had a cluster defined, and it ran through very quickly. I'd like a way to wait until the creation of the cluster has either failed or succeeded.
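Possibly related: the cluster resource examples elsewhere on this page set a ready_wait_timeout attribute, which controls how long the provider waits for the cluster to become ready, e.g.:

resource "tanzu-mission-control_cluster" "example" {
  # ... cluster configuration ...

  ready_wait_timeout = "15m" # wait up to 15 minutes for the cluster to become ready
}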

Describe alternatives you've considered

No response

Additional context

No response

Upgrading workload cluster failures

Describe the bug

Upgrading workload clusters from v1.23.8+vmware.2-tkg.2-zshippable to v1.24.9+vmware.1-tkg.4 by adjusting spec.version does not function properly.

Terraform will perform the following actions:

# tanzu-mission-control_tanzu_kubernetes_cluster.tkgs_cluster will be updated in-place
 ~ resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tkgs_cluster" {
       id                      = "xxxx"
       name                    = "xxxx"
       # (2 unchanged attributes hidden)

     ~ spec {
           # (3 unchanged attributes hidden)

         ~ topology {
             ~ version           = "v1.23.8+vmware.2-tkg.2-zshippable" -> "v1.24.9+vmware.1-tkg.4"
               # (2 unchanged attributes hidden)

               # (5 unchanged blocks hidden)
           }
       }

       # (2 unchanged blocks hidden)
   }
    │ Error: Couldn't read TKG cluster.
│ Management Cluster Name: xxx, Provisioner: xxxx, Cluster Name: xxxx: Timeout exceeded while waiting for the cluster to be ready. Cluster Status: UPGRADING, Cluster Health: HEALTHY: context deadline exceeded
│ 
│   with tanzu-mission-control_tanzu_kubernetes_cluster.tkgs_cluster,
│   on main.tf line 98, in resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tkgs_cluster":
│   98: resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tkgs_cluster" {
│ 
╵

Relevant main.tf

resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tkgs_cluster" {
  management_cluster_name = var.management_cluster
  provisioner_name = var.provisioner
  name = var.name

  spec {
    cluster_group_name = "${var.location}"

    topology {
      version = replace("${var.k8s_version}", "&#43;", "+")
      cluster_class = "tanzukubernetescluster"
      cluster_variables = jsonencode(local.tkgs_cluster_variables)

      control_plane {
        replicas = 3

        os_image {
          name = "${var.os_image}"
          version = "3"
          arch = "amd64"
        }
      }
      
      nodepool {
        name = "default-nodepool-a"
        description = "tkgs workload nodepool"
      
        spec {
          worker_class = "node-pool"
          replicas = 3
          overrides = jsonencode(local.tkgs_cluster_variables)
          failure_domain = "${var.location}-zone-a"
      
          os_image {
            name = "${var.os_image}"
            version = "3"
            arch = "amd64"
          }
        }
      }

      nodepool {
        name = "default-nodepool-b"
        description = "tkgs workload nodepool"
      
        spec {
          worker_class = "node-pool"
          replicas = 3
          overrides = jsonencode(local.tkgs_cluster_variables)
          failure_domain = "${var.location}-zone-b"
      
          os_image {
            name = "${var.os_image}"
            version = "3"
            arch    = "amd64"
          }
        }
      }
      
      nodepool {
        name        = "default-nodepool-c"
        description = "tkgs workload nodepool"
      
        spec {
          worker_class   = "node-pool"
          replicas       = 3
          overrides      = jsonencode(local.tkgs_cluster_variables)
          failure_domain = "${var.location}-zone-c"
      
          os_image {
            name    = "${var.os_image}"
            version = "3"
            arch    = "amd64"
          }
        }
      }   

      network {
        pod_cidr_blocks = [
          "172.20.0.0/16",
        ]
        service_cidr_blocks = [
          "10.96.0.0/16",
        ]
        service_domain = "cluster.local"
      }
    }
  }

  timeout_policy {
    timeout             = 60
    wait_for_kubeconfig = true
    fail_on_timeout     = true
  }
}

vars:

kind: vmware/tanzu
scope: cluster
name: xxxx
location: xxxx
management_cluster: xxxx
provisioner: xxxx
k8s_version: "v1.24.9+vmware.1-tkg.4"
os_image: photon
cp_nodecount: 3
cp_size: medium
nodepool_nodecount: 2
nodepool_size: medium
nodepool_disksize: 200G

Reproduction steps

  1. Deploy new 1.23 cluster
  2. Upgrade to 1.24

Expected behavior

Upgrade to complete successfully.

Additional context

No response

TKG(m and s) allow the ability to taint nodes via cluster and node pool resources

Is your feature request related to a problem? Please describe.

This request is to enhance the cluster and node pool resources to allow applying taints to nodes within a specified node pool.

For example with TKGs, allow the ability to pass in a taint spec similar to the TKC spec.
apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
spec.topology.nodePools.taints

Describe the solution you'd like

In the TKGS case, allow the taint to be defined both in the cluster resource (for node pools created along with the cluster) and in the node pool resource (for new node pools).

Example TKGs TKC spec to mimic within the cluster and node pool resources.

taints:
  # key is the taint key to be applied to a node
  - key: string
    # value is the taint value corresponding to the key
    value: string
    # effect is the effect of the taint on pods that do not tolerate the taint;
    # valid effects are `NoSchedule`, `PreferNoSchedule`, `NoExecute`
    effect: string
    # timeAdded is the time when the taint was added;
    # only written by the system for `NoExecute` taints
    timeAdded: time
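Purely to illustrate the request, a hypothetical node pool block might expose taints like this (these attribute names do not exist in the provider today; they only sketch the desired shape):

node_pools {
  info {
    name = "default-nodepool"
  }

  spec {
    worker_node_count = "3"

    # Hypothetical block mirroring spec.topology.nodePools.taints from the TKC spec.
    taints {
      key    = "dedicated"
      value  = "example"
      effect = "NoSchedule"
    }
  }
}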

Describe alternatives you've considered

No response

Additional context

No response

Unable to scale node pool

Describe the bug

Using Release Version 1.2.2

Running terraform apply to scale a node pool results in the error below:

Error: Unable to update Tanzu Mission Control EKS cluster's nodepools, name : sandbox-tap-full: failed to update existing nodepools: failed to update nodepool np-01: PUT request failed with status : 400 Bad Request, response: {"error":"validating the inputs against available options in AWS failed: rpc error: code = InvalidArgument desc = release version only supported for version 1.26.6-20230816","code":3,"message":"validating the inputs against available options in AWS failed: rpc error: code = InvalidArgument desc = release version only supported for version 1.26.6-20230816"}

Terraform Plan :
Terraform will perform the following actions:

module.k8s-clusters.tanzu-mission-control_ekscluster.sandbox-tap-full[0] will be updated in-place
~ resource "tanzu-mission-control_ekscluster" "sandbox-tap-full" {
id = "c:01H8KR93XB2ME5FFS8YP518YC3"
name = "sandbox-tap-full"
# (3 unchanged attributes hidden)

  ~ spec {
        # (1 unchanged attribute hidden)

      ~ nodepool {

          ~ spec {
                tags           = {}
                # (7 unchanged attributes hidden)

              ~ scaling_config {
                  ~ desired_size = 1 -> 3
                    # (2 unchanged attributes hidden)
                }

                # (2 unchanged blocks hidden)
            }

            # (1 unchanged block hidden)
        }

        # (1 unchanged block hidden)
    }

    # (1 unchanged block hidden)
}

Plan: 0 to add, 1 to change, 0 to destroy.

Terraform node pool config:

nodepool {
  info {
    name        = "np-01"
    description = "default node pool"
  }

  spec {
    role_arn       = var.arn_worker

    ami_type       = "AL2_x86_64"
    capacity_type  = "ON_DEMAND"
    root_disk_size = 70 // Default: 20GiB
    #tags           = { "nptag" : "nptagvalue9" }
    #node_labels    = { "nplabelkey" : "nplabelvalue" }

    subnet_ids        = var.subnet_ids
    remote_access {
      ssh_key         = var.ssh_key_name
      security_groups = var.ssh_security_groups
    }

    scaling_config {
      desired_size = 3
      max_size     = 5
      min_size     = 1
    }

    update_config {
      max_unavailable_nodes = "1"
    }

    instance_types = [
      "c5.2xlarge"
    ]

  }
}

Reproduction steps

  1. update node pool config to desired size
  2. run terraform plan
  3. run terraform apply
    ...

Expected behavior

Node Pool is scaled to desired size.

Additional context

I am able to scale the node pool directly in TMC or AWS without issues; this is only happening with Terraform.

Can't resize tkg_vsphere node pools

Describe the bug

During my tests with the TMC Terraform provider, I noticed that I can't resize an existing node pool; I got the following error. I'm using TMC provider version 1.1.7 and Terraform version 1.4.5.

│ Error: TKGs vsphere nodepool spec has to be provided
│ 
│   with tanzu-mission-control_cluster_node_pool.node_pool,
│   on main.tf line 119, in resource "tanzu-mission-control_cluster_node_pool" "node_pool":
│  119: resource "tanzu-mission-control_cluster_node_pool" "node_pool" {
│ 
╵

Reproduction steps

1. Generate a TMC cluster
2. Resize an existing node pool

resource "tanzu-mission-control_cluster_node_pool" "node_pool" {
  
  management_cluster_name = "${local.raw_settings.management_cluster_name}"
  provisioner_name        = "${local.raw_settings.provisioner_name}"
  cluster_name            = "${local.raw_settings.name}"
  name                    = "${local.raw_settings.spec.topology.node_pools.info.name}"
  spec {
    worker_node_count = "${local.raw_settings.spec.topology.node_pools.spec.worker_node_count}"
    tkg_vsphere {
      vm_config {
        cpu       = "${local.raw_settings.spec.topology.node_pools.spec.tkg_vsphere.vm_config.cpu}"
        disk_size = "${local.raw_settings.spec.topology.node_pools.spec.tkg_vsphere.vm_config.disk_size}"
        memory    = "${local.raw_settings.spec.topology.node_pools.spec.tkg_vsphere.vm_config.memory}"
      }
    }
  }
}

Expected behavior

According to the documentation (https://registry.terraform.io/providers/vmware/tanzu-mission-control/latest/docs/resources/cluster_node_pool#nested-schema-for-spectkg_vsphere), I would have expected that my existing node pool would be resized to the number given in the worker_node_count value.

I hope this can be reproduced.

Thank you and best regards
Manuel

Additional context

No response

fatal error: concurrent map writes

Describe the bug

While creating tanzu-mission-control_tanzu_kubernetes_cluster resources, Terraform crashed after ~18 minutes with the following error. Even though Terraform crashed, the clusters were created in TMC.

tanzu-mission-control_tanzu_kubernetes_cluster.tap_view: Still creating... [18m30s elapsed]
tanzu-mission-control_tanzu_kubernetes_cluster.tap_run: Still creating... [18m30s elapsed]
tanzu-mission-control_tanzu_kubernetes_cluster.tap_build: Still creating... [18m30s elapsed]
tanzu-mission-control_tanzu_kubernetes_cluster.tap_iterate: Still creating... [18m30s elapsed]
╷
│ Error: Plugin did not respond
│ 
│   with tanzu-mission-control_tanzu_kubernetes_cluster.tap_build,
│   on clusters-tap-build.tf line 18, in resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tap_build":
│   18: resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tap_build" {
│ 
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call.
│ The plugin logs may contain more details.
╵
╷
│ Error: Plugin did not respond
│ 
│   with tanzu-mission-control_tanzu_kubernetes_cluster.tap_iterate,
│   on clusters-tap-iterate.tf line 18, in resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tap_iterate":
│   18: resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tap_iterate" {
│ 
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call.
│ The plugin logs may contain more details.
╵
╷
│ Error: Plugin did not respond
│ 
│   with tanzu-mission-control_tanzu_kubernetes_cluster.tap_run,
│   on clusters-tap-run.tf line 18, in resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tap_run":
│   18: resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tap_run" {
│ 
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call.
│ The plugin logs may contain more details.
╵
╷
│ Error: Plugin did not respond
│ 
│   with tanzu-mission-control_tanzu_kubernetes_cluster.tap_view,
│   on clusters-tap-view.tf line 18, in resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tap_view":
│   18: resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tap_view" {
│ 
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call.
│ The plugin logs may contain more details.
╵
Releasing state lock. This may take a few moments...

Stack trace from the terraform-provider-tanzu-mission-control_v1.4.4 plugin:

fatal error: concurrent map writes

goroutine 52 [running]:
net/textproto.MIMEHeader.Set(...)
        net/textproto/header.go:22
net/http.Header.Set(...)
        net/http/header.go:40
github.com/vmware/terraform-provider-tanzu-mission-control/internal/authctx.refreshSMUserAuthCtx(0x140001b70e0)
        github.com/vmware/terraform-provider-tanzu-mission-control/internal/authctx/selfmanaged.go:236 +0x11c
github.com/vmware/terraform-provider-tanzu-mission-control/internal/authctx.glob..func1(0x140001b70e0, 0x14000b2c618?, {0x1039fd580?, 0x14000b2c618?})
        github.com/vmware/terraform-provider-tanzu-mission-control/internal/authctx/helper.go:117 +0x54
github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster.waitClusterReady({0x103a134a8, 0x140001b7260}, 0x140001b70e0, 0x14000abdc80)
        github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster/helper.go:133 +0x32c
github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster.readFullClusterResourceWait({0x103a134a8, 0x140001b7260}, 0x10331a63a?, 0xf?, {0x140007e97c0, 0x1, 0x0?}, 0x1)
        github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster/helper.go:68 +0x60
github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster.readResourceWait({0x103a134e0, 0x140005c64b0}, 0x10484fd40?, 0x14000a18bc0?, {0x140007e97c0, 0x1, 0x4}, 0x63?)
        github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster/helper.go:45 +0x15c
github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster.resourceTanzuKubernetesClusterRead({0x103a134e0, 0x140005c64b0}, 0x14000960e80, {0x1038fa8c0?, 0x1400059a180})
        github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster/resource_tanzu_kuberenetes_cluster.go:125 +0x2d8
github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster.resourceTanzuKubernetesClusterCreate({0x103a134e0, 0x1400041ef30}, 0x0?, {0x1038fa8c0?, 0x1400059a180?})
        github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster/resource_tanzu_kuberenetes_cluster.go:103 +0x5d8
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0x103a134e0?, {0x103a134e0?, 0x1400041ef30?}, 0xd?, {0x1038fa8c0?, 0x1400059a180?})
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:702 +0x64
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0x1400077da40, {0x103a134e0, 0x1400041ef30}, 0x14000782b60, 0x14000693180, {0x1038fa8c0, 0x1400059a180})
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:837 +0x86c
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0x14000780630, {0x103a13438?, 0x14000b60700?}, 0x14000180230)
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:1021 +0xb70
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0x1400047b0e0, {0x103a134e0?, 0x1400041e900?}, 0x140009060e0)
        github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:812 +0x384
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0x103915180?, 0x1400047b0e0}, {0x103a134e0, 0x1400041e900}, 0x14000906070, 0x0)
        github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:385 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0x140007681e0, {0x103a1a378, 0x14000502d00}, 0x140007dc360, 0x140006a7ef0, 0x1047daf00, 0x0)
        google.golang.org/[email protected]/server.go:1335 +0xc64
google.golang.org/grpc.(*Server).handleStream(0x140007681e0, {0x103a1a378, 0x14000502d00}, 0x140007dc360, 0x0)
        google.golang.org/[email protected]/server.go:1712 +0x82c
google.golang.org/grpc.(*Server).serveStreams.func1.1()
        google.golang.org/[email protected]/server.go:947 +0xb4
created by google.golang.org/grpc.(*Server).serveStreams.func1
        google.golang.org/[email protected]/server.go:958 +0x174

goroutine 1 [select, 18 minutes]:
github.com/hashicorp/go-plugin.Serve(0x140001b61e0)
        github.com/hashicorp/[email protected]/server.go:469 +0x10a8
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.Serve({0x10330fccd, 0x8}, 0x1400004c230, {0x0, 0x0, 0x0})
        github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:311 +0x9e0
github.com/hashicorp/terraform-plugin-sdk/v2/plugin.tf5serverServe(0x140001b6180)
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/plugin/serve.go:178 +0x4d4
github.com/hashicorp/terraform-plugin-sdk/v2/plugin.Serve(0x140001b6180)
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/plugin/serve.go:118 +0x190
main.main()
        github.com/vmware/terraform-provider-tanzu-mission-control/main.go:31 +0xf4

goroutine 9 [select, 18 minutes]:
github.com/hashicorp/go-plugin.(*gRPCBrokerServer).Recv(0x0?)
        github.com/hashicorp/[email protected]/grpc_broker.go:121 +0x58
github.com/hashicorp/go-plugin.(*GRPCBroker).Run(0x14000180e10)
        github.com/hashicorp/[email protected]/grpc_broker.go:411 +0x40
created by github.com/hashicorp/go-plugin.(*GRPCServer).Init
        github.com/hashicorp/[email protected]/grpc_server.go:85 +0x424

goroutine 10 [IO wait, 18 minutes]:
internal/poll.runtime_pollWait(0x12c728350, 0x72)
        runtime/netpoll.go:305 +0xa0
internal/poll.(*pollDesc).wait(0x140001b6660?, 0x14000225000?, 0x1)
        internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
        internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x140001b6660, {0x14000225000, 0x1000, 0x1000})
        internal/poll/fd_unix.go:167 +0x1e0
os.(*File).read(...)
        os/file_posix.go:31
os.(*File).Read(0x1400000e980, {0x14000225000?, 0x400?, 0x10371c7a0?})
        os/file.go:119 +0x5c
bufio.(*Reader).Read(0x1400008bf30, {0x14000534000, 0x400, 0x0?})
        bufio/bufio.go:237 +0x1e0
github.com/hashicorp/go-plugin.copyChan({0x103a1e178, 0x1400026a7e0}, 0x0?, {0x1039fcb60?, 0x1400000e980?})
        github.com/hashicorp/[email protected]/grpc_stdio.go:181 +0x154
created by github.com/hashicorp/go-plugin.newGRPCStdioServer
        github.com/hashicorp/[email protected]/grpc_stdio.go:37 +0x10c

goroutine 11 [IO wait, 18 minutes]:
internal/poll.runtime_pollWait(0x12c728170, 0x72)
        runtime/netpoll.go:305 +0xa0
internal/poll.(*pollDesc).wait(0x140001b6720?, 0x14000536000?, 0x1)
        internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
        internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x140001b6720, {0x14000536000, 0x1000, 0x1000})
        internal/poll/fd_unix.go:167 +0x1e0
os.(*File).read(...)
        os/file_posix.go:31
os.(*File).Read(0x1400000e9a0, {0x14000536000?, 0x400?, 0x10371c7a0?})
        os/file.go:119 +0x5c
bufio.(*Reader).Read(0x14000086730, {0x14000534400, 0x400, 0x0?})
        bufio/bufio.go:237 +0x1e0
github.com/hashicorp/go-plugin.copyChan({0x103a1e178, 0x1400026a7e0}, 0x0?, {0x1039fcb60?, 0x1400000e9a0?})
        github.com/hashicorp/[email protected]/grpc_stdio.go:181 +0x154
created by github.com/hashicorp/go-plugin.newGRPCStdioServer
        github.com/hashicorp/[email protected]/grpc_stdio.go:38 +0x198

goroutine 37 [syscall, 18 minutes]:

Error: The terraform-provider-tanzu-mission-control_v1.4.4 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

Reproduction steps

Unknown

I applied the Terraform module several times in clean environments and got this error only once.
The source code is available here: https://github.com/giovannibaratta/vmware-tanzu-training/tree/main/terraform/stages/50-tmc

Expected behavior

Terraform should not crash

Additional context

I am using TMC self managed v1.2
MacBook M1 (Terraform v1.7.4 on darwin_arm64)

EKS Cluster detach does not delete k8s resources from cluster

Describe the bug

I use the TF TMC provider 1.1.7 to attach an EKS cluster to TMC with the following config:

resource "tanzu-mission-control_cluster" "attach_cluster_with_kubeconfig" {
  count = var.attach_to_tmc ? 1 : 0

  management_cluster_name = "attached"             # Default: attached
  provisioner_name        = "attached"             # Default: attached
  name                    = local.tmc_cluster_name # Required

  attach_k8s_cluster {
    kubeconfig_raw = local.kubeconfig
    # kubeconfig_file = local.kubeconfig_filename
    description = "optional description about the kube-config provided"
  }

  meta {
    description = "Educates ready cluster provisioned by terraform"
    labels      = { "provisioner" : "terraform", "author" : "jomorales" }
  }

  spec {
    cluster_group = var.cluster_group # Default: default
  }

  ready_wait_timeout = "15m" # Default: waits until 3 min for the cluster to become ready

  depends_on = [
    module.eks, local_file.kubeconfig, time_sleep.attach_cluster_with_kubeconfig
  ]
}

When detachment happens, the k8s resources that were created as part of the attachment are not deleted. I added a sleep when detaching to see if it's a timing issue, but while the cluster properly disappears from the TMC UI, the k8s resources are still there. I would have expected these to be deleted.

Reproduction steps

Described above.

  1. Create an EKS cluster

  2. Attach to TMC

  3. Detach from TMC

  4. Delete EKS cluster

Expected behavior

As described above: the k8s resources created as part of the attachment should be deleted when the cluster is detached.

Additional context

Detaching should remove pinniped and free the EC2 ELB created so that no infrastructure component is left behind.

EKS Credential Resource is tracking label "tmc.cloud.vmware.com/cred-cloudformation-key"

Describe the bug

I added an EKS credential and it successfully applied. Then I ran terraform plan, and it says that the same resource needs to be updated because of a label.

# tanzu-mission-control_credential.eks_cred_4 will be updated in-place
  ~ resource "tanzu-mission-control_credential" "eks_cred_4" {
        id     = "cred:01H3YZJTKG4RH4W6J0KMW5XBA1"
        name   = "dpfeffer-aws-4"
        # (1 unchanged attribute hidden)

      ~ meta {
          ~ labels           = {
              - "tmc.cloud.vmware.com/cred-cloudformation-key" = "3400776235300400657" -> null
            }
            # (4 unchanged attributes hidden)
        }

        # (1 unchanged block hidden)
    }

Reproduction steps

  1. Apply an EKS credential and don't specify any labels
  2. Run terraform plan and you will see that it indicates the same resource should be updated to remove a label that was not specified

Expected behavior

The terraform plan call should not indicate any changes.

Additional context

No response

Ability to specify storage classes and default storage class for TKGS clusters

Is your feature request related to a problem? Please describe.

Not related to a problem. Not sure why this field is required.

Describe the solution you'd like

The TMC UI allows me to specify a list of storage classes that I want deployed to the TKGS cluster and optionally one that I would like to set as the default storage class. This is not available with the current tkg_service_vsphere model. I would expect this in the settings block.

Here is a snippet of TMC yaml that includes this data.

spec:
  clusterGroupName: se-dpfeffer-dev
  tkgServiceVsphere:
    distribution:
      version: v1.21.2+vmware.1-tkg.1.ee25d55
    settings:
      network:
        pods:
          cidrBlocks:
          - 172.20.0.0/16
        services:
          cidrBlocks:
          - 10.96.0.0/16
      storage:
        classes:
        - tanzu
        defaultClass: tanzu
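For reference, the tkg_service_vsphere example in the volumes issue earlier on this page already includes a storage block of this shape inside settings, which suggests the request may be covered by newer provider versions:

settings {
  storage {
    classes = [
      "tanzu",
    ]
    default_class = "tanzu"
  }
}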

Describe alternatives you've considered

No response

Additional context

No response

Provision cluster from latest provider

Describe the bug

With the new provider I can now successfully run this without the provider crashing, but when following the example for creating an AWS cluster I always get this error:

Unable to create Tanzu Mission Control cluster entry, name : tkgm-aws-workload: POST request failed with status : 400 Bad Request, response: {"error":"invalid GetCredentialRequest.ProvisionerCred: embedded message failed validation | caused by: invalid FullName.Name: value must be a valid Tanzu name | caused by: a name cannot be empty or be longer than 63 characters","code":3,"message":"invalid GetCredentialRequest.ProvisionerCred: embedded message failed validation | caused by: invalid FullName.Name: value must be a valid Tanzu name | caused by: a name cannot be empty or be longer than 63 characters"}

Reproduction steps

  1. Follow guide here for AWS cluster: https://registry.terraform.io/providers/vmware/tanzu-mission-control/latest/docs/resources/cluster
  2. update management_cluster_name and provisioner_name for my config
  3. Get the above error

Expected behavior

Should provision cluster

Additional context

I was able to successfully use the data source to import an existing cluster I provisioned in Tanzu Mission Control so I believe my provider is setup correctly.

kubeconfig input alternate options

Is your feature request related to a problem? Please describe.

When using attach_cluster_with_kubeconfig, you must specify a path to the kubeconfig using the kubeconfig_file variable and can't use the content of an output or block as an input for the required config. Extra modules are needed to create a file in order to do this, which creates more dependencies.

Describe the solution you'd like

For example, when attaching an AKS cluster, it would be nice to use the kube_config_raw or kube_config_admin_raw value as an input for the kubeconfig needed to attach a cluster to TMC. Currently, for an AKS attach, you must create the file through another method like local_file from the kube_config_raw variable and pass that file location to the TMC module. So the attach_k8s_cluster block could have another input option that accepts the content of another output.
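For illustration, the requested shape would look roughly like the snippet below. The attach_k8s_cluster block with kubeconfig_raw appears in the EKS detach issue earlier on this page, so support may depend on the provider version; the AKS resource name here is an assumption:

resource "tanzu-mission-control_cluster" "attach_aks" {
  management_cluster_name = "attached"
  provisioner_name        = "attached"
  name                    = "aks-cluster"

  attach_k8s_cluster {
    # Assumed AKS resource address; kube_config_raw is the raw kubeconfig output.
    kubeconfig_raw = azurerm_kubernetes_cluster.example.kube_config_raw
  }
}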

Describe alternatives you've considered

No response

Additional context

No response

Ability to Provision TKGm Clusters on AWS using Terraform

Is your feature request related to a problem? Please describe.

No

Describe the solution you'd like

Currently it seems this plugin can only provision TKGm clusters via TMC using vSphere. I would like to be able to provision clusters on other cloud providers, most importantly AWS, but I could see the same for other clouds.

Describe alternatives you've considered

No response

Additional context

No response

TMC terraform provider to support provisioned cluster's version upgrade

Is your feature request related to a problem? Please describe.

TMC provisioned workload clusters support upgrading of the k8s version. This is possible by updating the version attribute under spec.distribution.
Presently the TMC provider is not able to handle this state change of the cluster.

Describe the solution you'd like

Updating the distribution version of the provisioned cluster should trigger the cluster upgrade process
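For example, the desired flow would be to bump the version inside the distribution block (shape taken from the TKGS example earlier on this page) and re-apply:

distribution {
  # Changing this value should trigger an in-place cluster upgrade.
  version = "v1.24.9+vmware.1-tkg.4" # previously e.g. v1.23.8+vmware.2-tkg.2-zshippable
}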

Describe alternatives you've considered

No response

Additional context

No response

Cannot provision TKG workload clusters with TKG 2.2

Describe the bug

I am trying to create a workload cluster using TKGm 2.2 on vSphere. When I run terraform apply, I get the following error message:

tanzu-mission-control_cluster.tool: Creating...
╷
│ Error: Unable to create Tanzu Mission Control cluster entry, name : tool: POST request failed with status : 400 Bad Request, response: {"error":"Only support to provision clusters using ClusterClass on TKG 2.1.0+","code":9,"message":"Only support to provision clusters using ClusterClass on TKG 2.1.0+"}
│
│   with tanzu-mission-control_cluster.tool,
│   on tool-cluster.tf line 2, in resource "tanzu-mission-control_cluster" "tool":
│    2: resource "tanzu-mission-control_cluster" "tool" {

The error says that provisioning clusters using ClusterClass is only supported on TKG 2.1.0+, but I am on a newer version, so this should not be an issue. I have verified that I can manually provision workload clusters using the TMC UI.

Reproduction steps

  1. Set up the Terraform provider file provider.tf with the following contents:
terraform {
  required_providers {
    tanzu-mission-control = {
      source = "vmware/tanzu-mission-control"
      version = "1.1.8"
    }
  }
}

provider "tanzu-mission-control" {
  endpoint            = "<REDACTED>"
  vmw_cloud_api_token = "<REDACTED>"
}
  2. Manually create a cluster group in TMC. Then create a file called cluster_group.tf to reference it:
data "tanzu-mission-control_cluster_group" "apower" {
  name = "apower"
}
  3. Manually create a TKG 2.2 management cluster in vSphere, and attach it to TMC.

  4. Create a workload cluster file called tool-cluster.tf with the following contents:

# Create a Tanzu Kubernetes Grid Vsphere workload cluster entry
resource "tanzu-mission-control_cluster" "tool" {
  management_cluster_name = "apower-h2o-mgmt"
  provisioner_name        = "default"
  name                    = "tool"

#   meta {
#     description = "description of the cluster"
#     labels      = { "key" : "value" }
#   }

  spec {
    cluster_group = data.tanzu-mission-control_cluster_group.apower.name
    tkg_vsphere {
#       advanced_configs {
#         key = "AVI_LABELS"
#         value = "test"
#       }
      settings {
        network {
          pods {
            cidr_blocks = [
              "172.20.0.0/16", # pods cidr block by default has the value `172.20.0.0/16`
            ]
          }

          services {
            cidr_blocks = [
              "10.96.0.0/16", # services cidr block by default has the value `10.96.0.0/16`
            ]
          }

          api_server_port = 6443
          control_plane_end_point = "10.220.55.100" # optional for AVI enabled option
        }

        security {
          ssh_key = "<REDACTED>"
        }
      }

      distribution {
        os_arch = "amd64"
        os_name = "photon"
        os_version = "3"
	version = "v1.25.7+vmware.2-tkg.1"

        workspace {
          datacenter        = "/vc01"
          datastore         = "/vc01/datastore/vsanDatastore"
          workspace_network = "/vc01/network/user-workload"
          folder            = "/vc01/vm/tkg"
          resource_pool     = "/vc01/host/vc01cl01/Resources"
        }
      }

      topology {
        control_plane {
          vm_config {
            cpu       = "8"
            disk_size = "256"
            memory    = "16384"
          }

          high_availability = false
        }

        node_pools {
          spec {
            worker_node_count = "3"

            tkg_vsphere {
              vm_config {
                cpu       = "4"
                disk_size = "256"
                memory    = "8192"
              }
            }
          }

          info {
            name        = "default-nodepool" # default node pool name `default-nodepool`
            description = "my nodepool"
          }
        }
      }
    }
  }
}
  5. Execute terraform init and terraform apply:
terraform init
terraform apply

Expected behavior

I should be able to create a workload cluster with the TMC Terraform provider.

Additional context

No response

add Resource / data to manage management_cluster

Is your feature request related to a problem? Please describe.

I would like to manage the management cluster via Terraform. Is there any way to do this other than through the UI?

Describe the solution you'd like

A resource / data source to manage the management_cluster (see the illustrative sketch below).
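
Purely to illustrate the request, one possible shape. This is hypothetical: neither the type name nor the attributes below exist in the provider today.

# Hypothetical only: this data source / resource does not exist in the provider yet.
data "tanzu-mission-control_management_cluster" "existing" { # hypothetical type name
  name = "my-mgmt-cluster" # placeholder
}

resource "tanzu-mission-control_management_cluster" "registered" { # hypothetical type name
  name = "my-mgmt-cluster" # placeholder

  # registration / lifecycle settings would go here
}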

Describe alternatives you've considered

No response

Additional context

No response

More details needed in EKS document

Describe the bug

The default values specified in the EKSCluster resource and data source docs need to be updated.

Reproduction steps

  1. Tried creating an EKS Cluster and Nodepool using the values specified in the example.
  2. Nodepool creation failed.

Expected behavior

Nodepool creation should have succeeded

Additional context

No response

EKS Cluster Deletion Deletes Node Pool, but not Cluster

Describe the bug

I'm using TMC Provider 1.1.7. I created an EKS cluster six times. Each time, I then removed the cluster from the TF module and re-applied. In 5 of the 6 tries the node pool was deleted, but not the EKS cluster: TMC shows it as detaching, while EKS shows an active cluster with no node pools. When I then run terraform plan with the same module, it indicates no changes are necessary. If I then choose to add the cluster back, terraform plan shows it should create a cluster, but terraform apply fails because the cluster is in a deleted state.

Reproduction steps

  1. Create a EKSCluster with 1 nodepool
  2. Delete the EKSCluster (doesn't fully complete, TMC UI shows detaching)
  3. Terraform plan (shows no changes)
  4. Update terraform to recreate the cluster and run terraform plan (shows a new cluster should be created)
  5. Apply, and it fails because the cluster is in a deleted state.
    ...

Expected behavior

Removal of an ekscluster from TF after it is created should completely delete the cluster from TMC and EKS.

If a cluster is in detaching / deleting state, then the terraform plan should indicate that a cluster exists.

Additional context

No response

Network Policies require to_pod_labels which should be optional

Describe the bug

TMC allows a network policy to be created without a pod selector, in which case it applies to all pods in the relevant namespace/workspace. The TMC Terraform resource, however, requires the to_pod_labels field, which is a map type. Providing an empty map (e.g. to_pod_labels = {}) also fails: the API complains that it requires an array but is being passed a null value.

Reproduction steps

  1. Create a network policy resource such as the following:
resource "tanzu-mission-control_network_policy" "kube_system_egress" {
  name = "allow-egress-to-kube-system"

  scope {
    workspace {
      workspace = "saample"
    }
  }

  spec {
    input {
      custom_egress {
        rules {
          ports {
            port = "53"
            protocol = "TCP"
          }
          ports {
            port = "53"
            protocol = "UDP"
          }
          rule_spec {
            custom_selector {
              namespace_selector = {
                "kubernetes.io/metadata.name" = "kube-system"
              }
            }
          }
        }
      }
    }
  }
}
  2. When this fails, try adding the following under custom_egress and see it fail again with a different error:
        to_pod_labels = {}
  3. Add an actual label and see that it works (see the sketch after these steps):
        to_pod_labels = {"demo" = "true"}

Expected behavior

The network policy resource should allow omitting to_pod_labels and/or accept an empty selector.

Additional context

No response

Can't create cluster with vSphere v8.0 due to missing "ClusterClass"

Describe the bug

I followed the official documentation to provision a TKG cluster. This resulted in the following error:

│ Error: Unable to create Tanzu Mission Control cluster entry, name : mdib-tf-testcluster: POST request failed with status : 400 Bad Request, response: {"error":"Kubernetes clusters in vSphere 8.0+ must be using ClusterClass, for more information see https://docs.vmware.com/en/VMware-Tanzu-Mission-Control/services/tanzumc-using/GUID-41DE8F22-56B8-4669-AF3F-E4B4372BDB9E.html","code":9,"message":"Kubernetes clusters in vSphere 8.0+ must be using ClusterClass, for more information see https://docs.vmware.com/en/VMware-Tanzu-Mission-Control/services/tanzumc-using/GUID-41DE8F22-56B8-4669-AF3F-E4B4372BDB9E.html"}
│ 
│   with tanzu-mission-control_cluster.create_tkgs_workload,
│   on cluster.tf line 14, in resource "tanzu-mission-control_cluster" "create_tkgs_workload":
│   14: resource "tanzu-mission-control_cluster" "create_tkgs_workload" {

The documentation doesn't contain any information about being able to specify the cluster class, and there doesn't seem to be a workaround (except for downgrading the vSphere version, which of course is not an option).
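
For context, a trimmed sketch of the ClusterClass-aware resource that newer provider versions expose, tanzu-mission-control_tanzu_kubernetes_cluster. The block names are taken from a configuration shown later in this document; all values are placeholders:

resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tkgs_cluster" {
  management_cluster_name = "my-supervisor"        # placeholder
  provisioner_name        = "my-vsphere-namespace" # placeholder
  name                    = "mdib-tf-testcluster"

  spec {
    cluster_group_name = "default"

    topology {
      version           = "v1.25.7+vmware.2-tkg.1" # placeholder TKR version
      cluster_class     = "tanzukubernetescluster"
      cluster_variables = jsonencode({}) # ClusterClass variables go here

      control_plane {
        replicas = 1
        os_image {
          name    = "photon"
          version = "3"
          arch    = "amd64"
        }
      }

      nodepool {
        name = "default-nodepool"
        spec {
          worker_class = "node-pool"
          replicas     = 2
          os_image {
            name    = "photon"
            version = "3"
            arch    = "amd64"
          }
        }
      }

      network {
        pod_cidr_blocks     = ["172.20.0.0/16"]
        service_cidr_blocks = ["10.96.0.0/16"]
        service_domain      = "cluster.local"
      }
    }
  }
}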

My config:
provider.tf

terraform {
  required_providers {
    tanzu-mission-control = {
      source = "vmware/tanzu-mission-control"
      version = "1.2.0"
    }
  }
}


provider "tanzu-mission-control" {
  endpoint            = var.endpoint            # optionally use TMC_ENDPOINT env var
  vmw_cloud_api_token = var.vmw_cloud_api_token # optionally use VMW_CLOUD_API_TOKEN env var

  # if you are using dev or different csp endpoint, change the default value below
  # for production environments the vmw_cloud_endpoint is console.cloud.vmware.com
  # vmw_cloud_endpoint = "console.cloud.vmware.com" or optionally use VMW_CLOUD_ENDPOINT env var
}

cluster.tf

resource "tanzu-mission-control_cluster" "create_tkgs_workload" {
  management_cluster_name = var.management_cluster_name
  provisioner_name        = var.provisioner_name
  name                    = var.cluster_name

  meta {
    labels = { "test" : "test" }
  }

  spec {
    cluster_group = var.cluster_group
    tkg_service_vsphere {
      settings {
        network {
          pods {
            cidr_blocks = [
              "172.20.0.0/16", // Required
            ]
          }
          services {
            cidr_blocks = [
              "10.96.0.0/16", // Required
            ]
          }
        }
        storage {
          classes = [
            "tanzu-default-storage-policy",
          ]
          default_class = "tanzu-default-storage-policy"
        }
      }

      distribution {
        version = "v1.24.9+vmware.1-tkg.4" // Required
      }

      topology {
        control_plane {
          class             = "best-effort-xsmall"         // Required
          storage_class     = "tanzu-default-storage-policy" // Required
          high_availability = true             // Default: false
          volumes {
            capacity          = 4
            mount_path        = "/var/lib/etcd"
            name              = "etcd-0"
            pvc_storage_class = "tkgs-k8s-obj-policy"
          }
        }
        node_pools {
          spec {
            worker_node_count = "2" // Required
            cloud_label = {
              "test" : "test"
            }
            node_label = {
              "test" : "test"
            }
            tkg_service_vsphere {
              class          = "best-effort-small"         // Required
              storage_class  = "tanzu-default-storage-policy" // Required
              failure_domain = ""
              volumes {
                capacity          = 4
                mount_path        = "/var/lib/etcd"
                name              = "etcd-0"
                pvc_storage_class = "tkgs-k8s-obj-policy"
              }
            }
          }
          info {
            name = "default-nodepool" // Required
          }
        }
      }
    }
  }
}

Reproduction steps

  1. Create a variables file and specify the following values:
endpoint = 
vmw_cloud_api_token = 
management_cluster_name = 
provisioner_name = 
cluster_name = 
cluster_group = 
  2. Create the .tf files shown above.
  3. Build and deploy the .tf files (terraform init and terraform apply).

Expected behavior

The cluster should have been created (or at least got beyond the point of the cluster class error)

Additional context

No response

Import a manually attached cluster failing

Describe the bug

I have cut my teeth on Tanzu and now want to automate provisioning of clusters via TMC's Terraform provider. I understand how to create Terraform for a given resource, and I also understand the basics of Terraform importing.

I can successfully import a cluster I created manually through TMC, but I have several "legacy" clusters that were attached manually.

This doc specifically states that the import ID is composed of mgmt-cluster-name/provisioner_name/cluster_name.

The cluster I imported successfully, which I provisioned through TMC as a test, uses supervisor-name/vsphere-namespace/terraform-demo-cluster-1:

terraform import tanzu-mission-control_tanzu_kubernetes_cluster.terraform-test-1 mgmt-supervisor/tanzu-simcheck/terraform-test-1 <--- Success

The clusters I've previously manually attached display (in the UI) mgmt-cluster-name = 'attached' and provisioner_name = 'attached'.

tkg-attached-cluster is an example of a cluster I've previously attached and has these displayed in the UI:
[screenshot showing management cluster and provisioner as 'attached']

terraform import tanzu-mission-control_tanzu_kubernetes_cluster.attach_cluster_without_apply attached/attached/tkg-attached-cluster <--- Fails

Results in:

tanzu-mission-control_tanzu_kubernetes_cluster.attach_cluster_without_apply: Importing from ID "attached/attached/tkg-attached-cluster"...
╷
│ Error: Couldn't import TKG cluster.
│ Management Cluster Name: attached, Provisioner: attached, Cluster Name: tkg-attached-cluster: get request(v1alpha1/managementclusters/attached/provisioners/attached/tanzukubernetesclusters/tkg-attached-cluster) failed with status : 404 Not Found, response: {"error":"cluster not found tkg-attached-cluster","code":5,"message":"cluster not found tkg-attached-cluster"}

tkg-attached-cluster definitely exists, I see it in the TMC Console.

Reproduction steps

  1. Provision TKG cluster thru kubectl
  2. Manually attach to TMC
  3. Attempt to terraform import
  4. Failure
    ...

Expected behavior

I expect a successful import like in the first example

Additional context

Importing a cluster from TMC is clearly documented/thought about, but is importing previously attached clusters not supported?

I would prefer not to have to reroll the clusters in order to bring them under Terraform control.

Scaling cluster has no effect

Describe the bug

I am unable to use terraform to scale a cluster that was previously created using terraform.

Reproduction steps

1. Create a TKGS cluster. Everything is working.
2. Change spec.tkg_service_vsphere.topology.node_pools.spec.worker_node_count and apply. Terraform recognizes the change, prompts for acceptance, and then reports success (see the fragment after these steps).
3. However, there is no effect, and if I go to the TMC UI and view the Node Pools for that cluster, it still shows the previous value.
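
For clarity, the change from step 2 looks roughly like this; the block structure is borrowed from the tkg_service_vsphere examples elsewhere in this document and the values are placeholders:

      node_pools {
        spec {
          worker_node_count = "3" # changed from "2"; apply reports success but the node pool is not scaled

          tkg_service_vsphere {
            class         = "best-effort-small"
            storage_class = "tanzu-default-storage-policy"
          }
        }

        info {
          name = "default-nodepool"
        }
      }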

Expected behavior

I would expect for the cluster to be scaled.

Additional context

No response

Support ClusterClass based deployments

Is your feature request related to a problem? Please describe.

Since TKG 2.x, TKG uses ClusterClass-based deployments. I was not able to deploy a cluster on a TKG 2.0 environment when using this provider. After digging into the API, I can see that significant request body changes are required and it doesn't appear to have been implemented in this provider yet.

Describe the solution you'd like

Please add support for ClusterClass-based deployments so this provider is usable for TKG 2.x.

Describe alternatives you've considered

No response

Additional context

No response

Issue modifying spec.cluster_group_name

Describe the bug

When adjusting spec.cluster_group_name from one group to another, the action times out and does not complete. In our case, we had hard-coded cluster_group_name = "default", and when we made it more dynamic, the change failed. Creating a new cluster with the same group definition works.

main.tf section:

resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tkgs_cluster" {
  management_cluster_name = var.management_cluster
  provisioner_name = var.provisioner
  name = var.name

  spec {
    cluster_group_name = "${var.location}"

    topology {
      version = replace("${var.k8s_version}", "&#43;", "+")
      cluster_class = "tanzukubernetescluster"
      cluster_variables = jsonencode(local.tkgs_cluster_variables)

      control_plane {
        replicas = 3

        os_image {
          name = "${var.os_image}"
          version = "3"
          arch = "amd64"
        }
      }
      
      nodepool {
        name = "default-nodepool-a"
        description = "tkgs workload nodepool"
      
        spec {
          worker_class = "node-pool"
          replicas = 3
          overrides = jsonencode(local.tkgs_cluster_variables)
          failure_domain = "${var.location}-zone-a"
      
          os_image {
            name = "${var.os_image}"
            version = "3"
            arch = "amd64"
          }
        }
      }

      nodepool {
        name = "default-nodepool-b"
        description = "tkgs workload nodepool"
      
        spec {
          worker_class = "node-pool"
          replicas = 3
          overrides = jsonencode(local.tkgs_cluster_variables)
          failure_domain = "${var.location}-zone-b"
      
          os_image {
            name = "${var.os_image}"
            version = "3"
            arch    = "amd64"
          }
        }
      }
      
      nodepool {
        name        = "default-nodepool-c"
        description = "tkgs workload nodepool"
      
        spec {
          worker_class   = "node-pool"
          replicas       = 3
          overrides      = jsonencode(local.tkgs_cluster_variables)
          failure_domain = "${var.location}-zone-c"
      
          os_image {
            name    = "${var.os_image}"
            version = "3"
            arch    = "amd64"
          }
        }
      }   

      network {
        pod_cidr_blocks = [
          "172.20.0.0/16",
        ]
        service_cidr_blocks = [
          "10.96.0.0/16",
        ]
        service_domain = "cluster.local"
      }
    }
  }

  timeout_policy {
    timeout             = 60
    wait_for_kubeconfig = true
    fail_on_timeout     = true
  }
}

Reproduction steps

  1. Adjust spec.cluster_group_name and apply change.

Expected behavior

Cluster group to update

Additional context

No response

EKS Cluster Ready Wait Timeout should default to 30m, but is actually 3m

Describe the bug

Version 1.1.7

Documentation on the Terraform Registry says that ready_wait_timeout has a default value of 30m; however, that is not what happens. The actual wait is much shorter.

I have found at https://github.com/vmware/terraform-provider-tanzu-mission-control/blob/main/internal/resources/ekscluster/resource_ekscluster.go#L34 that the code sets the default to 3m.
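
Until the default is corrected, setting the timeout explicitly should avoid the early return. A minimal sketch, assuming ready_wait_timeout is set at the resource's top level as shown in the linked template (all other configuration omitted):

resource "tanzu-mission-control_ekscluster" "example" {
  name = "example-eks" # placeholder; other required arguments omitted for brevity

  ready_wait_timeout = "30m" # assumed placement: set explicitly instead of relying on the documented default

  # spec { ... }
}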

Reproduction steps

  1. Create an eks cluster. After about 3 minutes, it will report back successful.
  2. Go to TMC UI and see the cluster is still creating.

Expected behavior

The terraform provider should wait up to 30m for the eks cluster to be ready before reporting back status.

Additional context

https://github.com/vmware/terraform-provider-tanzu-mission-control/blob/main/resource_templates/ekscluster.tf has indicated in comments that 30m is the default.

3m is not a realistic time for an EKS cluster to be created. In my experience, a small cluster with 3 nodes takes about 12 minutes.

Full support for Tanzu Packages

Is your feature request related to a problem? Please describe.

Right now, when installing a Tanzu Package, inline_values can only be specified as strings or floats:
https://registry.terraform.io/providers/vmware/tanzu-mission-control/latest/docs/resources/package_install

However, most Tanzu Packages have structured parameters rather than flat strings or floats, for example:

envoy:
 service:
   type: LoadBalancer
   annotations: {}
   externalTrafficPolicy: Cluster
   disableWait: false
 hostPorts:
   enable: true
   http: 80
   https: 443
 hostNetwork: false
 terminationGracePeriodSeconds: 300
 logLevel: info
certificates:
 duration: 8760h
 renewBefore: 360h

Therefore, most Tanzu Packages cannot be deployed with the TMC Provider.

Describe the solution you'd like

Since package values are written in YAML, YAML support is desirable. If that is difficult, JSON should be supported, for example:

  inline_values = jsonencode({
	"certificates": {
		"duration": "8760h",
		"renewBefore": "360h"
	},
	"contour": {
		"configFileContents": {},
		"logLevel": "info",
		"pspNames": "vmware-system-restricted",
		"replicas": 2,
		"useProxyProtocol": false
	},
	"envoy": {
		"hostNetwork": false,
		"hostPorts": {
			"enable": true,
			"http": 80,
			"https": 443
		},
		"logLevel": "info",
		"service": {
			"annotations": {},
			"disableWait": false,
			"externalTrafficPolicy": "Cluster",
			"type": "LoadBalancer"
		},
		"terminationGracePeriodSeconds": 300
	},
	"infrastructure_provider": "vsphere",
	"namespace": "tanzu-system-ingress"
  })
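
From the configuration side, the requested support could be as simple as accepting a decoded YAML document. This is illustrative only: the file name is a placeholder, and the provider does not accept structured values today.

  inline_values = yamldecode(file("${path.module}/contour-values.yaml")) # hypothetical usage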

Describe alternatives you've considered

No response

Additional context

No response

Adding EKS Nodepool reports error, but really succeeds

Describe the bug

I provisioned an EKS cluster using TMC Terraform Provider 1.1.8. I then went and added a second nodepool to the cluster and applied the changes. After about a minute, I got this error...

╷
│ Error: Unable to update Tanzu Mission Control EKS cluster entry, name : eks-1: Unable to update Tanzu Mission Control EKS cluster entry, name : eks-1: PUT request failed with status : 400 Bad Request, response: {"error":"nothing to update","code":3,"message":"nothing to update"}
│ 
│   with tanzu-mission-control_ekscluster.eks_1,
│   on main.tf line 50, in resource "tanzu-mission-control_ekscluster" "eks_1":
│   50: resource "tanzu-mission-control_ekscluster" "eks_1" {
│ 

However, after that, I checked TMC and the node pool had been added successfully. I then ran terraform apply again, and it showed no changes; everything was up to date (see the attached adding-node-pool.log).

After my first failure, I wanted to see if I could reproduce it, so I added a third node pool. The same series of events happened: terraform apply failed with the same error message, and then everything worked. The complete command output is attached.

Reproduction steps

Captured above.

Expected behavior

Terraform apply to show success.

Additional context

No response

panic: runtime error: invalid memory address or nil pointer dereference

Describe the bug

While creating tanzu-mission-control_tanzu_kubernetes_cluster resources, Terraform crashed after ~8 minutes with the following error. Even though Terraform crashed, the clusters were created in TMC.

│ Error: Plugin did not respond
│ 
│   with tanzu-mission-control_tanzu_kubernetes_cluster.tap_build,
│   on clusters-tap-build.tf line 18, in resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tap_build":
│   18: resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tap_build" {
│ 
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain more details.
╵
╷
│ Error: Plugin did not respond
│ 
│   with tanzu-mission-control_tanzu_kubernetes_cluster.tap_iterate,
│   on clusters-tap-iterate.tf line 18, in resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tap_iterate":
│   18: resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tap_iterate" {
│ 
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain more details.
╵
╷
│ Error: Plugin did not respond
│ 
│   with tanzu-mission-control_tanzu_kubernetes_cluster.tap_run,
│   on clusters-tap-run.tf line 18, in resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tap_run":
│   18: resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tap_run" {
│ 
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain more details.
╵
╷
│ Error: Plugin did not respond
│ 
│   with tanzu-mission-control_tanzu_kubernetes_cluster.tap_view,
│   on clusters-tap-view.tf line 18, in resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tap_view":
│   18: resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tap_view" {
│ 
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain more details.
╵
Releasing state lock. This may take a few moments...

Stack trace from the terraform-provider-tanzu-mission-control_v1.4.3 plugin:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x0 pc=0x105994c80]

goroutine 154 [running]:
github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster.readFullClusterResourceWait({0x1060b7ac8, 0x140006c9380}, 0x1059c40b3?, 0xf?, {0x140001c51e0, 0x1, 0x0?}, 0x1)
        github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster/helper.go:80 +0xd0
github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster.readResourceWait({0x1060b7b00, 0x140006baa50}, 0x106ee6ae0?, 0x14000a3ebc0?, {0x140001c51e0, 0x1, 0x4}, 0x62?)
        github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster/helper.go:45 +0x15c
github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster.resourceTanzuKubernetesClusterRead({0x1060b7b00, 0x140006baa50}, 0x140004bc380, {0x105f9f8e0?, 0x1400038f9e0})
        github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster/resource_tanzu_kuberenetes_cluster.go:125 +0x2d8
github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster.resourceTanzuKubernetesClusterCreate({0x1060b7b00, 0x1400097a0f0}, 0x0?, {0x105f9f8e0?, 0x1400038f9e0?})
        github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/tanzukubernetescluster/resource_tanzu_kuberenetes_cluster.go:103 +0x5d8
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0x1060b7b00?, {0x1060b7b00?, 0x1400097a0f0?}, 0xd?, {0x105f9f8e0?, 0x1400038f9e0?})
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:702 +0x64
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0x14000792e00, {0x1060b7b00, 0x1400097a0f0}, 0x1400089a4e0, 0x14000900080, {0x105f9f8e0, 0x1400038f9e0})
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:837 +0x86c
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0x1400078c528, {0x1060b7a58?, 0x14000852a00?}, 0x140004c1a90)
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:1021 +0xb70
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0x140002e9900, {0x1060b7b00?, 0x14000829290?}, 0x140002dd6c0)
        github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:812 +0x384
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler({0x105fb9bc0?, 0x140002e9900}, {0x1060b7b00, 0x14000829290}, 0x140002dd650, 0x0)
        github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:385 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0x140002c21e0, {0x1060be998, 0x14000583040}, 0x1400090e5a0, 0x1400078f1a0, 0x106e71e80, 0x0)
        google.golang.org/[email protected]/server.go:1335 +0xc64
google.golang.org/grpc.(*Server).handleStream(0x140002c21e0, {0x1060be998, 0x14000583040}, 0x1400090e5a0, 0x0)
        google.golang.org/[email protected]/server.go:1712 +0x82c
google.golang.org/grpc.(*Server).serveStreams.func1.1()
        google.golang.org/[email protected]/server.go:947 +0xb4
created by google.golang.org/grpc.(*Server).serveStreams.func1
        google.golang.org/[email protected]/server.go:958 +0x174

Error: The terraform-provider-tanzu-mission-control_v1.4.3 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

Reproduction steps

Unknown

I applied the Terraform module several times in clean environments and got this error only once.
The source code is available here: https://github.com/giovannibaratta/vmware-tanzu-training/tree/main/terraform/stages/50-tmc

Expected behavior

The plugin should not crash

Additional context

  • I am using TMC self managed v1.1
  • MacBook M1

tanzu-mission-control_git_repository resource not working for Cluster

Describe the bug

The tanzu-mission-control_git_repository resource works at the cluster group scope, but when the scope is changed to a cluster it does not.

Reproduction steps

  1. Have a cluster already attached to TMC
  2. Use the tanzu-mission-control_git_repository resource to create a terraform TF file
  3. For the scope, use cluster. Add the cluster name, provisioner name, and management cluster name (see the sketch after these steps).
  4. Apply the template - Terraform Apply
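
A sketch of the cluster-scoped configuration described in the steps. The scope block mirrors other cluster-scoped resources in this provider; the namespace_name and spec attribute names are assumptions, so check the resource documentation:

resource "tanzu-mission-control_git_repository" "example" {
  name           = "example-repo"                        # placeholder
  namespace_name = "tanzu-continuousdelivery-resources"  # assumption

  scope {
    cluster {
      name                    = "my-cluster"        # placeholder
      provisioner_name        = "attached"          # placeholder
      management_cluster_name = "attached"          # placeholder
    }
  }

  spec {
    url = "https://github.com/example/repo" # placeholder
    ref {
      branch = "main" # assumption
    }
  }
}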

Expected behavior

The expectation is that the git repository would be added to the cluster Add-ons.

Additional context

No response

Support proxy

Is your feature request related to a problem? Please describe.

The vmware/tanzu-mission-control provider can't be used on machines behind proxies, even when a proxy configuration is defined in environment variables. Firewall logs clearly show direct access attempts.

As an example, the azurerm provider works fine with the same proxy configuration.

Describe the solution you'd like

Proxy support.

Describe alternatives you've considered

No response

Additional context

No response

tanzu-mission-control_tanzu_kubernetes_cluster fails with error "Couldn't read TKG cluster"

Describe the bug

While creating a tanzu-mission-control_tanzu_kubernetes_cluster in a clean environment, the request failed after several minutes with the error Couldn't read TKG cluster.

│ Error: Couldn't read TKG cluster.
│ Management Cluster Name: h2o-2-23532, Provisioner: tap, Cluster Name: tap-view: get request(v1alpha1/clusters/tap-view/kubeconfig?cli=TANZU_CLI&fullName.managementClusterName=h2o-2-23532&fullName.provisionerName=tap) failed with status : 401 Unauthorized, response: {"error":"could not extract auth context","code":16,"message":"could not extract auth context"}

│   with tanzu-mission-control_tanzu_kubernetes_cluster.tap_view,
│   on clusters-tap-view.tf line 18, in resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tap_view":
│   18: resource "tanzu-mission-control_tanzu_kubernetes_cluster" "tap_view" {

Even though an error was returned, the cluster was available in TMC, leaving an unclean state. A second apply fails because the cluster already exists, so a manual action is required (import it into the state or delete the cluster in TMC).
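
For reference, the manual import mentioned above would look something like this, using the mgmt-cluster/provisioner/cluster import ID format described earlier in this document:

terraform import tanzu-mission-control_tanzu_kubernetes_cluster.tap_view h2o-2-23532/tap/tap-view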

Reproduction steps

Unknown

The source code is available here: https://github.com/giovannibaratta/vmware-tanzu-training/tree/main/terraform/stages/50-tmc

Expected behavior

The resource should be created successfully, or a second apply should not result in a conflict.

Additional context

  • I am using TMC self managed v1.1

Provider crashing on access to the Tanzu Mission Control

Describe the bug

I have had issues both creating clusters and reading clusters using this provider. I constantly get a segmentation fault on use. In this specific case I am just trying to read the state of an existing cluster that I created with Tanzu Mission Control on AWS:

provider "tanzu-mission-control" {
endpoint = var.tanzu-endpoint # optionally use TMC_ENDPOINT env var
vmw_cloud_api_token = var.vmw-cloud-api-token # optionally use VMW_CLOUD_API_TOKEN env var

if you are using dev or different csp endpoint, change the default value below

for production environments the vmw_cloud_endpoint is console.cloud.vmware.com

vmw_cloud_endpoint = "console.cloud.vmware.com" or optionally use VMW_CLOUD_ENDPOINT env var

}

Here, the endpoint is my URL per the README and the token is my API token.

Error: Request cancelled

│ with data.tanzu-mission-control_cluster.read_tkg_aws_cluster,
│ on main.tf line 36, in data "tanzu-mission-control_cluster" "read_tkg_aws_cluster":
│ 36: data "tanzu-mission-control_cluster" "read_tkg_aws_cluster" {

│ The plugin.(*GRPCProvider).ReadDataSource request was cancelled.

Stack trace from the terraform-provider-tanzu-mission-control_v1.1.2 plugin:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x14aede7]

goroutine 100 [running]:
github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/cluster.dataSourceTMCClusterRead({0x1a66d08, 0xc0006a3260}, 0xc0002e4d80, {0x16d9820?, 0xc000233d40?})
github.com/vmware/terraform-provider-tanzu-mission-control/internal/resources/cluster/data_source_cluster.go:134 +0xd67
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).read(0xc000231ce0, {0x1a66d40, 0xc00057c8d0}, 0xd?, {0x16d9820, 0xc000233d40})
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:359 +0x12e
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).ReadDataApply(0xc000231ce0, {0x1a66d40, 0xc00057c8d0}, 0xc0002e4d00, {0x16d9820, 0xc000233d40})
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:578 +0x145
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ReadDataSource(0xc00000c588, {0x1a66c98?, 0xc0004484c0?}, 0xc0008218e0)
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:1179 +0x357
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ReadDataSource(0xc00045e320, {0x1a66d40?, 0xc00057c090?}, 0xc00045b5e0)
github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:657 +0x409
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ReadDataSource_Handler({0x1735960?, 0xc00045e320}, {0x1a66d40, 0xc00057c090}, 0xc0006a2e40, 0x0)
github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:421 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0xc00021a380, {0x1a6a400, 0xc000103380}, 0xc000192a20, 0xc000286960, 0x25af1d0, 0x0)
google.golang.org/[email protected]/server.go:1282 +0xccf
google.golang.org/grpc.(*Server).handleStream(0xc00021a380, {0x1a6a400, 0xc000103380}, 0xc000192a20, 0x0)
google.golang.org/[email protected]/server.go:1619 +0xa1b
google.golang.org/grpc.(*Server).serveStreams.func1.2()
google.golang.org/[email protected]/server.go:921 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
google.golang.org/[email protected]/server.go:919 +0x28a

Error: The terraform-provider-tanzu-mission-control_v1.1.2 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

Reproduction steps

  1. terraform init -upgrade
  2. terraform apply

It will crash before the apply asks you for confirmation to apply the changes.

Expected behavior

It should show the Terraform plan and then read the cluster state into my Terraform data block.

Additional context

No response

TMC Rate Limiting issue

Describe the bug

When applying large configurations, rate limiting is often encountered.

Reproduction steps

  1. create a terraform module with around 50 resources
  2. try to apply
  3. try to delete

Expected behavior

Retry logic should be added to the provider; a provider-level field to limit concurrency may help as well.
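
As a partial mitigation until the provider gains retry or concurrency controls, Terraform's built-in parallelism flag can reduce the number of concurrent API calls, for example:

terraform apply -parallelism=2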

Additional context

No response

`ready_wait_timeout` is not respected for EKS anymore.

Describe the bug

We have received user complaints that ready_wait_timeout is no longer respected for EKS cluster resources.

My primary suspect is 6dc94a4#diff-051c56ef0c686384f313ea8c9a884824f0f771460874096100e8d33e9b2ed00dR84-R86. This was added to fix the diff issue when importing an existing EKS cluster.

If this does turn out to be the culprit, we should remove it and just set the value of ready_wait_timeout back in the resource returned by import and data read (maybe).

Reproduction steps

  1. Write an EKS cluster resource TF file with ready_wait_timeout as 20 mins.
  2. Apply the resource file.
  3. The cluster creation finishes in about 3m (the default value) even though ready_wait_timeout is set to 20 minutes.

Expected behavior

The Terraform creation step should wait for 20 minutes, or until the cluster is created, before finishing.

Additional context

No response

path_to_inline_values not actually supported in tanzu-package-installer

Describe the bug

I'm using:

Terraform v1.8.4
on linux_amd64
+ provider registry.terraform.io/vmware/tanzu-mission-control v1.4.4

I have a resource like this:

resource "tanzu-mission-control_package_install" "package_install" {
  name      = var.package_name
  namespace = var.namespace

  scope {
    cluster {
      name                    = var.cluster_name
      provisioner_name        = var.provisioner_name
      management_cluster_name = var.management_cluster_name
    }
  }

  spec {
    package_ref {
      package_metadata_name = var.package_name

      version_selection {
        constraints = var.package_version
      }
    }
    path_to_inline_values = var.inline_values
  }
}

However terraform plan comes back with:

└─[$] <> terraform plan            
╷
│ Error: Unsupported argument
│ 
│   on tkg-packages/main.tf line 21, in resource "tanzu-mission-control_package_install" "package_install":
│   21:     path_to_inline_values = var.inline_values
│ 
│ An argument named "path_to_inline_values" is not expected here.

The only docs I've found are here:

https://github.com/vmware/terraform-provider-tanzu-mission-control/blob/d9cc1f4e9d7abed257b2a0d97ed41de16c83580e/docs/resources/package_install.md

which clearly seems to state that it should be there, and I'm on the latest provider.

Reproduction steps

  1. Attempt to provision via terraform a package resource
  2. Follow docs on using path_to_inline_values
  3. Error
    ...

Expected behavior

terraform plan should recognize this as a legitimate argument.

Additional context

No response
