
# terraform-azurerm-aks

## Deploys a Kubernetes cluster on AKS with monitoring support through Azure Log Analytics

This Terraform module deploys a Kubernetes cluster on Azure using AKS (Azure Kubernetes Service) and adds support for monitoring with Log Analytics.

-> **NOTE:** If you have not assigned `client_id` or `client_secret`, a `SystemAssigned` identity will be created.
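
For illustration, here is a minimal sketch of a module call that relies on this behaviour. The `{module_source}` placeholder follows the convention used later in this README, and the input names `prefix` and `resource_group_name` are assumptions for the sketch, so check the Inputs table for the exact required inputs of your module version:

```hcl
module "aks" {
  source = "{module_source}"

  # Assumed required inputs -- verify against the Inputs table below.
  prefix              = "demo"
  resource_group_name = "my-aks-rg" # must be an existing resource group

  # No client_id / client_secret supplied, so the cluster falls back to a
  # SystemAssigned managed identity (identity_type defaults to "SystemAssigned").
}
```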

## Notice on breaking changes

Please be aware that a major version update (e.g., from 6.8.0 to 7.0.0) contains breaking changes that may impact your infrastructure. It is crucial to review these changes carefully before proceeding with the upgrade.

In most cases, you will need to adjust your Terraform code to accommodate the changes introduced in the new major version. We strongly recommend reviewing the changelog and migration guide to understand the modifications and ensure a smooth transition.

To help you in this process, we have provided detailed documentation on the breaking changes, new features, and any deprecated functionalities. Please take the time to read through these resources to avoid any potential issues or disruptions to your infrastructure.

Remember, upgrading to a major version with breaking changes should be done carefully and thoroughly tested in your environment. If you have any questions or concerns, please don't hesitate to reach out to our support team for assistance.

## Usage in Terraform 1.2.0

Please see the folders in `examples`.

The module provides some outputs that may be used to configure a Kubernetes provider after deploying an AKS cluster:

provider "kubernetes" {
  host                   = module.aks.host
  client_certificate     = base64decode(module.aks.client_certificate)
  client_key             = base64decode(module.aks.client_key)
  cluster_ca_certificate = base64decode(module.aks.cluster_ca_certificate)
}
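
The same outputs can also feed other tools that accept client-certificate credentials. As a sketch, assuming you use the Helm provider (which this module does not require; the nested `kubernetes` block shown is the 2.x provider syntax):

```hcl
provider "helm" {
  kubernetes {
    host                   = module.aks.host
    client_certificate     = base64decode(module.aks.client_certificate)
    client_key             = base64decode(module.aks.client_key)
    cluster_ca_certificate = base64decode(module.aks.cluster_ca_certificate)
  }
}
```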

There are some examples in the examples folder. You can execute the `terraform apply` command in an example's sub-folder to try the module. These examples are tested against every PR with the E2E Test.

## Enable or disable tracing tags

We're using BridgeCrew Yor and yorbox to help manage tags consistently across infrastructure as code (IaC) frameworks. In this module you might see tags like:

resource "azurerm_resource_group" "rg" {
  location = "eastus"
  name     = random_pet.name
  tags = merge(var.tags, (/*<box>*/ (var.tracing_tags_enabled ? { for k, v in /*</box>*/ {
    avm_git_commit           = "3077cc6d0b70e29b6e106b3ab98cee6740c916f6"
    avm_git_file             = "main.tf"
    avm_git_last_modified_at = "2023-05-05 08:57:54"
    avm_git_org              = "lonegunmanb"
    avm_git_repo             = "terraform-yor-tag-test-module"
    avm_yor_trace            = "a0425718-c57d-401c-a7d5-f3d88b2551a4"
  } /*<box>*/ : replace(k, "avm_", var.tracing_tags_prefix) => v } : {}) /*</box>*/))
}

To enable tracing tags, set the variable to true:

module "example" {
source               = "{module_source}"
...
tracing_tags_enabled = true
}

`tracing_tags_enabled` defaults to `false`.

To customize the prefix for your tracing tags, set the tracing_tags_prefix variable value in your Terraform configuration:

module "example" {
source              = "{module_source}"
...
tracing_tags_prefix = "custom_prefix_"
}

The actual applied tags would be:

```hcl
{
  custom_prefix_git_commit           = "3077cc6d0b70e29b6e106b3ab98cee6740c916f6"
  custom_prefix_git_file             = "main.tf"
  custom_prefix_git_last_modified_at = "2023-05-05 08:57:54"
  custom_prefix_git_org              = "lonegunmanb"
  custom_prefix_git_repo             = "terraform-yor-tag-test-module"
  custom_prefix_yor_trace            = "a0425718-c57d-401c-a7d5-f3d88b2551a4"
}
```

Pre-Commit & Pr-Check & Test

Configurations

We assume that you have set up your service principal's credentials in your environment variables, like below:

```shell
export ARM_SUBSCRIPTION_ID="<azure_subscription_id>"
export ARM_TENANT_ID="<azure_subscription_tenant_id>"
export ARM_CLIENT_ID="<service_principal_appid>"
export ARM_CLIENT_SECRET="<service_principal_password>"
```

On Windows PowerShell:

```shell
$env:ARM_SUBSCRIPTION_ID="<azure_subscription_id>"
$env:ARM_TENANT_ID="<azure_subscription_tenant_id>"
$env:ARM_CLIENT_ID="<service_principal_appid>"
$env:ARM_CLIENT_SECRET="<service_principal_password>"
```

We provide a Docker image to run the pre-commit checks and tests for you: `mcr.microsoft.com/azterraform:latest`.

To run the pre-commit task, run the following command:

```shell
$ docker run --rm -v $(pwd):/src -w /src mcr.microsoft.com/azterraform:latest make pre-commit
```

On Windows PowerShell:

```shell
$ docker run --rm -v ${pwd}:/src -w /src mcr.microsoft.com/azterraform:latest make pre-commit
```

In the pre-commit task, we will:

1. Run `terraform fmt -recursive` on your Terraform code.
2. Run `terrafmt fmt -f` on markdown files and Go code files to ensure that the Terraform code embedded in these files is well formatted.
3. Run `go mod tidy` and `go mod vendor` in the test folder to ensure that all the dependencies have been synced.
4. Run `gofmt` on all Go code files.
5. Run `gofumpt` on all Go code files.
6. Run `terraform-docs` on the `README.md` file, then run `markdown-table-formatter` to format the markdown tables in `README.md`.

Then we can run the pr-check task to check whether our code meets the pipeline's requirements (we strongly recommend running the following command before you commit):

```shell
$ docker run --rm -v $(pwd):/src -w /src mcr.microsoft.com/azterraform:latest make pr-check
```

On Windows PowerShell:

```shell
$ docker run --rm -v ${pwd}:/src -w /src mcr.microsoft.com/azterraform:latest make pr-check
```

To run the e2e-test, run the following command:

```text
docker run --rm -v $(pwd):/src -w /src -e ARM_SUBSCRIPTION_ID -e ARM_TENANT_ID -e ARM_CLIENT_ID -e ARM_CLIENT_SECRET mcr.microsoft.com/azterraform:latest make e2e-test
```

On Windows PowerShell:

```text
docker run --rm -v ${pwd}:/src -w /src -e ARM_SUBSCRIPTION_ID -e ARM_TENANT_ID -e ARM_CLIENT_ID -e ARM_CLIENT_SECRET mcr.microsoft.com/azterraform:latest make e2e-test
```

To follow the [**Ensure AKS uses disk encryption set**](https://docs.bridgecrew.io/docs/ensure-that-aks-uses-disk-encryption-set) policy we've used `azurerm_key_vault` in the example code, and to follow [**Key vault does not allow firewall rules settings**](https://docs.bridgecrew.io/docs/ensure-that-key-vault-allows-firewall-rules-settings) we've limited the IP CIDR in its `network_acls`. By default we'll use the IP returned by the `https://api.ipify.org?format=json` API as your public IP, but in case you need to use another CIDR, you can set an environment variable like below:

```text
docker run --rm -v $(pwd):/src -w /src -e TF_VAR_key_vault_firewall_bypass_ip_cidr="<your_cidr>" -e ARM_SUBSCRIPTION_ID -e ARM_TENANT_ID -e ARM_CLIENT_ID -e ARM_CLIENT_SECRET mcr.microsoft.com/azterraform:latest make e2e-test
```

On Windows PowerShell:

```text
docker run --rm -v ${pwd}:/src -w /src -e TF_VAR_key_vault_firewall_bypass_ip_cidr="<your_cidr>" -e ARM_SUBSCRIPTION_ID -e ARM_TENANT_ID -e ARM_CLIENT_ID -e ARM_CLIENT_SECRET mcr.microsoft.com/azterraform:latest make e2e-test
```

#### Prerequisites

- [Docker](https://www.docker.com/community-edition#/download)

## Authors

Originally created by [Damien Caro](http://github.com/dcaro) and [Malte Lantin](http://github.com/n01d)

## License

[MIT](LICENSE)

# Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

## Module Spec

The following sections are generated by [terraform-docs](https://github.com/terraform-docs/terraform-docs) and [markdown-table-formatter](https://github.com/nvuillam/markdown-table-formatter), please **DO NOT MODIFY THEM MANUALLY!**

<!-- BEGIN_TF_DOCS -->
## Requirements

| Name | Version |
|------|---------|
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.3 |
| <a name="requirement_azapi"></a> [azapi](#requirement\_azapi) | >= 1.4.0, < 2.0 |
| <a name="requirement_azurerm"></a> [azurerm](#requirement\_azurerm) | >= 3.106.1, < 4.0 |
| <a name="requirement_null"></a> [null](#requirement\_null) | >= 3.0 |
| <a name="requirement_tls"></a> [tls](#requirement\_tls) | >= 3.1 |

## Providers

| Name | Version |
|------|---------|
| <a name="provider_azapi"></a> [azapi](#provider\_azapi) | >= 1.4.0, < 2.0 |
| <a name="provider_azurerm"></a> [azurerm](#provider\_azurerm) | >= 3.106.1, < 4.0 |
| <a name="provider_null"></a> [null](#provider\_null) | >= 3.0 |
| <a name="provider_tls"></a> [tls](#provider\_tls) | >= 3.1 |

## Modules

No modules.

## Resources

| Name | Type |
|------|------|
| [azapi_update_resource.aks_cluster_http_proxy_config_no_proxy](https://registry.terraform.io/providers/Azure/azapi/latest/docs/resources/update_resource) | resource |
| [azapi_update_resource.aks_cluster_post_create](https://registry.terraform.io/providers/Azure/azapi/latest/docs/resources/update_resource) | resource |
| [azurerm_kubernetes_cluster.main](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster) | resource |
| [azurerm_kubernetes_cluster_node_pool.node_pool_create_after_destroy](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster_node_pool) | resource |
| [azurerm_kubernetes_cluster_node_pool.node_pool_create_before_destroy](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster_node_pool) | resource |
| [azurerm_log_analytics_solution.main](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_solution) | resource |
| [azurerm_log_analytics_workspace.main](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_workspace) | resource |
| [azurerm_role_assignment.acr](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/role_assignment) | resource |
| [azurerm_role_assignment.application_gateway_byo_vnet_network_contributor](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/role_assignment) | resource |
| [azurerm_role_assignment.application_gateway_existing_vnet_network_contributor](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/role_assignment) | resource |
| [azurerm_role_assignment.application_gateway_resource_group_reader](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/role_assignment) | resource |
| [azurerm_role_assignment.existing_application_gateway_contributor](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/role_assignment) | resource |
| [azurerm_role_assignment.network_contributor](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/role_assignment) | resource |
| [azurerm_role_assignment.network_contributor_on_subnet](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/role_assignment) | resource |
| [null_resource.http_proxy_config_no_proxy_keeper](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
| [null_resource.kubernetes_cluster_name_keeper](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
| [null_resource.kubernetes_version_keeper](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
| [null_resource.pool_name_keeper](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
| [tls_private_key.ssh](https://registry.terraform.io/providers/hashicorp/tls/latest/docs/resources/private_key) | resource |
| [azurerm_client_config.this](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/client_config) | data source |
| [azurerm_log_analytics_workspace.main](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/log_analytics_workspace) | data source |
| [azurerm_resource_group.aks_rg](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/resource_group) | data source |
| [azurerm_resource_group.ingress_gw](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/resource_group) | data source |
| [azurerm_resource_group.main](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/resource_group) | data source |
| [azurerm_user_assigned_identity.cluster_identity](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/user_assigned_identity) | data source |
| [azurerm_virtual_network.application_gateway_vnet](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/virtual_network) | data source |

## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| <a name="input_aci_connector_linux_enabled"></a> [aci\_connector\_linux\_enabled](#input\_aci\_connector\_linux\_enabled) | Enable Virtual Node pool | `bool` | `false` | no |
| <a name="input_aci_connector_linux_subnet_name"></a> [aci\_connector\_linux\_subnet\_name](#input\_aci\_connector\_linux\_subnet\_name) | (Optional) aci\_connector\_linux subnet name | `string` | `null` | no |
| <a name="input_admin_username"></a> [admin\_username](#input\_admin\_username) | The username of the local administrator to be created on the Kubernetes cluster. Set this variable to `null` to turn off the cluster's `linux_profile`. Changing this forces a new resource to be created. | `string` | `null` | no |
| <a name="input_agents_availability_zones"></a> [agents\_availability\_zones](#input\_agents\_availability\_zones) | (Optional) A list of Availability Zones across which the Node Pool should be spread. Changing this forces a new resource to be created. | `list(string)` | `null` | no |
| <a name="input_agents_count"></a> [agents\_count](#input\_agents\_count) | The number of Agents that should exist in the Agent Pool. Please set `agents_count` `null` while `enable_auto_scaling` is `true` to avoid possible `agents_count` changes. | `number` | `2` | no |
| <a name="input_agents_labels"></a> [agents\_labels](#input\_agents\_labels) | (Optional) A map of Kubernetes labels which should be applied to nodes in the Default Node Pool. Changing this forces a new resource to be created. | `map(string)` | `{}` | no |
| <a name="input_agents_max_count"></a> [agents\_max\_count](#input\_agents\_max\_count) | Maximum number of nodes in a pool | `number` | `null` | no |
| <a name="input_agents_max_pods"></a> [agents\_max\_pods](#input\_agents\_max\_pods) | (Optional) The maximum number of pods that can run on each agent. Changing this forces a new resource to be created. | `number` | `null` | no |
| <a name="input_agents_min_count"></a> [agents\_min\_count](#input\_agents\_min\_count) | Minimum number of nodes in a pool | `number` | `null` | no |
| <a name="input_agents_pool_drain_timeout_in_minutes"></a> [agents\_pool\_drain\_timeout\_in\_minutes](#input\_agents\_pool\_drain\_timeout\_in\_minutes) | (Optional) The amount of time in minutes to wait on eviction of pods and graceful termination per node. This eviction wait time honors waiting on pod disruption budgets. If this time is exceeded, the upgrade fails. Unsetting this after configuring it will force a new resource to be created. | `number` | `30` | no |
| <a name="input_agents_pool_kubelet_configs"></a> [agents\_pool\_kubelet\_configs](#input\_agents\_pool\_kubelet\_configs) | list(object({<br>    cpu\_manager\_policy        = (Optional) Specifies the CPU Manager policy to use. Possible values are `none` and `static`, Changing this forces a new resource to be created.<br>    cpu\_cfs\_quota\_enabled     = (Optional) Is CPU CFS quota enforcement for containers enabled? Changing this forces a new resource to be created.<br>    cpu\_cfs\_quota\_period      = (Optional) Specifies the CPU CFS quota period value. Changing this forces a new resource to be created.<br>    image\_gc\_high\_threshold   = (Optional) Specifies the percent of disk usage above which image garbage collection is always run. Must be between `0` and `100`. Changing this forces a new resource to be created.<br>    image\_gc\_low\_threshold    = (Optional) Specifies the percent of disk usage lower than which image garbage collection is never run. Must be between `0` and `100`. Changing this forces a new resource to be created.<br>    topology\_manager\_policy   = (Optional) Specifies the Topology Manager policy to use. Possible values are `none`, `best-effort`, `restricted` or `single-numa-node`. Changing this forces a new resource to be created.<br>    allowed\_unsafe\_sysctls    = (Optional) Specifies the allow list of unsafe sysctls command or patterns (ending in `*`). Changing this forces a new resource to be created.<br>    container\_log\_max\_size\_mb = (Optional) Specifies the maximum size (e.g. 10MB) of container log file before it is rotated. Changing this forces a new resource to be created.<br>    container\_log\_max\_line    = (Optional) Specifies the maximum number of container log files that can be present for a container. must be at least 2. Changing this forces a new resource to be created.<br>    pod\_max\_pid               = (Optional) Specifies the maximum number of processes per pod. Changing this forces a new resource to be created.<br>})) | <pre>list(object({<br>    cpu_manager_policy        = optional(string)<br>    cpu_cfs_quota_enabled     = optional(bool, true)<br>    cpu_cfs_quota_period      = optional(string)<br>    image_gc_high_threshold   = optional(number)<br>    image_gc_low_threshold    = optional(number)<br>    topology_manager_policy   = optional(string)<br>    allowed_unsafe_sysctls    = optional(set(string))<br>    container_log_max_size_mb = optional(number)<br>    container_log_max_line    = optional(number)<br>    pod_max_pid               = optional(number)<br>  }))</pre> | `[]` | no |
| <a name="input_agents_pool_linux_os_configs"></a> [agents\_pool\_linux\_os\_configs](#input\_agents\_pool\_linux\_os\_configs) | list(object({<br>  sysctl\_configs = optional(list(object({<br>    fs\_aio\_max\_nr                      = (Optional) The sysctl setting fs.aio-max-nr. Must be between `65536` and `6553500`. Changing this forces a new resource to be created.<br>    fs\_file\_max                        = (Optional) The sysctl setting fs.file-max. Must be between `8192` and `12000500`. Changing this forces a new resource to be created.<br>    fs\_inotify\_max\_user\_watches        = (Optional) The sysctl setting fs.inotify.max\_user\_watches. Must be between `781250` and `2097152`. Changing this forces a new resource to be created.<br>    fs\_nr\_open                         = (Optional) The sysctl setting fs.nr\_open. Must be between `8192` and `20000500`. Changing this forces a new resource to be created.<br>    kernel\_threads\_max                 = (Optional) The sysctl setting kernel.threads-max. Must be between `20` and `513785`. Changing this forces a new resource to be created.<br>    net\_core\_netdev\_max\_backlog        = (Optional) The sysctl setting net.core.netdev\_max\_backlog. Must be between `1000` and `3240000`. Changing this forces a new resource to be created.<br>    net\_core\_optmem\_max                = (Optional) The sysctl setting net.core.optmem\_max. Must be between `20480` and `4194304`. Changing this forces a new resource to be created.<br>    net\_core\_rmem\_default              = (Optional) The sysctl setting net.core.rmem\_default. Must be between `212992` and `134217728`. Changing this forces a new resource to be created.<br>    net\_core\_rmem\_max                  = (Optional) The sysctl setting net.core.rmem\_max. Must be between `212992` and `134217728`. Changing this forces a new resource to be created.<br>    net\_core\_somaxconn                 = (Optional) The sysctl setting net.core.somaxconn. Must be between `4096` and `3240000`. Changing this forces a new resource to be created.<br>    net\_core\_wmem\_default              = (Optional) The sysctl setting net.core.wmem\_default. Must be between `212992` and `134217728`. Changing this forces a new resource to be created.<br>    net\_core\_wmem\_max                  = (Optional) The sysctl setting net.core.wmem\_max. Must be between `212992` and `134217728`. Changing this forces a new resource to be created.<br>    net\_ipv4\_ip\_local\_port\_range\_min   = (Optional) The sysctl setting net.ipv4.ip\_local\_port\_range max value. Must be between `1024` and `60999`. Changing this forces a new resource to be created.<br>    net\_ipv4\_ip\_local\_port\_range\_max   = (Optional) The sysctl setting net.ipv4.ip\_local\_port\_range min value. Must be between `1024` and `60999`. Changing this forces a new resource to be created.<br>    net\_ipv4\_neigh\_default\_gc\_thresh1  = (Optional) The sysctl setting net.ipv4.neigh.default.gc\_thresh1. Must be between `128` and `80000`. Changing this forces a new resource to be created.<br>    net\_ipv4\_neigh\_default\_gc\_thresh2  = (Optional) The sysctl setting net.ipv4.neigh.default.gc\_thresh2. Must be between `512` and `90000`. Changing this forces a new resource to be created.<br>    net\_ipv4\_neigh\_default\_gc\_thresh3  = (Optional) The sysctl setting net.ipv4.neigh.default.gc\_thresh3. Must be between `1024` and `100000`. 
Changing this forces a new resource to be created.<br>    net\_ipv4\_tcp\_fin\_timeout           = (Optional) The sysctl setting net.ipv4.tcp\_fin\_timeout. Must be between `5` and `120`. Changing this forces a new resource to be created.<br>    net\_ipv4\_tcp\_keepalive\_intvl       = (Optional) The sysctl setting net.ipv4.tcp\_keepalive\_intvl. Must be between `10` and `75`. Changing this forces a new resource to be created.<br>    net\_ipv4\_tcp\_keepalive\_probes      = (Optional) The sysctl setting net.ipv4.tcp\_keepalive\_probes. Must be between `1` and `15`. Changing this forces a new resource to be created.<br>    net\_ipv4\_tcp\_keepalive\_time        = (Optional) The sysctl setting net.ipv4.tcp\_keepalive\_time. Must be between `30` and `432000`. Changing this forces a new resource to be created.<br>    net\_ipv4\_tcp\_max\_syn\_backlog       = (Optional) The sysctl setting net.ipv4.tcp\_max\_syn\_backlog. Must be between `128` and `3240000`. Changing this forces a new resource to be created.<br>    net\_ipv4\_tcp\_max\_tw\_buckets        = (Optional) The sysctl setting net.ipv4.tcp\_max\_tw\_buckets. Must be between `8000` and `1440000`. Changing this forces a new resource to be created.<br>    net\_ipv4\_tcp\_tw\_reuse              = (Optional) The sysctl setting net.ipv4.tcp\_tw\_reuse. Changing this forces a new resource to be created.<br>    net\_netfilter\_nf\_conntrack\_buckets = (Optional) The sysctl setting net.netfilter.nf\_conntrack\_buckets. Must be between `65536` and `147456`. Changing this forces a new resource to be created.<br>    net\_netfilter\_nf\_conntrack\_max     = (Optional) The sysctl setting net.netfilter.nf\_conntrack\_max. Must be between `131072` and `1048576`. Changing this forces a new resource to be created.<br>    vm\_max\_map\_count                   = (Optional) The sysctl setting vm.max\_map\_count. Must be between `65530` and `262144`. Changing this forces a new resource to be created.<br>    vm\_swappiness                      = (Optional) The sysctl setting vm.swappiness. Must be between `0` and `100`. Changing this forces a new resource to be created.<br>    vm\_vfs\_cache\_pressure              = (Optional) The sysctl setting vm.vfs\_cache\_pressure. Must be between `0` and `100`. Changing this forces a new resource to be created.<br>  })), [])<br>  transparent\_huge\_page\_enabled = (Optional) Specifies the Transparent Huge Page enabled configuration. Possible values are `always`, `madvise` and `never`. Changing this forces a new resource to be created.<br>  transparent\_huge\_page\_defrag  = (Optional) specifies the defrag configuration for Transparent Huge Page. Possible values are `always`, `defer`, `defer+madvise`, `madvise` and `never`. Changing this forces a new resource to be created.<br>  swap\_file\_size\_mb             = (Optional) Specifies the size of the swap file on each node in MB. 
Changing this forces a new resource to be created.<br>})) | <pre>list(object({<br>    sysctl_configs = optional(list(object({<br>      fs_aio_max_nr                      = optional(number)<br>      fs_file_max                        = optional(number)<br>      fs_inotify_max_user_watches        = optional(number)<br>      fs_nr_open                         = optional(number)<br>      kernel_threads_max                 = optional(number)<br>      net_core_netdev_max_backlog        = optional(number)<br>      net_core_optmem_max                = optional(number)<br>      net_core_rmem_default              = optional(number)<br>      net_core_rmem_max                  = optional(number)<br>      net_core_somaxconn                 = optional(number)<br>      net_core_wmem_default              = optional(number)<br>      net_core_wmem_max                  = optional(number)<br>      net_ipv4_ip_local_port_range_min   = optional(number)<br>      net_ipv4_ip_local_port_range_max   = optional(number)<br>      net_ipv4_neigh_default_gc_thresh1  = optional(number)<br>      net_ipv4_neigh_default_gc_thresh2  = optional(number)<br>      net_ipv4_neigh_default_gc_thresh3  = optional(number)<br>      net_ipv4_tcp_fin_timeout           = optional(number)<br>      net_ipv4_tcp_keepalive_intvl       = optional(number)<br>      net_ipv4_tcp_keepalive_probes      = optional(number)<br>      net_ipv4_tcp_keepalive_time        = optional(number)<br>      net_ipv4_tcp_max_syn_backlog       = optional(number)<br>      net_ipv4_tcp_max_tw_buckets        = optional(number)<br>      net_ipv4_tcp_tw_reuse              = optional(bool)<br>      net_netfilter_nf_conntrack_buckets = optional(number)<br>      net_netfilter_nf_conntrack_max     = optional(number)<br>      vm_max_map_count                   = optional(number)<br>      vm_swappiness                      = optional(number)<br>      vm_vfs_cache_pressure              = optional(number)<br>    })), [])<br>    transparent_huge_page_enabled = optional(string)<br>    transparent_huge_page_defrag  = optional(string)<br>    swap_file_size_mb             = optional(number)<br>  }))</pre> | `[]` | no |
| <a name="input_agents_pool_max_surge"></a> [agents\_pool\_max\_surge](#input\_agents\_pool\_max\_surge) | The maximum number or percentage of nodes which will be added to the Default Node Pool size during an upgrade. | `string` | `"10%"` | no |
| <a name="input_agents_pool_name"></a> [agents\_pool\_name](#input\_agents\_pool\_name) | The default Azure AKS agentpool (nodepool) name. | `string` | `"nodepool"` | no |
| <a name="input_agents_pool_node_soak_duration_in_minutes"></a> [agents\_pool\_node\_soak\_duration\_in\_minutes](#input\_agents\_pool\_node\_soak\_duration\_in\_minutes) | (Optional) The amount of time in minutes to wait after draining a node and before reimaging and moving on to next node. Defaults to 0. | `number` | `0` | no |
| <a name="input_agents_proximity_placement_group_id"></a> [agents\_proximity\_placement\_group\_id](#input\_agents\_proximity\_placement\_group\_id) | (Optional) The ID of the Proximity Placement Group of the default Azure AKS agentpool (nodepool). Changing this forces a new resource to be created. | `string` | `null` | no |
| <a name="input_agents_size"></a> [agents\_size](#input\_agents\_size) | The default virtual machine size for the Kubernetes agents. Changing this without specifying `var.temporary_name_for_rotation` forces a new resource to be created. | `string` | `"Standard_D2s_v3"` | no |
| <a name="input_agents_tags"></a> [agents\_tags](#input\_agents\_tags) | (Optional) A mapping of tags to assign to the Node Pool. | `map(string)` | `{}` | no |
| <a name="input_agents_taints"></a> [agents\_taints](#input\_agents\_taints) | (Optional) A list of the taints added to new nodes during node pool create and scale. Changing this forces a new resource to be created. | `list(string)` | `null` | no |
| <a name="input_agents_type"></a> [agents\_type](#input\_agents\_type) | (Optional) The type of Node Pool which should be created. Possible values are AvailabilitySet and VirtualMachineScaleSets. Defaults to VirtualMachineScaleSets. | `string` | `"VirtualMachineScaleSets"` | no |
| <a name="input_api_server_authorized_ip_ranges"></a> [api\_server\_authorized\_ip\_ranges](#input\_api\_server\_authorized\_ip\_ranges) | (Optional) The IP ranges to allow for incoming traffic to the server nodes. | `set(string)` | `null` | no |
| <a name="input_api_server_subnet_id"></a> [api\_server\_subnet\_id](#input\_api\_server\_subnet\_id) | (Optional) The ID of the Subnet where the API server endpoint is delegated to. | `string` | `null` | no |
| <a name="input_attached_acr_id_map"></a> [attached\_acr\_id\_map](#input\_attached\_acr\_id\_map) | Azure Container Registry ids that need an authentication mechanism with Azure Kubernetes Service (AKS). Map key must be static string as acr's name, the value is acr's resource id. Changing this forces some new resources to be created. | `map(string)` | `{}` | no |
| <a name="input_auto_scaler_profile_balance_similar_node_groups"></a> [auto\_scaler\_profile\_balance\_similar\_node\_groups](#input\_auto\_scaler\_profile\_balance\_similar\_node\_groups) | Detect similar node groups and balance the number of nodes between them. Defaults to `false`. | `bool` | `false` | no |
| <a name="input_auto_scaler_profile_empty_bulk_delete_max"></a> [auto\_scaler\_profile\_empty\_bulk\_delete\_max](#input\_auto\_scaler\_profile\_empty\_bulk\_delete\_max) | Maximum number of empty nodes that can be deleted at the same time. Defaults to `10`. | `number` | `10` | no |
| <a name="input_auto_scaler_profile_enabled"></a> [auto\_scaler\_profile\_enabled](#input\_auto\_scaler\_profile\_enabled) | Enable configuring the auto scaler profile | `bool` | `false` | no |
| <a name="input_auto_scaler_profile_expander"></a> [auto\_scaler\_profile\_expander](#input\_auto\_scaler\_profile\_expander) | Expander to use. Possible values are `least-waste`, `priority`, `most-pods` and `random`. Defaults to `random`. | `string` | `"random"` | no |
| <a name="input_auto_scaler_profile_max_graceful_termination_sec"></a> [auto\_scaler\_profile\_max\_graceful\_termination\_sec](#input\_auto\_scaler\_profile\_max\_graceful\_termination\_sec) | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node. Defaults to `600`. | `string` | `"600"` | no |
| <a name="input_auto_scaler_profile_max_node_provisioning_time"></a> [auto\_scaler\_profile\_max\_node\_provisioning\_time](#input\_auto\_scaler\_profile\_max\_node\_provisioning\_time) | Maximum time the autoscaler waits for a node to be provisioned. Defaults to `15m`. | `string` | `"15m"` | no |
| <a name="input_auto_scaler_profile_max_unready_nodes"></a> [auto\_scaler\_profile\_max\_unready\_nodes](#input\_auto\_scaler\_profile\_max\_unready\_nodes) | Maximum Number of allowed unready nodes. Defaults to `3`. | `number` | `3` | no |
| <a name="input_auto_scaler_profile_max_unready_percentage"></a> [auto\_scaler\_profile\_max\_unready\_percentage](#input\_auto\_scaler\_profile\_max\_unready\_percentage) | Maximum percentage of unready nodes the cluster autoscaler will stop if the percentage is exceeded. Defaults to `45`. | `number` | `45` | no |
| <a name="input_auto_scaler_profile_new_pod_scale_up_delay"></a> [auto\_scaler\_profile\_new\_pod\_scale\_up\_delay](#input\_auto\_scaler\_profile\_new\_pod\_scale\_up\_delay) | For scenarios like burst/batch scale where you don't want CA to act before the kubernetes scheduler could schedule all the pods, you can tell CA to ignore unscheduled pods before they're a certain age. Defaults to `10s`. | `string` | `"10s"` | no |
| <a name="input_auto_scaler_profile_scale_down_delay_after_add"></a> [auto\_scaler\_profile\_scale\_down\_delay\_after\_add](#input\_auto\_scaler\_profile\_scale\_down\_delay\_after\_add) | How long after the scale up of AKS nodes the scale down evaluation resumes. Defaults to `10m`. | `string` | `"10m"` | no |
| <a name="input_auto_scaler_profile_scale_down_delay_after_delete"></a> [auto\_scaler\_profile\_scale\_down\_delay\_after\_delete](#input\_auto\_scaler\_profile\_scale\_down\_delay\_after\_delete) | How long after node deletion that scale down evaluation resumes. Defaults to the value used for `scan_interval`. | `string` | `null` | no |
| <a name="input_auto_scaler_profile_scale_down_delay_after_failure"></a> [auto\_scaler\_profile\_scale\_down\_delay\_after\_failure](#input\_auto\_scaler\_profile\_scale\_down\_delay\_after\_failure) | How long after scale down failure that scale down evaluation resumes. Defaults to `3m`. | `string` | `"3m"` | no |
| <a name="input_auto_scaler_profile_scale_down_unneeded"></a> [auto\_scaler\_profile\_scale\_down\_unneeded](#input\_auto\_scaler\_profile\_scale\_down\_unneeded) | How long a node should be unneeded before it is eligible for scale down. Defaults to `10m`. | `string` | `"10m"` | no |
| <a name="input_auto_scaler_profile_scale_down_unready"></a> [auto\_scaler\_profile\_scale\_down\_unready](#input\_auto\_scaler\_profile\_scale\_down\_unready) | How long an unready node should be unneeded before it is eligible for scale down. Defaults to `20m`. | `string` | `"20m"` | no |
| <a name="input_auto_scaler_profile_scale_down_utilization_threshold"></a> [auto\_scaler\_profile\_scale\_down\_utilization\_threshold](#input\_auto\_scaler\_profile\_scale\_down\_utilization\_threshold) | Node utilization level, defined as sum of requested resources divided by capacity, below which a node can be considered for scale down. Defaults to `0.5`. | `string` | `"0.5"` | no |
| <a name="input_auto_scaler_profile_scan_interval"></a> [auto\_scaler\_profile\_scan\_interval](#input\_auto\_scaler\_profile\_scan\_interval) | How often the AKS Cluster should be re-evaluated for scale up/down. Defaults to `10s`. | `string` | `"10s"` | no |
| <a name="input_auto_scaler_profile_skip_nodes_with_local_storage"></a> [auto\_scaler\_profile\_skip\_nodes\_with\_local\_storage](#input\_auto\_scaler\_profile\_skip\_nodes\_with\_local\_storage) | If `true` cluster autoscaler will never delete nodes with pods with local storage, for example, EmptyDir or HostPath. Defaults to `true`. | `bool` | `true` | no |
| <a name="input_auto_scaler_profile_skip_nodes_with_system_pods"></a> [auto\_scaler\_profile\_skip\_nodes\_with\_system\_pods](#input\_auto\_scaler\_profile\_skip\_nodes\_with\_system\_pods) | If `true` cluster autoscaler will never delete nodes with pods from kube-system (except for DaemonSet or mirror pods). Defaults to `true`. | `bool` | `true` | no |
| <a name="input_automatic_channel_upgrade"></a> [automatic\_channel\_upgrade](#input\_automatic\_channel\_upgrade) | (Optional) The upgrade channel for this Kubernetes Cluster. Possible values are `patch`, `rapid`, `node-image` and `stable`. By default automatic-upgrades are turned off. Note that you cannot specify the patch version using `kubernetes_version` or `orchestrator_version` when using the `patch` upgrade channel. See [the documentation](https://learn.microsoft.com/en-us/azure/aks/auto-upgrade-cluster) for more information | `string` | `null` | no |
| <a name="input_azure_policy_enabled"></a> [azure\_policy\_enabled](#input\_azure\_policy\_enabled) | Enable Azure Policy Addon. | `bool` | `false` | no |
| <a name="input_brown_field_application_gateway_for_ingress"></a> [brown\_field\_application\_gateway\_for\_ingress](#input\_brown\_field\_application\_gateway\_for\_ingress) | [Definition of `brown_field`](https://learn.microsoft.com/en-us/azure/application-gateway/tutorial-ingress-controller-add-on-existing)<br>* `id` - (Required) The ID of the Application Gateway that be used as cluster ingress.<br>* `subnet_id` - (Required) The ID of the Subnet which the Application Gateway is connected to. Must be set when `create_role_assignments` is `true`. | <pre>object({<br>    id        = string<br>    subnet_id = string<br>  })</pre> | `null` | no |
| <a name="input_client_id"></a> [client\_id](#input\_client\_id) | (Optional) The Client ID (appId) for the Service Principal used for the AKS deployment | `string` | `""` | no |
| <a name="input_client_secret"></a> [client\_secret](#input\_client\_secret) | (Optional) The Client Secret (password) for the Service Principal used for the AKS deployment | `string` | `""` | no |
| <a name="input_cluster_log_analytics_workspace_name"></a> [cluster\_log\_analytics\_workspace\_name](#input\_cluster\_log\_analytics\_workspace\_name) | (Optional) The name of the Analytics workspace | `string` | `null` | no |
| <a name="input_cluster_name"></a> [cluster\_name](#input\_cluster\_name) | (Optional) The name for the AKS resources created in the specified Azure Resource Group. This variable overwrites the 'prefix' var (The 'prefix' var will still be applied to the dns\_prefix if it is set) | `string` | `null` | no |
| <a name="input_cluster_name_random_suffix"></a> [cluster\_name\_random\_suffix](#input\_cluster\_name\_random\_suffix) | Whether to add a random suffix on Aks cluster's name or not. `azurerm_kubernetes_cluster` resource defined in this module is `create_before_destroy = true` implicity now(described [here](https://github.com/Azure/terraform-azurerm-aks/issues/389)), without this random suffix we'll not be able to recreate this cluster directly due to the naming conflict. | `bool` | `false` | no |
| <a name="input_confidential_computing"></a> [confidential\_computing](#input\_confidential\_computing) | (Optional) Enable Confidential Computing. | <pre>object({<br>    sgx_quote_helper_enabled = bool<br>  })</pre> | `null` | no |
| <a name="input_create_role_assignment_network_contributor"></a> [create\_role\_assignment\_network\_contributor](#input\_create\_role\_assignment\_network\_contributor) | (Deprecated) Create a role assignment for the AKS Service Principal to be a Network Contributor on the subnets used for the AKS Cluster | `bool` | `false` | no |
| <a name="input_create_role_assignments_for_application_gateway"></a> [create\_role\_assignments\_for\_application\_gateway](#input\_create\_role\_assignments\_for\_application\_gateway) | (Optional) Whether to create the corresponding role assignments for application gateway or not. Defaults to `true`. | `bool` | `true` | no |
| <a name="input_default_node_pool_fips_enabled"></a> [default\_node\_pool\_fips\_enabled](#input\_default\_node\_pool\_fips\_enabled) | (Optional) Should the nodes in this Node Pool have Federal Information Processing Standard enabled? Changing this forces a new resource to be created. | `bool` | `null` | no |
| <a name="input_disk_encryption_set_id"></a> [disk\_encryption\_set\_id](#input\_disk\_encryption\_set\_id) | (Optional) The ID of the Disk Encryption Set which should be used for the Nodes and Volumes. More information [can be found in the documentation](https://docs.microsoft.com/azure/aks/azure-disk-customer-managed-keys). Changing this forces a new resource to be created. | `string` | `null` | no |
| <a name="input_ebpf_data_plane"></a> [ebpf\_data\_plane](#input\_ebpf\_data\_plane) | (Optional) Specifies the eBPF data plane used for building the Kubernetes network. Possible value is `cilium`. Changing this forces a new resource to be created. | `string` | `null` | no |
| <a name="input_enable_auto_scaling"></a> [enable\_auto\_scaling](#input\_enable\_auto\_scaling) | Enable node pool autoscaling | `bool` | `false` | no |
| <a name="input_enable_host_encryption"></a> [enable\_host\_encryption](#input\_enable\_host\_encryption) | Enable Host Encryption for default node pool. Encryption at host feature must be enabled on the subscription: https://docs.microsoft.com/azure/virtual-machines/linux/disks-enable-host-based-encryption-cli | `bool` | `false` | no |
| <a name="input_enable_node_public_ip"></a> [enable\_node\_public\_ip](#input\_enable\_node\_public\_ip) | (Optional) Should nodes in this Node Pool have a Public IP Address? Defaults to false. | `bool` | `false` | no |
| <a name="input_green_field_application_gateway_for_ingress"></a> [green\_field\_application\_gateway\_for\_ingress](#input\_green\_field\_application\_gateway\_for\_ingress) | [Definition of `green_field`](https://learn.microsoft.com/en-us/azure/application-gateway/tutorial-ingress-controller-add-on-new)<br>* `name` - (Optional) The name of the Application Gateway to be used or created in the Nodepool Resource Group, which in turn will be integrated with the ingress controller of this Kubernetes Cluster.<br>* `subnet_cidr` - (Optional) The subnet CIDR to be used to create an Application Gateway, which in turn will be integrated with the ingress controller of this Kubernetes Cluster.<br>* `subnet_id` - (Optional) The ID of the subnet on which to create an Application Gateway, which in turn will be integrated with the ingress controller of this Kubernetes Cluster. | <pre>object({<br>    name        = optional(string)<br>    subnet_cidr = optional(string)<br>    subnet_id   = optional(string)<br>  })</pre> | `null` | no |
| <a name="input_http_proxy_config"></a> [http\_proxy\_config](#input\_http\_proxy\_config) | optional(object({<br>    http\_proxy  = (Optional) The proxy address to be used when communicating over HTTP.<br>    https\_proxy = (Optional) The proxy address to be used when communicating over HTTPS.<br>    no\_proxy    = (Optional) The list of domains that will not use the proxy for communication. Note: If you specify the `default_node_pool.0.vnet_subnet_id`, be sure to include the Subnet CIDR in the `no_proxy` list. Note: You may wish to use Terraform's `ignore_changes` functionality to ignore the changes to this field.<br>    trusted\_ca  = (Optional) The base64 encoded alternative CA certificate content in PEM format.<br>}))<br>Once you have set only one of `http_proxy` and `https_proxy`, this config would be used for both `http_proxy` and `https_proxy` to avoid a configuration drift. | <pre>object({<br>    http_proxy  = optional(string)<br>    https_proxy = optional(string)<br>    no_proxy    = optional(list(string))<br>    trusted_ca  = optional(string)<br>  })</pre> | `null` | no |
| <a name="input_identity_ids"></a> [identity\_ids](#input\_identity\_ids) | (Optional) Specifies a list of User Assigned Managed Identity IDs to be assigned to this Kubernetes Cluster. | `list(string)` | `null` | no |
| <a name="input_identity_type"></a> [identity\_type](#input\_identity\_type) | (Optional) The type of identity used for the managed cluster. Conflicts with `client_id` and `client_secret`. Possible values are `SystemAssigned` and `UserAssigned`. If `UserAssigned` is set, an `identity_ids` must be set as well. | `string` | `"SystemAssigned"` | no |
| <a name="input_image_cleaner_enabled"></a> [image\_cleaner\_enabled](#input\_image\_cleaner\_enabled) | (Optional) Specifies whether Image Cleaner is enabled. | `bool` | `false` | no |
| <a name="input_image_cleaner_interval_hours"></a> [image\_cleaner\_interval\_hours](#input\_image\_cleaner\_interval\_hours) | (Optional) Specifies the interval in hours when images should be cleaned up. Defaults to `48`. | `number` | `48` | no |
| <a name="input_key_vault_secrets_provider_enabled"></a> [key\_vault\_secrets\_provider\_enabled](#input\_key\_vault\_secrets\_provider\_enabled) | (Optional) Whether to use the Azure Key Vault Provider for Secrets Store CSI Driver in an AKS cluster. For more details: https://docs.microsoft.com/en-us/azure/aks/csi-secrets-store-driver | `bool` | `false` | no |
| <a name="input_kms_enabled"></a> [kms\_enabled](#input\_kms\_enabled) | (Optional) Enable Azure KeyVault Key Management Service. | `bool` | `false` | no |
| <a name="input_kms_key_vault_key_id"></a> [kms\_key\_vault\_key\_id](#input\_kms\_key\_vault\_key\_id) | (Optional) Identifier of Azure Key Vault key. When Azure Key Vault key management service is enabled, this field is required and must be a valid key identifier. | `string` | `null` | no |
| <a name="input_kms_key_vault_network_access"></a> [kms\_key\_vault\_network\_access](#input\_kms\_key\_vault\_network\_access) | (Optional) Network Access of Azure Key Vault. Possible values are: `Private` and `Public`. | `string` | `"Public"` | no |
| <a name="input_kubelet_identity"></a> [kubelet\_identity](#input\_kubelet\_identity) | - `client_id` - (Optional) The Client ID of the user-defined Managed Identity to be assigned to the Kubelets. If not specified a Managed Identity is created automatically. Changing this forces a new resource to be created.<br>- `object_id` - (Optional) The Object ID of the user-defined Managed Identity assigned to the Kubelets.If not specified a Managed Identity is created automatically. Changing this forces a new resource to be created.<br>- `user_assigned_identity_id` - (Optional) The ID of the User Assigned Identity assigned to the Kubelets. If not specified a Managed Identity is created automatically. Changing this forces a new resource to be created. | <pre>object({<br>    client_id                 = optional(string)<br>    object_id                 = optional(string)<br>    user_assigned_identity_id = optional(string)<br>  })</pre> | `null` | no |
| <a name="input_kubernetes_version"></a> [kubernetes\_version](#input\_kubernetes\_version) | Specify which Kubernetes release to use. The default used is the latest Kubernetes version available in the region | `string` | `null` | no |
| <a name="input_load_balancer_profile_enabled"></a> [load\_balancer\_profile\_enabled](#input\_load\_balancer\_profile\_enabled) | (Optional) Enable a load\_balancer\_profile block. This can only be used when load\_balancer\_sku is set to `standard`. | `bool` | `false` | no |
| <a name="input_load_balancer_profile_idle_timeout_in_minutes"></a> [load\_balancer\_profile\_idle\_timeout\_in\_minutes](#input\_load\_balancer\_profile\_idle\_timeout\_in\_minutes) | (Optional) Desired outbound flow idle timeout in minutes for the cluster load balancer. Must be between `4` and `120` inclusive. | `number` | `30` | no |
| <a name="input_load_balancer_profile_managed_outbound_ip_count"></a> [load\_balancer\_profile\_managed\_outbound\_ip\_count](#input\_load\_balancer\_profile\_managed\_outbound\_ip\_count) | (Optional) Count of desired managed outbound IPs for the cluster load balancer. Must be between `1` and `100` inclusive | `number` | `null` | no |
| <a name="input_load_balancer_profile_managed_outbound_ipv6_count"></a> [load\_balancer\_profile\_managed\_outbound\_ipv6\_count](#input\_load\_balancer\_profile\_managed\_outbound\_ipv6\_count) | (Optional) The desired number of IPv6 outbound IPs created and managed by Azure for the cluster load balancer. Must be in the range of `1` to `100` (inclusive). The default value is `0` for single-stack and `1` for dual-stack. Note: managed\_outbound\_ipv6\_count requires dual-stack networking. To enable dual-stack networking the Preview Feature Microsoft.ContainerService/AKS-EnableDualStack needs to be enabled and the Resource Provider re-registered, see the documentation for more information. https://learn.microsoft.com/en-us/azure/aks/configure-kubenet-dual-stack?tabs=azure-cli%2Ckubectl#register-the-aks-enabledualstack-preview-feature | `number` | `null` | no |
| <a name="input_load_balancer_profile_outbound_ip_address_ids"></a> [load\_balancer\_profile\_outbound\_ip\_address\_ids](#input\_load\_balancer\_profile\_outbound\_ip\_address\_ids) | (Optional) The ID of the Public IP Addresses which should be used for outbound communication for the cluster load balancer. | `set(string)` | `null` | no |
| <a name="input_load_balancer_profile_outbound_ip_prefix_ids"></a> [load\_balancer\_profile\_outbound\_ip\_prefix\_ids](#input\_load\_balancer\_profile\_outbound\_ip\_prefix\_ids) | (Optional) The ID of the outbound Public IP Address Prefixes which should be used for the cluster load balancer. | `set(string)` | `null` | no |
| <a name="input_load_balancer_profile_outbound_ports_allocated"></a> [load\_balancer\_profile\_outbound\_ports\_allocated](#input\_load\_balancer\_profile\_outbound\_ports\_allocated) | (Optional) Number of desired SNAT port for each VM in the clusters load balancer. Must be between `0` and `64000` inclusive. Defaults to `0` | `number` | `0` | no |
| <a name="input_load_balancer_sku"></a> [load\_balancer\_sku](#input\_load\_balancer\_sku) | (Optional) Specifies the SKU of the Load Balancer used for this Kubernetes Cluster. Possible values are `basic` and `standard`. Defaults to `standard`. Changing this forces a new kubernetes cluster to be created. | `string` | `"standard"` | no |
| <a name="input_local_account_disabled"></a> [local\_account\_disabled](#input\_local\_account\_disabled) | (Optional) - If `true` local accounts will be disabled. Defaults to `false`. See [the documentation](https://docs.microsoft.com/azure/aks/managed-aad#disable-local-accounts) for more information. | `bool` | `null` | no |
| <a name="input_location"></a> [location](#input\_location) | Location of cluster, if not defined it will be read from the resource-group | `string` | `null` | no |
| <a name="input_log_analytics_solution"></a> [log\_analytics\_solution](#input\_log\_analytics\_solution) | (Optional) Object which contains existing azurerm\_log\_analytics\_solution ID. Providing ID disables creation of azurerm\_log\_analytics\_solution. | <pre>object({<br>    id = string<br>  })</pre> | `null` | no |
| <a name="input_log_analytics_workspace"></a> [log\_analytics\_workspace](#input\_log\_analytics\_workspace) | (Optional) Existing azurerm\_log\_analytics\_workspace to attach azurerm\_log\_analytics\_solution. Providing the config disables creation of azurerm\_log\_analytics\_workspace. | <pre>object({<br>    id                  = string<br>    name                = string<br>    location            = optional(string)<br>    resource_group_name = optional(string)<br>  })</pre> | `null` | no |
| <a name="input_log_analytics_workspace_allow_resource_only_permissions"></a> [log\_analytics\_workspace\_allow\_resource\_only\_permissions](#input\_log\_analytics\_workspace\_allow\_resource\_only\_permissions) | (Optional) Specifies if the log Analytics Workspace allow users accessing to data associated with resources they have permission to view, without permission to workspace. Defaults to `true`. | `bool` | `null` | no |
| <a name="input_log_analytics_workspace_cmk_for_query_forced"></a> [log\_analytics\_workspace\_cmk\_for\_query\_forced](#input\_log\_analytics\_workspace\_cmk\_for\_query\_forced) | (Optional) Is Customer Managed Storage mandatory for query management? | `bool` | `null` | no |
| <a name="input_log_analytics_workspace_daily_quota_gb"></a> [log\_analytics\_workspace\_daily\_quota\_gb](#input\_log\_analytics\_workspace\_daily\_quota\_gb) | (Optional) The workspace daily quota for ingestion in GB. Defaults to -1 (unlimited) if omitted. | `number` | `null` | no |
| <a name="input_log_analytics_workspace_data_collection_rule_id"></a> [log\_analytics\_workspace\_data\_collection\_rule\_id](#input\_log\_analytics\_workspace\_data\_collection\_rule\_id) | (Optional) The ID of the Data Collection Rule to use for this workspace. | `string` | `null` | no |
| <a name="input_log_analytics_workspace_enabled"></a> [log\_analytics\_workspace\_enabled](#input\_log\_analytics\_workspace\_enabled) | Enable the integration of azurerm\_log\_analytics\_workspace and azurerm\_log\_analytics\_solution: https://docs.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-onboard | `bool` | `true` | no |
| <a name="input_log_analytics_workspace_identity"></a> [log\_analytics\_workspace\_identity](#input\_log\_analytics\_workspace\_identity) | - `identity_ids` - (Optional) Specifies a list of user managed identity ids to be assigned. Required if `type` is `UserAssigned`.<br>- `type` - (Required) Specifies the identity type of the Log Analytics Workspace. Possible values are `SystemAssigned` (where Azure will generate a Service Principal for you) and `UserAssigned` where you can specify the Service Principal IDs in the `identity_ids` field. | <pre>object({<br>    identity_ids = optional(set(string))<br>    type         = string<br>  })</pre> | `null` | no |
| <a name="input_log_analytics_workspace_immediate_data_purge_on_30_days_enabled"></a> [log\_analytics\_workspace\_immediate\_data\_purge\_on\_30\_days\_enabled](#input\_log\_analytics\_workspace\_immediate\_data\_purge\_on\_30\_days\_enabled) | (Optional) Whether to remove the data in the Log Analytics Workspace immediately after 30 days. | `bool` | `null` | no |
| <a name="input_log_analytics_workspace_internet_ingestion_enabled"></a> [log\_analytics\_workspace\_internet\_ingestion\_enabled](#input\_log\_analytics\_workspace\_internet\_ingestion\_enabled) | (Optional) Should the Log Analytics Workspace support ingestion over the Public Internet? Defaults to `true`. | `bool` | `null` | no |
| <a name="input_log_analytics_workspace_internet_query_enabled"></a> [log\_analytics\_workspace\_internet\_query\_enabled](#input\_log\_analytics\_workspace\_internet\_query\_enabled) | (Optional) Should the Log Analytics Workspace support querying over the Public Internet? Defaults to `true`. | `bool` | `null` | no |
| <a name="input_log_analytics_workspace_local_authentication_disabled"></a> [log\_analytics\_workspace\_local\_authentication\_disabled](#input\_log\_analytics\_workspace\_local\_authentication\_disabled) | (Optional) Specifies if the log Analytics workspace should enforce authentication using Azure AD. Defaults to `false`. | `bool` | `null` | no |
| <a name="input_log_analytics_workspace_reservation_capacity_in_gb_per_day"></a> [log\_analytics\_workspace\_reservation\_capacity\_in\_gb\_per\_day](#input\_log\_analytics\_workspace\_reservation\_capacity\_in\_gb\_per\_day) | (Optional) The capacity reservation level in GB for this workspace. Possible values are `100`, `200`, `300`, `400`, `500`, `1000`, `2000` and `5000`. | `number` | `null` | no |
| <a name="input_log_analytics_workspace_resource_group_name"></a> [log\_analytics\_workspace\_resource\_group\_name](#input\_log\_analytics\_workspace\_resource\_group\_name) | (Optional) Resource group name to create azurerm\_log\_analytics\_solution. | `string` | `null` | no |
| <a name="input_log_analytics_workspace_sku"></a> [log\_analytics\_workspace\_sku](#input\_log\_analytics\_workspace\_sku) | The SKU (pricing level) of the Log Analytics workspace. For new subscriptions the SKU should be set to PerGB2018 | `string` | `"PerGB2018"` | no |
| <a name="input_log_retention_in_days"></a> [log\_retention\_in\_days](#input\_log\_retention\_in\_days) | The retention period for the logs in days | `number` | `30` | no |
| <a name="input_maintenance_window"></a> [maintenance\_window](#input\_maintenance\_window) | (Optional) Maintenance configuration of the managed cluster. | <pre>object({<br>    allowed = optional(list(object({<br>      day   = string<br>      hours = set(number)<br>      })), [<br>    ]),<br>    not_allowed = optional(list(object({<br>      end   = string<br>      start = string<br>    })), []),<br>  })</pre> | `null` | no |
| <a name="input_maintenance_window_auto_upgrade"></a> [maintenance\_window\_auto\_upgrade](#input\_maintenance\_window\_auto\_upgrade) | - `day_of_month` - (Optional) The day of the month for the maintenance run. Required in combination with RelativeMonthly frequency. Value between 0 and 31 (inclusive).<br>- `day_of_week` - (Optional) The day of the week for the maintenance run. Options are `Monday`, `Tuesday`, `Wednesday`, `Thurday`, `Friday`, `Saturday` and `Sunday`. Required in combination with weekly frequency.<br>- `duration` - (Required) The duration of the window for maintenance to run in hours.<br>- `frequency` - (Required) Frequency of maintenance. Possible options are `Weekly`, `AbsoluteMonthly` and `RelativeMonthly`.<br>- `interval` - (Required) The interval for maintenance runs. Depending on the frequency this interval is week or month based.<br>- `start_date` - (Optional) The date on which the maintenance window begins to take effect.<br>- `start_time` - (Optional) The time for maintenance to begin, based on the timezone determined by `utc_offset`. Format is `HH:mm`.<br>- `utc_offset` - (Optional) Used to determine the timezone for cluster maintenance.<br>- `week_index` - (Optional) The week in the month used for the maintenance run. Options are `First`, `Second`, `Third`, `Fourth`, and `Last`.<br><br>---<br>`not_allowed` block supports the following:<br>- `end` - (Required) The end of a time span, formatted as an RFC3339 string.<br>- `start` - (Required) The start of a time span, formatted as an RFC3339 string. | <pre>object({<br>    day_of_month = optional(number)<br>    day_of_week  = optional(string)<br>    duration     = number<br>    frequency    = string<br>    interval     = number<br>    start_date   = optional(string)<br>    start_time   = optional(string)<br>    utc_offset   = optional(string)<br>    week_index   = optional(string)<br>    not_allowed = optional(set(object({<br>      end   = string<br>      start = string<br>    })))<br>  })</pre> | `null` | no |
| <a name="input_maintenance_window_node_os"></a> [maintenance\_window\_node\_os](#input\_maintenance\_window\_node\_os) | - `day_of_month` -<br>- `day_of_week` - (Optional) The day of the week for the maintenance run. Options are `Monday`, `Tuesday`, `Wednesday`, `Thurday`, `Friday`, `Saturday` and `Sunday`. Required in combination with weekly frequency.<br>- `duration` - (Required) The duration of the window for maintenance to run in hours.<br>- `frequency` - (Required) Frequency of maintenance. Possible options are `Daily`, `Weekly`, `AbsoluteMonthly` and `RelativeMonthly`.<br>- `interval` - (Required) The interval for maintenance runs. Depending on the frequency this interval is week or month based.<br>- `start_date` - (Optional) The date on which the maintenance window begins to take effect.<br>- `start_time` - (Optional) The time for maintenance to begin, based on the timezone determined by `utc_offset`. Format is `HH:mm`.<br>- `utc_offset` - (Optional) Used to determine the timezone for cluster maintenance.<br>- `week_index` - (Optional) The week in the month used for the maintenance run. Options are `First`, `Second`, `Third`, `Fourth`, and `Last`.<br><br>---<br>`not_allowed` block supports the following:<br>- `end` - (Required) The end of a time span, formatted as an RFC3339 string.<br>- `start` - (Required) The start of a time span, formatted as an RFC3339 string. | <pre>object({<br>    day_of_month = optional(number)<br>    day_of_week  = optional(string)<br>    duration     = number<br>    frequency    = string<br>    interval     = number<br>    start_date   = optional(string)<br>    start_time   = optional(string)<br>    utc_offset   = optional(string)<br>    week_index   = optional(string)<br>    not_allowed = optional(set(object({<br>      end   = string<br>      start = string<br>    })))<br>  })</pre> | `null` | no |
| <a name="input_microsoft_defender_enabled"></a> [microsoft\_defender\_enabled](#input\_microsoft\_defender\_enabled) | (Optional) Is Microsoft Defender on the cluster enabled? Requires `var.log_analytics_workspace_enabled` to be `true` to set this variable to `true`. | `bool` | `false` | no |
| <a name="input_monitor_metrics"></a> [monitor\_metrics](#input\_monitor\_metrics) | (Optional) Specifies a Prometheus add-on profile for the Kubernetes Cluster<br>object({<br>  annotations\_allowed = "(Optional) Specifies a comma-separated list of Kubernetes annotation keys that will be used in the resource's labels metric."<br>  labels\_allowed      = "(Optional) Specifies a Comma-separated list of additional Kubernetes label keys that will be used in the resource's labels metric."<br>}) | <pre>object({<br>    annotations_allowed = optional(string)<br>    labels_allowed      = optional(string)<br>  })</pre> | `null` | no |
| <a name="input_msi_auth_for_monitoring_enabled"></a> [msi\_auth\_for\_monitoring\_enabled](#input\_msi\_auth\_for\_monitoring\_enabled) | (Optional) Is managed identity authentication for monitoring enabled? | `bool` | `null` | no |
| <a name="input_net_profile_dns_service_ip"></a> [net\_profile\_dns\_service\_ip](#input\_net\_profile\_dns\_service\_ip) | (Optional) IP address within the Kubernetes service address range that will be used by cluster service discovery (kube-dns). Changing this forces a new resource to be created. | `string` | `null` | no |
| <a name="input_net_profile_outbound_type"></a> [net\_profile\_outbound\_type](#input\_net\_profile\_outbound\_type) | (Optional) The outbound (egress) routing method which should be used for this Kubernetes Cluster. Possible values are loadBalancer and userDefinedRouting. Defaults to loadBalancer. | `string` | `"loadBalancer"` | no |
| <a name="input_net_profile_pod_cidr"></a> [net\_profile\_pod\_cidr](#input\_net\_profile\_pod\_cidr) | (Optional) The CIDR to use for pod IP addresses. This field can only be set when network\_plugin is set to kubenet or network\_plugin is set to azure and network\_plugin\_mode is set to overlay. Changing this forces a new resource to be created. | `string` | `null` | no |
| <a name="input_net_profile_service_cidr"></a> [net\_profile\_service\_cidr](#input\_net\_profile\_service\_cidr) | (Optional) The Network Range used by the Kubernetes service. Changing this forces a new resource to be created. | `string` | `null` | no |
| <a name="input_network_contributor_role_assigned_subnet_ids"></a> [network\_contributor\_role\_assigned\_subnet\_ids](#input\_network\_contributor\_role\_assigned\_subnet\_ids) | Create role assignments for the AKS Service Principal to be a Network Contributor on the subnets used for the AKS Cluster, key should be static string, value should be subnet's id | `map(string)` | `{}` | no |
| <a name="input_network_plugin"></a> [network\_plugin](#input\_network\_plugin) | Network plugin to use for networking. | `string` | `"kubenet"` | no |
| <a name="input_network_plugin_mode"></a> [network\_plugin\_mode](#input\_network\_plugin\_mode) | (Optional) Specifies the network plugin mode used for building the Kubernetes network. Possible value is `overlay`. Changing this forces a new resource to be created. | `string` | `null` | no |
| <a name="input_network_policy"></a> [network\_policy](#input\_network\_policy) | (Optional) Sets up network policy to be used with Azure CNI. Network policy allows us to control the traffic flow between pods. Currently supported values are calico and azure. Changing this forces a new resource to be created. | `string` | `null` | no |
| <a name="input_node_os_channel_upgrade"></a> [node\_os\_channel\_upgrade](#input\_node\_os\_channel\_upgrade) | (Optional) The upgrade channel for this Kubernetes Cluster Nodes' OS Image. Possible values are `Unmanaged`, `SecurityPatch`, `NodeImage` and `None`. | `string` | `null` | no |
| <a name="input_node_pools"></a> [node\_pools](#input\_node\_pools) | A map of node pools that need to be created and attached on the Kubernetes cluster. The key of the map can be the name of the node pool, and the key must be static string. The value of the map is a `node_pool` block as defined below:<br>map(object({<br>  name                          = (Required) The name of the Node Pool which should be created within the Kubernetes Cluster. Changing this forces a new resource to be created. A Windows Node Pool cannot have a `name` longer than 6 characters. A random suffix of 4 characters is always added to the name to avoid clashes during recreates.<br>  node\_count                    = (Optional) The initial number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` (inclusive) for user pools and between `1` and `1000` (inclusive) for system pools and must be a value in the range `min_count` - `max_count`.<br>  tags                          = (Optional) A mapping of tags to assign to the resource. At this time there's a bug in the AKS API where Tags for a Node Pool are not stored in the correct case - you [may wish to use Terraform's `ignore_changes` functionality to ignore changes to the casing](https://www.terraform.io/language/meta-arguments/lifecycle#ignore_changess) until this is fixed in the AKS API.<br>  vm\_size                       = (Required) The SKU which should be used for the Virtual Machines used in this Node Pool. Changing this forces a new resource to be created.<br>  host\_group\_id                 = (Optional) The fully qualified resource ID of the Dedicated Host Group to provision virtual machines from. Changing this forces a new resource to be created.<br>  capacity\_reservation\_group\_id = (Optional) Specifies the ID of the Capacity Reservation Group where this Node Pool should exist. Changing this forces a new resource to be created.<br>  custom\_ca\_trust\_enabled       = (Optional) Specifies whether to trust a Custom CA. This requires that the Preview Feature `Microsoft.ContainerService/CustomCATrustPreview` is enabled and the Resource Provider is re-registered, see [the documentation](https://learn.microsoft.com/en-us/azure/aks/custom-certificate-authority) for more information.<br>  enable\_auto\_scaling           = (Optional) Whether to enable [auto-scaler](https://docs.microsoft.com/azure/aks/cluster-autoscaler).<br>  enable\_host\_encryption        = (Optional) Should the nodes in this Node Pool have host encryption enabled? Changing this forces a new resource to be created.<br>  enable\_node\_public\_ip         = (Optional) Should each node have a Public IP Address? Changing this forces a new resource to be created.<br>  eviction\_policy               = (Optional) The Eviction Policy which should be used for Virtual Machines within the Virtual Machine Scale Set powering this Node Pool. Possible values are `Deallocate` and `Delete`. Changing this forces a new resource to be created. An Eviction Policy can only be configured when `priority` is set to `Spot` and will default to `Delete` unless otherwise specified.<br>  gpu\_instance                  = (Optional) Specifies the GPU MIG instance profile for supported GPU VM SKU. The allowed values are `MIG1g`, `MIG2g`, `MIG3g`, `MIG4g` and `MIG7g`. Changing this forces a new resource to be created.<br>  kubelet\_config = optional(object({<br>    cpu\_manager\_policy        = (Optional) Specifies the CPU Manager policy to use. 
Possible values are `none` and `static`, Changing this forces a new resource to be created.<br>    cpu\_cfs\_quota\_enabled     = (Optional) Is CPU CFS quota enforcement for containers enabled? Changing this forces a new resource to be created.<br>    cpu\_cfs\_quota\_period      = (Optional) Specifies the CPU CFS quota period value. Changing this forces a new resource to be created.<br>    image\_gc\_high\_threshold   = (Optional) Specifies the percent of disk usage above which image garbage collection is always run. Must be between `0` and `100`. Changing this forces a new resource to be created.<br>    image\_gc\_low\_threshold    = (Optional) Specifies the percent of disk usage lower than which image garbage collection is never run. Must be between `0` and `100`. Changing this forces a new resource to be created.<br>    topology\_manager\_policy   = (Optional) Specifies the Topology Manager policy to use. Possible values are `none`, `best-effort`, `restricted` or `single-numa-node`. Changing this forces a new resource to be created.<br>    allowed\_unsafe\_sysctls    = (Optional) Specifies the allow list of unsafe sysctls command or patterns (ending in `*`). Changing this forces a new resource to be created.<br>    container\_log\_max\_size\_mb = (Optional) Specifies the maximum size (e.g. 10MB) of container log file before it is rotated. Changing this forces a new resource to be created.<br>    container\_log\_max\_files   = (Optional) Specifies the maximum number of container log files that can be present for a container. must be at least 2. Changing this forces a new resource to be created.<br>    pod\_max\_pid               = (Optional) Specifies the maximum number of processes per pod. Changing this forces a new resource to be created.<br>  }))<br>  linux\_os\_config = optional(object({<br>    sysctl\_config = optional(object({<br>      fs\_aio\_max\_nr                      = (Optional) The sysctl setting fs.aio-max-nr. Must be between `65536` and `6553500`. Changing this forces a new resource to be created.<br>      fs\_file\_max                        = (Optional) The sysctl setting fs.file-max. Must be between `8192` and `12000500`. Changing this forces a new resource to be created.<br>      fs\_inotify\_max\_user\_watches        = (Optional) The sysctl setting fs.inotify.max\_user\_watches. Must be between `781250` and `2097152`. Changing this forces a new resource to be created.<br>      fs\_nr\_open                         = (Optional) The sysctl setting fs.nr\_open. Must be between `8192` and `20000500`. Changing this forces a new resource to be created.<br>      kernel\_threads\_max                 = (Optional) The sysctl setting kernel.threads-max. Must be between `20` and `513785`. Changing this forces a new resource to be created.<br>      net\_core\_netdev\_max\_backlog        = (Optional) The sysctl setting net.core.netdev\_max\_backlog. Must be between `1000` and `3240000`. Changing this forces a new resource to be created.<br>      net\_core\_optmem\_max                = (Optional) The sysctl setting net.core.optmem\_max. Must be between `20480` and `4194304`. Changing this forces a new resource to be created.<br>      net\_core\_rmem\_default              = (Optional) The sysctl setting net.core.rmem\_default. Must be between `212992` and `134217728`. Changing this forces a new resource to be created.<br>      net\_core\_rmem\_max                  = (Optional) The sysctl setting net.core.rmem\_max. Must be between `212992` and `134217728`. 
Changing this forces a new resource to be created.<br>      net\_core\_somaxconn                 = (Optional) The sysctl setting net.core.somaxconn. Must be between `4096` and `3240000`. Changing this forces a new resource to be created.<br>      net\_core\_wmem\_default              = (Optional) The sysctl setting net.core.wmem\_default. Must be between `212992` and `134217728`. Changing this forces a new resource to be created.<br>      net\_core\_wmem\_max                  = (Optional) The sysctl setting net.core.wmem\_max. Must be between `212992` and `134217728`. Changing this forces a new resource to be created.<br>      net\_ipv4\_ip\_local\_port\_range\_min   = (Optional) The sysctl setting net.ipv4.ip\_local\_port\_range min value. Must be between `1024` and `60999`. Changing this forces a new resource to be created.<br>      net\_ipv4\_ip\_local\_port\_range\_max   = (Optional) The sysctl setting net.ipv4.ip\_local\_port\_range max value. Must be between `1024` and `60999`. Changing this forces a new resource to be created.<br>      net\_ipv4\_neigh\_default\_gc\_thresh1  = (Optional) The sysctl setting net.ipv4.neigh.default.gc\_thresh1. Must be between `128` and `80000`. Changing this forces a new resource to be created.<br>      net\_ipv4\_neigh\_default\_gc\_thresh2  = (Optional) The sysctl setting net.ipv4.neigh.default.gc\_thresh2. Must be between `512` and `90000`. Changing this forces a new resource to be created.<br>      net\_ipv4\_neigh\_default\_gc\_thresh3  = (Optional) The sysctl setting net.ipv4.neigh.default.gc\_thresh3. Must be between `1024` and `100000`. Changing this forces a new resource to be created.<br>      net\_ipv4\_tcp\_fin\_timeout           = (Optional) The sysctl setting net.ipv4.tcp\_fin\_timeout. Must be between `5` and `120`. Changing this forces a new resource to be created.<br>      net\_ipv4\_tcp\_keepalive\_intvl       = (Optional) The sysctl setting net.ipv4.tcp\_keepalive\_intvl. Must be between `10` and `75`. Changing this forces a new resource to be created.<br>      net\_ipv4\_tcp\_keepalive\_probes      = (Optional) The sysctl setting net.ipv4.tcp\_keepalive\_probes. Must be between `1` and `15`. Changing this forces a new resource to be created.<br>      net\_ipv4\_tcp\_keepalive\_time        = (Optional) The sysctl setting net.ipv4.tcp\_keepalive\_time. Must be between `30` and `432000`. Changing this forces a new resource to be created.<br>      net\_ipv4\_tcp\_max\_syn\_backlog       = (Optional) The sysctl setting net.ipv4.tcp\_max\_syn\_backlog. Must be between `128` and `3240000`. Changing this forces a new resource to be created.<br>      net\_ipv4\_tcp\_max\_tw\_buckets        = (Optional) The sysctl setting net.ipv4.tcp\_max\_tw\_buckets. Must be between `8000` and `1440000`. Changing this forces a new resource to be created.<br>      net\_ipv4\_tcp\_tw\_reuse              = (Optional) Is sysctl setting net.ipv4.tcp\_tw\_reuse enabled? Changing this forces a new resource to be created.<br>      net\_netfilter\_nf\_conntrack\_buckets = (Optional) The sysctl setting net.netfilter.nf\_conntrack\_buckets. Must be between `65536` and `147456`. Changing this forces a new resource to be created.<br>      net\_netfilter\_nf\_conntrack\_max     = (Optional) The sysctl setting net.netfilter.nf\_conntrack\_max. Must be between `131072` and `1048576`. Changing this forces a new resource to be created.<br>      vm\_max\_map\_count                   = (Optional) The sysctl setting vm.max\_map\_count. Must be between `65530` and `262144`. 
Changing this forces a new resource to be created.<br>      vm\_swappiness                      = (Optional) The sysctl setting vm.swappiness. Must be between `0` and `100`. Changing this forces a new resource to be created.<br>      vm\_vfs\_cache\_pressure              = (Optional) The sysctl setting vm.vfs\_cache\_pressure. Must be between `0` and `100`. Changing this forces a new resource to be created.<br>    }))<br>    transparent\_huge\_page\_enabled = (Optional) Specifies the Transparent Huge Page enabled configuration. Possible values are `always`, `madvise` and `never`. Changing this forces a new resource to be created.<br>    transparent\_huge\_page\_defrag  = (Optional) specifies the defrag configuration for Transparent Huge Page. Possible values are `always`, `defer`, `defer+madvise`, `madvise` and `never`. Changing this forces a new resource to be created.<br>    swap\_file\_size\_mb             = (Optional) Specifies the size of swap file on each node in MB. Changing this forces a new resource to be created.<br>  }))<br>  fips\_enabled       = (Optional) Should the nodes in this Node Pool have Federal Information Processing Standard enabled? Changing this forces a new resource to be created. FIPS support is in Public Preview - more information and details on how to opt into the Preview can be found in [this article](https://docs.microsoft.com/azure/aks/use-multiple-node-pools#add-a-fips-enabled-node-pool-preview).<br>  kubelet\_disk\_type  = (Optional) The type of disk used by kubelet. Possible values are `OS` and `Temporary`.<br>  max\_count          = (Optional) The maximum number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` and must be greater than or equal to `min_count`.<br>  max\_pods           = (Optional) The minimum number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` and must be less than or equal to `max_count`.<br>  message\_of\_the\_day = (Optional) A base64-encoded string which will be written to /etc/motd after decoding. This allows customization of the message of the day for Linux nodes. It cannot be specified for Windows nodes and must be a static string (i.e. will be printed raw and not executed as a script). Changing this forces a new resource to be created.<br>  mode               = (Optional) Should this Node Pool be used for System or User resources? Possible values are `System` and `User`. Defaults to `User`.<br>  min\_count          = (Optional) The minimum number of nodes which should exist within this Node Pool. Valid values are between `0` and `1000` and must be less than or equal to `max_count`.<br>  node\_network\_profile = optional(object({<br>    node\_public\_ip\_tags = (Optional) Specifies a mapping of tags to the instance-level public IPs. Changing this forces a new resource to be created.<br>  }))<br>  node\_labels                  = (Optional) A map of Kubernetes labels which should be applied to nodes in this Node Pool.<br>  node\_public\_ip\_prefix\_id     = (Optional) Resource ID for the Public IP Addresses Prefix for the nodes in this Node Pool. `enable_node_public_ip` should be `true`. Changing this forces a new resource to be created.<br>  node\_taints                  = (Optional) A list of Kubernetes taints which should be applied to nodes in the agent pool (e.g `key=value:NoSchedule`). Changing this forces a new resource to be created.<br>  orchestrator\_version         = (Optional) Version of Kubernetes used for the Agents. 
If not specified, the latest recommended version will be used at provisioning time (but won't auto-upgrade). AKS does not require an exact patch version to be specified, minor version aliases such as `1.22` are also supported. - The minor version's latest GA patch is automatically chosen in that case. More details can be found in [the documentation](https://docs.microsoft.com/en-us/azure/aks/supported-kubernetes-versions?tabs=azure-cli#alias-minor-version). This version must be supported by the Kubernetes Cluster - as such the version of Kubernetes used on the Cluster/Control Plane may need to be upgraded first.<br>  os\_disk\_size\_gb              = (Optional) The Agent Operating System disk size in GB. Changing this forces a new resource to be created.<br>  os\_disk\_type                 = (Optional) The type of disk which should be used for the Operating System. Possible values are `Ephemeral` and `Managed`. Defaults to `Managed`. Changing this forces a new resource to be created.<br>  os\_sku                       = (Optional) Specifies the OS SKU used by the agent pool. Possible values include: `Ubuntu`, `CBLMariner`, `Mariner`, `Windows2019`, `Windows2022`. If not specified, the default is `Ubuntu` if OSType=Linux or `Windows2019` if OSType=Windows. And the default Windows OSSKU will be changed to `Windows2022` after Windows2019 is deprecated. Changing this forces a new resource to be created.<br>  os\_type                      = (Optional) The Operating System which should be used for this Node Pool. Changing this forces a new resource to be created. Possible values are `Linux` and `Windows`. Defaults to `Linux`.<br>  pod\_subnet\_id                = (Optional) The ID of the Subnet where the pods in the Node Pool should exist. Changing this forces a new resource to be created.<br>  priority                     = (Optional) The Priority for Virtual Machines within the Virtual Machine Scale Set that powers this Node Pool. Possible values are `Regular` and `Spot`. Defaults to `Regular`. Changing this forces a new resource to be created.<br>  proximity\_placement\_group\_id = (Optional) The ID of the Proximity Placement Group where the Virtual Machine Scale Set that powers this Node Pool will be placed. Changing this forces a new resource to be created. When setting `priority` to Spot - you must configure an `eviction_policy`, `spot_max_price` and add the applicable `node_labels` and `node_taints` [as per the Azure Documentation](https://docs.microsoft.com/azure/aks/spot-node-pool).<br>  spot\_max\_price               = (Optional) The maximum price you're willing to pay in USD per Virtual Machine. Valid values are `-1` (the current on-demand price for a Virtual Machine) or a positive value with up to five decimal places. Changing this forces a new resource to be created. This field can only be configured when `priority` is set to `Spot`.<br>  scale\_down\_mode              = (Optional) Specifies how the node pool should deal with scaled-down nodes. Allowed values are `Delete` and `Deallocate`. Defaults to `Delete`.<br>  snapshot\_id                  = (Optional) The ID of the Snapshot which should be used to create this Node Pool. Changing this forces a new resource to be created.<br>  ultra\_ssd\_enabled            = (Optional) Used to specify whether the UltraSSD is enabled in the Node Pool. Defaults to `false`. See [the documentation](https://docs.microsoft.com/azure/aks/use-ultra-disks) for more information. 
Changing this forces a new resource to be created.<br>  vnet\_subnet\_id               = (Optional) The ID of the Subnet where this Node Pool should exist. Changing this forces a new resource to be created. A route table must be configured on this Subnet.<br>  upgrade\_settings = optional(object({<br>    drain\_timeout\_in\_minutes      = number<br>    node\_soak\_duration\_in\_minutes = number<br>    max\_surge                     = string<br>  }))<br>  windows\_profile = optional(object({<br>    outbound\_nat\_enabled = optional(bool, true)<br>  }))<br>  workload\_runtime = (Optional) Used to specify the workload runtime. Allowed values are `OCIContainer` and `WasmWasi`. WebAssembly System Interface node pools are in Public Preview - more information and details on how to opt into the preview can be found in [this article](https://docs.microsoft.com/azure/aks/use-wasi-node-pools)<br>  zones            = (Optional) Specifies a list of Availability Zones in which this Kubernetes Cluster Node Pool should be located. Changing this forces a new Kubernetes Cluster Node Pool to be created.<br>  create\_before\_destroy = (Optional) Create a new node pool before destroy the old one when Terraform must update an argument that cannot be updated in-place. Set this argument to `true` will add add a random suffix to pool's name to avoid conflict. Default to `true`.<br>})) | <pre>map(object({<br>    name                          = string<br>    node_count                    = optional(number)<br>    tags                          = optional(map(string))<br>    vm_size                       = string<br>    host_group_id                 = optional(string)<br>    capacity_reservation_group_id = optional(string)<br>    custom_ca_trust_enabled       = optional(bool)<br>    enable_auto_scaling           = optional(bool)<br>    enable_host_encryption        = optional(bool)<br>    enable_node_public_ip         = optional(bool)<br>    eviction_policy               = optional(string)<br>    gpu_instance                  = optional(string)<br>    kubelet_config = optional(object({<br>      cpu_manager_policy        = optional(string)<br>      cpu_cfs_quota_enabled     = optional(bool)<br>      cpu_cfs_quota_period      = optional(string)<br>      image_gc_high_threshold   = optional(number)<br>      image_gc_low_threshold    = optional(number)<br>      topology_manager_policy   = optional(string)<br>      allowed_unsafe_sysctls    = optional(set(string))<br>      container_log_max_size_mb = optional(number)<br>      container_log_max_files   = optional(number)<br>      pod_max_pid               = optional(number)<br>    }))<br>    linux_os_config = optional(object({<br>      sysctl_config = optional(object({<br>        fs_aio_max_nr                      = optional(number)<br>        fs_file_max                        = optional(number)<br>        fs_inotify_max_user_watches        = optional(number)<br>        fs_nr_open                         = optional(number)<br>        kernel_threads_max                 = optional(number)<br>        net_core_netdev_max_backlog        = optional(number)<br>        net_core_optmem_max                = optional(number)<br>        net_core_rmem_default              = optional(number)<br>        net_core_rmem_max                  = optional(number)<br>        net_core_somaxconn                 = optional(number)<br>        net_core_wmem_default              = optional(number)<br>        net_core_wmem_max                  = optional(number)<br>        
net_ipv4_ip_local_port_range_min   = optional(number)<br>        net_ipv4_ip_local_port_range_max   = optional(number)<br>        net_ipv4_neigh_default_gc_thresh1  = optional(number)<br>        net_ipv4_neigh_default_gc_thresh2  = optional(number)<br>        net_ipv4_neigh_default_gc_thresh3  = optional(number)<br>        net_ipv4_tcp_fin_timeout           = optional(number)<br>        net_ipv4_tcp_keepalive_intvl       = optional(number)<br>        net_ipv4_tcp_keepalive_probes      = optional(number)<br>        net_ipv4_tcp_keepalive_time        = optional(number)<br>        net_ipv4_tcp_max_syn_backlog       = optional(number)<br>        net_ipv4_tcp_max_tw_buckets        = optional(number)<br>        net_ipv4_tcp_tw_reuse              = optional(bool)<br>        net_netfilter_nf_conntrack_buckets = optional(number)<br>        net_netfilter_nf_conntrack_max     = optional(number)<br>        vm_max_map_count                   = optional(number)<br>        vm_swappiness                      = optional(number)<br>        vm_vfs_cache_pressure              = optional(number)<br>      }))<br>      transparent_huge_page_enabled = optional(string)<br>      transparent_huge_page_defrag  = optional(string)<br>      swap_file_size_mb             = optional(number)<br>    }))<br>    fips_enabled       = optional(bool)<br>    kubelet_disk_type  = optional(string)<br>    max_count          = optional(number)<br>    max_pods           = optional(number)<br>    message_of_the_day = optional(string)<br>    mode               = optional(string, "User")<br>    min_count          = optional(number)<br>    node_network_profile = optional(object({<br>      node_public_ip_tags = optional(map(string))<br>    }))<br>    node_labels                  = optional(map(string))<br>    node_public_ip_prefix_id     = optional(string)<br>    node_taints                  = optional(list(string))<br>    orchestrator_version         = optional(string)<br>    os_disk_size_gb              = optional(number)<br>    os_disk_type                 = optional(string, "Managed")<br>    os_sku                       = optional(string)<br>    os_type                      = optional(string, "Linux")<br>    pod_subnet_id                = optional(string)<br>    priority                     = optional(string, "Regular")<br>    proximity_placement_group_id = optional(string)<br>    spot_max_price               = optional(number)<br>    scale_down_mode              = optional(string, "Delete")<br>    snapshot_id                  = optional(string)<br>    ultra_ssd_enabled            = optional(bool)<br>    vnet_subnet_id               = optional(string)<br>    upgrade_settings = optional(object({<br>      drain_timeout_in_minutes      = number<br>      node_soak_duration_in_minutes = number<br>      max_surge                     = string<br>    }))<br>    windows_profile = optional(object({<br>      outbound_nat_enabled = optional(bool, true)<br>    }))<br>    workload_runtime      = optional(string)<br>    zones                 = optional(set(string))<br>    create_before_destroy = optional(bool, true)<br>  }))</pre> | `{}` | no |
| <a name="input_node_resource_group"></a> [node\_resource\_group](#input\_node\_resource\_group) | The auto-generated Resource Group which contains the resources for this Managed Kubernetes Cluster. Changing this forces a new resource to be created. | `string` | `null` | no |
| <a name="input_oidc_issuer_enabled"></a> [oidc\_issuer\_enabled](#input\_oidc\_issuer\_enabled) | Enable or Disable the OIDC issuer URL. Defaults to false. | `bool` | `false` | no |
| <a name="input_only_critical_addons_enabled"></a> [only\_critical\_addons\_enabled](#input\_only\_critical\_addons\_enabled) | (Optional) Enabling this option will taint default node pool with `CriticalAddonsOnly=true:NoSchedule` taint. Changing this forces a new resource to be created. | `bool` | `null` | no |
| <a name="input_open_service_mesh_enabled"></a> [open\_service\_mesh\_enabled](#input\_open\_service\_mesh\_enabled) | Is Open Service Mesh enabled? For more details, please visit [Open Service Mesh for AKS](https://docs.microsoft.com/azure/aks/open-service-mesh-about). | `bool` | `null` | no |
| <a name="input_orchestrator_version"></a> [orchestrator\_version](#input\_orchestrator\_version) | Specify which Kubernetes release to use for the orchestration layer. The default used is the latest Kubernetes version available in the region | `string` | `null` | no |
| <a name="input_os_disk_size_gb"></a> [os\_disk\_size\_gb](#input\_os\_disk\_size\_gb) | Disk size of nodes in GBs. | `number` | `50` | no |
| <a name="input_os_disk_type"></a> [os\_disk\_type](#input\_os\_disk\_type) | The type of disk which should be used for the Operating System. Possible values are `Ephemeral` and `Managed`. Defaults to `Managed`. Changing this forces a new resource to be created. | `string` | `"Managed"` | no |
| <a name="input_os_sku"></a> [os\_sku](#input\_os\_sku) | (Optional) Specifies the OS SKU used by the agent pool. Possible values include: `Ubuntu`, `CBLMariner`, `Mariner`, `Windows2019`, `Windows2022`. If not specified, the default is `Ubuntu` if OSType=Linux or `Windows2019` if OSType=Windows. And the default Windows OSSKU will be changed to `Windows2022` after Windows2019 is deprecated. Changing this forces a new resource to be created. | `string` | `null` | no |
| <a name="input_pod_subnet_id"></a> [pod\_subnet\_id](#input\_pod\_subnet\_id) | (Optional) The ID of the Subnet where the pods in the default Node Pool should exist. Changing this forces a new resource to be created. | `string` | `null` | no |
| <a name="input_prefix"></a> [prefix](#input\_prefix) | (Optional) The prefix for the resources created in the specified Azure Resource Group. Omitting this variable requires both `var.cluster_log_analytics_workspace_name` and `var.cluster_name` have been set. | `string` | `""` | no |
| <a name="input_private_cluster_enabled"></a> [private\_cluster\_enabled](#input\_private\_cluster\_enabled) | If true cluster API server will be exposed only on internal IP address and available only in cluster vnet. | `bool` | `false` | no |
| <a name="input_private_cluster_public_fqdn_enabled"></a> [private\_cluster\_public\_fqdn\_enabled](#input\_private\_cluster\_public\_fqdn\_enabled) | (Optional) Specifies whether a Public FQDN for this Private Cluster should be added. Defaults to `false`. | `bool` | `false` | no |
| <a name="input_private_dns_zone_id"></a> [private\_dns\_zone\_id](#input\_private\_dns\_zone\_id) | (Optional) Either the ID of Private DNS Zone which should be delegated to this Cluster, `System` to have AKS manage this or `None`. In case of `None` you will need to bring your own DNS server and set up resolving, otherwise cluster will have issues after provisioning. Changing this forces a new resource to be created. | `string` | `null` | no |
| <a name="input_public_ssh_key"></a> [public\_ssh\_key](#input\_public\_ssh\_key) | A custom ssh key to control access to the AKS cluster. Changing this forces a new resource to be created. | `string` | `""` | no |
| <a name="input_rbac_aad"></a> [rbac\_aad](#input\_rbac\_aad) | (Optional) Is Azure Active Directory integration enabled? | `bool` | `true` | no |
| <a name="input_rbac_aad_admin_group_object_ids"></a> [rbac\_aad\_admin\_group\_object\_ids](#input\_rbac\_aad\_admin\_group\_object\_ids) | Object ID of groups with admin access. | `list(string)` | `null` | no |
| <a name="input_rbac_aad_azure_rbac_enabled"></a> [rbac\_aad\_azure\_rbac\_enabled](#input\_rbac\_aad\_azure\_rbac\_enabled) | (Optional) Is Role Based Access Control based on Azure AD enabled? | `bool` | `null` | no |
| <a name="input_rbac_aad_client_app_id"></a> [rbac\_aad\_client\_app\_id](#input\_rbac\_aad\_client\_app\_id) | The Client ID of an Azure Active Directory Application. | `string` | `null` | no |
| <a name="input_rbac_aad_managed"></a> [rbac\_aad\_managed](#input\_rbac\_aad\_managed) | Is the Azure Active Directory integration Managed, meaning that Azure will create/manage the Service Principal used for integration. | `bool` | `false` | no |
| <a name="input_rbac_aad_server_app_id"></a> [rbac\_aad\_server\_app\_id](#input\_rbac\_aad\_server\_app\_id) | The Server ID of an Azure Active Directory Application. | `string` | `null` | no |
| <a name="input_rbac_aad_server_app_secret"></a> [rbac\_aad\_server\_app\_secret](#input\_rbac\_aad\_server\_app\_secret) | The Server Secret of an Azure Active Directory Application. | `string` | `null` | no |
| <a name="input_rbac_aad_tenant_id"></a> [rbac\_aad\_tenant\_id](#input\_rbac\_aad\_tenant\_id) | (Optional) The Tenant ID used for Azure Active Directory Application. If this isn't specified the Tenant ID of the current Subscription is used. | `string` | `null` | no |
| <a name="input_resource_group_name"></a> [resource\_group\_name](#input\_resource\_group\_name) | The resource group name to be imported | `string` | n/a | yes |
| <a name="input_role_based_access_control_enabled"></a> [role\_based\_access\_control\_enabled](#input\_role\_based\_access\_control\_enabled) | Enable Role Based Access Control. | `bool` | `false` | no |
| <a name="input_run_command_enabled"></a> [run\_command\_enabled](#input\_run\_command\_enabled) | (Optional) Whether to enable run command for the cluster or not. | `bool` | `true` | no |
| <a name="input_scale_down_mode"></a> [scale\_down\_mode](#input\_scale\_down\_mode) | (Optional) Specifies the autoscaling behaviour of the Kubernetes Cluster. If not specified, it defaults to `Delete`. Possible values include `Delete` and `Deallocate`. Changing this forces a new resource to be created. | `string` | `"Delete"` | no |
| <a name="input_secret_rotation_enabled"></a> [secret\_rotation\_enabled](#input\_secret\_rotation\_enabled) | Is secret rotation enabled? This variable is only used when `key_vault_secrets_provider_enabled` is `true` and defaults to `false` | `bool` | `false` | no |
| <a name="input_secret_rotation_interval"></a> [secret\_rotation\_interval](#input\_secret\_rotation\_interval) | The interval to poll for secret rotation. This attribute is only set when `secret_rotation` is `true` and defaults to `2m` | `string` | `"2m"` | no |
| <a name="input_service_mesh_profile"></a> [service\_mesh\_profile](#input\_service\_mesh\_profile) | `mode` - (Required) The mode of the service mesh. Possible value is `Istio`.<br>`internal_ingress_gateway_enabled` - (Optional) Is Istio Internal Ingress Gateway enabled? Defaults to `true`.<br>`external_ingress_gateway_enabled` - (Optional) Is Istio External Ingress Gateway enabled? Defaults to `true`. | <pre>object({<br>    mode                             = string<br>    internal_ingress_gateway_enabled = optional(bool, true)<br>    external_ingress_gateway_enabled = optional(bool, true)<br>  })</pre> | `null` | no |
| <a name="input_sku_tier"></a> [sku\_tier](#input\_sku\_tier) | The SKU Tier that should be used for this Kubernetes Cluster. Possible values are `Free`, `Standard` and `Premium` | `string` | `"Free"` | no |
| <a name="input_snapshot_id"></a> [snapshot\_id](#input\_snapshot\_id) | (Optional) The ID of the Snapshot which should be used to create this default Node Pool. `temporary_name_for_rotation` must be specified when changing this property. | `string` | `null` | no |
| <a name="input_storage_profile_blob_driver_enabled"></a> [storage\_profile\_blob\_driver\_enabled](#input\_storage\_profile\_blob\_driver\_enabled) | (Optional) Is the Blob CSI driver enabled? Defaults to `false` | `bool` | `false` | no |
| <a name="input_storage_profile_disk_driver_enabled"></a> [storage\_profile\_disk\_driver\_enabled](#input\_storage\_profile\_disk\_driver\_enabled) | (Optional) Is the Disk CSI driver enabled? Defaults to `true` | `bool` | `true` | no |
| <a name="input_storage_profile_disk_driver_version"></a> [storage\_profile\_disk\_driver\_version](#input\_storage\_profile\_disk\_driver\_version) | (Optional) Disk CSI Driver version to be used. Possible values are `v1` and `v2`. Defaults to `v1`. | `string` | `"v1"` | no |
| <a name="input_storage_profile_enabled"></a> [storage\_profile\_enabled](#input\_storage\_profile\_enabled) | Enable storage profile | `bool` | `false` | no |
| <a name="input_storage_profile_file_driver_enabled"></a> [storage\_profile\_file\_driver\_enabled](#input\_storage\_profile\_file\_driver\_enabled) | (Optional) Is the File CSI driver enabled? Defaults to `true` | `bool` | `true` | no |
| <a name="input_storage_profile_snapshot_controller_enabled"></a> [storage\_profile\_snapshot\_controller\_enabled](#input\_storage\_profile\_snapshot\_controller\_enabled) | (Optional) Is the Snapshot Controller enabled? Defaults to `true` | `bool` | `true` | no |
| <a name="input_support_plan"></a> [support\_plan](#input\_support\_plan) | The support plan which should be used for this Kubernetes Cluster. Possible values are `KubernetesOfficial` and `AKSLongTermSupport`. | `string` | `"KubernetesOfficial"` | no |
| <a name="input_tags"></a> [tags](#input\_tags) | Any tags that should be present on the AKS cluster resources | `map(string)` | `{}` | no |
| <a name="input_temporary_name_for_rotation"></a> [temporary\_name\_for\_rotation](#input\_temporary\_name\_for\_rotation) | (Optional) Specifies the name of the temporary node pool used to cycle the default node pool for VM resizing. the `var.agents_size` is no longer ForceNew and can be resized by specifying `temporary_name_for_rotation` | `string` | `null` | no |
| <a name="input_tracing_tags_enabled"></a> [tracing\_tags\_enabled](#input\_tracing\_tags\_enabled) | Whether enable tracing tags that generated by BridgeCrew Yor. | `bool` | `false` | no |
| <a name="input_tracing_tags_prefix"></a> [tracing\_tags\_prefix](#input\_tracing\_tags\_prefix) | Default prefix for generated tracing tags | `string` | `"avm_"` | no |
| <a name="input_ultra_ssd_enabled"></a> [ultra\_ssd\_enabled](#input\_ultra\_ssd\_enabled) | (Optional) Used to specify whether the UltraSSD is enabled in the Default Node Pool. Defaults to false. | `bool` | `false` | no |
| <a name="input_vnet_subnet_id"></a> [vnet\_subnet\_id](#input\_vnet\_subnet\_id) | (Optional) The ID of a Subnet where the Kubernetes Node Pool should exist. Changing this forces a new resource to be created. | `string` | `null` | no |
| <a name="input_web_app_routing"></a> [web\_app\_routing](#input\_web\_app\_routing) | object({<br>  dns\_zone\_id = "(Required) Specifies the ID of the DNS Zone in which DNS entries are created for applications deployed to the cluster when Web App Routing is enabled."<br>}) | <pre>object({<br>    dns_zone_id = string<br>  })</pre> | `null` | no |
| <a name="input_workload_autoscaler_profile"></a> [workload\_autoscaler\_profile](#input\_workload\_autoscaler\_profile) | `keda_enabled` - (Optional) Specifies whether KEDA Autoscaler can be used for workloads.<br>`vertical_pod_autoscaler_enabled` - (Optional) Specifies whether Vertical Pod Autoscaler should be enabled. | <pre>object({<br>    keda_enabled                    = optional(bool, false)<br>    vertical_pod_autoscaler_enabled = optional(bool, false)<br>  })</pre> | `null` | no |
| <a name="input_workload_identity_enabled"></a> [workload\_identity\_enabled](#input\_workload\_identity\_enabled) | Enable or Disable Workload Identity. Defaults to false. | `bool` | `false` | no |

## Outputs

| Name | Description |
|------|-------------|
| <a name="output_aci_connector_linux"></a> [aci\_connector\_linux](#output\_aci\_connector\_linux) | The `aci_connector_linux` block of `azurerm_kubernetes_cluster` resource. |
| <a name="output_aci_connector_linux_enabled"></a> [aci\_connector\_linux\_enabled](#output\_aci\_connector\_linux\_enabled) | Has `aci_connector_linux` been enabled on the `azurerm_kubernetes_cluster` resource? |
| <a name="output_admin_client_certificate"></a> [admin\_client\_certificate](#output\_admin\_client\_certificate) | The `client_certificate` in the `azurerm_kubernetes_cluster`'s `kube_admin_config` block.  Base64 encoded public certificate used by clients to authenticate to the Kubernetes cluster. |
| <a name="output_admin_client_key"></a> [admin\_client\_key](#output\_admin\_client\_key) | The `client_key` in the `azurerm_kubernetes_cluster`'s `kube_admin_config` block. Base64 encoded private key used by clients to authenticate to the Kubernetes cluster. |
| <a name="output_admin_cluster_ca_certificate"></a> [admin\_cluster\_ca\_certificate](#output\_admin\_cluster\_ca\_certificate) | The `cluster_ca_certificate` in the `azurerm_kubernetes_cluster`'s `kube_admin_config` block. Base64 encoded public CA certificate used as the root of trust for the Kubernetes cluster. |
| <a name="output_admin_host"></a> [admin\_host](#output\_admin\_host) | The `host` in the `azurerm_kubernetes_cluster`'s `kube_admin_config` block. The Kubernetes cluster server host. |
| <a name="output_admin_password"></a> [admin\_password](#output\_admin\_password) | The `password` in the `azurerm_kubernetes_cluster`'s `kube_admin_config` block. A password or token used to authenticate to the Kubernetes cluster. |
| <a name="output_admin_username"></a> [admin\_username](#output\_admin\_username) | The `username` in the `azurerm_kubernetes_cluster`'s `kube_admin_config` block. A username used to authenticate to the Kubernetes cluster. |
| <a name="output_aks_id"></a> [aks\_id](#output\_aks\_id) | The `azurerm_kubernetes_cluster`'s id. |
| <a name="output_aks_name"></a> [aks\_name](#output\_aks\_name) | The `azurerm_kubernetes_cluster`'s name. |
| <a name="output_azure_policy_enabled"></a> [azure\_policy\_enabled](#output\_azure\_policy\_enabled) | The `azurerm_kubernetes_cluster`'s `azure_policy_enabled` argument. Should the Azure Policy Add-On be enabled? For more details please visit [Understand Azure Policy for Azure Kubernetes Service](https://docs.microsoft.com/en-ie/azure/governance/policy/concepts/rego-for-aks) |
| <a name="output_azurerm_log_analytics_workspace_id"></a> [azurerm\_log\_analytics\_workspace\_id](#output\_azurerm\_log\_analytics\_workspace\_id) | The id of the created Log Analytics workspace |
| <a name="output_azurerm_log_analytics_workspace_name"></a> [azurerm\_log\_analytics\_workspace\_name](#output\_azurerm\_log\_analytics\_workspace\_name) | The name of the created Log Analytics workspace |
| <a name="output_azurerm_log_analytics_workspace_primary_shared_key"></a> [azurerm\_log\_analytics\_workspace\_primary\_shared\_key](#output\_azurerm\_log\_analytics\_workspace\_primary\_shared\_key) | Specifies the workspace key of the log analytics workspace |
| <a name="output_client_certificate"></a> [client\_certificate](#output\_client\_certificate) | The `client_certificate` in the `azurerm_kubernetes_cluster`'s `kube_config` block. Base64 encoded public certificate used by clients to authenticate to the Kubernetes cluster. |
| <a name="output_client_key"></a> [client\_key](#output\_client\_key) | The `client_key` in the `azurerm_kubernetes_cluster`'s `kube_config` block. Base64 encoded private key used by clients to authenticate to the Kubernetes cluster. |
| <a name="output_cluster_ca_certificate"></a> [cluster\_ca\_certificate](#output\_cluster\_ca\_certificate) | The `cluster_ca_certificate` in the `azurerm_kubernetes_cluster`'s `kube_config` block. Base64 encoded public CA certificate used as the root of trust for the Kubernetes cluster. |
| <a name="output_cluster_fqdn"></a> [cluster\_fqdn](#output\_cluster\_fqdn) | The FQDN of the Azure Kubernetes Managed Cluster. |
| <a name="output_cluster_identity"></a> [cluster\_identity](#output\_cluster\_identity) | The `azurerm_kubernetes_cluster`'s `identity` block. |
| <a name="output_cluster_portal_fqdn"></a> [cluster\_portal\_fqdn](#output\_cluster\_portal\_fqdn) | The FQDN for the Azure Portal resources when private link has been enabled, which is only resolvable inside the Virtual Network used by the Kubernetes Cluster. |
| <a name="output_cluster_private_fqdn"></a> [cluster\_private\_fqdn](#output\_cluster\_private\_fqdn) | The FQDN for the Kubernetes Cluster when private link has been enabled, which is only resolvable inside the Virtual Network used by the Kubernetes Cluster. |
| <a name="output_generated_cluster_private_ssh_key"></a> [generated\_cluster\_private\_ssh\_key](#output\_generated\_cluster\_private\_ssh\_key) | The cluster will use this generated private key as ssh key when `var.public_ssh_key` is empty or null. Private key data in [PEM (RFC 1421)](https://datatracker.ietf.org/doc/html/rfc1421) format. |
| <a name="output_generated_cluster_public_ssh_key"></a> [generated\_cluster\_public\_ssh\_key](#output\_generated\_cluster\_public\_ssh\_key) | The cluster will use this generated public key as ssh key when `var.public_ssh_key` is empty or null. The fingerprint of the public key data in OpenSSH MD5 hash format, e.g. `aa:bb:cc:....` Only available if the selected private key format is compatible, similarly to `public_key_openssh` and the [ECDSA P224 limitations](https://registry.terraform.io/providers/hashicorp/tls/latest/docs#limitations). |
| <a name="output_host"></a> [host](#output\_host) | The `host` in the `azurerm_kubernetes_cluster`'s `kube_config` block. The Kubernetes cluster server host. |
| <a name="output_http_application_routing_zone_name"></a> [http\_application\_routing\_zone\_name](#output\_http\_application\_routing\_zone\_name) | The `azurerm_kubernetes_cluster`'s `http_application_routing_zone_name` argument. The Zone Name of the HTTP Application Routing. |
| <a name="output_ingress_application_gateway"></a> [ingress\_application\_gateway](#output\_ingress\_application\_gateway) | The `azurerm_kubernetes_cluster`'s `ingress_application_gateway` block. |
| <a name="output_ingress_application_gateway_enabled"></a> [ingress\_application\_gateway\_enabled](#output\_ingress\_application\_gateway\_enabled) | Has the `azurerm_kubernetes_cluster` turned on `ingress_application_gateway` block? |
| <a name="output_key_vault_secrets_provider"></a> [key\_vault\_secrets\_provider](#output\_key\_vault\_secrets\_provider) | The `azurerm_kubernetes_cluster`'s `key_vault_secrets_provider` block. |
| <a name="output_key_vault_secrets_provider_enabled"></a> [key\_vault\_secrets\_provider\_enabled](#output\_key\_vault\_secrets\_provider\_enabled) | Has the `azurerm_kubernetes_cluster` turned on `key_vault_secrets_provider` block? |
| <a name="output_kube_admin_config_raw"></a> [kube\_admin\_config\_raw](#output\_kube\_admin\_config\_raw) | The `azurerm_kubernetes_cluster`'s `kube_admin_config_raw` argument. Raw Kubernetes config for the admin account to be used by [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/) and other compatible tools. This is only available when Role Based Access Control with Azure Active Directory is enabled and local accounts enabled. |
| <a name="output_kube_config_raw"></a> [kube\_config\_raw](#output\_kube\_config\_raw) | The `azurerm_kubernetes_cluster`'s `kube_config_raw` argument. Raw Kubernetes config to be used by [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/) and other compatible tools. |
| <a name="output_kubelet_identity"></a> [kubelet\_identity](#output\_kubelet\_identity) | The `azurerm_kubernetes_cluster`'s `kubelet_identity` block. |
| <a name="output_location"></a> [location](#output\_location) | The `azurerm_kubernetes_cluster`'s `location` argument. (Required) The location where the Managed Kubernetes Cluster should be created. |
| <a name="output_network_profile"></a> [network\_profile](#output\_network\_profile) | The `azurerm_kubernetes_cluster`'s `network_profile` block |
| <a name="output_node_resource_group"></a> [node\_resource\_group](#output\_node\_resource\_group) | The auto-generated Resource Group which contains the resources for this Managed Kubernetes Cluster. |
| <a name="output_oidc_issuer_url"></a> [oidc\_issuer\_url](#output\_oidc\_issuer\_url) | The OIDC issuer URL that is associated with the cluster. |
| <a name="output_oms_agent"></a> [oms\_agent](#output\_oms\_agent) | The `azurerm_kubernetes_cluster`'s `oms_agent` argument. |
| <a name="output_oms_agent_enabled"></a> [oms\_agent\_enabled](#output\_oms\_agent\_enabled) | Has the `azurerm_kubernetes_cluster` turned on `oms_agent` block? |
| <a name="output_open_service_mesh_enabled"></a> [open\_service\_mesh\_enabled](#output\_open\_service\_mesh\_enabled) | (Optional) Is Open Service Mesh enabled? For more details, please visit [Open Service Mesh for AKS](https://docs.microsoft.com/azure/aks/open-service-mesh-about). |
| <a name="output_password"></a> [password](#output\_password) | The `password` in the `azurerm_kubernetes_cluster`'s `kube_config` block. A password or token used to authenticate to the Kubernetes cluster. |
| <a name="output_username"></a> [username](#output\_username) | The `username` in the `azurerm_kubernetes_cluster`'s `kube_config` block. A username used to authenticate to the Kubernetes cluster. |
| <a name="output_web_app_routing_identity"></a> [web\_app\_routing\_identity](#output\_web\_app\_routing\_identity) | The `azurerm_kubernetes_cluster`'s `web_app_routing_identity` block, it's type is a list of object. |
<!-- END_TF_DOCS -->
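
The oidc_issuer_url output pairs naturally with workload identity. The sketch below assumes the cluster was created with oidc_issuer_enabled = true and workload_identity_enabled = true, and that a user-assigned identity named azurerm_user_assigned_identity.workload is defined elsewhere; the credential name and service account subject are placeholders.

resource "azurerm_federated_identity_credential" "workload" {
  name                = "aks-workload-identity" # placeholder name
  resource_group_name = azurerm_user_assigned_identity.workload.resource_group_name
  parent_id           = azurerm_user_assigned_identity.workload.id
  audience            = ["api://AzureADTokenExchange"]
  issuer              = module.aks.oidc_issuer_url
  subject             = "system:serviceaccount:default:workload-sa" # placeholder namespace/service account
}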

terraform-azurerm-aks's Issues

Allow RBAC for K8s Service Accounts without the need for AzureAD

Allow Kubernetes RBAC to be enabled so Kubernetes Service Accounts can be used without the need for AzureAD integration, per the docs.

However, with enable_role_based_access_control = true and the default rbac_aad_managed = false, it currently fails with the error

│ Error: You must specify client_app_id and server_app_id and server_app_secret when using managed aad rbac (managed = false)
│ 
│   with module.aks.azurerm_kubernetes_cluster.main,
│   on .terraform/modules/aks/main.tf line 10, in resource "azurerm_kubernetes_cluster" "main":
│   10: resource "azurerm_kubernetes_cluster" "main" {

kube_config_raw needs to be marked as nonsensitive to work with terraform v0.15

When using this module with Terraform 0.15, we get the error below when running terraform apply.

│ Error: Output refers to sensitive values
│ 
│   on .terraform/modules/aks/outputs.tf line 37:
│   37: output "kube_config_raw" {
│ 
│ Expressions used in outputs can only refer to sensitive values if the
│ sensitive attribute is true.

This forces the kube_config_raw output to either be marked as sensitive or wrapped in the nonsensitive function to override Terraform's automatic detection.
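For reference, a minimal sketch of the two workarounds described above (the wrapping output names are illustrative):

output "kube_config_raw" {
  value     = module.aks.kube_config_raw
  sensitive = true # propagate the sensitive flag to the calling configuration
}

# Or, on Terraform >= 0.15, explicitly opt out of sensitivity tracking:
output "kube_config_raw_plain" {
  value = nonsensitive(module.aks.kube_config_raw)
}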

Cannot override cluster location

The cluster location defaults to the resource group location, and there is no way to override this.
Creating a PR to address this.

Tags are unset for log analytics solution

Common compliance policies require tags to be set on all resources that support tags. At present, the azurerm_log_analytics_solution.main resource does not set the tags argument to the value of the tags variable, even though the resource supports it.

It would be extremely nice to have this addressed.

Include node_resource_group as variable

It would be helpful to have a variable for the node_resource_group name.

https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster#node_resource_group

node_resource_group - (Optional) The name of the Resource Group where the Kubernetes Nodes should exist. Changing this forces a new resource to be created.

variable "node_resource_group" {
  description = "The auto-generated Resource Group which contains the resources for this Managed Kubernetes Cluster."
  type        = string
  default     = null
}

AKS versions include_preview should default to false

Currently, the include_preview argument of the azurerm_kubernetes_service_versions data source defaults to "true." This goes unnoticed by users and has caused some confusion.

include_preview should default to "false", OR we should add arguments for "latest_stable_version" and "latest_preview_version" (a sketch of the workaround follows the snippet below).

data "azurerm_kubernetes_service_versions" "current" {
  location = "West Europe"
  include_preview = true
}

output "versions" {
  value = data.azurerm_kubernetes_service_versions.current.versions
}

output "latest_version" {
  value = data.azurerm_kubernetes_service_versions.current.latest_version
}
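Until the default changes, a minimal sketch of the workaround is to set the argument explicitly:

data "azurerm_kubernetes_service_versions" "stable" {
  location        = "West Europe"
  include_preview = false # only GA versions are returned
}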

Expose all outputs from azurerm_kubernetes_cluster resources

I would suggest exposing all outputs.

In my case, I've enabled AAD integration for the Kubernetes cluster and noticed it is no longer possible to seamlessly integrate the Azure side with the Kubernetes/Helm providers, as I don't want to use kubelogin.

Another way to connect to AKS would be the kube_admin_config output block, but it's not exposed by the module. Please make all values available as outputs so users can use them as needed.

Ref: Azure/AKS#1763
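For illustration, a minimal sketch of exposing the requested block as a module output (the output name and sensitivity handling are assumptions, not the module's current API):

output "kube_admin_config" {
  description = "The `kube_admin_config` block of the `azurerm_kubernetes_cluster` resource."
  value       = azurerm_kubernetes_cluster.main.kube_admin_config
  sensitive   = true
}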

Proposal: Cluster-Name should follow Naming-recommendations or allow for change

From the Azure resource naming guidelines I understand the importance of consistent naming across our resources. The Azure naming module documentation even suggests being stricter and merely using suffixes for resource names.

However, in this AKS module I can only define a prefix, not a suffix, and there's no option to otherwise influence the name of the cluster or the Log Analytics workspace.
In order to establish consistent naming amongst my resources, I would like a stronger influence on the name, either by providing an additional suffix input or by adding a name input for both the AKS cluster and the Log Analytics workspace.

Changing sku_tier forces replacement

Changing the sku_tier from Paid to Free forces a full rebuild of the cluster. If this can be done via the Azure CLI without a rebuild, why should Terraform require one?

Any insight would be helpful.

As-is module failed to create a cluster

There are multiple issues identified while consuming the 1.0.0 module.

  1. Leaving the prefix variable at its default value fails to create the cluster, since the resulting Log Analytics resource name is not unique. I have overridden it with an additional property value.

  2. Running with the above state resulted in the failure below. The kubernetes_version variable needs to be overridden. The instructions need updating with the mandatory values required for as-is usage of the module (a hedged sketch follows the error below).

* module.aks.module.kubernetes.azurerm_kubernetes_cluster.main: 1 error(s) occurred:

* azurerm_kubernetes_cluster.main: Error creating/updating Managed Kubernetes Cluster "aksclustermodule-aks" (Resource Group "aksclustermodule-resources"): containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="The value of parameter orchestratorProfile.OrchestratorVersion is invalid. Please see https://aka.ms/aks-naming-rules for more details." Target="orchestratorProfile.OrchestratorVersion"
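A hedged sketch of the overrides that should work around both problems (all values are illustrative):

module "aks" {
  source              = "Azure/aks/azurerm"
  resource_group_name = azurerm_resource_group.example.name
  prefix              = "myteam-dev-001" # unique prefix so the Log Analytics name does not collide
  kubernetes_version  = "1.19.11"        # a version listed for your region, e.g. via `az aks get-versions`
}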

When trying to deploy an AKS cluster using this module, it fails with the error "ServiceCidrOverlapExistingSubnetsCidr"

Below is my main.tf file. When I try to deploy it, the RG, VNet, and Subnet deploy fine but the AKS deployment fails.

**provider "azurerm" {
features {}
}

resource "azurerm_resource_group" "aks" {
name = var.resourcegroupname
location = var.location
tags = {
environment = "dev"
}
}

module "network" {
source = "Azure/network/azurerm"
resource_group_name = azurerm_resource_group.aks.name
address_space = "10.0.0.0/16"
vnet_name = "${var.prefix}-vnet"
subnet_prefixes = ["10.0.1.0/24"]
subnet_names = ["aks-main"]
tags = {
environment = "dev"
}
depends_on = [azurerm_resource_group.aks]
}

module "aks" {
source = "Azure/aks/azurerm"
version = "4.2.0"
resource_group_name = azurerm_resource_group.aks.name
client_id = "your-service-principal-client-appid"
client_secret = "your-service-principal-client-password"
prefix = "${var.prefix}-aks"
vnet_subnet_id = module.network.vnet_subnets[0]
os_disk_size_gb = 126
agents_count = var.agents_count
agents_size = var.agents_size
enable_log_analytics_workspace = false
tags = {
environment = "dev"
}
depends_on = [module.network]
}**

Error: creating Managed Kubernetes Cluster "prod-aks-aks" (Resource Group "elasticsearch-rgs"): containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="ServiceCidrOverlapExistingSubnetsCidr" Message="The specified service CIDR 10.0.0.0/16 is conflicted with an existing subnet CIDR 10.0.1.0/24" Target="networkProfile.serviceCIDR"

on .terraform/modules/aks/main.tf line 10, in resource "azurerm_kubernetes_cluster" "main":
10: resource "azurerm_kubernetes_cluster" "main" {

Am I doing something wrong?
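One way to resolve this, sketched below with illustrative values and assuming your module version exposes the `net_profile_*` inputs, is to give the cluster a service CIDR that does not overlap the VNet address space:

module "aks" {
  source                         = "Azure/aks/azurerm"
  version                        = "4.2.0"
  resource_group_name            = azurerm_resource_group.aks.name
  prefix                         = "${var.prefix}-aks"
  vnet_subnet_id                 = module.network.vnet_subnets[0]
  net_profile_service_cidr       = "10.1.0.0/16" # outside the 10.0.0.0/16 VNet
  net_profile_dns_service_ip     = "10.1.0.10"   # must sit inside the service CIDR
  agents_count                   = var.agents_count
  agents_size                    = var.agents_size
  enable_log_analytics_workspace = false
  depends_on                     = [module.network]
}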

Terraform 0.12 support

Hello,

It seems that v1.0 is not compatible with Terraform v0.12:

$ terraform init
Initializing modules...
Downloading Azure/aks/azurerm 1.0.0 for aks...
- aks in .terraform/modules/aks/Azure-terraform-azurerm-aks-72b73d0
- aks.kubernetes in .terraform/modules/aks/Azure-terraform-azurerm-aks-72b73d0/modules/kubernetes-cluster
- aks.log_analytics_solution in .terraform/modules/aks/Azure-terraform-azurerm-aks-72b73d0/modules/log-analytics-solution
- aks.log_analytics_workspace in .terraform/modules/aks/Azure-terraform-azurerm-aks-72b73d0/modules/log-analytics-workspace
- aks.ssh-key in .terraform/modules/aks/Azure-terraform-azurerm-aks-72b73d0/modules/ssh-key

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...

Provider "azurerm" v1.23.0 is not compatible with Terraform 0.12.2.

Provider version 1.27.0 is the earliest compatible version. Select it with 
the following version constraint:

	version = "~> 1.27"

Terraform checked all of the plugin versions matching the given constraint:
    =1.23.0

Consult the documentation for this provider for more information on
compatibility between provider and Terraform versions.


Error: incompatible provider version


Expose `os_disk_type` as a variable

I've just spun up a Kubernetes cluster in Azure using this module and am reviewing the advisor recommendations. One of the recommendations is to switch to ephemeral OS disks, which would be achieved by setting the os_disk_type argument for the default_node_pool block to Ephemeral from the default Managed.

Please add a variable to be able to switch OS disk types to make use of this feature.
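A hypothetical sketch of what the requested variable could look like (the `os_disk_type` argument and its `Ephemeral`/`Managed` values come from the provider; the variable itself is an assumption):

variable "os_disk_type" {
  description = "The type of disk used by the default node pool OS. Possible values are `Ephemeral` and `Managed`."
  type        = string
  default     = "Managed"
}

# Inside the azurerm_kubernetes_cluster resource's default_node_pool block:
#   os_disk_type = var.os_disk_type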

Missing Kubernetes Cluster Version

Hi Azure team!

I want to use a "verified" AKS terraform module to deploy some clusters. I went to the terraform registry page and i found this one: https://registry.terraform.io/modules/Azure/aks/azurerm/3.0.0 which is the "official" maintained by Microsoft Azure team.

Awesome. But currently I cannot understand why I cannot specify the Kubernetes version variable (which is available both from the web interface and the Terraform provider: https://www.terraform.io/docs/providers/azurerm/r/kubernetes_cluster.html#kubernetes_version)

So...

The questions are:

  1. Is this module currently maintained?
  2. Is there a reason not to expose the Kubernetes version variable?
  3. Should I use another module?

Thanks!

Support for user_assigned_identity_id

Hi There!

The new release of the azurerm provider (v2.44.0, link) now has support for user-assigned identities, which makes managing a k8s cluster in a user-defined network much easier (because you don't need to fiddle around with service principals).

Are there already plans to upgrade to the new provider version and expose the necessary configuration at the module level?

I'm also open to issuing a PR myself, but I wanted to check in with the maintainers first.

Terraform 0.15, azurerm 2.75.0

Hello,

Does this module work with those versions?

I did a simple config and got this error:

│ Error: waiting for creation of Managed Kubernetes Cluster "aks" (Resource Group "RG_AKS"): Code="CreateVMSSAgentPoolFailed" Message="AKS encountered an internal error while attempting the requested Creating operation. AKS will continuously retry the requested operation until successful or a retry timeout is hit. Check back to see if the operation requires resubmission. Correlation ID: 164824d8-3c2b-677c-4f4d-bc2bf8aef2ba, Operation ID: 575312e4-00f7-4d70-bb96-90aa0f373934, Timestamp: 2021-10-15T11:41:25Z."
│
│ with module.aks.azurerm_kubernetes_cluster.main,
│ on .terraform/modules/aks/main.tf line 10, in resource "azurerm_kubernetes_cluster" "main":
│ 10: resource "azurerm_kubernetes_cluster" "main" {

Here the TF code :

module "network" {
  source              = "Azure/network/azurerm"
  resource_group_name = azurerm_resource_group.example.name
  address_space       = "10.10.0.0/16"
  subnet_prefixes     = ["10.10.1.0/24"]
  subnet_names        = ["subnet1"]
  depends_on          = [azurerm_resource_group.example]
}

data "azuread_group" "aks_cluster_admins" {
  display_name = "xxx_members"
}

module "aks" {
  source                           = "Azure/aks/azurerm"
  resource_group_name              = azurerm_resource_group.example.name
  kubernetes_version               = "1.22.1"
  orchestrator_version             = "1.22.1"
  prefix                           = "prefix"
  cluster_name                     = "aks"
  network_plugin                   = "azure"
  vnet_subnet_id                   = module.network.vnet_subnets[0]
  os_disk_size_gb                  = 50
  sku_tier                         = "Paid" # defaults to Free
  enable_role_based_access_control = true
  rbac_aad_admin_group_object_ids  = [data.azuread_group.aks_cluster_admins.id]
  rbac_aad_managed                 = true
  private_cluster_enabled          = true # default value
  enable_http_application_routing  = true
  enable_azure_policy              = true
  enable_auto_scaling              = true
  enable_host_encryption           = true
  agents_min_count                 = 1
  agents_max_count                 = 2
  agents_count                     = null # Please set `agents_count` `null` while `enable_auto_scaling` is `true` to avoid possible `agents_count` changes.
  agents_max_pods                  = 100
  agents_pool_name                 = "exnodepool"
  agents_availability_zones        = ["1", "2"]
  agents_type                      = "VirtualMachineScaleSets"

  agents_labels = {
    "nodepool" : "defaultnodepool"
  }

  agents_tags = {
    "Agent" : "defaultnodepoolagent"
  }
  network_policy                 = "azure"
  net_profile_dns_service_ip     = "10.0.0.10"
  net_profile_docker_bridge_cidr = "170.10.0.1/16"
  net_profile_service_cidr       = "10.0.0.0/16"

  depends_on = [module.network]
}

RG, backend, etc. are created in another file.

Any idea ?

Regards

Error with output variable "http_application_routing_zone_name" while running terraform apply

Problem

Hey there! I'm currently having issues while running apply with an existing Kubernetes cluster built with previous versions of this module. The error is the following:

Error: Invalid index

  on .terraform\modules\aks\outputs.tf line 42, in output "http_application_routing_zone_name":
  42:   value = azurerm_kubernetes_cluster.main.addon_profile[0].http_application_routing[0].http_application_routing_zone_name
    |----------------
    | azurerm_kubernetes_cluster.main.addon_profile[0].http_application_routing is empty list of object

The given key does not identify an element in this collection value.

I'm using Terraform version 0.13.0 and Azure/aks/azurerm 4.1.0 in Azure Pipelines.

How to specify Kubernetes version?

The module has the Kubernetes version (1.11.3) hardcoded, which is unsupported in westus2.

Error: Error applying plan:

1 error(s) occurred:

* module.aks.module.kubernetes.azurerm_kubernetes_cluster.main: 1 error(s) occurred:

* azurerm_kubernetes_cluster.main: containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="InvalidParameter" Message="The value of parameter orchestratorProfile.OrchestratorVersion is invalid." Target="orchestratorProfile.OrchestratorVersion"

I would like to specify a version by providing a value like the following:

module "aks" {
  source  = "Azure/aks/azurerm"
  version = "0.9.0"

  CLIENT_ID = "${var.CLIENT_ID}"
  CLIENT_SECRET = "${var.CLIENT_SECRET}"
  location = "${var.location}"
  prefix = "${var.prefix}"
  kubernetes_version = "${var.kubernetes_version}"
}

I can change the module code in main.tf and variables.tf, but I am wondering if there is a way to override the hardcoded value or if the module code does need to be changed. If so, I'm happy to submit a PR to add this support. Thanks.

Please remove the aks suffix

Can you please make adding the "aks" suffix to the cluster name optional?

name = "${var.prefix}-aks"

Right now -aks is tacked on to the cluster name and this makes it less flexible. Thank you!!

Supporting Azure RBAC for AKS Cluster

Hello,

I am trying to enable Azure RBAC support for the AKS cluster provisioned by the module.

But it seems that it is not supported yet. Locally, I have tested cluster creation by modifying the azurerm_kubernetes_cluster resource with the following snippet:

  role_based_access_control {
    enabled = var.enable_role_based_access_control

    dynamic "azure_active_directory" {
      for_each = var.enable_role_based_access_control && var.rbac_aad_managed ? ["rbac"] : []
      content {
        managed                = true
        admin_group_object_ids = var.rbac_aad_admin_group_object_ids
        azure_rbac_enabled     = true
      }
    }
  }

It works great, but is the addition of azure_rbac_enabled something in the works already?

If not, I would be glad to contribute, maybe by adding another variable to support Azure RBAC?
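A hypothetical variable that could gate the `azure_rbac_enabled` argument shown in the snippet above (the variable name is an assumption):

variable "rbac_aad_azure_rbac_enabled" {
  description = "(Optional) Should Azure Role Based Access Control be enabled for Kubernetes authorization?"
  type        = bool
  default     = false
}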

provisioning windows nodes with custom scripts during cluster creation

Hello!

Is there any approach to applying a few lines of code during the node creation process? Is there a field like user_data inside the agent_pool_profile or windows_profile blocks?

I can solve the problem with an SSH pipe to the Windows node, but that is a bit complex once I try to autoscale the node pool.

Thank you!

Proposal - add cluster autoscaler support

Hello.
I'd like to use AKS cluster autoscaler to scale nodes in my cluster.

Could I create a Pull Request with autoscaler support? The underlying Terraform resource supports cluster autoscaling.

What do you think?

resource group creation

Resource group creation is a bit random.

There is a node resource group and an AKS resource group, yet only the node resource group is exported while the AKS resource group is kept hidden.

I would like the module to:

  1. Take the resource group as a parameter, so I can plug the module into a bigger deployment;
  2. Put AKS and node resources into the same resource group.

The argument "workspace_name" was already set

PS D:\TerraformProjects\TestAzureAKS> terraform.exe init
Initializing modules...
Downloading Azure/aks/azurerm 1.0.0 for aks...
- aks in .terraform\modules\aks\Azure-terraform-azurerm-aks-72b73d0
- aks.kubernetes in .terraform\modules\aks\Azure-terraform-azurerm-aks-72b73d0\modules\kubernetes-cluster
- aks.log_analytics_solution in .terraform\modules\aks\Azure-terraform-azurerm-aks-72b73d0\modules\log-analytics-solution
- aks.log_analytics_workspace in .terraform\modules\aks\Azure-terraform-azurerm-aks-72b73d0\modules\log-analytics-workspace
- aks.ssh-key in .terraform\modules\aks\Azure-terraform-azurerm-aks-72b73d0\modules\ssh-key
There are some problems with the configuration, described below.

The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.

Error: Attribute redefined

  on .terraform\modules\aks\Azure-terraform-azurerm-aks-72b73d0\modules\log-analytics-solution\main.tf line 7, in resource "azurerm_log_analytics_solution" "main":
   7:   workspace_name        = "${var.workspace_name}"

The argument "workspace_name" was already set at
.terraform\modules\aks\Azure-terraform-azurerm-aks-72b73d0\modules\log-analytics-solution\main.tf:3,3-17.
Each argument may be set only once.

workspace_name is defined twice in the log analytics solution module

resource "azurerm_log_analytics_solution" "main" {
  solution_name         = "ContainerInsights"
  workspace_name        = "${var.prefix}-log-analytics-workspace"
  location              = "${var.location}"
  resource_group_name   = "${var.resource_group_name}"
  workspace_resource_id = "${var.workspace_resource_id}"
  workspace_name        = "${var.workspace_name}"

  plan {
    publisher = "Microsoft"
    product   = "OMSGallery/ContainerInsights"
  }
}

not sure why

Updating values forces creation of new resources

Hello, this is a very easy module to work with, but after deploying the AKS cluster, when I try to apply some changes/updates it creates a new cluster.

After debugging I figured out that there are some values that default to "null", which causes this re-creation.

Check the screenshot below from my plan:

Screen Shot 2021-01-06 at 1 01 43 PM

As you can see, node_taints (this variable needs to be added to the module) and agents_availability_zones should be [], but their default value is null, so if a user doesn't provide them the resource recreates AKS.

If we don't set auto_scaling for the default node pool, then the max and min counts should be 0 rather than null, and the same goes for network_policy.

Please update/add the following variables' default values:

  • network_policy = "azure"
  • agents_availability_zones = []
  • agents_max_count = 0 (also change 0 here)
  • agents_min_count = 0
  • node_taints = [] (add as a new variable in the default_node_pool block)

Problem with ingress_application_gateway add_on

There is a problem with the ingress_application_gateway add-on functionality. This is part of the code I use for creating the AKS cluster:

  addon_profile {
    ingress_application_gateway {
      enabled      = true
      gateway_name = "appgw-clusters"
      subnet_cidr  = var.appgw_address_prefix
    }
  }

The App GW will be created only when a service principal is defined as the identity:

service_principal {
    client_id     = var.azure_client_id
    client_secret = var.azure_client_secret
}

But in that case, after I deploy an ingress in the cluster, it does not get an IP!

When the identity is instead set to a managed identity:

identity {
  type = "SystemAssigned"
}

the App GW will not be created.
The interesting thing is: when I run the code while the service principal is set as the identity, then change the identity to SystemAssigned and run the code again, everything works as expected.

How is the creation of the App GW via the add-on impacted by the identity setup in AKS? I use Terraform 0.13 and the azurerm_kubernetes_cluster resource to create the AKS cluster.

Thanx,
Miroslav

Preview features

Hello

When will this support the node pool?

Can we disable log analytics?

user_assigned_identity_id is not expected

╷
│ Error: Unsupported argument
│ 
│   on .terraform/modules/aks/main.tf line 81, in resource "azurerm_kubernetes_cluster" "main":
│   81:       user_assigned_identity_id = var.user_assigned_identity_id
│ 
│ An argument named "user_assigned_identity_id" is not expected here.

Hi, I have tried many workarounds, such as assigning null to the argument, but the only thing that seems to work is removing the argument from the module.

I am using identity_type = "SystemAssigned", with no client_id or client_secret.

Bug: Cannot disable Managed AAD RBAC while enabling RBAC

Hi, when I try to disable managed AAD RBAC while keeping RBAC enabled, I get the following error:

╷
│ Error: You must specify client_app_id and server_app_id and server_app_secret when using managed aad rbac (managed = false)
│ 
│   with module.aks.azurerm_kubernetes_cluster.main,
│   on .terraform/modules/aks/main.tf line 10, in resource "azurerm_kubernetes_cluster" "main":
│   10: resource "azurerm_kubernetes_cluster" "main" {
│ 
╵

Here is my AKS config setting:

module "aks" {
  source                           = "Azure/aks/azurerm"
  version                          = "4.9.0"
  .....
  enable_role_based_access_control = true
  rbac_aad_managed                 = false
  ....
}

kube_dashboard is enabled, but provider can't disable it.

Problem

terraform apply to spawn a brand new k8s cluster in a brand new resource group works fine; however, each successive terraform apply wants to make the following modification to the cluster, even if I change nothing in my HCL files:

[...]
      ~ addon_profile {
          - kube_dashboard {
              - enabled = true -> null
            }
[...]

This does not seem to be manageable using module variables, or I missed it.

Criticality

Mostly an annoyance as everything else works fine; it would be nice to fix, though.

Versions

Terraform v0.12.26
+ provider.azurerm v2.15.0
+ provider.local v1.4.0
+ provider.tls v2.1.1

Configure node pool lifecycles to ignore changes for modules configured to autoscale.

This is to prevent Terraform from treating the node pools in the AKS cluster as externally modified resources whenever autoscaling events happen and the current node count doesn't match the configured node count (see the sketch after the example plan output below).

For example:

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the last "terraform apply":

 # module.cluster.module.aks.azurerm_kubernetes_cluster.main has been changed
  ~ resource "azurerm_kubernetes_cluster" "main" {
        id                              = "/subscriptions/<redacted>/resourcegroups/<redacted>/providers/Microsoft.ContainerService/managedClusters/<redacted>"
        name                            = "<redacted>"
        tags                            = {}
        # (16 unchanged attributes hidden)



      ~ default_node_pool {
            name                         = "nodepool"
          ~ node_count                   = 3 -> 4
            tags                         = {
                "Agent" = "nodepoolagent"
            }
            # (16 unchanged attributes hidden)
        }





        # (7 unchanged blocks hidden)
    }
Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following plan may include actions to undo or respond to these changes.
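A minimal sketch of what the requested behaviour could look like inside the module's cluster resource (only the relevant fragment is shown; the rest of the resource is elided):

resource "azurerm_kubernetes_cluster" "main" {
  # ... existing arguments unchanged ...

  lifecycle {
    ignore_changes = [
      # Let the cluster autoscaler own this value when autoscaling is enabled.
      default_node_pool[0].node_count,
    ]
  }
}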

Improving the upgrade logic for node pools

Hi,

When wanting to upgrade/alter the default node pool, Terraform will trigger a complete cluster destroy. We now work around this by:

  • Creating a temporary node pool
  • Drain the default node pool
  • Delete the default node pool manually
  • Create a new node pool with the desired settings with the same name
  • Migrate the temp node pool back to the default node pool

Could this sequence be something that Terraform could handle for people? One wouldn't often edit the default node pool, but sometimes it is necessary for cost savings, or when MS introduces new features (like Managed -> Ephemeral Storage)

Add support for attaching Azure Container Registries

Hi, we would like to attach an ACR instance to a cluster, both at creation time and as an update, as per these docs.

It would be nice if the module would support both these use cases:

  1. Create a new AKS cluster with ACR integration
# set this to the name of your Azure Container Registry.  It must be globally unique
MYACR=myContainerRegistry

# Run the following line to create an Azure Container Registry if you do not already have one
az acr create -n $MYACR -g myContainerRegistryResourceGroup --sku basic

# Create an AKS cluster with ACR integration
az aks create -n myAKSCluster -g myResourceGroup --generate-ssh-keys --attach-acr $MYACR
  2. Configure ACR integration for existing AKS clusters
az aks update -n myAKSCluster -g myResourceGroup --attach-acr <acr-name>

I've recently seen an uptick in ACR usage, so this might be something you would want to prioritize. I haven't checked whether the Terraform provider already supports this.
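On the Terraform side, the rough equivalent of `--attach-acr` is an `AcrPull` role assignment on the registry for the cluster's kubelet identity. A hedged sketch, assuming the module exposes the kubelet identity as an output (the output name and the registry resource are assumptions):

resource "azurerm_role_assignment" "acr_pull" {
  scope                            = azurerm_container_registry.example.id
  role_definition_name             = "AcrPull"
  principal_id                     = module.aks.kubelet_identity[0].object_id
  skip_service_principal_aad_check = true
}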

Changing the Azurerm Provider Version Constraint

Is it possible to change the required version constraint from ~> 2.34.0 to ~> 2.34? At the moment, having ~> 2.34.0 is basically the same as being pinned, since only the rightmost version component may be incremented and the azurerm provider tends to be released in minor increments.

If you try to use a higher provider version elsewhere, e.g. 2.41.0, you'll see an error similar to:

Could not retrieve the list of available versions for provider
hashicorp/azurerm: no available releases match the given constraints ~>
2.34.0, 2.34.1
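For reference, a minimal sketch of the looser constraint being requested; `~> 2.34` allows any 2.x release at or above 2.34, while `~> 2.34.0` only allows 2.34.x patch releases:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.34" # instead of "~> 2.34.0"
    }
  }
}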

Terraform plan output shows recreation of the AKS cluster

I have implemented this modular approach for creating AKS. terraform plan and terraform apply did their work, and I got my cluster up and running. But then, without making any change to my AKS cluster, I executed terraform plan again, and per its output my AKS cluster needs to be recreated (I don't know why this is happening, or why Terraform is unable to match the states).
To reproduce the issue: fork the repo, make the required changes, and execute terraform plan and terraform apply. Once the cluster is up and running, run terraform plan again (without changing anything in the cluster).
