terraform-kubernetes-addons's Introduction

terraform-kubernetes-addons

Main components

Name Description Generic AWS Scaleway GCP Azure
admiralty A system of Kubernetes controllers that intelligently schedules workloads across clusters ✔️ ✔️ ✔️ ✔️ ✔️
aws-ebs-csi-driver Enable new features and the use of gp3 volumes N/A ✔️ N/A N/A N/A
aws-efs-csi-driver Enable EFS Support N/A ✔️ N/A N/A N/A
aws-for-fluent-bit Cloudwatch logging with fluent bit instead of fluentd N/A ✔️ N/A N/A N/A
aws-load-balancer-controller Use AWS ALB/NLB for ingress and services N/A ✔️ N/A N/A N/A
aws-node-termination-handler Manage spot instance lifecycle N/A ✔️ N/A N/A N/A
aws-calico Use calico for network policy N/A ✔️ N/A N/A N/A
secrets-store-csi-driver-provider-aws AWS Secret Store and Parameter store driver for secret store CSI driver ✔️ N/A N/A N/A N/A
cert-manager automatically generate TLS certificates, supports ACME v2 ✔️ ✔️ ✔️ ✔️ N/A
cluster-autoscaler scale worker nodes based on workload N/A ✔️ Included Included Included
cni-metrics-helper Provides cloudwatch metrics for VPC CNI plugins N/A ✔️ N/A N/A N/A
external-dns sync ingress and service records in route53 ✔️ ✔️ ✔️
flux2 Open and extensible continuous delivery solution for Kubernetes. Powered by GitOps Toolkit ✔️ ✔️ ✔️ ✔️ ✔️
ingress-nginx Processes Ingress objects and acts as an HTTP/HTTPS proxy (compatible with cert-manager) ✔️ ✔️ ✔️ ✔️
k8gb A cloud native Kubernetes Global Balancer ✔️ ✔️ ✔️ ✔️ ✔️
karma An alertmanager dashboard ✔️ ✔️ ✔️ ✔️ ✔️
keda Kubernetes Event-driven Autoscaling ✔️ ✔️ ✔️ ✔️ ✔️
kong API Gateway ingress controller ✔️ ✔️ ✔️
kube-prometheus-stack Monitoring / Alerting / Dashboards ✔️ ✔️ ✔️
loki-stack Grafana Loki logging stack ✔️ ✔️ 🚧
promtail Ship logs to Loki from other clusters (e.g. over mTLS) 🚧 ✔️ 🚧
prometheus-adapter Prometheus metrics for use with the autoscaling/v2 Horizontal Pod Autoscaler in Kubernetes 1.6+ ✔️ ✔️ ✔️ ✔️ ✔️
prometheus-cloudwatch-exporter An exporter for Amazon CloudWatch, for Prometheus. ✔️ ✔️ ✔️ ✔️ ✔️
prometheus-blackbox-exporter The blackbox exporter allows blackbox probing of endpoints over HTTP, HTTPS, DNS, TCP and ICMP. ✔️ ✔️ ✔️ ✔️ ✔️
rabbitmq-cluster-operator The RabbitMQ Cluster Operator automates provisioning and management of RabbitMQ clusters. ✔️ ✔️ ✔️ ✔️ ✔️
metrics-server Enable the metrics API and horizontal pod autoscaling (HPA) ✔️ ✔️ Included Included Included
node-problem-detector Forwards node problems to Kubernetes events ✔️ ✔️ Included Included Included
secrets-store-csi-driver Secrets Store CSI driver for Kubernetes secrets - Integrates secrets stores with Kubernetes via a CSI volume. ✔️ ✔️ ✔️ ✔️ ✔️
sealed-secrets Technology-agnostic, store secrets in Git ✔️ ✔️ ✔️ ✔️ ✔️
thanos Open source, highly available Prometheus setup with long term storage capabilities ✔️ 🚧
thanos-memcached Open source, highly available Prometheus setup with long term storage capabilities ✔️ 🚧
thanos-storegateway Additional storegateway to query multiple object stores ✔️ 🚧
thanos-tls-querier Thanos TLS querier for cross cluster collection ✔️ 🚧

Submodules

Submodules are used for specific cloud provider configuration such as IAM role for AWS. For a Kubernetes vanilla cluster, generic addons should be used.
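
For illustration, a minimal sketch of the two entry points (the version constraint and cluster values are placeholders; the source paths match the usage examples quoted in the issues below):

module "addons" {
  # Generic addons for a vanilla Kubernetes cluster
  source  = "particuleio/addons/kubernetes"
  version = "~> 12.0"

  cluster-name = "sample-cluster"
}

module "eks-addons" {
  # AWS submodule: same addons plus AWS-specific wiring such as IRSA IAM roles
  source  = "particuleio/addons/kubernetes//modules/aws"
  version = "~> 12.0"

  cluster-name = "my-eks-cluster"

  eks = {
    cluster_oidc_issuer_url = "https://oidc.eks.eu-west-1.amazonaws.com/id/EXAMPLE"
  }
}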

Any contribution supporting a new cloud provider is welcomed.

Doc generation

Code formatting and documentation for variables and outputs are generated using pre-commit-terraform hooks, which use terraform-docs.

Follow these instructions to install pre-commit locally.

Then install terraform-docs with go get github.com/segmentio/terraform-docs or brew install terraform-docs.

Contributing

Report issues/questions/feature requests in the issues section.

Full contributing guidelines are covered here.

Requirements

Name Version
terraform >= 1.3.2
flux ~> 1.0
github ~> 6.0
helm ~> 2.0
http >= 3
kubectl ~> 2.0
kubernetes ~> 2.0, != 2.12
tls ~> 4.0

Providers

Name Version
flux ~> 1.0
github ~> 6.0
helm ~> 2.0
http >= 3
kubectl ~> 2.0
kubernetes ~> 2.0, != 2.12
random n/a
time n/a
tls ~> 4.0

Modules

No modules.

Resources

Name Type
flux_bootstrap_git.flux resource
github_branch_default.main resource
github_repository.main resource
github_repository_deploy_key.main resource
helm_release.admiralty resource
helm_release.cert-manager resource
helm_release.cert-manager-csi-driver resource
helm_release.ingress-nginx resource
helm_release.k8gb resource
helm_release.karma resource
helm_release.keda resource
helm_release.kong resource
helm_release.kube-prometheus-stack resource
helm_release.linkerd-control-plane resource
helm_release.linkerd-crds resource
helm_release.linkerd-viz resource
helm_release.linkerd2-cni resource
helm_release.loki-stack resource
helm_release.metrics-server resource
helm_release.node-problem-detector resource
helm_release.prometheus-adapter resource
helm_release.prometheus-blackbox-exporter resource
helm_release.promtail resource
helm_release.reloader resource
helm_release.sealed-secrets resource
helm_release.secrets-store-csi-driver resource
helm_release.tigera-operator resource
helm_release.traefik resource
helm_release.victoria-metrics-k8s-stack resource
kubectl_manifest.calico_crds resource
kubectl_manifest.cert-manager_cluster_issuers resource
kubectl_manifest.csi-external-snapshotter resource
kubectl_manifest.kong_crds resource
kubectl_manifest.linkerd resource
kubectl_manifest.linkerd-viz resource
kubectl_manifest.prometheus-operator_crds resource
kubectl_manifest.tigera-operator_crds resource
kubernetes_config_map.loki-stack_grafana_ds resource
kubernetes_namespace.admiralty resource
kubernetes_namespace.cert-manager resource
kubernetes_namespace.flux2 resource
kubernetes_namespace.ingress-nginx resource
kubernetes_namespace.k8gb resource
kubernetes_namespace.karma resource
kubernetes_namespace.keda resource
kubernetes_namespace.kong resource
kubernetes_namespace.kube-prometheus-stack resource
kubernetes_namespace.linkerd resource
kubernetes_namespace.linkerd-viz resource
kubernetes_namespace.linkerd2-cni resource
kubernetes_namespace.loki-stack resource
kubernetes_namespace.metrics-server resource
kubernetes_namespace.node-problem-detector resource
kubernetes_namespace.prometheus-adapter resource
kubernetes_namespace.prometheus-blackbox-exporter resource
kubernetes_namespace.promtail resource
kubernetes_namespace.reloader resource
kubernetes_namespace.sealed-secrets resource
kubernetes_namespace.secrets-store-csi-driver resource
kubernetes_namespace.tigera-operator resource
kubernetes_namespace.traefik resource
kubernetes_namespace.victoria-metrics-k8s-stack resource
kubernetes_network_policy.admiralty_allow_namespace resource
kubernetes_network_policy.admiralty_default_deny resource
kubernetes_network_policy.cert-manager_allow_control_plane resource
kubernetes_network_policy.cert-manager_allow_monitoring resource
kubernetes_network_policy.cert-manager_allow_namespace resource
kubernetes_network_policy.cert-manager_default_deny resource
kubernetes_network_policy.flux2_allow_monitoring resource
kubernetes_network_policy.flux2_allow_namespace resource
kubernetes_network_policy.ingress-nginx_allow_control_plane resource
kubernetes_network_policy.ingress-nginx_allow_ingress resource
kubernetes_network_policy.ingress-nginx_allow_linkerd_viz resource
kubernetes_network_policy.ingress-nginx_allow_monitoring resource
kubernetes_network_policy.ingress-nginx_allow_namespace resource
kubernetes_network_policy.ingress-nginx_default_deny resource
kubernetes_network_policy.k8gb_allow_namespace resource
kubernetes_network_policy.k8gb_default_deny resource
kubernetes_network_policy.karma_allow_ingress resource
kubernetes_network_policy.karma_allow_namespace resource
kubernetes_network_policy.karma_default_deny resource
kubernetes_network_policy.keda_allow_namespace resource
kubernetes_network_policy.keda_default_deny resource
kubernetes_network_policy.kong_allow_ingress resource
kubernetes_network_policy.kong_allow_monitoring resource
kubernetes_network_policy.kong_allow_namespace resource
kubernetes_network_policy.kong_default_deny resource
kubernetes_network_policy.kube-prometheus-stack_allow_control_plane resource
kubernetes_network_policy.kube-prometheus-stack_allow_ingress resource
kubernetes_network_policy.kube-prometheus-stack_allow_namespace resource
kubernetes_network_policy.kube-prometheus-stack_default_deny resource
kubernetes_network_policy.linkerd-viz_allow_control_plane resource
kubernetes_network_policy.linkerd-viz_allow_monitoring resource
kubernetes_network_policy.linkerd-viz_allow_namespace resource
kubernetes_network_policy.linkerd-viz_default_deny resource
kubernetes_network_policy.linkerd2-cni_allow_namespace resource
kubernetes_network_policy.linkerd2-cni_default_deny resource
kubernetes_network_policy.loki-stack_allow_ingress resource
kubernetes_network_policy.loki-stack_allow_namespace resource
kubernetes_network_policy.loki-stack_default_deny resource
kubernetes_network_policy.metrics-server_allow_control_plane resource
kubernetes_network_policy.metrics-server_allow_namespace resource
kubernetes_network_policy.metrics-server_default_deny resource
kubernetes_network_policy.npd_allow_namespace resource
kubernetes_network_policy.npd_default_deny resource
kubernetes_network_policy.prometheus-adapter_allow_namespace resource
kubernetes_network_policy.prometheus-adapter_default_deny resource
kubernetes_network_policy.prometheus-blackbox-exporter_allow_namespace resource
kubernetes_network_policy.prometheus-blackbox-exporter_default_deny resource
kubernetes_network_policy.promtail_allow_ingress resource
kubernetes_network_policy.promtail_allow_namespace resource
kubernetes_network_policy.promtail_default_deny resource
kubernetes_network_policy.reloader_allow_namespace resource
kubernetes_network_policy.reloader_default_deny resource
kubernetes_network_policy.sealed-secrets_allow_namespace resource
kubernetes_network_policy.sealed-secrets_default_deny resource
kubernetes_network_policy.secrets-store-csi-driver_allow_namespace resource
kubernetes_network_policy.secrets-store-csi-driver_default_deny resource
kubernetes_network_policy.tigera-operator_allow_namespace resource
kubernetes_network_policy.tigera-operator_default_deny resource
kubernetes_network_policy.traefik_allow_ingress resource
kubernetes_network_policy.traefik_allow_monitoring resource
kubernetes_network_policy.traefik_allow_namespace resource
kubernetes_network_policy.traefik_default_deny resource
kubernetes_network_policy.victoria-metrics-k8s-stack_allow_control_plane resource
kubernetes_network_policy.victoria-metrics-k8s-stack_allow_ingress resource
kubernetes_network_policy.victoria-metrics-k8s-stack_allow_namespace resource
kubernetes_network_policy.victoria-metrics-k8s-stack_default_deny resource
kubernetes_priority_class.kubernetes_addons resource
kubernetes_priority_class.kubernetes_addons_ds resource
kubernetes_secret.linkerd_trust_anchor resource
kubernetes_secret.loki-stack-ca resource
kubernetes_secret.promtail-tls resource
kubernetes_secret.webhook_issuer_tls resource
random_string.grafana_password resource
time_sleep.cert-manager_sleep resource
tls_cert_request.promtail-csr resource
tls_locally_signed_cert.promtail-cert resource
tls_private_key.identity resource
tls_private_key.linkerd_trust_anchor resource
tls_private_key.loki-stack-ca-key resource
tls_private_key.promtail-key resource
tls_private_key.webhook_issuer_tls resource
tls_self_signed_cert.linkerd_trust_anchor resource
tls_self_signed_cert.loki-stack-ca-cert resource
tls_self_signed_cert.webhook_issuer_tls resource
github_repository.main data source
http_http.calico_crds data source
http_http.csi-external-snapshotter data source
http_http.kong_crds data source
http_http.prometheus-operator_crds data source
http_http.prometheus-operator_version data source
http_http.tigera-operator_crds data source
kubectl_file_documents.calico_crds data source
kubectl_file_documents.csi-external-snapshotter data source
kubectl_file_documents.kong_crds data source
kubectl_file_documents.tigera-operator_crds data source
kubectl_path_documents.cert-manager_cluster_issuers data source

Inputs

Name Description Type Default Required
admiralty Customize admiralty chart, see admiralty.tf for supported values any {} no
cert-manager Customize cert-manager chart, see cert-manager.tf for supported values any {} no
cert-manager-csi-driver Customize cert-manager-csi-driver chart, see cert-manager.tf for supported values any {} no
cluster-autoscaler Customize cluster-autoscaler chart, see cluster-autoscaler.tf for supported values any {} no
cluster-name Name of the Kubernetes cluster string "sample-cluster" no
csi-external-snapshotter Customize csi-external-snapshotter, see csi-external-snapshotter.tf for supported values any {} no
external-dns Map of maps for external-dns configuration: see external_dns.tf for supported values any {} no
flux2 Customize Flux chart, see flux2.tf for supported values any {} no
helm_defaults Customize default Helm behavior any {} no
ingress-nginx Customize ingress-nginx chart, see nginx-ingress.tf for supported values any {} no
ip-masq-agent Configure ip masq agent chart, see ip-masq-agent.tf for supported values. This addon works only on GCP. any {} no
k8gb Customize k8gb chart, see k8gb.tf for supported values any {} no
karma Customize karma chart, see karma.tf for supported values any {} no
keda Customize keda chart, see keda.tf for supported values any {} no
kong Customize kong-ingress chart, see kong.tf for supported values any {} no
kube-prometheus-stack Customize kube-prometheus-stack chart, see kube-prometheus-stack.tf for supported values any {} no
labels_prefix Custom label prefix used for network policy namespace matching string "particule.io" no
linkerd Customize linkerd chart, see linkerd.tf for supported values any {} no
linkerd-viz Customize linkerd-viz chart, see linkerd-viz.tf for supported values any {} no
linkerd2 Customize linkerd2 chart, see linkerd2.tf for supported values any {} no
linkerd2-cni Customize linkerd2-cni chart, see linkerd2-cni.tf for supported values any {} no
loki-stack Customize loki-stack chart, see loki-stack.tf for supported values any {} no
metrics-server Customize metrics-server chart, see metrics_server.tf for supported values any {} no
npd Customize node-problem-detector chart, see npd.tf for supported values any {} no
priority-class Customize a priority class for addons any {} no
priority-class-ds Customize a priority class for addons daemonsets any {} no
prometheus-adapter Customize prometheus-adapter chart, see prometheus-adapter.tf for supported values any {} no
prometheus-blackbox-exporter Customize prometheus-blackbox-exporter chart, see prometheus-blackbox-exporter.tf for supported values any {} no
promtail Customize promtail chart, see loki-stack.tf for supported values any {} no
reloader Customize reloader chart, see reloader.tf for supported values any {} no
sealed-secrets Customize sealed-secrets chart, see sealed-secrets.tf for supported values any {} no
secrets-store-csi-driver Customize secrets-store-csi-driver chart, see secrets-store-csi-driver.tf for supported values any {} no
thanos Customize thanos chart, see thanos.tf for supported values any {} no
thanos-memcached Customize thanos chart, see thanos.tf for supported values any {} no
thanos-storegateway Customize thanos chart, see thanos.tf for supported values any {} no
thanos-tls-querier Customize thanos chart, see thanos.tf for supported values any {} no
tigera-operator Customize tigera-operator chart, see tigera-operator.tf for supported values any {} no
traefik Customize traefik chart, see traefik.tf for supported values any {} no
victoria-metrics-k8s-stack Customize Victoria Metrics chart, see victoria-metrics-k8s-stack.tf for supported values any {} no
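
Each input above is an open map (type any, default {}) that is merged over the chart defaults defined in the corresponding .tf file. As a hedged illustration only, reusing keys that appear elsewhere on this page (enabled, chart_version, extra_values):

module "addons" {
  source = "particuleio/addons/kubernetes"

  cert-manager = {
    enabled       = true
    chart_version = "v1.15.3" # optional pin, see helm-dependencies.yaml for current defaults
  }

  kube-prometheus-stack = {
    enabled      = true
    extra_values = <<-VALUES
      grafana:
        enabled: false
    VALUES
  }
}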

Outputs

Name Description
grafana_password n/a
loki-stack-ca n/a
promtail-cert n/a
promtail-key n/a

terraform-kubernetes-addons's People

Contributors

archifleks, bogdando, cebidhem, dud225, giany, gwenall, malvex, mentos1386, mergify[bot], oleksiimorozenko, rayanebel, redref, renovate-bot, renovate[bot], rguichard, stefan-matic, tbobm, tpxp, travertischio, vedit, yasinlachiny, yawboateng, zestrells

terraform-kubernetes-addons's Issues

[bug] csi-external-snapshotter terraform error

Describe the bug

First fresh install goes OK, but later after another apply I have this:

│ Error: Invalid for_each argument
│
│   on .terraform/modules/eks_addons.eks-aws-addons/modules/aws/csi-external-snapshotter.tf line 38, in resource "kubectl_manifest" "csi-external-snapshotter":
│   38:   for_each  = local.csi-external-snapshotter.enabled ? { for v in local.csi-external-snapshotter_apply : lower(join("/", compact([v.data.apiVersion, v.data.kind, lookup(v.data.metadata, "namespace", ""), v.data.metadata.name]))) => v.content } : {}
│     ├────────────────
│     │ local.csi-external-snapshotter.enabled is true
│     │ local.csi-external-snapshotter_apply will be known only after apply
│
│ The "for_each" map includes keys derived from resource attributes that
│ cannot be determined until apply, and so Terraform cannot determine the
│ full set of keys that will identify the instances of this resource.
│
│ When working with unknown values in for_each, it's better to define the map
│ keys statically in your configuration and place apply-time results only in
│ the map values.
│
│ Alternatively, you could use the -target planning option to first apply
│ only the resources that the for_each value depends on, and then apply a
│ second time to fully converge.
╵
ERRO[0059] Terraform invocation failed in /Users/dejw/devel/dornach-aws-nonprod/environments/dornach_nonprod/eks
ERRO[0059] 1 error occurred:
	* exit status 1

What is the current behavior?

TF error after another apply with csi external snapshotter

How to reproduce? Please include a code sample if relevant.

Add csi-external-snapshotter, deploy it for the first time (it should be OK), then run another terraform apply and the error should come up.

What's the expected behavior?

No TF error after consecutive TF apply commands

Are you able to fix this problem and submit a PR? Link here if you have already.

Environment details

  • Affected module version: csi-external-snapshotter
  • OS: MacOS
  • Terraform version: 1.3.5
  • Kubernetes version: 1.21
  • terraform-kubernetes-addons: 11.0.0

Any other relevant info

[bug] ignore change in namespace annotations

Describe the bug

When using Flux, Flux annotates the namespace and Terraform tries to reconcile by removing the annotation.

What is the current behavior?

How to reproduce? Please include a code sample if relevant.

What's the expected behavior?

Are you able to fix this problem and submit a PR? Link here if you have already.

Environment details

  • Affected module version:
  • OS:
  • Terraform version:
  • Kubernetes version

Any other relevant info

[bug] Cluster autoscaler not deploying

Describe the bug

Setting cluster autoscaler enabled to true does not deploy the autoscaler

What is the current behavior?

cluster-autoscaler not deploying

How to reproduce? Please include a code sample if relevant.

module "addons" {
  source  = "particuleio/addons/kubernetes"
  version = "12.5.0"

  cluster-autoscaler = {
    enabled = true
  }
}

What's the expected behavior?

cluster-autoscaler deploying on server

Are you able to fix this problem and submit a PR? Link here if you have already.

Environment details

  • Affected module version: v9.24.0
  • OS: Ubuntu
  • Terraform version: 1.4
  • Kubernetes version: 1.25

Any other relevant info

[enhancement] AWS ECR support for containers required by addons

Is your feature request related to a problem? Please describe.
When deploying private EKS clusters on AWS, or just optimizing external traffic flows into EKS, it would be nice to have an option to reconfigure the helm_release resources for Helm charts, and the kube manifest resources for the k8s API, to use container images prepared for ECR.
It would be even nicer to have a programmatically defined list of such containers for each addon, and to automate preparing the ECR repositories and then copying the external images with skopeo.

Describe the solution you'd like
I have a WIP branch of tEKS and another one for this addons repo. Once my testing is done, I'll propose a PR.

Describe alternatives you've considered
Leaving users on their own when preparing AWS ECR images for private EKS clusters, or being brave and pulling images from public sources to deploy on public EKS (though Fargate node groups only support private clusters, for example...). Unfortunately the former wouldn't be possible with this terraform-kubernetes-addons automation for tEKS.

Additional context
It's mostly done for my use case; I hope it would be a useful feature to contribute.

[Note] AWS IRSA Roles

Neither an issue nor a feature request, more of a PSA 😅

I see that the https://github.com/terraform-aws-modules/terraform-aws-iam/tree/master/modules/iam-assumable-role-with-oidc module is used quite a bit for providing addons permissions via IRSA which is great. I just wanted to also point out that we have added a number of "canned" IRSA roles for common addons as well in https://github.com/terraform-aws-modules/terraform-aws-iam/tree/master/modules/iam-role-for-service-accounts-eks#iam-role-for-service-accounts-in-eks

Check it out if you get a chance, it may or may not be worthwhile. We have seen great traction since we added them, and more are getting added all the time. We are also seeing interactions with the community addons to push for better, more narrowly scoped permissions, which has been great as well. That's all - great work, carry on, and feel free to close this out at any time since there's no action required!

[bug] update IAM for AWS Load Balancer controller

Describe the bug

Upstream chart non breaking

What is the current behavior?

PR was merged

How to reproduce? Please include a code sample if relevant.

What's the expected behavior?

PR is not merged because upstream breaks properly

Are you able to fix this problem and submit a PR? Link here if you have already.

Need to merge https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.2.0/docs/install/iam_policy.json

Environment details

  • Affected module version: all
  • OS: all
  • Terraform version: all
  • Kubernetes version: all

Any other relevant info

feature: CloudWatch Container Insights agent, would the project be open to a PR?

Hello,

This is a great project!

If I contributed code for AWS CloudWatch Container Insights support as an additional module, would you be open to including this in Terraform Kubernetes Addons ?

Currently I am using https://registry.terraform.io/modules/Young-ook/eks/aws/latest/submodules/container-insights but would love to have it integrated into this project. I would be willing to do the work to put the code together and submit a PR if you're willing to consider having it included.

Before I start, I would like to know that you are open to including it. What do you think?

Thanks,
Matt

[enhancement] Add support for traefik v2

Is your feature request related to a problem? Please describe.
We're lacking flexibility regarding ingress controllers.

Describe the solution you'd like
Implement an addon for the Traefik controller.

Additional context

The traefik controller supports both CRD-based configuration (using the IngressRoute resource) and the standard Ingress specification.

Todos

  • Implement the traefik addon
  • Add support for the CRDs (maybe bootstrap the providers ?)

In the future

  • Implement a network policy from the control plane if traefik/traefik#7379 gets implemented (currently marked as priority/P2)

[bug] - License?

Describe the bug

Thanks a lot folks for publishing all these submodules for Kubernetes. This is great stuff. Can you please clarify which license this is offered under?

What is the current behavior?

No license added, so unsure what license this falls under

How to reproduce? Please include a code sample if relevant.

If this is opensource, please add for example the apache license from here: https://www.apache.org/licenses/LICENSE-2.0

What's the expected behavior?

If open source, a license file should be added.

Are you able to fix this problem and submit a PR? Link here if you have already.

Environment details

  • Affected module version:
  • OS:
  • Terraform version:
  • Kubernetes version

Any other relevant info

[bug] gavinbunney/kubectl with Kubernetes 1.27 causes Error: ... failed to fetch resource from kubernetes: the server could not find the requested resource

Kubernetes version 1.27 with the gavinbunney/kubectl provider occasionally causes errors on kubectl_manifest resources.

This issue is outlined by this github issue:
gavinbunney/terraform-provider-kubectl#270

Suggested fix:
Migrate to alekc/kubectl provider where a fix has been implemented and is being maintained.
gavinbunney/kubectl provider is now a dead repo and isn't being maintained.
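
For module consumers pinning providers themselves, a hedged sketch of what that migration looks like in a required_providers block (the alekc/kubectl source comes from the suggestion above; the ~> 2.0 constraint matches the kubectl requirement listed in the README):

terraform {
  required_providers {
    kubectl = {
      # previously: source = "gavinbunney/kubectl", version = "~> 1.0"
      source  = "alekc/kubectl"
      version = "~> 2.0"
    }
  }
}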

secrets-store-csi-driver-aws not working

Describe the bug

When I enable secrets-store-csi-driver-aws and deploy it with terragrunt, it appears to be deployed.
But when I run kubectl get csidriver I do not see secrets-csi in the output for the kube-system namespace.

I currently only have this enabled below:

  secrets-store-csi-driver-aws = {
    enabled = true
  }

Do I need to also enable the non-aws csi driver to get it deployed in my EKS cluster and working?

  secrets-store-csi-driver = {
    enabled = true
  }

@ArchiFleKs

[bug] helm provider issue

Describe the bug

When using latest version of helm provider (2.7.1) I get this message:

│ Error: unable to build kubernetes objects from current release manifest: unable to recognize "": Get "https://xxxxx.gr7.us-east-1.eks.amazonaws.com/api?timeout=32s": dial tcp: lookup xxxxxx.gr7.us-east-1.eks.amazonaws.com on 192.168.1.1:53: no such host
│
│   with helm_release.kube-prometheus-stack[0],
│   on kube-prometheus.tf line 462, in resource "helm_release" "kube-prometheus-stack":
│  462: resource "helm_release" "kube-prometheus-stack" {

Note that this has been happening for a while; the latest helm provider version that works for me is 2.4.1.
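
As a stopgap, the reporter's observation can be turned into an explicit pin (a sketch, assuming you control the root module's required_providers):

terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "2.4.1" # last helm provider version reported to work here; 2.7.1 fails
    }
  }
}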

Environment details

  • Affected module version: > 2.4.1
  • OS: MacOS
  • Terraform version: 1.2.8
  • Kubernetes version: 1.23

Any hints what could the issue be?

[aws-ebs-csi-driver] - Helm Chart does not support to create VolumeSnapshotClass

Currently the Helm chart does not support creating kind: VolumeSnapshotClass (kubernetes-sigs/aws-ebs-csi-driver#1146).

eks-addons-critical$ terragrunt apply

│ Error: Failed to determine GroupVersionResource for manifest

│ with kubernetes_manifest.aws-ebs-csi-driver_vsc[0],
│ on aws-ebs-csi-driver.tf line 216, in resource "kubernetes_manifest" "aws-ebs-csi-driver_vsc":
│ 216: resource "kubernetes_manifest" "aws-ebs-csi-driver_vsc" {

│ no matches for kind "VolumeSnapshotClass" in group
│ "snapshot.storage.k8s.io"

ERRO[0016] 1 error occurred:
* exit status 1
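
The error indicates the snapshot.storage.k8s.io CRDs are not present in the cluster. A hedged sketch of enabling the module's csi-external-snapshotter input (listed in the Inputs table above) alongside the driver, assuming it is what installs those CRDs:

module "eks-addons" {
  source = "particuleio/addons/kubernetes//modules/aws"

  csi-external-snapshotter = {
    enabled = true # intended to install the external-snapshotter CRDs (VolumeSnapshotClass, ...)
  }

  aws-ebs-csi-driver = {
    enabled = true
  }
}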

feature: Cortex

Hi folks, hope you are doing great!
First, congrats on the awesome project! Have you considered adding Cortex as an addon to work with Prometheus?
https://cortexmetrics.io/

BR

Andre Martins

[bug] "for_each" value depends on resource attributes that cannot be determined until apply

Describe the bug

Cannot deploy addons to EKS cluster: terraform says it could not create a plan.

What is the current behavior?

Terraform plan could not be created
STDOUT: Releasing state lock. This may take a few moments...


STDERR:
Error: Invalid for_each argument

  on .terraform/modules/eks-addons/modules/aws/kong-crds.tf line 25, in resource "kubectl_manifest" "kong_crds":
  25:   for_each  = local.kong.enabled && local.kong.manage_crds ? { for v in local.kong_crds_apply : lower(join("/", compact([v.data.apiVersion, v.data.kind, lookup(v.data.metadata, "namespace", ""), v.data.metadata.name]))) => v.content } : {}
    ├────────────────
    │ local.kong.enabled is true
    │ local.kong.manage_crds is true
    │ local.kong_crds_apply will be known only after apply

The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.

Error: Invalid for_each argument

  on .terraform/modules/eks-addons/modules/aws/kube-prometheus-crd.tf line 29, in data "http" "prometheus-operator_crds":
  29:   for_each = (local.victoria-metrics-k8s-stack.enabled && local.victoria-metrics-k8s-stack.install_prometheus_operator_crds) || (local.kube-prometheus-stack.enabled && local.kube-prometheus-stack.manage_crds) ? toset(local.prometheus-operator_crds) : []
    ├────────────────
    │ local.kube-prometheus-stack.enabled is true
    │ local.kube-prometheus-stack.manage_crds is true
    │ local.prometheus-operator_crds is tuple with 8 elements
    │ local.victoria-metrics-k8s-stack.enabled is false
    │ local.victoria-metrics-k8s-stack.install_prometheus_operator_crds is true

The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.

How to reproduce? Please include a code sample if relevant.

Terraform file:

provider "aws" {
  region = "us-east-1"
  profile = "ops-release"
}

terraform {
  required_version = ">= 0.13"
  required_providers {
    aws        = "~> 3.0"
    helm       = "~> 2.0"
    kubernetes = "~> 2.0"
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "~> 1.0"
    }
  }

  backend "s3" {
    encrypt        = "true"
  }
}

data "terraform_remote_state" "eks" {
  backend = "s3"

  config = {
    bucket = "tf-state.ops.tradingcentral.com"
    key    = "ops-eks"
    region = "us-east-1"
    profile = "ops-release"
  }
}


provider "kubectl" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}

data "aws_eks_cluster" "cluster" {
  name = data.terraform_remote_state.eks.outputs.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = data.terraform_remote_state.eks.outputs.eks.cluster_id
}

locals {
  env_name                  = "ops"
  cluster_name              = "ops-eks"
  subnet_cidrs              = ["172.30.64.0/18", "172.30.128.0/18"]
}

# try splitting out some things to do first to try to solve the problems
# EKS has starting everything
module "eks-addons-first" {
  source = "particuleio/addons/kubernetes//modules/aws"
  version = "2.29.0"

  cluster-name = data.terraform_remote_state.eks.outputs.eks.cluster_id

  eks = {
    cluster_oidc_issuer_url = data.terraform_remote_state.eks.outputs.eks.cluster_oidc_issuer_url
  }

  # Handle events causing unavailability of EC2 instances
  aws-node-termination-handler = {
    enabled = true
    # FIXME: figure out how to use SQS queue-processor mode
  }

  # Network policy engine. Note this chart recommends using tigera-operator instead
  tigera-operator = {
    enabled = true
  }
}

module "eks-addons" {
  source = "particuleio/addons/kubernetes//modules/aws"

  depends_on = [module.eks-addons-first]

  cluster-name = data.terraform_remote_state.eks.outputs.eks.cluster_id

  eks = {
    cluster_oidc_issuer_url = data.terraform_remote_state.eks.outputs.eks.cluster_oidc_issuer_url
  }

  # disable things already created by eks-addons-first
  priority-class = {
    create = false
  }
  priority-class-ds = {
    create = false
  }

  # Use EBS for persistent volumes
  aws-ebs-csi-driver = {
    enabled = true
  }

  # Scale worker nodes based on workload
  cluster-autoscaler = {
    enabled = true
    version = "v1.22.1"
  }

  # Synchronise exposed services and ingresses with DNS providers (route53)
  external-dns = {
    external-dns = {
      enabled = true
        extra_values = "policy: 'sync'"
    },
  }

  # Kong ingress controller
  kong = {
    enabled = true
  }

  # Prometheus & related services for monitoring and alerting
  # Note that grafana is removed, to be added via grafana operator separately
  kube-prometheus-stack = {
    enabled = true
    # the only way to get this deploying was to enable the thanos sidecar
    thanos_sidecar_enabled = true
    extra_values = <<VALUES
grafana:
  enabled: false
VALUES
  }

  # Use prometheus metrics for autoscaling
  prometheus-adapter = {
    enabled = true
  }

  # Set up monitoring of endpoints over HTTP, HTTPS, DNS, TCP and ICMP
  prometheus-blackbox-exporter = {
    enabled = true
  }

  metrics-server = {
    enabled       = true
    allowed_cidrs = local.subnet_cidrs
  }

  npd = {
    enabled = true
  }
}

What's the expected behavior?

Terraform plans and deploys everything as configured.

Environment details

  • Affected module version: 2.29.0
  • OS: Fedora 34
  • Terraform version: 1.0.1, 1.0.9
  • Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"archive", BuildDate:"2021-03-30T00:00:00Z", GoVersion:"go1.16", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.2-eks-0389ca3", GitCommit:"8a4e27b9d88142bbdd21b997b532eb6d493df6d2", GitTreeState:"clean", BuildDate:"2021-07-31T01:34:46Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}

Any other relevant info

This is being run from the community.general.terraform ansible module. It has worked -- yesterday it did, but then did not an hour later and has not since. I am 99% certain no changes were made, as I was working on things further down the playbook. The backend configuration is being passed in via ansible, and the whole terraform directory and backend state were deleted each time the EKS was destroyed to start over.

[enhancement] create an option to setup prometheus monitoring using managed aws prometheus

Is your feature request related to a problem? Please describe.
I would like to use https://aws.amazon.com/prometheus/ instead of Thanos for long term storage.

Describe the solution you'd like
It would be nice if I could just write use_managed_prometheus = true and everything would be done behind the scenes.
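
Roughly what that would look like from the caller's side (use_managed_prometheus is a hypothetical option proposed in this issue, not an existing input of the module):

kube-prometheus-stack = {
  enabled                = true
  use_managed_prometheus = true # hypothetical flag: use Amazon Managed Service for Prometheus instead of Thanos for long-term storage
}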

Describe alternatives you've considered
Setting it up myself, which I will do, but I would rather have this module also provision it for me.

Additional context
https://aws.amazon.com/blogs/mt/getting-started-amazon-managed-service-for-prometheus/

[bug] Uninstallation Failure Due to Lingering Job 'tigera-operator-uninstall' Preventing Deletion


Describe the bug

When attempting to uninstall the Calico module using Terraform, a job named tigera-operator-uninstall is created which prevents the deletion of the module. This job seems to be interfering with the uninstallation process.

What is the current behavior?

The uninstallation process fails with the following error message:

Error: warning: Hook pre-delete tigera-operator/templates/tigera-operator/00-uninstall.yaml failed: 1 error occurred:
	* jobs.batch "tigera-operator-uninstall" already exists

How to reproduce? Please include a code sample if relevant.

  1. Install the Calico module using Terraform.
  2. Attempt to uninstall the Calico module using Terraform.

What's the expected behavior?

The Calico module should be uninstalled successfully without any errors or interference from leftover jobs.

Are you able to fix this problem and submit a PR? Link here if you have already.

Unfortunately, I'm not able to fix this issue myself. However, I'm open to providing any additional information or testing if needed.

Environment details

  • Affected module version: 15.2.0
  • OS: Ubuntu 22.04.3 LTS (GitHub Runner)
  • Terraform version:
  • Kubernetes version: 1.29
  • Tigera-operator version: v3.27.0

Any other relevant info

After manually deleting the job with the CLI, in parallel while the destroy is running, the process completes successfully.


[bug] Issue with loki-stack due loki 3.x upgrade

Describe the bug

After upgrading Loki to 3.x I get the following error. When using 2.16.0 everything seems to be fine.

What is the current behavior?

Using Loki 3.x I get the following error:

╷
│ Error: cannot patch "loki" with kind Ingress: Ingress.extensions "loki" is invalid: spec.rules[0].host: Invalid value: "map[host:logz.stg.xxxx.com paths:[/]]": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
│
│   with helm_release.loki-stack[0],
│   on loki-stack.tf line 136, in resource "helm_release" "loki-stack":
│  136: resource "helm_release" "loki-stack" {

How to reproduce? Please include a code sample if relevant.

What's the expected behavior?

Apply should work.

Are you able to fix this problem and submit a PR? Link here if you have already.

Environment details

  • Affected module version: 3.0.3 (happens with 3.0.0/3.0.1 as well)
  • OS: MacOS
  • Terraform version: 1.2.8
  • Kubernetes version: EKS 1.22

[enhancement] velero and restic config and support

Is your feature request related to a problem? Please describe.
We cannot see any configuration for restic in Velero; is it supported? AWS EFS does not have snapshots, so restic must be used for it.

Describe the solution you'd like
Additional configuration for adding restic backups in Velero for EFS volumes, while snapshots should be used for CSI-based volumes only.

Describe alternatives you've considered
Not yet any.

Additional context
Velero cannot properly back up any project that uses EFS-based PVCs in EKS.

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

github-actions
.github/workflows/pr-title.yml
  • amannn/action-semantic-pull-request v5.5.3
.github/workflows/pre-commit.yml
  • actions/checkout v4
  • clowdhaus/terraform-composite-actions v1.11.0
  • actions/checkout v4
  • clowdhaus/terraform-min-max v1.3.1
  • clowdhaus/terraform-composite-actions v1.11.0
  • clowdhaus/terraform-composite-actions v1.11.0
  • actions/checkout v4
  • clowdhaus/terraform-min-max v1.3.1
  • clowdhaus/terraform-composite-actions v1.11.0
.github/workflows/release.yml
  • actions/checkout v4
  • cycjimmy/semantic-release-action v3
.github/workflows/renovate.yaml
  • actions/checkout v4
  • suzuki-shunsuke/github-action-renovate-config-validator v1.1.0
.github/workflows/stale-actions.yaml
  • actions/stale v9
helmv3
helm-dependencies.yaml
  • admiralty 0.13.2
  • secrets-store-csi-driver 1.4.5
  • aws-ebs-csi-driver 2.35.0
  • aws-efs-csi-driver 3.0.8
  • aws-for-fluent-bit 0.1.34
  • aws-load-balancer-controller 1.8.2
  • aws-node-termination-handler 0.21.0
  • cert-manager v1.15.3
  • cert-manager-csi-driver v0.10.1
  • cluster-autoscaler 9.37.0
  • external-dns 1.15.0
  • flux 1.13.3
  • ingress-nginx 4.11.2
  • k8gb v0.13.0
  • karma 1.7.2
  • karpenter 1.0.2
  • keda 2.15.1
  • kong 2.41.1
  • kube-prometheus-stack 62.7.0
  • linkerd2-cni 30.12.2
  • linkerd-control-plane 1.16.11
  • linkerd-crds 1.8.0
  • linkerd-viz 30.12.11
  • loki 6.12.0
  • promtail 6.16.5
  • metrics-server 3.12.1
  • node-problem-detector 2.3.13
  • prometheus-adapter 4.11.0
  • prometheus-cloudwatch-exporter 0.25.3
  • prometheus-blackbox-exporter 9.0.0
  • scaleway-webhook v0.0.1
  • sealed-secrets 2.16.1
  • thanos 15.7.25
  • tigera-operator v3.28.1
  • traefik 30.1.0
  • memcached 7.4.16
  • velero 7.2.1
  • victoria-metrics-k8s-stack 0.25.16
  • yet-another-cloudwatch-exporter 0.14.0
  • reloader 1.1.0
terraform
admiralty.tf
cert-manager-csi-driver.tf
cert-manager.tf
ingress-nginx.tf
k8gb.tf
karma.tf
keda.tf
kong.tf
kube-prometheus.tf
linkerd-viz.tf
linkerd.tf
linkerd2-cni.tf
loki-stack.tf
metrics-server.tf
modules/aws/aws-ebs-csi-driver.tf
  • terraform-aws-modules/iam/aws ~> 5.0
modules/aws/aws-efs-csi-driver.tf
  • terraform-aws-modules/iam/aws ~> 5.0
  • terraform-aws-modules/security-group/aws ~> 5.0
modules/aws/aws-for-fluent-bit.tf
  • terraform-aws-modules/iam/aws ~> 5.0
modules/aws/aws-load-balancer-controller.tf
  • terraform-aws-modules/iam/aws ~> 5.0
modules/aws/aws-node-termination-handler.tf
modules/aws/cert-manager.tf
  • terraform-aws-modules/iam/aws ~> 5.0
modules/aws/cluster-autoscaler.tf
  • terraform-aws-modules/iam/aws ~> 5.0
modules/aws/cni-metrics-helper.tf
  • terraform-aws-modules/iam/aws ~> 5.0
modules/aws/external-dns.tf
  • terraform-aws-modules/iam/aws ~> 5.0
modules/aws/ingress-nginx.tf
modules/aws/karpenter.tf
  • terraform-aws-modules/eks/aws ~> 20.0
modules/aws/kube-prometheus.tf
  • terraform-aws-modules/iam/aws ~> 5.0
  • terraform-aws-modules/iam/aws ~> 5.0
  • terraform-aws-modules/s3-bucket/aws ~> 4.0
modules/aws/loki-stack.tf
  • terraform-aws-modules/iam/aws ~> 5.0
  • terraform-aws-modules/s3-bucket/aws ~> 4.0
modules/aws/prometheus-cloudwatch-exporter.tf
  • terraform-aws-modules/iam/aws ~> 5.0
modules/aws/s3-logging.tf
  • terraform-aws-modules/s3-bucket/aws ~> 4.0
modules/aws/thanos-memcached.tf
modules/aws/thanos-storegateway.tf
  • terraform-aws-modules/iam/aws ~> 5.0
modules/aws/thanos-tls-querier.tf
modules/aws/thanos.tf
  • terraform-aws-modules/iam/aws ~> 5.0
  • terraform-aws-modules/s3-bucket/aws ~> 4.0
modules/aws/tigera-operator.tf
modules/aws/velero.tf
  • terraform-aws-modules/iam/aws ~> 5.0
  • terraform-aws-modules/s3-bucket/aws ~> 4.0
modules/aws/versions.tf
  • aws >= 5.27
  • flux ~> 1.0
  • github ~> 6.0
  • helm ~> 2.0
  • http >= 3
  • kubectl ~> 2.0
  • tls ~> 4.0
  • hashicorp/terraform >= 1.3.2
modules/aws/victoria-metrics-k8s-stack.tf
modules/aws/yet-another-cloudwatch-exporter.tf
  • terraform-aws-modules/iam/aws ~> 5.0
modules/azure/ingress-nginx.tf
modules/azure/version.tf
  • azurerm ~> 4.0
  • flux ~> 1.0
  • github ~> 6.0
  • helm ~> 2.0
  • http >= 3
  • kubectl ~> 2.0
  • tls ~> 4.0
  • hashicorp/terraform >= 1.3.2
modules/google/cert-manager.tf
  • terraform-google-modules/kubernetes-engine/google ~> 32.0.0
modules/google/external-dns.tf
  • terraform-google-modules/kubernetes-engine/google ~> 32.0.0
modules/google/ingress-nginx.tf
modules/google/kube-prometheus.tf
  • terraform-google-modules/kubernetes-engine/google ~> 32.0
  • terraform-google-modules/kubernetes-engine/google ~> 32.0
  • terraform-google-modules/iam/google ~> 7.6
  • terraform-google-modules/cloud-storage/google ~> 6.0
  • terraform-google-modules/kms/google ~> 2.2
modules/google/loki-stack.tf
  • terraform-google-modules/kubernetes-engine/google ~> 32.0
  • terraform-google-modules/cloud-storage/google ~> 6.0
  • terraform-google-modules/iam/google ~> 7.6
  • terraform-google-modules/kms/google ~> 2.2
modules/google/thanos-memcached.tf
modules/google/thanos-storegateway.tf
  • terraform-google-modules/kubernetes-engine/google ~> 32.0
  • terraform-google-modules/iam/google ~> 7.6
modules/google/thanos-tls-querier.tf
modules/google/thanos.tf
  • terraform-google-modules/kubernetes-engine/google ~> 32.0
  • terraform-google-modules/kubernetes-engine/google ~> 32.0
  • terraform-google-modules/kubernetes-engine/google ~> 32.0
  • terraform-google-modules/cloud-storage/google ~> 6.0
  • terraform-google-modules/kms/google ~> 2.2
modules/google/versions.tf
  • flux ~> 1.0
  • github ~> 6.0
  • google >= 4.69
  • google-beta >= 4.69
  • helm ~> 2.0
  • http >= 3
  • jinja ~> 2.0
  • kubectl ~> 2.0
  • tls ~> 4.0
  • hashicorp/terraform >= 1.3
modules/google/victoria-metrics-k8s-stack.tf
modules/scaleway/cert-manager.tf
modules/scaleway/external-dns.tf
modules/scaleway/ingress-nginx.tf
modules/scaleway/kube-prometheus.tf
modules/scaleway/loki-stack.tf
modules/scaleway/thanos-memcached.tf
modules/scaleway/thanos-storegateway.tf
modules/scaleway/thanos-tls-querier.tf
modules/scaleway/thanos.tf
modules/scaleway/versions.tf
  • flux ~> 1.0
  • github ~> 6.0
  • helm ~> 2.0
  • http >= 3
  • kubectl ~> 2.0
  • scaleway >= 2.2.0
  • tls ~> 4.0
  • hashicorp/terraform >= 1.3.2
modules/scaleway/victoria-metrics-k8s-stack.tf
node-problem-detector.tf
prometheus-adapter.tf
prometheus-blackbox-exporter.tf
promtail.tf
reloader.tf
sealed-secrets.tf
secrets-store-csi-driver.tf
tigera-operator.tf
traefik.tf
versions.tf
  • flux ~> 1.0
  • github ~> 6.0
  • helm ~> 2.0
  • http >= 3
  • kubectl ~> 2.0
  • tls ~> 4.0
  • hashicorp/terraform >= 1.3.2
victoria-metrics-k8s-stack.tf

  • Check this box to trigger a request for Renovate to run again on this repository

[enhancement]: handle major chart update that need CRDs

Is your feature request related to a problem? Please describe.

Some charts, like kube-prometheus-stack needs manual CRDs update between major update because Helm does not handle CRDs update

Describe the solution you'd like

We should provide a mechanism to handle CRD updates so that manual actions like the following are not needed (see the sketch after the commands):

kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.46.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagerconfigs.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.46.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.46.0/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.46.0/example/prometheus-operator-crd/monitoring.coreos.com_probes.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.46.0/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.46.0/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.46.0/example/prometheus-operator-crd/monitoring.coreos.com_thanosrulers.yaml
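
A hedged sketch of letting the module drive this instead, based on the manage_crds flag that already shows up in the module's kube-prometheus-stack locals elsewhere in this tracker:

kube-prometheus-stack = {
  enabled     = true
  manage_crds = true # module applies the prometheus-operator CRDs so the kubectl apply steps above are not needed
}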

Describe alternatives you've considered
Letting the user handle the CRD updates, but updating to a major release without updating the CRDs first might cause issues.

Additional context

Some projects like cert-manager handle CRD upgrades at the application/chart level, but this is not the case for prometheus-operator.

[bug] Karpenter v1beta1 api's do not work with current configured settings

Affected module: Karpenter

As mentioned in the following github thread terraform-aws-modules/terraform-aws-eks#2733

The IAM role created when running this module (KarpenterIRSA-...) is missing the key IAM permission iam:GetInstanceProfile, preventing Karpenter from creating any new nodes.

To fix this I would suggest either manually adding the permission to the list of additional policies that you attach to KarpenterIRSA:

data "aws_iam_policy_document" "karpenter_additional" {

Or by adding the following flag (as stated in the GitHub issue linked at the top of the page) when calling terraform-aws-eks//modules/karpenter:
enable_karpenter_instance_profile_creation = true
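
In context, a hedged sketch (inputs other than the flag are placeholders, and whether the flag exists depends on the terraform-aws-eks version in use):

module "karpenter" {
  source  = "terraform-aws-modules/eks/aws//modules/karpenter"
  version = "~> 19.21" # adjust; newer major versions may enable this behaviour by default

  cluster_name = "my-eks-cluster"

  # flag suggested in the linked terraform-aws-eks issue
  enable_karpenter_instance_profile_creation = true
}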

Plan failed when flux2 is enabled

Plan fails when flux2 is enabled. IMO it's linked to hashicorp/terraform#22405. That issue is merged and closed but I still encounter this.

Error: Inconsistent conditional result types

  on .terraform/modules/addons/flux2.tf line 22, in locals:
  22:   apply = local.flux2["enabled"] ? [for v in data.kubectl_file_documents.apply[0].documents : {
  23:     data : yamldecode(v)
  24:     content : v
  25:     }
  26:   ] : []
    |----------------
    | data.kubectl_file_documents.apply[0].documents is list of string with 24 elements
    | local.flux2["enabled"] is true

The true and false result expressions must have consistent types. The given
expressions are tuple and tuple, respectively.

[bug] ingress-nginx Helm chart 4.4.3 not found

Describe the bug

#1761 introduces the update of ingress-nginx Helm chart 4.4.3. Trying to deploy the latest version of this module fails because the chart version is not found in Helm repo.

Taking a quick look at ingress-nginx, the latest release version is helm-chart-4.4.2, and with the git tag helm-chart-4.4.3 we can see this:
[image]

What is the current behavior?

Helm upgrade is failing.

│ Error: chart "ingress-nginx" version "4.4.3" not found in https://kubernetes.github.io/ingress-nginx repository
│ 
│   with helm_release.ingress-nginx[0],
│   on ingress-nginx.tf line 140, in resource "helm_release" "ingress-nginx":
│  140: resource "helm_release" "ingress-nginx" {

How to reproduce? Please include a code sample if relevant.

Deploy terraform-kubernetes-addons v12.1.0.

What's the expected behavior?

The Chart version of ingress-nginx should be deployable.

Are you able to fix this problem and submit a PR? Link here if you have already.

Yes, a PR will follow shortly if the commit cannot be reverted.

Environment details

  • Affected module version:
  • OS:
  • Terraform version:
  • Kubernetes version

Any other relevant info

[bug] cert-manager prometheus dashboard doesn't work

Describe the bug

The cert-manager dashboard on Prometheus doesn't work; there is no data because the datasource cannot be found.

What is the current behavior?

No data

How to reproduce? Please include a code sample if relevant.

Deploy kubernetes-addons (1.27.1) with cert-manager and kube-prometheus-stack enabled.

What's the expected behavior?

Data ^^

Environment details

  • Affected module version: 1.27.1
  • OS: Linux
  • Terraform version: 0.14.7
  • Kubernetes version: 1.20.4

[enhancement]

Is your feature request related to a problem? Please describe.
The behavior for Kubernetes 1.24 and later clusters is simplified. For earlier versions, you need to tag the underlying Amazon EC2 Auto Scaling group with the details of the nodes for which it was responsible. For Kubernetes 1.24 and later clusters, when there are no running nodes in the managed node group, the Cluster Autoscaler calls the Amazon EKS DescribeNodegroup API operation. This API operation provides the information that the Cluster Autoscaler requires of the managed node group's resources, labels, and taints. This feature requires that you add the eks:DescribeNodegroup permission to the Cluster Autoscaler service account IAM policy. When the value of a Cluster Autoscaler tag on the Auto Scaling group powering an Amazon EKS managed node group conflicts with the node group itself, the Cluster Autoscaler prefers the value of the Auto Scaling group tag. This is so that you can override values as needed.

Describe the solution you'd like
Adding eks:DescribeNodegroup permission to the Cluster Autoscaler

Describe alternatives you've considered

Additional context

[bug] VolumeSnapshotClass APIVersion isn't valid for cluster

I got an error when trying to install the EBS CSI driver.
The EBS CSI driver itself installs and works fine, but the VolumeSnapshotClass install fails.

│ Error: csi-aws-vsc failed to create kubernetes rest client for update of resource: resource [snapshot.storage.k8s.io/v1/VolumeSnapshotClass] isn't valid for cluster, check the APIVersion and Kind fields are valid
│ 
│   with kubectl_manifest.aws-ebs-csi-driver_vsc[0],
│   on aws-ebs-csi-driver.tf line 220, in resource "kubectl_manifest" "aws-ebs-csi-driver_vsc":
│  220: resource "kubectl_manifest" "aws-ebs-csi-driver_vsc" {
  • OS: MacOS
  • Terraform version: 1.3.1
  • Kubernetes version: 1.23

[bug] Wrong url provided to prometheus-adapter

Describe the bug

A wrong URL is provided to prometheus-adapter, which means it cannot connect to the Prometheus instance.

What is the current behavior?

  + resource "helm_release" "prometheus-adapter" {
      + atomic                     = false
      + chart                      = "prometheus-adapter"
      + cleanup_on_fail            = false
      + create_namespace           = false
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "prometheus-adapter"
      + namespace                  = "monitoring-system"
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://prometheus-community.github.io/helm-charts"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 3600
      + values                     = [
          + <<-EOT
                prometheus:
                  url: http://"kube-prometheus-stack-prometheus:9090".monitoring-system.svc
            EOT,
          + "",
        ]
      + verify                     = false
      + version                    = "2.12.1"
      + wait                       = true
    }

How to reproduce? Please include a code sample if relevant.

I am using the following values provided to the module

  kube-prometheus-stack = {
    enabled                     = true
    namespace                   = "monitoring-system"
    thanos_sidecar_enabled      = true
    thanos_bucket_force_destroy = true
    chart_version               = "14.0.1" # https://github.com/particuleio/terraform-kubernetes-addons/pull/79
  }
  thanos = {
    enabled              = true
    namespace            = "monitoring-system"
    bucket_force_destroy = true
  }
  prometheus-adapter = {
    enabled   = true
    namespace = "monitoring-system"
  }

What's the expected behavior?

url: http://kube-prometheus-stack-prometheus.monitoring-system.svc:9090
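
Until the template is fixed, a hedged workaround sketch using the module's extra_values passthrough to force the expected address (key names follow the prometheus-adapter chart values shown in the plan above; whether extra_values wins over the generated value depends on the module's merge order):

prometheus-adapter = {
  enabled   = true
  namespace = "monitoring-system"

  extra_values = <<-VALUES
    prometheus:
      url: http://kube-prometheus-stack-prometheus.monitoring-system.svc
      port: 9090
  VALUES
}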

Are you able to fix this problem and submit a PR? Link here if you have already.

I can create a PR if needed.

Environment details

  • Affected module version: 1.28.0
  • OS: Linux
  • Terraform version: 0.14.x
  • Kubernetes version 1.19.x

Any other relevant info

[warning] Argument is deprecated in aws s3 bucket lifecycle

Describe the bug

Warning: Argument is deprecated
│
│   with module.eks_addons.module.eks-aws-addons.module.kube-prometheus-stack_thanos_bucket.aws_s3_bucket.this,
│   on .terraform/modules/eks_addons.eks-aws-addons.kube-prometheus-stack_thanos_bucket/main.tf line 7, in resource "aws_s3_bucket" "this":
│    7: resource "aws_s3_bucket" "this" {
│
│ Use the aws_s3_bucket_lifecycle_configuration resource instead
│
│ (and 43 more similar warnings elsewhere)

What is the current behavior?

Warnings related to deprecated S3 bucket lifecycle settings are shown.

How to reproduce? Please include a code sample if relevant.

Use the latest aws Terraform provider (registry.terraform.io/hashicorp/aws = 3.75.1).

What's the expected behavior?

No such warnings; the deprecated arguments could break things in the future.
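For reference, a hedged sketch of the migration the warning points at: the lifecycle rules move out of aws_s3_bucket into the dedicated aws_s3_bucket_lifecycle_configuration resource (AWS provider v4+); the bucket name and rule values below are made up for illustration.

  resource "aws_s3_bucket" "this" {
    bucket = "example-thanos-bucket" # placeholder name
  }

  resource "aws_s3_bucket_lifecycle_configuration" "this" {
    bucket = aws_s3_bucket.this.id

    rule {
      # Example rule only; the module's real lifecycle rules may differ
      id     = "expire-old-objects"
      status = "Enabled"

      # Empty filter applies the rule to every object in the bucket
      filter {}

      expiration {
        days = 365
      }
    }
  }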

[enhancement] manage addons via gitops (flux)

Is your feature request related to a problem? Please describe.
As more and more GitOps adoption is on the horizon, it would be nice to be able to manage the addons via GitOps instead, while still having the module create the AWS resources that each addon needs (like S3, IAM, ...).

Describe the solution you'd like
AWS Integration and Automation already does this, for example, with argocd_manage_add_ons (link). When that option is set, it only creates the AWS resources and leaves the Helm chart/Kustomize part managed by GitOps (example).

Additional context
Please let me know what you think. I would be open to creating a pull request if you consider this option.

[bug] External-dns addon crashes TF during preparing a plan

Describe the bug

I have a problem adding the external-dns add-on. Terraform fails with the following error while creating a plan:

Error: Error in function call

│ on .terraform/modules/eks-aws-addons/modules/aws/external-dns.tf line 3, in locals:
│ 3: external-dns = { for k, v in var.external-dns : k => merge(
│ 4: local.helm_defaults,
│ 5: {
│ 6: chart = local.helm_dependencies[index(local.helm_dependencies.*.name, "external-dns")].name
│ 7: repository = local.helm_dependencies[index(local.helm_dependencies.*.name, "external-dns")].repository
│ 8: chart_version = local.helm_dependencies[index(local.helm_dependencies.*.name, "external-dns")].version
│ 9: name = k
│ 10: namespace = k
│ 11: service_account_name = "external-dns"
│ 12: enabled = false
│ 13: create_iam_resources_irsa = true
│ 14: iam_policy_override = null
│ 15: default_network_policy = true
│ 16: },
│ 17: v,
│ 18: ) }
│ ├────────────────
│ │ local.helm_defaults is object with 16 attributes
│ │ local.helm_dependencies is tuple with 32 elements

│ Call to function "merge" failed: arguments must be maps or objects, got
│ "bool".

What is the current behavior?

Terraform exits with error while creating a plan.
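For context, a minimal illustration (module arguments in the same style as the other snippets in this issue) of one value shape that produces this exact merge() error, next to a shape that the for-expression can merge; the module's real expected input may differ between versions.

  # Shape that triggers the error: the for-expression iterates the entries,
  # so here k = "enabled" and v = true, and merge() receives a bool.
  #
  #   external-dns = {
  #     enabled = true
  #   }

  # Shape the for-expression can merge (a map of maps, instance name as key);
  # treat this as an assumption, the exact expected shape depends on the
  # module version.
  external-dns = {
    external-dns = {
      enabled = true
    }
  }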

How to reproduce? Please include a code sample if relevant.

Try to add external-dns using such versions of deps:

provider "registry.terraform.io/fluxcd/flux" {
version = "0.1.4"
}

provider "registry.terraform.io/gavinbunney/kubectl" {
version = "1.10.0"
}

provider "registry.terraform.io/hashicorp/aws" {
version = "3.38.0"
}

provider "registry.terraform.io/hashicorp/helm" {
version = "2.1.2"
}

provider "registry.terraform.io/hashicorp/http" {
version = "2.1.0"
}

provider "registry.terraform.io/hashicorp/kubernetes" {
version = "2.1.0"
}

provider "registry.terraform.io/hashicorp/local" {
version = "2.1.0"
}

provider "registry.terraform.io/hashicorp/null" {
version = "3.1.0"
}

provider "registry.terraform.io/hashicorp/random" {
version = "3.1.0"
}

provider "registry.terraform.io/hashicorp/template" {
version = "2.2.0"
}

provider "registry.terraform.io/hashicorp/time" {
version = "0.7.1"
}

provider "registry.terraform.io/hashicorp/tls" {
version = "3.1.0"
}

provider "registry.terraform.io/integrations/github" {
version = "4.9.2"
}

What's the expected behavior?

External-dns add-on shown in a plan and later deployed with apply.

Are you able to fix this problem and submit a PR? Link here if you have already.

Environment details

  • Affected module version: 2.0.0
  • OS: macos 11.3.1
  • Terraform version: 0.15.1
  • Kubernetes version: 1.19.8

Any other relevant info

[bug] Port appended twice to prometheus-adapter url

Describe the bug

Url for prometheus-adapter is broken.

What is the current behavior?

The module appends the port to the url, but it should not: the upstream helm chart already appends the port to the url here, so the port ends up duplicated.

How to reproduce? Please include a code sample if relevant.

Simply install a cluster with kube-prometheus-stack and prometheus-adapter. Prometheus-adapter will have the port appended twice:
--prometheus-url=http://kube-prometheus-stack-prometheus.monitoring.svc:9090:9090

Custom metrics will get a 404. Upon editing the deployment and removing the redundant ":9090", the prometheus-adapter functions properly again.

What's the expected behavior?

The url should not include the port, as the port is a separate variable in the upstream helm chart and is templated by the upstream helm chart.
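For illustration, a hedged sketch of what passing the url without the port could look like, assuming the upstream chart's prometheus.url and prometheus.port values (a standalone example resource, not the module's internal code):

  resource "helm_release" "prometheus_adapter_example" {
    name       = "prometheus-adapter"
    repository = "https://prometheus-community.github.io/helm-charts"
    chart      = "prometheus-adapter"
    namespace  = "monitoring"

    values = [
      <<-EOT
        prometheus:
          # url without the port; the chart appends prometheus.port itself
          url: http://kube-prometheus-stack-prometheus.monitoring.svc
          port: 9090
      EOT
    ]
  }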

Are you able to fix this problem and submit a PR? Link here if you have already.

I will create a PR

Environment details

  • Affected module version: 2.0.0
  • OS: Mac OS 11.2.3
  • Terraform version: 0.14.10
  • Kubernetes version: 1.19.6

Any other relevant info

[bug] 'kong' install via helm_release does not create 'kong' ingressclass

Describe the bug

After deploying kong on a cluster via terraform-kubernetes-addons, everything seems to be OK (the ingress controller is created and looks fine), but the 'kong' ingressclass does not exist. Ingress creation then fails when ingresses reference the non-existent 'kong' ingress class.

After deploying the same version of the kong chart using the helm CLI, a 'kong' ingressclass exists and subsequent ingress object creation succeeds.

To reproduce:
Using module version 2.15.1 with an EKS cluster v 1.21.

The configuration is:

  kong = {
    enabled = true
    chart_version = "2.6.1"
    extra_values = <<-EXTRA_VALUES
      autoscaling:
        enabled: true
        minReplicas: ${var.cfg.addons.kong.minReplicas}
    EXTRA_VALUES
  }

Workaround: I created a 'kong' ingressclass containing 'controller: konghq.com/ingress-controller' (a Terraform sketch of it follows below) and I am currently testing whether ingresses work with this configuration, but AFAIK this cannot be the right solution, as the helm CLI does not require creating this object manually.
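For reference, a minimal Terraform sketch of that manual workaround, assuming the hashicorp/kubernetes provider is already configured against the cluster (it only mirrors the manually created object; it is not a fix in the chart or the module):

  resource "kubernetes_ingress_class_v1" "kong" {
    metadata {
      name = "kong"
    }

    spec {
      # Same controller value as in the manually created ingressclass
      controller = "konghq.com/ingress-controller"
    }
  }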

Investigating in the helm chart, I see that my 'ingressVersion' is 'extensions/v1beta1' (deprecated) when it should be 'networking.k8s.io/v1'. It means that '(.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress")' returns false when executed via a helm_release and true when executed by the helm CLI (it should be true, because 'kubectl api-resources | grep Ingress' shows an 'Ingress' resource in 'networking.k8s.io/v1').

Has someone already met (and fixed) this problem? I am not sure we can do anything at this level because, AFAIK, we cannot control what '.Capabilities.APIVersions' returns.
