iplabs / terraform-kubernetes-alb-ingress-controller

Terraform module to ease deployment of the AWS ALB Ingress Controller

Home Page: https://registry.terraform.io/modules/iplabs/alb-ingress-controller/kubernetes/

License: Mozilla Public License 2.0

Language: HCL 100.00%
Topics: terraform, aws, alb-ingress-controller, alb, ingress-controller, kubernetes, k8s, terraform-module

terraform-kubernetes-alb-ingress-controller's Introduction

Terraform module: AWS ALB Ingress Controller installation

This Terraform module can be used to install the AWS ALB Ingress Controller into a Kubernetes cluster.

Improved integration with Amazon Elastic Kubernetes Service (EKS)

This module can be used to install the ALB Ingress Controller into a "vanilla" Kubernetes cluster (the default), or it can integrate tightly with AWS-managed EKS clusters, which allows the deployed pods to use IAM roles for service accounts.

An OpenID Connect provider must already have been created for your EKS cluster for this feature to work.
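For reference, one common way to create such an OpenID Connect provider with Terraform is sketched below; the data source and resource names are illustrative and not part of this module.

# Sketch only: registers an IAM OIDC provider for an existing EKS cluster so that
# IAM roles for service accounts can be used.
data "aws_eks_cluster" "this" {
  name = "my-k8s-cluster"
}

data "tls_certificate" "oidc" {
  url = data.aws_eks_cluster.this.identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "eks" {
  url             = data.aws_eks_cluster.this.identity[0].oidc[0].issuer
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.oidc.certificates[0].sha1_fingerprint]
}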

Just make sure to set the variable k8s_cluster_type to "eks" when running on EKS.

Examples

EKS deployment

To deploy the AWS ALB Ingress Controller into an EKS cluster, the following snippet might be used.

locals {
  # Your AWS EKS cluster name goes here.
  k8s_cluster_name = "my-k8s-cluster"
}

data "aws_region" "current" {}

data "aws_eks_cluster" "target" {
  name = local.k8s_cluster_name
}

data "aws_eks_cluster_auth" "aws_iam_authenticator" {
  name = data.aws_eks_cluster.target.name
}

provider "kubernetes" {
  alias = "eks"
  host                   = data.aws_eks_cluster.target.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.target.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.aws_iam_authenticator.token
  load_config_file       = false
}

module "alb_ingress_controller" {
  source  = "iplabs/alb-ingress-controller/kubernetes"
  version = "3.4.0"

  providers = {
    kubernetes = "kubernetes.eks"
  }

  k8s_cluster_type = "eks"
  k8s_namespace    = "kube-system"

  aws_region_name  = data.aws_region.current.name
  k8s_cluster_name = data.aws_eks_cluster.target.name
}

terraform-kubernetes-alb-ingress-controller's People

Contributors

dannyrandall, headcr4sh


terraform-kubernetes-alb-ingress-controller's Issues

Error: Waiting for rollout to finish: 0 of 1 updated replicas are available...

First question: can I run this on EKS + Fargate? 😄

I'm facing this error:

Error: Waiting for rollout to finish: 0 of 1 updated replicas are available...

  on .terraform/modules/cluster.alb_ingress_controller/terraform-kubernetes-alb-ingress-controller-3.1.0/main.tf line 309, in resource "kubernetes_deployment" "this":
 309: resource "kubernetes_deployment" "this" {

The weird behavior is that, after I got the error, running a plan again showed this object to be replaced:

-/+ resource "kubernetes_deployment" "this" {
      ~ id = "crawler/aws-alb-ingress-controller" -> (known after apply)

      ~ metadata {
            annotations      = {
                "field.cattle.io/description" = "AWS ALB Ingress Controller"
            }
          ~ generation       = 1 -> (known after apply)
            labels           = {
                "app.kubernetes.io/managed-by" = "terraform"
                "app.kubernetes.io/name"       = "aws-alb-ingress-controller"
                "app.kubernetes.io/version"    = "v1.1.6"
            }
            name             = "aws-alb-ingress-controller"
            namespace        = "crawler"
          ~ resource_version = "376158" -> (known after apply)
          ~ self_link        = "/apis/apps/v1/namespaces/crawler/deployments/aws-alb-ingress-controller" -> (known after apply)
          ~ uid              = "3e654aec-57f3-4832-8629-039adfb44482" -> (known after apply)
        }

      ~ spec {
            min_ready_seconds         = 0
            paused                    = false
            progress_deadline_seconds = 600
            replicas                  = 1
            revision_history_limit    = 10

            selector {
                match_labels = {
                    "app.kubernetes.io/name" = "aws-alb-ingress-controller"
                }
            }

          ~ strategy {
              ~ type = "RollingUpdate" -> (known after apply)

              ~ rolling_update {
                  ~ max_surge       = "25%" -> (known after apply)
                  ~ max_unavailable = "25%" -> (known after apply)
                }
            }

          ~ template {
              ~ metadata {
                    annotations      = {
                        "iam.amazonaws.com/role" = "arn:aws:iam::333374442078:role/k8s-rooftop-cluster-stage-alb-ingress-controller"
                    }
                  ~ generation       = 0 -> (known after apply)
                    labels           = {
                        "app.kubernetes.io/name"    = "aws-alb-ingress-controller"
                        "app.kubernetes.io/version" = "1.1.6"
                    }
                  + name             = (known after apply)
                  + resource_version = (known after apply)
                  + self_link        = (known after apply)
                  + uid              = (known after apply)
                }

              ~ spec {
                  - active_deadline_seconds          = 0 -> null
                  - automount_service_account_token  = false -> null
                    dns_policy                       = "ClusterFirst"
                    host_ipc                         = false
                    host_network                     = false
                    host_pid                         = false
                  + hostname                         = (known after apply)
                  + node_name                        = (known after apply)
                  - node_selector                    = {} -> null
                    restart_policy                   = "Always"
                    service_account_name             = "aws-alb-ingress-controller"
                    share_process_namespace          = false
                    termination_grace_period_seconds = 60

                  ~ container {
                        args                     = [
                            "--ingress-class=alb",
                            "--cluster-name=rooftop-cluster-stage",
                            "--aws-vpc-id=vpc-03a0cc5578bdbfb97",
                            "--aws-region=us-east-2",
                            "--aws-max-retries=10",
                        ]
                      - command                  = [] -> null
                        image                    = "docker.io/amazon/aws-alb-ingress-controller:v1.1.6"
                        image_pull_policy        = "Always"
                        name                     = "server"
                        stdin                    = false
                        stdin_once               = false
                        termination_message_path = "/dev/termination-log"
                        tty                      = false

                      ~ liveness_probe {
                            failure_threshold     = 3
                            initial_delay_seconds = 60
                            period_seconds        = 60
                            success_threshold     = 1
                            timeout_seconds       = 1

                          ~ http_get {
                                path   = "/healthz"
                                port   = "health"
                                scheme = "HTTP"
                            }
                        }

                      ~ port {
                            container_port = 10254
                          - host_port      = 0 -> null
                            name           = "health"
                            protocol       = "TCP"
                        }

                      ~ readiness_probe {
                            failure_threshold     = 3
                            initial_delay_seconds = 30
                            period_seconds        = 60
                            success_threshold     = 1
                            timeout_seconds       = 3

                          ~ http_get {
                                path   = "/healthz"
                                port   = "health"
                                scheme = "HTTP"
                            }
                        }

                      ~ resources {
                          + limits {
                              + cpu    = (known after apply)
                              + memory = (known after apply)
                            }

                          + requests {
                              + cpu    = (known after apply)
                              + memory = (known after apply)
                            }
                        }

                      ~ volume_mount {
                            mount_path        = "/var/run/secrets/kubernetes.io/serviceaccount"
                            mount_propagation = "None"
                            name              = "aws-alb-ingress-controller-token-qsfhm"
                            read_only         = true
                        }
                    }

                  + image_pull_secrets {
                      + name = (known after apply)
                    }

                  ~ volume {
                        name = "aws-alb-ingress-controller-token-qsfhm"

                      ~ secret {
                            default_mode = "0644"
                          - optional     = false -> null
                            secret_name  = "aws-alb-ingress-controller-token-qsfhm"
                        }
                    }
                }
            }
        }
    }

Dependency on KIAM

My understanding of this system is that it depends on KIAM being set up correctly, such that the alb-ingress-controller deployment can give the following annotation:

        annotations = {
          # Annotation to be used by KIAM
          "iam.amazonaws.com/role" = aws_iam_role.this.arn
        }

(seen here https://github.com/iplabs/terraform-kubernetes-alb-ingress-controller/blob/master/main.tf#L286).

However this project does not declare a dependency on KIAM. Is that a bug, or would you expect this to work out of the box for a terraform created EKS cluster?

In addition, it seems like EKS has a new mechanism for allowing pods to take on IAM roles https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html However my Terraform created EKS cluster does not have it enabled. Perhaps this project could be updated to use the new EKS OIDC mechanism for role assumption instead of KIAM, ideally with some instructions about how to create an EKS cluster with OIDC enabled such that it works with this package?
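For context, the newer EKS OIDC mechanism (IAM roles for service accounts) attaches the role through an annotation on the service account rather than through a KIAM pod annotation. A rough sketch, with hypothetical resource names and not the module's actual code:

# Illustrative only: with IAM roles for service accounts, the role ARN is attached
# to the service account instead of to the pod via KIAM.
resource "kubernetes_service_account" "alb_ingress" {
  metadata {
    name      = "aws-alb-ingress-controller"
    namespace = "kube-system"
    annotations = {
      "eks.amazonaws.com/role-arn" = aws_iam_role.this.arn
    }
  }
}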

provider kubernetes is undefined

╷
│ Warning: Provider kubernetes is undefined
│
│   on main.tf line 209, in module "alb-ingress-controller-eu":
│  209:     kubernetes = kubernetes.eu
│
│ Module module.alb-ingress-controller-eu does not declare a provider named kubernetes.
│ If you wish to specify a provider configuration for the module, add an entry for kubernetes in the required_providers block within the module.
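A likely fix, assuming the warning accurately describes the caller's setup (the calling module's code is not shown here), is to declare the kubernetes provider inside the called module's required_providers block, as the message suggests:

# Sketch of the declaration the warning asks for, placed inside the module.
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
  }
}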

Throwing an error on terraform v0.12.7

When executing terraform init, the following error is thrown:

Error: Invalid argument name

  on .terraform/modules/alb.alb-ingress-controller/iplabs-terraform-kubernetes-alb-ingress-controller-2671b50/main.tf line 171, in resource "kubernete_account" "this":
  171: "app" = "aws-alb-ingress-controller"

Argument names must not be quoted.
How do we fix this?
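The quoted argument name comes from the older module revision referenced by git hash in the error path. A plausible workaround (an assumption, not a confirmed fix from the maintainers) is to pin the module to a released registry version whose syntax is valid under Terraform 0.12:

# Sketch: reference a released registry version instead of the older git revision.
module "alb_ingress_controller" {
  source  = "iplabs/alb-ingress-controller/kubernetes"
  version = "3.4.0"
  # ... remaining arguments unchanged
}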

Connection Refused Error

Using the following configuration

data "aws_region" "default" {}

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

resource "aws_security_group" "worker_security_group" {
  name_prefix = "worker_security_group"
  vpc_id      = var.vpc_id

  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"

    cidr_blocks = var.cidr_blocks
  }
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "12.2.0"

  cluster_version = "1.17"
  cluster_name    = var.cluster_name
  cluster_create_timeout = "1h"
  cluster_endpoint_private_access = true

  vpc_id  = var.vpc_id
  subnets = var.subnets

  worker_groups = [
    {
      instance_type        = "m5.xlarge"
      asg_desired_capacity = 1
      asg_max_size         = 1
    }
  ]
  worker_additional_security_group_ids = [aws_security_group.worker_security_group.id]
}

provider "kubernetes" {
  alias                  = "eks"
  host                   = data.aws_eks_cluster.cluster.endpoint
  token                  = data.aws_eks_cluster_auth.cluster.token
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
}

module "alb_ingress_controller" {
  source  = "iplabs/alb-ingress-controller/kubernetes"
  version = "3.4.0"

  providers = {
    kubernetes = kubernetes.eks
  }

  k8s_cluster_type = "eks"
  k8s_namespace    = "kube-system"

  aws_region_name  = data.aws_region.default.name
  k8s_cluster_name = data.aws_eks_cluster.cluster.name
}

Running terraform plan gives the following error:

Error: Get "http://localhost/api/v1/namespaces/kube-system/serviceaccounts/aws-alb-ingress-controller": dial tcp [::1]:80: connect: connection refused

Error: Get "http://localhost/apis/rbac.authorization.k8s.io/v1/clusterroles/aws-alb-ingress-controller": dial tcp [::1]:80: connect: connection refused

Unsupported argument

Error: Unsupported argument
│
│ on temp.tf line 21, in provider "kubernetes":
│ 21: load_config_file = false
│
│ An argument named "load_config_file" is not expected here.
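The load_config_file argument was removed in version 2.0 of the hashicorp/kubernetes provider. Depending on the setup, the fix is either to drop the argument when using provider 2.x or to pin the provider to the 1.x series, for example:

# Sketch: pin the kubernetes provider to 1.x, which still accepts load_config_file.
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 1.13"
    }
  }
}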

Need to support depends_on parameters - Does not work with fargate

I have an alb-ingress-controller configured as follows:

module "alb-ingress-controller" {
  source  = "iplabs/alb-ingress-controller/kubernetes"
  version = "3.4.0"
  aws_region_name  = data.aws_region.current.name
  aws_vpc_id       = module.vpc.vpc_id
  k8s_cluster_name = aws_eks_cluster.demo.name
  k8s_cluster_type = "eks"
  # k8s_namespace    = "kube-system"
}

However, I am running in Fargate-only compute mode, with a Fargate profile defined to run the 'default' and 'kube-system' namespaces. Since this module only waits for the EKS cluster to start and not for the Fargate profile, the controller pod cannot be scheduled and the Terraform run fails.

We need a way to specify additional dependencies that this module does not directly depend on; see the sketch below.
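Since Terraform 0.13, modules accept a depends_on meta-argument, which would cover this case. A sketch based on the configuration above and the Fargate profile below (assuming Terraform 0.13 or later):

module "alb-ingress-controller" {
  source  = "iplabs/alb-ingress-controller/kubernetes"
  version = "3.4.0"

  aws_region_name  = data.aws_region.current.name
  aws_vpc_id       = module.vpc.vpc_id
  k8s_cluster_name = aws_eks_cluster.demo.name
  k8s_cluster_type = "eks"

  # Wait for the Fargate profile so the controller pod has somewhere to run.
  depends_on = [aws_eks_fargate_profile.fargate-profile]
}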

Here is my fargate profile.

resource "aws_eks_fargate_profile" "fargate-profile" {
  cluster_name           = aws_eks_cluster.demo.name
  fargate_profile_name   = "myFirstFargateProfile"
  pod_execution_role_arn = aws_iam_role.fargate-role.arn
  subnet_ids             = module.vpc.private_subnets

  selector {
    namespace = "default"
  }
  
  selector {
    namespace = "kube-system"
  }
  
  lifecycle {
    create_before_destroy = false
  }
  
  timeouts {
    create = "60m"
    delete = "2h"
  }
  # customize vertical or horizontal pod scaler here - TBD
}

Here is the issue:

aws_eks_cluster.demo: Still creating... [12m10s elapsed]
aws_eks_cluster.demo: Creation complete after 12m19s [id=terraform-eks-demo]
data.aws_eks_cluster_auth.aws_iam_authenticator: Refreshing state...
module.alb-ingress-controller.data.aws_eks_cluster.selected[0]: Refreshing state...
aws_eks_fargate_profile.fargate-profile: Creating...   --> this did not finish, but the ingress module starts executing
module.alb-ingress-controller.aws_iam_policy.this: Creating...
module.alb-ingress-controller.data.aws_vpc.selected: Refreshing state...
module.alb-ingress-controller.data.aws_iam_policy_document.eks_oidc_assume_role[0]: Refreshing state...
module.alb-ingress-controller.aws_iam_role.this: Creating...
module.alb-ingress-controller.aws_iam_role.this: Creation complete after 0s [id=k8s-terraform-eks-demo-alb-ingress-controller]
module.alb-ingress-controller.kubernetes_service_account.this: Creating...
module.alb-ingress-controller.aws_iam_policy.this: Creation complete after 1s [id=arn:aws:iam::656891419177:policy/k8s-terraform-eks-demo-alb-management]
module.alb-ingress-controller.aws_iam_role_policy_attachment.this: Creating...

InvalidParameter on IAM Role and Policy

The default aws_iam_path_prefix parameter of "" causes a validation error on terraform apply:

Error: Error creating IAM Role k8s-av-alb-ingress-controller: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, CreateRoleInput.Path.

  on .terraform/modules/alb-ingress-controller/iplabs-terraform-kubernetes-alb-ingress-controller-76eb4a6/main.tf line 58, in resource "aws_iam_role" "this":
  58: resource "aws_iam_role" "this" {

Error: Error creating IAM policy k8s-av-alb-management: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, CreatePolicyInput.Path.

  on .terraform/modules/alb-ingress-controller/iplabs-terraform-kubernetes-alb-ingress-controller-76eb4a6/main.tf line 186, in resource "aws_iam_policy" "this":
 186: resource "aws_iam_policy" "this" {

It seems like the path has to have a length of at least 1 to be valid. There are two possible solutions I can think of:

  1. Change the default aws_iam_path_prefix to "/" like it is in the terraform documentation
  2. Change the default aws_iam_path_prefix to null, and only set the path value if a path prefix was passed in.

I'm happy to make a PR if that would help.
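As an illustration of option 1 (the variable name is taken from the errors above; the description is assumed), the module's variable default would change to:

# Sketch of proposed solution 1: default the path prefix to "/".
variable "aws_iam_path_prefix" {
  description = "Path prefix for the IAM role and policy created by this module."
  type        = string
  default     = "/"
}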

Deprecated attribute: default_secret_name

│ Warning: Deprecated attribute
│
│ on .terraform/modules/alb_ingress_controller/main.tf line 370, in resource "kubernetes_deployment" "this":
│ 370: name = kubernetes_service_account.this.default_secret_name
│
│ The attribute "default_secret_name" is deprecated. Refer to the provider documentation for details.
│
│ (and 5 more similar warnings elsewhere)
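For background, newer Kubernetes versions no longer auto-create a token secret for every service account, which is why the attribute is deprecated. A rough sketch (not the module's code) of creating such a token secret explicitly with provider 2.x:

# Illustrative only: explicit service-account token secret.
resource "kubernetes_secret_v1" "alb_ingress_sa_token" {
  metadata {
    name      = "aws-alb-ingress-controller-token"
    namespace = "kube-system"
    annotations = {
      "kubernetes.io/service-account.name" = kubernetes_service_account.this.metadata[0].name
    }
  }

  type = "kubernetes.io/service-account-token"
}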
