
terraform-aws-eks's Introduction

Welcome 👋


Open Source Projects

  • terraform-aws-eks
  • eth-gas-exporter
  • a16z-helios
  • aws-codepipeline-slack

Who am I

Principal Cloud Architect | AWS Certified Solutions Architect | DevOps Engineer | Kubernetes | Sysadmin | Linux | CompTIA Cloud+ SME | Blockchain and open source advocate. Detailed technical knowledge and hands-on experience of DevOps, automation, build engineering and configuration management. Extensive experience designing and implementing fully automated continuous integration, continuous delivery and continuous deployment pipelines, DevOps processes, and highly available, automated cloud infrastructure through Infrastructure as Code (Terraform). In-depth knowledge of JavaScript, Python, microservice architectures, serverless application models, and running large enterprise-scale Kubernetes clusters. Highly experienced in start-ups.

terraform-aws-eks's People

Contributors

aliartiza75, dependabot[bot], infinitydon, joestack, sysrex, wesleycharlesblake


terraform-aws-eks's Issues

The module doesn't seem to be using the node-instance-type variable

Bug Report

Current Behavior
Every time I run the module it creates instances of type t3.medium, which is the default behaviour of the aws_eks_node_group Terraform resource.

Input Code

Usage:

module "eks" {
  source  = "WesleyCharlesBlake/eks/aws"

  aws-region          = var.aws_region
  aws_profile         = var.aws_profile
  availability-zones  = ["us-east-1a", "us-east-1b", "us-east-1c"]
  cluster-name        = "my-cluster"
  k8s-version         = "1.17"
  node-instance-type  = "t3a.medium"
  root-block-size     = "20"
  desired-capacity    = "1"
  max-size            = "1"
  min-size            = "1"
  vpc-subnet-cidr     = "10.0.0.0/16"
  private-subnet-cidr = ["10.0.0.0/19", "10.0.32.0/19", "10.0.64.0/19"]
  public-subnet-cidr  = ["10.0.128.0/20", "10.0.144.0/20", "10.0.160.0/20"]
  eks-cw-logging      = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
  ec2-key-public-key  = "my-key"
}

Expected behaviour/code
I would expect the module to create instances of type t3a.medium for this configuration.

Environment

  • Terraform version: Terraform v0.13.2
  • AWS Region: us-east-1
  • OS: OSX 10.15.6

Possible Solution
In eks-node-group.tf the instance_types argument is not defined. This can be quickly fixed by adding:

resource "aws_eks_node_group" "eks-node-group" {
...
    instance_types = [
        var.node-instance-type
    ]
...
}

However, it would be better if the node-instance-type variable in variables.tf were a list of supported instance types, as sketched below.

I can do a quick PR for this if allowed :)
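A minimal sketch of that list-based variant, assuming the module's existing variable and resource names:

# Sketch only: turn node-instance-type into a list so several instance types
# can be offered to the managed node group (names assumed from the module).
variable "node-instance-type" {
  description = "Instance types for the EKS managed node group"
  type        = list(string)
  default     = ["t3.medium"]
}

resource "aws_eks_node_group" "eks-node-group" {
  # ...existing cluster_name, node_role_arn, subnet_ids, scaling_config...
  instance_types = var.node-instance-type
}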

AWS ami name change and multiple versions

Bug Report

Current Behavior
Since there are two current EKS AMIs (for Kubernetes 1.10 and 1.11), the image-id filter does not pick up the intended AMI.

Possible Solution
Set the Kubernetes version through a variable, for example:
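A sketch of that approach, reusing the Amazon owner ID already referenced elsewhere in this module and assuming a k8s-version variable:

data "aws_ami" "eks-worker-ami" {
  most_recent = true
  owners      = ["602401143452"] # Amazon EKS AMI account

  filter {
    name   = "name"
    # assumed variable; pins the lookup to the requested Kubernetes version
    values = ["amazon-eks-node-${var.k8s-version}-*"]
  }
}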

Is your module dead?

I really want almost the same EKS-with-bastion setup as described in your project, but I see that it was last updated in November 2019.

Error: Unsupported block type, modules\eks\vpc.tf, modules\eks\eks-cluster.tf

Bug Report

Current Behavior
When I run the terraform validate command, Error: Unsupported block type errors appear.

Input Code

terraform validate

Error: Unsupported block type

  on modules\eks\eks-cluster.tf line 44, in resource "aws_security_group" "cluster":
  44:   tags {

Blocks of type "tags" are not expected here. Did you mean to define argument
"tags"? If so, use the equals sign to assign it a value.


Error: Unsupported block type

  on modules\eks\vpc.tf line 38, in resource "aws_internet_gateway" "eks":
  38:   tags {

Expected behavior/code

Environment

  • Terraform version: Terraform v0.12.0
  • AWS Region: us-east-2
  • OS: Windows 10 (Git Bash)
  • How you are using the eks-terraform module:
    git clone from this repository

Possible Solution

tags {} should be changed to tags = {}, for example:
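For example, the internet gateway resource from the error above would become (attribute references assumed):

resource "aws_internet_gateway" "eks" {
  vpc_id = aws_vpc.eks.id # assumed reference to the module's VPC resource

  # Terraform 0.12+ treats tags as an argument, not a nested block
  tags = {
    Name = "eks-igw"
  }
}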

error while destroying cluster

Bug Report

Current Behavior
I created an EKS cluster using the provided Terraform manifests, but when I tried to delete the cluster it was not removed cleanly.

I have provided the logs below. Although the VPC still exists, the module was not able to delete the cluster properly.

When I ran the apply, it showed that the subnets were created successfully, and the subnet records exist in the terraform.tfstate file, but when I try to destroy the cluster I get the error below:

logs

module.eks.module.bastion-asg.aws_launch_configuration.this[0]: Refreshing state... [id=bastion-lc-20190904162029969000X]
module.eks.module.bastion-asg.random_pet.asg_name[0]: Refreshing state... [id=innocent-boa]

Error: no matching subnet found for vpc with id vpc-0fbd66be703dd9102

  on modules/eks/data.tf line 5, in data "aws_subnet_ids" "private":
   5: data "aws_subnet_ids" "private" {



Error: no matching subnet found for vpc with id vpc-0fbd66be703dXXXX

  on modules/eks/data.tf line 13, in data "aws_subnet_ids" "public":
  13: data "aws_subnet_ids" "public" {


Makefile:16: recipe for target 'destroy' failed
make: *** [destroy] Error 1


Expected behavior/code
The cluster should be destroyed without any issue.

Environment

  • Terraform version: v0.12.7
  • AWS Region: eu-west-1
  • OS: [e.g. OSX 10.13.4, Windows 10]
  • How you are using the eks-terraform module:

Possible Solution

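One possible mitigation, sketched here on the assumption that the wrapping VPC module exposes subnet ID outputs (output names borrowed from the community terraform-aws-modules VPC module): wire those outputs straight into the cluster instead of re-discovering subnets with data sources, so destroy ordering no longer depends on a lookup that can come up empty.

resource "aws_eks_cluster" "eks" {
  name     = var.cluster-name
  role_arn = aws_iam_role.cluster.arn # assumed existing role

  vpc_config {
    # hypothetical wiring: module outputs instead of data.aws_subnet_ids lookups
    subnet_ids = concat(module.vpc.private_subnets, module.vpc.public_subnets)
  }
}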

Repository terraform-aws-eks is large due to git pack file.

ls -lha .git/objects/pack/
total 20M
drwxr-xr-x. 2 qubit qubit  121 Feb  6 17:17 .
drwxr-xr-x. 4 qubit qubit   30 Feb  6 17:14 ..
-r--r--r--. 1 qubit qubit 199K Feb  6 17:17 pack-69f33f9b47057f21388e202e5bd343cd7b8d7698.idx
-r--r--r--. 1 qubit qubit  20M Feb  6 17:17 pack-69f33f9b47057f21388e202e5bd343cd7b8d7698.pack
   19.4 MiB [######################] /.git
   80.0 KiB [                      ]  package-lock.json
   16.0 KiB [                      ]  README.md
    4.0 KiB [                      ]  variables.tf
    4.0 KiB [                      ]  sec-groups.tf
    4.0 KiB [                      ]  iam.tf
    4.0 KiB [                      ] /examples
    4.0 KiB [                      ] /.circleci
    4.0 KiB [                      ]  bastion.tf
    4.0 KiB [                      ]  network.tf
    4.0 KiB [                      ]  LICENSE
    4.0 KiB [                      ]  data.tf
    4.0 KiB [                      ]  config.tf
    4.0 KiB [                      ]  eks-node-group.tf
    4.0 KiB [                      ]  eks-cluster.tf
    4.0 KiB [                      ]  .gitignore
    4.0 KiB [                      ]  package.json
    4.0 KiB [                      ]  outputs.tf
    4.0 KiB [                      ]  ec2-key.tf
    4.0 KiB [                      ]  workstation-external-ip.tf
    4.0 KiB [                      ]  providers.tf
    4.0 KiB [                      ]  .release-it.json
    4.0 KiB [                      ]  versions.tf

Hi @WesleyCharlesBlake

Please refer to the next posts for a solution:

Consider cleaning up the .git folder to reduce the large repo size #439

Remove large .pack file created by git

EKS cluster deployment using this module occasionally fails with Error: error creating EKS Cluster (haroon-cluster): InvalidParameterException: Subnets specified must be in at least two different AZs

Bug Report

Current Behavior
Every time I trigger Terraform it should be able to deploy the whole EKS cluster without any error. However, on some runs I see the error below and I am not sure why. If I check those resources in the Terraform logs, the subnets reach the create complete state in different AZs, so I am curious why I still hit this issue. If I re-run the apply it works, but the intermittent failure is a problem because I am going to use this in automation, where an occasional failure breaks the whole pipeline. Please help me with a fix.

Input Code
auto@auto-virtual-machine:~/eks-terraform/gcb/terraform$ terraform apply -auto-approve
module.eks.data.http.workstation-external-ip: Refreshing state...
module.eks.data.aws_ami.bastion: Refreshing state...
module.eks.data.aws_ami.eks-worker-ami: Refreshing state...
module.pod_ip.random_uuid.uuid: Creating...
module.pod_ip.random_uuid.uuid: Creation complete after 0s [id=e387aaa8-6e3e-e664-0d6b-cc812d685539]
module.eks.aws_key_pair.deployer: Creating...
module.eks.aws_iam_role.cluster: Creating...
module.eks.aws_iam_role.node: Creating...
module.eks.module.vpc.aws_vpc.this[0]: Creating...
module.eks.module.vpc.aws_eip.nat[0]: Creating...
module.eks.module.vpc.aws_eip.nat[1]: Creating...
module.eks.module.vpc.aws_eip.nat[2]: Creating...
module.eks.aws_key_pair.deployer: Creation complete after 1s [id=vm_automation]
module.eks.aws_iam_role.cluster: Creation complete after 1s [id=haroon-cluster-eks-cluster-role]
module.eks.aws_iam_role_policy_attachment.cluster-AmazonEKSClusterPolicy: Creating...
module.eks.aws_iam_role_policy_attachment.cluster-AmazonEKSServicePolicy: Creating...
module.eks.aws_iam_role.node: Creation complete after 0s [id=haroon-cluster-eks-node-role]
module.eks.aws_iam_role_policy_attachment.node-AmazonEKSWorkerNodePolicy: Creating...
module.eks.aws_iam_role_policy_attachment.node-AmazonEC2ContainerRegistryReadOnly: Creating...
module.eks.aws_iam_role_policy_attachment.node-AmazonEKS_CNI_Policy: Creating...
module.eks.aws_iam_instance_profile.node: Creating...
module.eks.module.vpc.aws_eip.nat[2]: Creation complete after 1s [id=eipalloc-0354661c110fc0e17]
module.eks.module.vpc.aws_eip.nat[1]: Creation complete after 1s [id=eipalloc-0556dc3651e292b4b]
module.eks.aws_iam_role_policy_attachment.cluster-AmazonEKSClusterPolicy: Creation complete after 1s [id=haroon-cluster-eks-cluster-role-20200526072952625700000001]
module.eks.aws_iam_role_policy_attachment.cluster-AmazonEKSServicePolicy: Creation complete after 1s [id=haroon-cluster-eks-cluster-role-20200526072952633200000002]
module.eks.module.vpc.aws_eip.nat[0]: Creation complete after 1s [id=eipalloc-029122239887a8908]
module.eks.aws_iam_role_policy_attachment.node-AmazonEC2ContainerRegistryReadOnly: Creation complete after 1s [id=haroon-cluster-eks-node-role-20200526072952668200000004]
module.eks.aws_iam_role_policy_attachment.node-AmazonEKS_CNI_Policy: Creation complete after 1s [id=haroon-cluster-eks-node-role-20200526072952664900000003]
module.eks.aws_iam_role_policy_attachment.node-AmazonEKSWorkerNodePolicy: Creation complete after 1s [id=haroon-cluster-eks-node-role-20200526072952679100000005]
module.eks.aws_iam_instance_profile.node: Creation complete after 1s [id=haroon-cluster-eks-node-instance-profile]
module.eks.module.vpc.aws_vpc.this[0]: Creation complete after 4s [id=vpc-0d604b8f91d791212]
module.eks.module.vpc.aws_subnet.database[2]: Creating...
module.eks.module.vpc.aws_route_table.private[2]: Creating...
module.eks.data.aws_vpc.eks: Refreshing state...
module.eks.module.vpc.aws_route_table.public[0]: Creating...
module.eks.module.vpc.aws_route_table.private[1]: Creating...
module.eks.module.vpc.aws_subnet.private[2]: Creating...
module.eks.module.vpc.aws_subnet.public[1]: Creating...
module.eks.module.vpc.aws_subnet.database[1]: Creating...
module.eks.module.vpc.aws_route_table.private[0]: Creating...
module.eks.module.vpc.aws_subnet.public[2]: Creating...
module.eks.module.vpc.aws_route_table.private[1]: Creation complete after 2s [id=rtb-02a8a04dcc7fb9920]
module.eks.module.vpc.aws_subnet.public[0]: Creating...
module.eks.module.vpc.aws_route_table.private[2]: Creation complete after 2s [id=rtb-0302f767c99202a3f]
module.eks.module.vpc.aws_subnet.database[0]: Creating...
module.eks.module.vpc.aws_internet_gateway.this[0]: Creating...
module.eks.module.vpc.aws_route_table.public[0]: Creation complete after 2s [id=rtb-088fd98758d8367a2]
module.eks.module.vpc.aws_subnet.private[0]: Creating...
module.eks.module.vpc.aws_route_table.private[0]: Creation complete after 2s [id=rtb-090ae1d8e9766376d]
module.eks.aws_eip.spirent_pub_ip: Creating...
module.eks.module.vpc.aws_subnet.database[1]: Creation complete after 2s [id=subnet-033aabf2848a07141]
module.eks.module.vpc.aws_subnet.private[2]: Creation complete after 2s [id=subnet-04e7fafc9a6a9fb97]
module.eks.data.aws_subnet_ids.public: Refreshing state...
module.eks.module.vpc.aws_subnet.private[1]: Creating...
module.eks.module.vpc.aws_subnet.database[2]: Creation complete after 2s [id=subnet-08ae1154e5ba5c9df]
module.eks.data.aws_subnet_ids.private: Refreshing state...
module.eks.module.vpc.aws_subnet.public[2]: Creation complete after 3s [id=subnet-0d251cce89661df3f]
module.eks.data.aws_subnet.public: Refreshing state...
module.eks.module.vpc.aws_subnet.public[1]: Creation complete after 3s [id=subnet-07dec9e44e38607dd]
module.eks.module.cluster-sg.aws_security_group.this_name_prefix[0]: Creating...
module.eks.module.node-sg.aws_security_group.this_name_prefix[0]: Creating...
module.eks.module.traffic_sg.aws_security_group.this_name_prefix[0]: Creating...
module.eks.aws_eip.spirent_pub_ip: Creation complete after 1s [id=eipalloc-0c0522cbcd5ae7610]
module.eks.module.vpc.aws_subnet.database[0]: Creation complete after 2s [id=subnet-0eb1432c35500a80a]
module.eks.module.vpc.aws_route_table_association.database[1]: Creating...
module.eks.module.vpc.aws_subnet.private[0]: Creation complete after 2s [id=subnet-0bf28bce9877054e1]
module.eks.module.vpc.aws_route_table_association.database[2]: Creating...
module.eks.module.vpc.aws_db_subnet_group.database[0]: Creating...
module.eks.module.vpc.aws_route_table_association.database[0]: Creating...
module.eks.module.vpc.aws_subnet.public[0]: Creation complete after 2s [id=subnet-0c57f2f42d4036b05]
module.eks.module.vpc.aws_route_table_association.public[1]: Creating...
module.eks.module.vpc.aws_subnet.private[1]: Creation complete after 2s [id=subnet-07a2a06d3d1302491]
module.eks.module.vpc.aws_route_table_association.public[0]: Creating...
module.eks.module.vpc.aws_route_table_association.database[1]: Creation complete after 0s [id=rtbassoc-0da253fdd7d60bfe1]
module.eks.module.vpc.aws_route_table_association.public[2]: Creating...
module.eks.module.vpc.aws_route_table_association.database[0]: Creation complete after 0s [id=rtbassoc-02222cb1c9b808211]
module.eks.module.vpc.aws_route_table_association.database[2]: Creation complete after 0s [id=rtbassoc-03f5a0b666d906d0d]
module.eks.module.vpc.aws_route_table_association.private[0]: Creating...
module.eks.module.vpc.aws_internet_gateway.this[0]: Creation complete after 2s [id=igw-0209b00636a9a2c25]
module.eks.module.vpc.aws_route_table_association.private[1]: Creating...
module.eks.module.vpc.aws_route_table_association.private[2]: Creating...
module.eks.module.vpc.aws_route_table_association.public[1]: Creation complete after 0s [id=rtbassoc-08bf206d6dcdd6108]
module.eks.module.vpc.aws_nat_gateway.this[2]: Creating...
module.eks.module.vpc.aws_route_table_association.public[0]: Creation complete after 0s [id=rtbassoc-030db0245bd3b6d2f]
module.eks.module.vpc.aws_nat_gateway.this[1]: Creating...
module.eks.module.vpc.aws_route_table_association.public[2]: Creation complete after 1s [id=rtbassoc-048c69c7bb8b26ab1]
module.eks.module.vpc.aws_nat_gateway.this[0]: Creating...
module.eks.module.vpc.aws_route_table_association.private[0]: Creation complete after 1s [id=rtbassoc-0b5ec5d595dac1cb0]
module.eks.module.vpc.aws_route.public_internet_gateway[0]: Creating...
module.eks.module.vpc.aws_route_table_association.private[2]: Creation complete after 1s [id=rtbassoc-04f6e9f46cf85f448]
module.eks.module.vpc.aws_route_table_association.private[1]: Creation complete after 1s [id=rtbassoc-0da54896805ec68f8]
module.eks.module.vpc.aws_route.public_internet_gateway[0]: Creation complete after 0s [id=r-rtb-088fd98758d8367a21080289494]
module.eks.module.vpc.aws_db_subnet_group.database[0]: Creation complete after 2s [id=eks-vpc]
module.eks.module.traffic_sg.aws_security_group.this_name_prefix[0]: Creation complete after 3s [id=sg-03491fed3e9bf42e5]
module.eks.data.aws_security_group.traffic: Refreshing state...
module.eks.module.traffic_sg.aws_security_group_rule.egress_rules[0]: Creating...
module.eks.module.traffic_sg.aws_security_group_rule.ingress_rules[0]: Creating...
module.eks.module.cluster-sg.aws_security_group.this_name_prefix[0]: Creation complete after 3s [id=sg-0324a58996980a72c]
module.eks.data.aws_security_group.cluster: Refreshing state...
module.eks.module.node-sg.aws_security_group.this_name_prefix[0]: Creation complete after 3s [id=sg-0118688727049f8a3]
module.eks.data.aws_security_group.node: Refreshing state...
module.eks.module.cluster-sg.aws_security_group_rule.egress_rules[0]: Creating...
module.eks.module.cluster-sg.aws_security_group_rule.computed_ingress_with_source_security_group_id[0]: Creating...
module.eks.module.node-sg.aws_security_group_rule.egress_rules[0]: Creating...
module.eks.module.traffic_sg.aws_security_group_rule.egress_rules[0]: Creation complete after 1s [id=sgrule-2962350051]
module.eks.module.cluster-sg.aws_security_group_rule.egress_rules[0]: Creation complete after 2s [id=sgrule-2635060909]
module.eks.module.node-sg.aws_security_group_rule.computed_ingress_with_source_security_group_id[0]: Creating...
module.eks.module.node-sg.aws_security_group_rule.egress_rules[0]: Creation complete after 2s [id=sgrule-4024016120]
module.eks.module.node-sg.aws_security_group_rule.computed_ingress_with_source_security_group_id[1]: Creating...
module.eks.module.traffic_sg.aws_security_group_rule.ingress_rules[0]: Creation complete after 2s [id=sgrule-681682410]
module.eks.module.node-sg.aws_security_group_rule.ingress_with_self[0]: Creating...
module.eks.aws_eks_cluster.eks: Creating...
module.eks.module.cluster-sg.aws_security_group_rule.computed_ingress_with_source_security_group_id[0]: Creation complete after 3s [id=sgrule-37372434]
module.eks.module.node-sg.aws_security_group_rule.computed_ingress_with_source_security_group_id[0]: Creation complete after 1s [id=sgrule-2985098431]
module.eks.module.node-sg.aws_security_group_rule.computed_ingress_with_source_security_group_id[1]: Creation complete after 2s [id=sgrule-2660918460]
module.eks.module.node-sg.aws_security_group_rule.ingress_with_self[0]: Creation complete after 4s [id=sgrule-2774219423]
module.eks.module.vpc.aws_nat_gateway.this[2]: Still creating... [10s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[1]: Still creating... [10s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[0]: Still creating... [10s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[2]: Still creating... [20s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[1]: Still creating... [20s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[0]: Still creating... [20s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[2]: Still creating... [30s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[1]: Still creating... [30s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[0]: Still creating... [30s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[2]: Still creating... [40s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[1]: Still creating... [40s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[0]: Still creating... [40s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[2]: Still creating... [50s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[1]: Still creating... [50s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[0]: Still creating... [50s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[2]: Still creating... [1m0s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[1]: Still creating... [1m0s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[0]: Still creating... [1m0s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[2]: Still creating... [1m10s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[1]: Still creating... [1m10s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[0]: Still creating... [1m10s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[2]: Still creating... [1m20s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[1]: Still creating... [1m20s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[0]: Still creating... [1m20s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[2]: Still creating... [1m30s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[1]: Still creating... [1m30s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[0]: Still creating... [1m30s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[2]: Still creating... [1m40s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[1]: Still creating... [1m40s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[0]: Still creating... [1m40s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[1]: Creation complete after 1m50s [id=nat-0f5bd8ead0bf51f05]
module.eks.module.vpc.aws_nat_gateway.this[2]: Still creating... [1m50s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[0]: Still creating... [1m50s elapsed]
module.eks.module.vpc.aws_nat_gateway.this[2]: Creation complete after 2m0s [id=nat-01f48ba92d14055b7]
module.eks.module.vpc.aws_nat_gateway.this[0]: Creation complete after 1m59s [id=nat-0254cb4943f78fa17]
module.eks.module.vpc.aws_route.private_nat_gateway[0]: Creating...
module.eks.module.vpc.aws_route.private_nat_gateway[2]: Creating...
module.eks.module.vpc.aws_route.private_nat_gateway[1]: Creating...
module.eks.module.vpc.aws_route.private_nat_gateway[1]: Creation complete after 1s [id=r-rtb-02a8a04dcc7fb99201080289494]
module.eks.module.vpc.aws_route.private_nat_gateway[0]: Creation complete after 1s [id=r-rtb-090ae1d8e9766376d1080289494]
module.eks.module.vpc.aws_route.private_nat_gateway[2]: Creation complete after 1s [id=r-rtb-0302f767c99202a3f1080289494]

Error: error creating EKS Cluster (haroon-cluster): InvalidParameterException: Subnets specified must be in at least two different AZs
{
RespMetadata: {
StatusCode: 400,
RequestID: "4ecfda93-6750-4171-993a-cba91af87ad9"
},
ClusterName: "haroon-cluster",
Message_: "Subnets specified must be in at least two different AZs"
}

on modules/eks/eks-cluster.tf line 11, in resource "aws_eks_cluster" "eks":
11: resource "aws_eks_cluster" "eks" {

Expected behavior/code
Each time I trigger Terraform, it should deploy successfully.

Environment

  • Terraform version 0.12
  • AWS Region - us-east-1
  • OS: Ubuntu-18.04
  • How you are using the eks-terraform module:
    Using this module to deploy an EKS cluster, with my own Terraform files added on top to deploy Kubernetes resources
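One hedged guess at the cause: the apply log shows data.aws_subnet_ids.private and data.aws_subnet_ids.public refreshing while some subnets are still being created, so the cluster can occasionally see subnets from fewer than two AZs. A sketch of a possible mitigation (internal names and tags assumed) is to make the lookup wait for the whole VPC module:

data "aws_subnet_ids" "private" {
  vpc_id = data.aws_vpc.eks.id

  # assumed tag filter; match whatever the module actually tags its subnets with
  tags = {
    Tier = "Private"
  }

  # force the lookup to run only after every subnet in the VPC module exists
  # (module-level depends_on needs Terraform 0.13 or later)
  depends_on = [module.vpc]
}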

Timeout when using kubectl logs or kubectl exec

Bug Report

Current Behavior
After Terraform finishes creating the Kubernetes cluster, pods are inaccessible via kubectl logs or kubectl exec.

Sample error output:

Error: forwarding ports: error upgrading connection: error dialing backend: dial tcp 172.22.31.213:10250: getsockopt: no route to host

Expected behavior/code
kubectl calls to pods should not time out.

Environment

  • Terraform version: v0.11.13
  • AWS Region: eu-central-1
  • OS: Ubuntu 18.04.1 LTS
  • How you are using the eks-terraform module: terraform apply

Possible Solution
Update the locals section of eks-worker-nodes.tf according to the Terraform EKS documentation so that the AWS EKS AMI is properly bootstrapped (see the sketch below).
I will also try to create a PR.
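A minimal sketch of that bootstrap user data, assuming the cluster resource and variable names used elsewhere in this module; /etc/eks/bootstrap.sh ships with the official EKS AMI and configures kubelet with the API endpoint, cluster CA and cluster DNS IP:

locals {
  # assumed resource/variable names; 0.11-style index shown, use [0] on 0.12+
  eks-node-userdata = <<USERDATA
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh --apiserver-endpoint '${aws_eks_cluster.eks.endpoint}' --b64-cluster-ca '${aws_eks_cluster.eks.certificate_authority.0.data}' '${var.cluster-name}'
USERDATA
}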

MissingClusterDNS when deploying an application

Bug Report

Current Behavior

kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.

Expected behavior/code
The kubelet should have the ClusterDNS IP configured so pods can use the default "ClusterFirst" DNS policy.

Environment

  • Terraform version v0.11.13
  • AWS Region ap-southeast-1
  • OS: macOS 10.14.4
  • How you are using the eks-terraform module:
    'aws', 'EKS'

Need to resolve this issue in this code

When I run terraform plan on this project I get the error below. Can you please resolve this and guide me?

terraform plan

Error: Invalid function argument
│
│ on .terraform/modules/vpc/outputs.tf line 353, in output "vpc_endpoint_sqs_id":
│ 353: value = "${element(concat(aws_vpc_endpoint.sqs.*.id, tolist("")), 0)}"
│
│ Invalid value for "v" parameter: cannot convert string to list of any single type.

→ Plan ✘ Couldn't Generate Local Execution Plan Error: Error in function call

Hi @WesleyCharlesBlake

terraform plan fails:


→ Plan
  ✘ Couldn't Generate Local Execution Plan

Error: Error in function call

  on .terraform/modules/vpc/outputs.tf line 353, in output "vpc_endpoint_sqs_id":
 353:   value       = "${element(concat(aws_vpc_endpoint.sqs.*.id, list("")), 0)}"
    ├────────────────
    │ while calling list(vals...)

Call to function "list" failed: the "list" function was deprecated in
Terraform v0.12 and is no longer available; use tolist([ ... ]) syntax to
write a literal list.

Error: Error in function call

  on .terraform/modules/vpc/outputs.tf line 488, in output "vpc_endpoint_ecs_id":
 488:   value       = "${element(concat(aws_vpc_endpoint.ecs.*.id, list("")), 0)}"
    ├────────────────
    │ while calling list(vals...)

Call to function "list" failed: the "list" function was deprecated in
Terraform v0.12 and is no longer available; use tolist([ ... ]) syntax to
write a literal list.

Error: Error in function call

  on .terraform/modules/vpc/outputs.tf line 503, in output "vpc_endpoint_ecs_agent_id":
 503:   value       = "${element(concat(aws_vpc_endpoint.ecs_agent.*.id, list("")), 0)}"
    ├────────────────
    │ while calling list(vals...)

Call to function "list" failed: the "list" function was deprecated in
Terraform v0.12 and is no longer available; use tolist([ ... ]) syntax to
write a literal list.

Error: Error in function call

  on .terraform/modules/vpc/outputs.tf line 518, in output "vpc_endpoint_ecs_telemetry_id":
 518:   value       = "${element(concat(aws_vpc_endpoint.ecs_telemetry.*.id, list("")), 0)}"
    ├────────────────
    │ while calling list(vals...)

Call to function "list" failed: the "list" function was deprecated in
Terraform v0.12 and is no longer available; use tolist([ ... ]) syntax to
write a literal list.
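The failing outputs can be rewritten with tolist([ ... ]) — note the list literal; the tolist("") variant tried above fails because a string cannot be converted to a list — or the upstream VPC module can simply be pinned to a newer release that already does this. A sketch of the corrected expression:

output "vpc_endpoint_sqs_id" {
  # tolist([""]) builds a one-element list; list("") is gone and tolist("") is invalid
  value = element(concat(aws_vpc_endpoint.sqs.*.id, tolist([""])), 0)
}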

must contain valid network CIDR

Bug Report

Current Behavior

sean@levanter $ terraform apply
data.http.workstation-external-ip: Refreshing state...
data.aws_availability_zones.available: Refreshing state...
data.aws_region.current: Refreshing state...
data.aws_ami.eks-worker-ami: Refreshing state...

Error: Error running plan: 1 error(s) occurred:

  • module.module.module.eks.aws_security_group_rule.cluster-ingress-workstation-https: "cidr_blocks.0" must contain a valid network CIDR, got "2604:2000:12c0:c140:11b9:e6fd:85d4:c2c6/32"

Input Code
https://github.com/WesleyCharlesBlake/terraform-aws-eks/blob/master/modules/eks/workstation-external-ip.tf

It appears that icanhazip.com is returning an IPv6 address, while Terraform expects an IPv4 CIDR here.

Expected behavior/code
Created the module as-is, line for line from the README.md; it should apply without error.

Environment

  • Terraform version 0.11.7
  • AWS provider version 1.28.0
  • http provider version 1.0.1
  • AWS Region - us-east-1
  • OS: [e.g. OSX 10.13.4, Windows 10]
  • How you are using the eks-terraform module:

Nothing special. Just trying to create a simple cluster.

Possible Solution

Does the module need to force an IPv4 lookup, or is there a Terraform setting to accept IPv6 CIDRs here?
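A sketch of forcing an IPv4 answer, assuming the IPv4-only endpoint of the same service and the http provider's body attribute:

data "http" "workstation-external-ip" {
  url = "https://ipv4.icanhazip.com" # assumed endpoint that only returns IPv4
}

locals {
  workstation-external-cidr = "${chomp(data.http.workstation-external-ip.body)}/32"
}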


EKS eu-west-1 (Ireland) support

Feature Request

To support EKS in eu-west-1 (Ireland), the following change in eks-worker-nodes.tf is needed:

data "aws_ami" "eks-worker-ami" {
  filter {
    name   = "name"
    # values = ["eks-worker-*"]
    values = ["amazon-eks-node-v*"]
  }

  most_recent = true
  owners      = ["602401143452"] # Amazon
}

Document required IAM policies to deploy a cluster

I have attached all of the EKS managed policies to my API user. When running a plan I get the following output:

data.http.workstation-external-ip: Refreshing state...
data.aws_region.current: Refreshing state...
data.aws_ami.eks-worker-ami: Refreshing state...
data.aws_availability_zones.available: Refreshing state...

Error: Error refreshing state: 2 error(s) occurred:

* module.eks.data.aws_ami.eks-worker-ami: 1 error(s) occurred:

* module.eks.data.aws_ami.eks-worker-ami: data.aws_ami.eks-worker-ami: UnauthorizedOperation: You are not authorized to perform this operation.
	status code: 403, request id: bd448f81-8a7a-4a76-8b03-d6e1bca5a1ce
* module.eks.data.aws_availability_zones.available: 1 error(s) occurred:

* module.eks.data.aws_availability_zones.available: data.aws_availability_zones.available: Error fetching Availability Zones: UnauthorizedOperation: You are not authorized to perform this operation.
	status code: 403, request id: 2a4946a3-1cb7-48cd-b5e5-285c7aa35b4a

I'll discover the required permissions soon and likely have a PR documenting them.

Notably, the AWS managed policies for EKS do not permit a user to create an EKS cluster.

* module.eks.aws_eks_cluster.eks: 1 error(s) occurred:

* aws_eks_cluster.eks: error creating EKS Cluster (xxx): AccessDeniedException: User: arn:aws:iam::000000000000:user/xxx is not authorized to perform: eks:CreateCluster on resource: arn:aws:eks:us-west-2:000000000000:cluster/xxx
	status code: 403, request id: 45eaff2f-d872-11e8-818c-956d46f857d3
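Illustrative only, not the documented final list: a customer-managed policy covering the calls that failed above would look roughly like this (0.12-style sketch, hypothetical policy name).

resource "aws_iam_policy" "eks_deployer" {
  name = "eks-deployer" # hypothetical name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "eks:CreateCluster",
        "eks:DescribeCluster",
        "eks:DeleteCluster",
        "ec2:DescribeImages",
        "ec2:DescribeAvailabilityZones",
        "iam:PassRole"
      ]
      Resource = "*"
    }]
  })
}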

worker nodes fail to join cluster

Bug Report

Current Behavior
I followed the instructions and encountered no errors, but the worker nodes fail to join the cluster after running kubectl apply -f config-map-aws-auth.yaml.


Expected behavior/code
Worker nodes connect to the cluster.

Environment

  • Terraform version - v0.11.11
  • AWS Region us-east-1
  • OS: macOS Mojave 10.14.3
  • How you are using the eks-terraform module: per the instructions in the README

Possible Solution

Additional context

NAME                                 DATA   AGE
aws-auth                             1      8m
coredns                              1      13m
extension-apiserver-authentication   5      13m
kube-proxy                           1      13m

(⎈ aws:)➜  zoo ✗ kubectl describe configmap aws-auth -n kube-system
Name:         aws-auth
Namespace:    kube-system
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","data":{"mapRoles":"- rolearn: arn:aws:iam::xxxx:role/moon-eks-node-role\n  username: system:node:{{EC2PrivateD...

Data
====
mapRoles:
----
- rolearn: arn:aws:iam::xxxx:role/moon-eks-node-role
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes

Events:  <none>

No resources found using amazon-eks-node-${var.k8s-version}-*

Bug Report

Current Behavior
Since there are two current EKS AMIs (for Kubernetes 1.10 and 1.11), when using the 1.11 AMI I get a node-resources-not-found error.

Expected behavior/code
Output of systemctl status kubelet -l on the node shows the errors below:

Jan 09 16:11:27 ip-10-20-0-199.eu-west-1.compute.internal kubelet[4453]: E0109 16:11:27.118005 4453 kubelet_node_status.go:103] Unable to register node "ip-10-20-0-199.eu-west-1.compute.internal" with API server: Unauthorized
Jan 09 16:11:27 ip-10-20-0-199.eu-west-1.compute.internal kubelet[4453]: E0109 16:11:27.286395 4453 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
Jan 09 16:11:27 ip-10-20-0-199.eu-west-1.compute.internal kubelet[4453]: E0109 16:11:27.489382 4453 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Unauthorized
Jan 09 16:11:27 ip-10-20-0-199.eu-west-1.compute.internal kubelet[4453]: E0109 16:11:27.660353 4453 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Unauthorized
Jan 09 16:11:28 ip-10-20-0-199.eu-west-1.compute.internal kubelet[4453]: E0109 16:11:28.413905 4453 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
Jan 09 16:11:28 ip-10-20-0-199.eu-west-1.compute.internal kubelet[4453]: E0109 16:11:28.616409 4453 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Unauthorized
Jan 09 16:11:28 ip-10-20-0-199.eu-west-1.compute.internal kubelet[4453]: W0109 16:11:28.776632 4453 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 09 16:11:28 ip-10-20-0-199.eu-west-1.compute.internal kubelet[4453]: E0109 16:11:28.777200 4453 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Environment

  • Terraform version Terraform v0.11.7
  • AWS Region: eu-west-1
  • OS: Amazon Linux (kubectl and Heptio IAM authenticator installed)
  • How you are using the eks-terraform module:

Additional context

  1. AMI ID : curl http://169.254.169.254/latest/meta-data/ami-id
    ami-0a9006fb385703b54
  2. cluster's CNI version:
    kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
    amazon-k8s-cni:1.2.1
  3. kubectl get pods
    No resources found.

Thanks for looking into this issue.
