
pulumi-eks's Introduction


Pulumi Amazon Web Services (AWS) EKS Components

The Pulumi EKS library provides a Pulumi component that creates and manages the resources necessary to run an EKS Kubernetes cluster in AWS. This component exposes the Crosswalk for AWS functionality documented in the Pulumi Elastic Kubernetes Service guide as a package available in all Pulumi languages.

This includes:

  • The EKS cluster control plane.
  • The cluster's worker nodes configured as node groups, which are managed by an auto scaling group.
  • The AWS CNI Plugin aws-k8s-cni to manage pod networking in Kubernetes.
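
For example, a minimal TypeScript program using the component looks roughly like this (a sketch; the options demonstrated in the issues below can be layered on as needed):

import * as eks from "@pulumi/eks";

// Create an EKS cluster with default settings (control plane, default node
// group, and the VPC CNI plugin).
const cluster = new eks.Cluster("my-cluster");

// Export the kubeconfig so kubectl and other tools can reach the cluster.
export const kubeconfig = cluster.kubeconfig;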

Installing

This package is available in many languages in the standard packaging formats.

Node.js (JavaScript/TypeScript)

To use from JavaScript or TypeScript in Node.js, install it using either npm:

$ npm install @pulumi/eks

or yarn:

$ yarn add @pulumi/eks

Python

To use from Python, install using pip:

$ pip install pulumi_eks

Go

To use from Go, use go get to grab the latest version of the library:

$ go get github.com/pulumi/pulumi-eks/sdk/go

.NET

To use from .NET, install using dotnet add package:

$ dotnet add package Pulumi.Eks

References

Contributing

If you are interested in contributing, please see the contributing docs.

Code of Conduct

Please follow the code of conduct.

pulumi-eks's People

Contributors

alex-hunt-materialize, blampe, cyrusnajmabadi, danielrbradley, dependabot[bot], ellismg, embraser01, eronwright, flostadler, guineveresaenger, houqp, iwahbe, jaxxstorm, jen20, justinvp, kpitzen, lblackstone, lukehoban, metral, mjeffryes, pgavlin, pulumi-bot, rquitales, sean1588, squaremo, stack72, t0yv0, thomas11, viveklak, vranga000


pulumi-eks's Issues

Create `instanceRole` per-nodegroup not per-cluster

Currently we create an instance role per cluster, which is reused for each NodeGroup. This is not always sufficient; there are cases where different NodeGroups need different roles. The default policies will be the same for each of these roles, but users may want to attach additional policies to only some NodeGroups.

This also more closely aligns with the documentation and CloudFormation template used by EKS.

[Feature Request] Ability to specify the AZ when creating NodeGroups

TL;DR Proposal

It would be nice to have an availabilityZone?: pulumi.Input<string> in ClusterNodeGroupOptions that does the same thing I did below:

Use case

Because EBS volumes can only be mounted by an instance in the same AZ, it makes sense to be able to place all nodes of an EKS cluster in a specific AZ when using EBS volumes, since a pod should be able to mount an EBS volume regardless of the node it's scheduled on.

Current Behavior

When creating a cluster without specifying a VPC, the default VPC is used, together with the default subnets in that VPC, and we end up with nodes scattered throughout the whole region, placed in as many AZs as there are default subnets.

Workaround

An EC2 instance is placed in the same AZ its subnet is placed in, so the way we can place a NodeGroup's nodes in a specific AZ is to set nodeSubnetIds in ClusterNodeGroupOptions to a subnet in that AZ. To be able to specify the literal AZ name (e.g. eu-central-1c), I've come up with a function that, given an eks.Cluster and an AZ name, returns the subnet id placed in that AZ:

import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";
import { Output } from "@pulumi/pulumi";

export function getSubnetIdInAZ(cluster: eks.Cluster, az: string): Output<string> {
  const { subnetIds } = cluster.eksCluster.vpcConfig;
  return subnetIds.apply(async ids => {
    const subnets = await Promise.all(ids.map(id => aws.ec2.getSubnet({ id })));
    const subnet = subnets.find(subnet => subnet.availabilityZone === az);
    if (!subnet) {
      throw new Error(`No subnet found in ${az} zone`);
    }
    return subnet.id;
  });
}

I then used it like this:

const cluster = new eks.Cluster('cluster', {
  skipDefaultNodeGroup: true,
});
cluster.createNodeGroup('worker', {
  /* ... */
  nodeSubnetIds: [getSubnetIdInAZ(cluster, 'eu-central-1c')],
});

Errors out when `~/.kube/config` exists

When the kubeconfig file exists in ~/.kube/config and $KUBECONFIG is not set, and I run pulumi up on a simple hello-world EKS cluster, I get a bunch of errors like this:

$ pulumi up --yes
Updating (aws-infra):

     Type                                  Name                      Status                  Info
     pulumi:pulumi:Stack                   infrastructure-aws-infra                          5 messages
     └─ eks:index:Cluster                  eksCluster
 +      ├─ pulumi-nodejs:dynamic:Resource  eksCluster-vpc-cni        **creating failed**     1 error
 +      └─ kubernetes:core:ConfigMap       eksCluster-nodeAccess     **creating failed**     1 error

Diagnostics:
  pulumi:pulumi:Stack (infrastructure-aws-infra):
    unable to recognize "/var/folders/w5/n_5_kc696xxb8h9rd8d3px2r0000gp/T/tmp-15444QTFwfDso2DP3.tmp": Unauthorized
    unable to recognize "/var/folders/w5/n_5_kc696xxb8h9rd8d3px2r0000gp/T/tmp-15444QTFwfDso2DP3.tmp": Unauthorized
    unable to recognize "/var/folders/w5/n_5_kc696xxb8h9rd8d3px2r0000gp/T/tmp-15444QTFwfDso2DP3.tmp": Unauthorized
    unable to recognize "/var/folders/w5/n_5_kc696xxb8h9rd8d3px2r0000gp/T/tmp-15444QTFwfDso2DP3.tmp": Unauthorized
    unable to recognize "/var/folders/w5/n_5_kc696xxb8h9rd8d3px2r0000gp/T/tmp-15444QTFwfDso2DP3.tmp": Unauthorized

  kubernetes:core:ConfigMap (eksCluster-nodeAccess):
    error: Plan apply failed: Unauthorized

  pulumi-nodejs:dynamic:Resource (eksCluster-vpc-cni):
    error: Plan apply failed: Command failed: kubectl apply -f /var/folders/w5/n_5_kc696xxb8h9rd8d3px2r0000gp/T/tmp-15444QTFwfDso2DP3.tmp
    unable to recognize "/var/folders/w5/n_5_kc696xxb8h9rd8d3px2r0000gp/T/tmp-15444QTFwfDso2DP3.tmp": Unauthorized
    unable to recognize "/var/folders/w5/n_5_kc696xxb8h9rd8d3px2r0000gp/T/tmp-15444QTFwfDso2DP3.tmp": Unauthorized
    unable to recognize "/var/folders/w5/n_5_kc696xxb8h9rd8d3px2r0000gp/T/tmp-15444QTFwfDso2DP3.tmp": Unauthorized
    unable to recognize "/var/folders/w5/n_5_kc696xxb8h9rd8d3px2r0000gp/T/tmp-15444QTFwfDso2DP3.tmp": Unauthorized
    unable to recognize "/var/folders/w5/n_5_kc696xxb8h9rd8d3px2r0000gp/T/tmp-15444QTFwfDso2DP3.tmp": Unauthorized

Resources:
    30 unchanged

Duration: 11s

When I remove my kubeconfig file, this all suddenly starts working.

I'm not 100% sure what's going on, yet.

Document aws-iam-authenticator

Diagnostics:
  global: global
    error: Unable to fetch schema: Get https://1FB43E2460DC2D8F3C017D77161E847E.yl4.us-west-2.eks.amazonaws.com/openapi/v2: getting token: exec: exec: "aws-iam-authenticator": executable file not found in $PATH

This should be called out in the README.

Cluster name with underscores

If the cluster has an underscore in its name:

return new eks.Cluster('eks-cluster', {...})

An error is thrown. The cluster name should probably be normalized internally:

Plan apply failed: StorageClass.storage.k8s.io "eks_cluster-gp2-whmfn08q" is invalid: metadata.name: 
Invalid value: "eks_cluster-gp2-whmfn08q": 
a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character 
(e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

ClusterOptions and ClusterNodeGroupOptions inconsistency

I've noticed an inconsistency between ClusterOptions and ClusterNodeGroupOptions regarding keyName - it only exists in the latter.

Since you should be able to specify any ClusterNodeGroupOptions as part of ClusterOptions, doesn't it make sense for ClusterOptions to have a nodeGroupOptions property, instead of copying all ClusterNodeGroupOptions properties to it?
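
A sketch of the proposed shape (illustrative only, not an existing API):

import { ClusterNodeGroupOptions } from "@pulumi/eks";

// Proposed: group node-group settings under a single property instead of
// duplicating every ClusterNodeGroupOptions field on ClusterOptions.
interface ClusterOptions {
    // ...cluster-level options...
    nodeGroupOptions?: ClusterNodeGroupOptions;
}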

Test failure: Cluster limit exceeded for account

I think we're out of EKS clusters in whatever region we run the tests in:

Plan apply failed: error creating EKS Cluster (cluster-eksCluster-e284a91):
ResourceLimitExceededException: Cluster limit exceeded for account.

[Feature Request] Ability to specify additional node security group rules

Currently the node security group rules are hardcoded in securitygroup.ts:35, and the only way to alter them is to create the security group ourselves, copy those rules, add additional ones, and pass it as nodeSecurityGroup when creating a cluster or node group.

My specific use case for this is allowing SSH to a cluster / node group's nodes.

In the name of convenience, I can think of 3 ways to improve this:

  1. Be able to specify additional node security group rules when creating clusters / node groups.
  2. Be able to specify a callback that gets the default rules and can alter them (same mechanism as transformations callback in ConfigGroup, for example).
  3. Have an allowSSH property in ClusterOptions / ClusterNodeGroupOptions.
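
Until one of these lands, the SSH case can be covered by attaching a standalone rule to the node security group the component already exposes; a sketch, mirroring the SSH ingress rule used in a later issue on this page (note that mixing in-line and standalone rules has its own caveats, as discussed further down):

import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";

const cluster = new eks.Cluster("cluster");

// Allow SSH to the worker nodes without replacing the default in-line rules.
new aws.ec2.SecurityGroupRule("ssh-ingress", {
    type: "ingress",
    fromPort: 22,
    toPort: 22,
    protocol: "tcp",
    securityGroupId: cluster.nodeSecurityGroup.id,
    cidrBlocks: ["0.0.0.0/0"],
});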

Failures to kubectl apply

We periodically see CI failures due to the below, and have also heard user reports of similar failures. We should figure out what is needed to be robust against this:

[ ks/nodejs/eks/examples/cluster ]   pulumi-nodejs:dynamic:Resource (cluster2-vpc-cni):
[ ks/nodejs/eks/examples/cluster ]     error: Plan apply failed: Command failed: kubectl apply -f /tmp/tmp-12841ugyvSLxyVH46.tmp
[ ks/nodejs/eks/examples/cluster ]     error: unable to recognize "/tmp/tmp-12841ugyvSLxyVH46.tmp": Get https://AA28F18B717D67BED6FEC1BFB8B2C64F.sk1.eu-west-1.eks.amazonaws.com/api?timeout=32s: dial tcp 54.154.206.100:443: i/o timeout

feat req: allow parameterizing the worker node LaunchConfiguration

Currently, we create the worker node LaunchConfiguration on behalf of the user when a cluster is created.

In certain scenarios it makes sense to be able to pass in the full LaunchConfiguration, especially for params that we don't directly expose.

We should be able to take in a fully-defined LaunchConfiguration to avoid these kinds of limitations.

@pulumi/eks acquired from NPM doesn't have index.js

Not sure what the deal is here, but the @pulumi/eks package I got from NPM doesn't have an index.js in it.

Without an index.js, I can't import * as eks from "@pulumi/eks". I can import * as eks from "@pulumi/eks/cluster", though.

New clusters are failing to create due to nodes booting up with an arm64 AMI on x86_64 arch

The nodegroup AMI filter is returning an arm64 AMI for a node instance on x86_64 architecture:

error: Plan apply failed: 1 error occurred:

* creating urn:pulumi:dev5::kx-eks-cluster::eks:index:Cluster$aws:cloudformation/stack:Stack::kx-eks-cluster-nodes: ROLLBACK_COMPLETE: ["The following resource(s) failed to create: [NodeGroup]. . Rollback requested by user." "The requested instance type's architecture (x86_64) does not match the architecture in the manifest for ami-0ef72f9111fdc5f97 (arm64). Launching EC2 instance failed."]

This results in a cluster that cannot be created due to the arch mismatch, and leaves the CloudFormation stack in a ROLLBACK_COMPLETE state.

Support multiple worker groups

Currently the worker node ASG and LaunchConfiguration are defined as a single pool. There are at least two scenarios that would benefit from having support for multiple worker groups:

  1. GPU + CPU instances (or other required heterogeneous cluster use cases)
  2. Blue/green worker pool deployments for worker upgrades

Failure to serialize YAML

A user reported that they started seeing the error below in CI (they believe with no code changes).

 +  pulumi:pulumi:Stack eks-cluster-eks-cluster-cd-test creating YAMLException: unacceptable kind of an object to dump [object Undefined]
 +  pulumi:pulumi:Stack eks-cluster-eks-cluster-cd-test creating     at writeNode (/builds/org/app/example/pulumi/eks-cluster/node_modules/js-yaml/lib/js-yaml/dumper.js:756:13)
 +  pulumi:pulumi:Stack eks-cluster-eks-cluster-cd-test creating     at writeBlockMapping (/builds/org/app/example/pulumi/eks-cluster/node_modules/js-yaml/lib/js-yaml/dumper.js:634:10)
 +  pulumi:pulumi:Stack eks-cluster-eks-cluster-cd-test creating     at writeNode (/builds/org/app/example/pulumi/eks-cluster/node_modules/js-yaml/lib/js-yaml/dumper.js:727:9)
 +  pulumi:pulumi:Stack eks-cluster-eks-cluster-cd-test creating     at writeBlockSequence (/builds/org/app/example/pulumi/eks-cluster/node_modules/js-yaml/lib/js-yaml/dumper.js:521:9)
 +  pulumi:pulumi:Stack eks-cluster-eks-cluster-cd-test creating     at writeNode (/builds/org/app/example/pulumi/eks-cluster/node_modules/js-yaml/lib/js-yaml/dumper.js:740:9)
 +  pulumi:pulumi:Stack eks-cluster-eks-cluster-cd-test creating     at dump (/builds/org/app/example/pulumi/eks-cluster/node_modules/js-yaml/lib/js-yaml/dumper.js:817:7)
 +  pulumi:pulumi:Stack eks-cluster-eks-cluster-cd-test creating     at Object.safeDump (/builds/org/app/example/pulumi/eks-cluster/node_modules/js-yaml/lib/js-yaml/dumper.js:823:10)
 +  pulumi:pulumi:Stack eks-cluster-eks-cluster-cd-test creating     at pulumi.all.apply (/builds/org/app/example/pulumi/eks-cluster/node_modules/@pulumi/cluster.ts:216:27)
 +  pulumi:pulumi:Stack eks-cluster-eks-cluster-cd-test creating     at Output.<anonymous> (/builds/org/app/example/pulumi/eks-cluster/node_modules/@pulumi/pulumi/resource.js:279:47)
 +  pulumi:pulumi:Stack eks-cluster-eks-cluster-cd-test creating     at Generator.next (<anonymous>)
 +  pulumi:pulumi:Stack eks-cluster-eks-cluster-cd-test creating error: Running program '/builds/org/app/example/pulumi/eks-cluster' failed with an unhandled exception:
 +  pulumi:pulumi:Stack eks-cluster-eks-cluster-cd-test creating error: YAMLException: unacceptable kind of an object to dump [object Undefined]

Expose names for created storage classes

The EKS component has options to create Kubernetes storage classes, but the names of these storage classes are not exposed. This makes it difficult to use the created storage classes explicitly. We should expose these names.

Failing CloudInit and Kubelet setup during worker node startup

Within the past day, all EKS clusters I create via @pulumi/eks's new eks.Cluster have been failing to start up worker nodes. One symptom is that some or all of the nodes fail to join the EKS cluster.

A snippet of the log is pasted further below. The code used to create the cluster, which was working properly last week, is here:

import * as fs from "fs";
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";
import * as YAML from "js-yaml";

//
// Configuration
//
let config = new pulumi.Config("eks-cluster");
const environment = config.require("environment");
const vpcSubnetIds = config.require("subnet-ids").split(",");

let splunkConfig = new pulumi.Config("splunk");

//
// Actions to perform at the startup of any worker node in the cluster
//
const userData = `set -o xtrace
echo "Starting UserData for the EKS worker nodes"
yum install -y epel-release
yum update -y
yum install -y \
  jq \
  yq \
  cifs-utils \
  nfs-utils \
  nfs-common \
  nfs4-acl-tools \
  portmap \
  screen \
  vim \
  mountpoint \
  base64 \
  bind-utils \
  coreutils \
  util-linux
VOLUME_PLUGIN_DIR="/usr/libexec/kubernetes/kubelet-plugins/volume/exec"
mkdir -p "$VOLUME_PLUGIN_DIR/fstab~cifs"
cd "$VOLUME_PLUGIN_DIR/fstab~cifs"
curl -L -O https://raw.githubusercontent.com/fstab/cifs/master/cifs
chmod 755 cifs\
`


// Create the Kubernetes cluster and all associated artifacts and resources
const cluster = new eks.Cluster(environment+"-eks-cluster", {
    vpcId             : config.require("vpc-id"),
    subnetIds         : vpcSubnetIds,
    instanceType      : "m4.4xlarge",
    nodePublicKey     : config.require("worker-ssh-key"),
    nodeRootVolumeSize: 250,
    desiredCapacity   : 3,
    maxSize           : 5,
    minSize           : 1,
    nodeUserData      : userData,
    deployDashboard   : false,
});

// Write out the kubeconfig file
export const kubeconfig = cluster.kubeconfig;
cluster.kubeconfig.apply(kc => fs.writeFileSync("./config-"+environment+".kubeconfig.json", JSON.stringify(kc)))

// Write the kubeconfig to a location in S3 for safe keeping
cluster.kubeconfig.apply(kc => {
  let s3Kubeconfig = new aws.s3.BucketObject("kube_config/config-"+environment+".json", {
    bucket: "sdp-helm",
    content: JSON.stringify(kc),
  })}
);

// Allow SSH ingress to worker nodes
new aws.ec2.SecurityGroupRule("ssh-ingress", {
    type: "ingress",
    fromPort: 22,
    toPort: 22,
    protocol: "tcp",
    securityGroupId: cluster.nodeSecurityGroup.id,
    cidrBlocks: [ "0.0.0.0/0" ],
});

The journalctl output on one of the hosts shows the following problems:

failed to run Kubelet: could not init cloud provider "aws": error finding instance i-0f016b7e718a12801

Failed to start Apply the settings specified in cloud-config.

Unit cloud-config.service entered failed state.


Nov 06 01:06:13 ip-10-10-3-24.tableausandbox.com systemd[1]: Starting yum locked retry of update-motd...
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com update-motd[5484]: Yum database was locked, so we couldn't get fresh package info.
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com systemd[1]: Started yum locked retry of update-motd.
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com systemd[1]: Starting yum locked retry of update-motd.
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: One of the configured repositories failed (Unknown),
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: and yum doesn't have enough cached data to continue. At this point the only
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: safe thing yum can do is fail. There are a few ways to work "fix" this:
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: 1. Contact the upstream for the repository and get them to fix the problem.
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: 2. Reconfigure the baseurl/etc. for the repository, to point to a working
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: upstream. This is most often useful if you are using a newer
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: distribution release than is supported by the repository (and the
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: packages for the previous distribution release still work).
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: 3. Run the command with the repository temporarily disabled
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: yum --disablerepo=<repoid> ...
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: 4. Disable the repository permanently, so yum won't use it by default. Yum
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: will then just ignore the repository until you permanently enable it
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: again or use --enablerepo for temporary usage:
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: yum-config-manager --disable <repoid>
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: or
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: subscription-manager repos --disable=<repoid>
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: 5. Configure the failing repository to be skipped, if it is unavailable.
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: Note that yum will try to contact the repo. when it runs most commands,
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: so will have to try and fail each time (and thus. yum will be be much
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: slower). If it is a very temporary problem though, this is often a nice
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: compromise:
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: Cannot find a valid baseurl for repo: amzn2-core/2/x86_64
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: Could not retrieve mirrorlist http://amazonlinux.us-west-2.amazonaws.com/2/core/latest/x86_64/mirror.list error was
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: 12: Timeout on http://amazonlinux.us-west-2.amazonaws.com/2/core/latest/x86_64/mirror.list: (28, 'Resolving timed out after 5515 milliseconds')
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: Nov 06 01:06:18 cloud-init[4927]: util.py[WARNING]: Package upgrade failed
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: Nov 06 01:06:18 cloud-init[4927]: cc_package_update_upgrade_install.py[WARNING]: 1 failed with exceptions, re-raising the last one
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[4927]: Nov 06 01:06:18 cloud-init[4927]: util.py[WARNING]: Running module package-update-upgrade-install (<module 'cloudinit.config.cc_package_update_u
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com systemd[1]: cloud-config.service: main process exited, code=exited, status=1/FAILURE
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com systemd[1]: Failed to start Apply the settings specified in cloud-config.
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com systemd[1]: Unit cloud-config.service entered failed state.
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com systemd[1]: cloud-config.service failed.
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com systemd[1]: Starting Execute cloud user/final scripts...
Nov 06 01:06:18 ip-10-10-3-24.tableausandbox.com cloud-init[5517]: Cloud-init v. 18.2-72.amzn2.0.6 running 'modules:final' at Tue, 06 Nov 2018 01:06:18 +0000. Up 155.71 seconds.
Nov 06 01:06:20 ip-10-10-3-24.tableausandbox.com cloud-init[5517]: Cluster "kubernetes" set.
Nov 06 01:06:20 ip-10-10-3-24.tableausandbox.com systemd[1]: Reloading.
Nov 06 01:06:20 ip-10-10-3-24.tableausandbox.com cloud-init[5517]: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
Nov 06 01:06:20 ip-10-10-3-24.tableausandbox.com systemd[1]: Reloading.
Nov 06 01:06:20 ip-10-10-3-24.tableausandbox.com systemd[1]: Started Kubernetes Kubelet.
Nov 06 01:06:20 ip-10-10-3-24.tableausandbox.com systemd[1]: Starting Kubernetes Kubelet...

...

Nov 06 01:08:22 ip-10-10-3-24.tableausandbox.com kubelet[5645]: F1106 01:08:22.657450    5645 server.go:233] failed to run Kubelet: could not init cloud provider "aws": error finding instance i-0f016b7e718a12801
Nov 06 01:08:22 ip-10-10-3-24.tableausandbox.com systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Nov 06 01:08:22 ip-10-10-3-24.tableausandbox.com systemd[1]: Unit kubelet.service entered failed state.
Nov 06 01:08:22 ip-10-10-3-24.tableausandbox.com systemd[1]: kubelet.service failed.
Nov 06 01:08:27 ip-10-10-3-24.tableausandbox.com systemd[1]: kubelet.service holdoff time over, scheduling restart.

Destroy fails when AWS resources cannot be found

When destroying a k8s cluster, if an AWS resource was removed out of band and Pulumi isn't aware of it, the destroy ultimately fails when the resource cannot be found:

$ pul destroy
Previewing destroy (eks-demo):

     Type                                  Name                      Plan
 -   pulumi:pulumi:Stack                   eks-hello-world-eks-demo  delete
 -   ├─ eks:index:Cluster                  helloWorld                delete
 -   │  ├─ pulumi-nodejs:dynamic:Resource  helloWorld-cfnStackName   delete
 -   │  ├─ eks:index:ServiceRole           helloWorld-eksRole        delete
 -   │  └─ eks:index:ServiceRole           helloWorld-instanceRole   delete
 -   └─ aws-infra:network:Network          vpc                       delete
 -      ├─ aws:ec2:InternetGateway         vpc                       delete
 -      ├─ aws:ec2:Eip                     vpc-nat-1                 delete
 -      ├─ aws:ec2:Eip                     vpc-nat-0                 delete
 -      └─ aws:ec2:Vpc                     vpc                       delete

Resources:
    - 10 to delete

Do you want to perform this destroy? yes
Destroying (eks-demo):

     Type                        Name                      Status                  Info
     pulumi:pulumi:Stack         eks-hello-world-eks-demo
 -   └─ aws:ec2:InternetGateway  vpc                       **deleting failed**     1 error

Diagnostics:
  aws:ec2:InternetGateway (vpc):
    error: Plan apply failed: deleting urn:pulumi:eks-demo::eks-hello-world::aws-infra:network:Network$aws:ec2/internetGateway:InternetGateway::vpc: Error waiting for internet gateway (igw-0305d3fc89e01176e) to detach: couldn't find resource (31 retries)

Expose node instance role

Hey! Love the work you are doing on Pulumi. Just started experimenting with this and hit a wall when trying to work with kube2iam. kube2iam requires that you allow your worker nodes to assume the roles of individual pods, which requires the nodeInstanceRoleARN to create a trust relationship on the roles to be assumed. I guess one could jump through some hoops (or maybe I'm missing something) to get the ARN, but I don't see why it shouldn't be exposed from the Cluster class. This probably applies to some other properties as well, and would increase the usability of this component.
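
For illustration, this is roughly the trust relationship kube2iam needs once the node instance role ARN is available (a sketch; the ARN is a placeholder standing in for the value this issue asks to have exposed):

import * as aws from "@pulumi/aws";

// Hypothetical: the worker nodes' instance role ARN, which this issue asks
// the Cluster class to expose. Shown here as a placeholder string.
const nodeInstanceRoleArn = "arn:aws:iam::111111111111:role/worker-node-instance-role";

// A role for pods, assumable by the worker nodes via kube2iam.
const podRole = new aws.iam.Role("pod-role", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Principal: { AWS: nodeInstanceRoleArn },
            Action: "sts:AssumeRole",
        }],
    }),
});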

When getting an existing EKS cluster, there isn't a way to pull the kubeconfig

In the current eks object, we can use the cluster props: certificateAuthority, name, and endpoint to fill in the blanks in the kubeconfig file https://gist.github.com/b01eeecacccc3e284771463ed626af5e

We only do this right now in the eks object constructor; we should actually have a kubeconfig() method on the object.

see community slack: https://pulumi-community.slack.com/archives/C84L4E3N1/p1550869077185900?thread_ts=1550862777.160400&cid=C84L4E3N1
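
Until such a method exists, the kubeconfig can be assembled from those three properties along the lines of the gist above; a sketch using aws.eks.getCluster (the cluster name is a placeholder, and the exec section assumes aws-iam-authenticator is installed):

import * as aws from "@pulumi/aws";

// Look up an existing EKS cluster and build a kubeconfig for it.
const existing = aws.eks.getCluster({ name: "my-existing-cluster" });

export const kubeconfig = existing.then(c => JSON.stringify({
    apiVersion: "v1",
    kind: "Config",
    clusters: [{
        name: "kubernetes",
        cluster: {
            server: c.endpoint,
            "certificate-authority-data": c.certificateAuthority.data,
        },
    }],
    contexts: [{ name: "aws", context: { cluster: "kubernetes", user: "aws" } }],
    "current-context": "aws",
    users: [{
        name: "aws",
        user: {
            exec: {
                apiVersion: "client.authentication.k8s.io/v1alpha1",
                command: "aws-iam-authenticator",
                args: ["token", "-i", c.name],
            },
        },
    }],
}));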

Allow custom worker AMI's to be used

The company i work for has a requirement to use it's own ami's for worker notes with some customizations over the AWS AMI. After looking at the code that is not currently possible, given the public api. Could you provide an option to pass the ami though the ClusterOptions and have the default behaviour work as for the custom user data?

Regards,
Cosmin

Move from secgroup in-line rules to standalone secgroup rules

TF has a long, outstanding bug with the use of in-line rules in secgroups and how they're gathered (hashicorp/terraform-provider-aws#4416), which still exists though the PR suggests otherwise.

TF also states that the conjunction of secgroups with in-line rules and standalone rules do not play well together, and we use both in pulumi/eks.

Moving to standalone secgroup rules is a better approach long-term that we should adopt.

ref: #69

aws-auth configmap does not get re-created after a cluster replacement, preventing nodes from joining the cluster

During a replacement of an EKS cluster, though the replacement succeeds, the aws-auth configmap used for user / role mappings does not get recreated. This in turn prevents the new worker nodes from joining the cluster.

The aws-auth configmap gets created here. Because none of the IAM resources it depends on get replaced or updated during the cluster replacement, the aws-auth does not need to be replaced or updated either. However, during the tear down of the old cluster, the configMap goes away with the cluster, and the pulumi/kube provider does not seem to notice the need to recreate the aws-auth configMap for the new cluster.

Per discussions offline w/ Luke, the thought was that the kube provider kx-eks-cluster-eks-k8s should have been replaced instead of updated. Since the provider is the only dependency of aws-auth, if the provider were replaced, it would have created aws-auth.

Changes:
 
    Type                           Name                                   Operation
-+  aws:eks:Cluster                kx-eks-cluster-eksCluster              replaced
~   pulumi:providers:kubernetes    kx-eks-cluster-eks-k8s                 updated
~   aws:ec2:SecurityGroup          kx-eks-cluster-nodeSecurityGroup       updated
~   pulumi-nodejs:dynamic:Resource kx-eks-cluster-vpc-cni                 updated
-+  aws:ec2:LaunchConfiguration    kx-eks-cluster-nodeLaunchConfiguration replaced
~   aws:cloudformation:Stack       kx-eks-cluster-nodes                   updated
~   pulumi:providers:kubernetes    kx-eks-cluster-provider                updated
 
Resources:
    +-replaced 2
    ~ updated 5
    18 unchanged

To repro this, we'll use the same code from #69 (comment).

Steps:

  1. Download pulumi-full-aws-eks.zip
  2. Run pulumi up in the unzipped dir
  3. After initial deployment is complete, comment out line #74 subnetIds.pop(), and run another update.
    • This simulates having a VPC with existing subnets, and increasing from using 2 subnets in the cluster to 3 subnets (#69 (comment)).
  4. After about ~20 min the EKS replacement onto 3 subnets will complete
  5. kubectl cluster-info returns success
  6. kubectl get pods --all-namespaces returns core-dns Pods in Pending, as there aren't any workers to deploy onto.
  7. kubectl get cm aws-auth -n kube-system -o yaml returns nothing
  8. kubectl get nodes -o wide --show-labels returns nothing

/cc @lukehoban @hausdorff

cluster instanceRole does not contain an aws.iam.role object but just a string

The eks cluster creation exposes an instance role which is the default cluster.instanceRole.

When I do

cluster.instanceRole.apply(role => console.log(role.arn));

I expect the role arn to be displayed but it shows up as not defined.

I go a step further and try:

cluster.instanceRole.apply(role => console.log("arn: " + role.arn + " name: " + role.name + " role: " + role));

Output:
arn: undefined name: undefined role: xxx-eks-cluster-instanceRole-role-e3b9ec3

I think it's a bug and needs to be fixed.
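
In the meantime, since the value observed above resolves to the role's name, the full role can be looked up explicitly; a sketch, assuming cluster.instanceRole behaves as shown in the output above:

import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";
import * as pulumi from "@pulumi/pulumi";

const cluster = new eks.Cluster("cluster");

// The exposed value resolves to the IAM role's *name*, so look the role up
// explicitly to get its ARN.
const instanceRoleArn = pulumi.output(cluster.instanceRole).apply((name: any) =>
    aws.iam.getRole({ name }).then(role => role.arn));

instanceRoleArn.apply(arn => console.log("arn: " + arn));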

Unable to replace EKS cluster: Error revoking security group ingress rules

Performing a change that requires an EKS cluster to be replaced fails, and leaves the stack in a state which can be neither destroyed nor updated.

Pulumi version: v0.16.14

I have a stack containing an EKS cluster. Updating the subnetIds causes a "replace" (as expected); however, the replace fails.

Diagnostics:
  aws:ec2:SecurityGroup (cluster-nodeSecurityGroup):
    error: Plan apply failed: 1 error occurred:
    
    * updating urn:pulumi:test::eks-test::eks:index:Cluster$aws:ec2/securityGroup:SecurityGroup::cluster-nodeSecurityGroup: Error revoking security group ingress rules: InvalidPermission.NotFound: The specified rule does not exist in this security group.
    	status code: 400, request id: 577f76ae-06d1-4030-8f84-a8844b89a6d3

The stack is then in a bad state, as the EKS cluster was replaced, but other resources were not updated.
Further updates (e.g. trying to revert to the previous subnetIds) fail with the same issue.
If I have deployments on the cluster, I cannot destroy the stack either, as these cannot be deleted (the cluster they belonged to no longer exists).

Expose eks cluster class

Currently, the eksCluster variable is not exposed. Some Helm charts require the cluster name, so exposing it would be very useful.

Changing `roleMappings` leads to replacements but should not

A user on slack reported that changes to roleMappings lead to replacements of Kubernetes resources. It is important that this be possible without replacement, and there is no fundamental reason this should require replacement at the EKS/Kubernetes layer.

Updated the roleMappings with a new mapping and ran preview:

❯ pulumi preview
Previewing update (org/eks-name):

    Type                                                           Name                     Plan        Info
    pulumi:pulumi:Stack                                            eks-eks-name
>-  ├─ pulumi:pulumi:StackReference                                identityStack            read
    ├─ kubernetes:helm.sh:Chart                                    kube-state-metrics
+-  │  ├─ kubernetes:extensions:Deployment                         kube-state-metrics       replace     [diff: ~provider]
+-  │  ├─ kubernetes:core:Service                                  kube-state-metrics       replace     [diff: ~provider]
+-  │  ├─ kubernetes:rbac.authorization.k8s.io:ClusterRole         kube-state-metrics       replace     [diff: ~provider]
+-  │  ├─ kubernetes:rbac.authorization.k8s.io:ClusterRoleBinding  kube-state-metrics       replace     [diff: ~provider]
+-  │  └─ kubernetes:core:ServiceAccount                           kube-state-metrics       replace     [diff: ~provider]
+-  ├─ kubernetes:core:Namespace                                   dev                      replace     [diff: ~provider]
    └─ eks:index:Cluster                                           cluster
+-     ├─ pulumi:providers:kubernetes                              cluster-provider         replace     [diff: ~kubeconfig]
+-     ├─ aws:cloudformation:Stack                                 cluster-nodes            replace
+-     └─ kubernetes:core:ConfigMap                                cluster-nodeAccess       replace     [diff: ~data]

Resources:
   +-9 to replace
   24 unchanged
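
For reference, a roleMappings entry has roughly this shape (the ARN is a placeholder); editing it should only need to update the aws-auth ConfigMap data, not replace downstream Kubernetes resources:

import * as eks from "@pulumi/eks";

const cluster = new eks.Cluster("cluster", {
    roleMappings: [{
        // Placeholder ARN for an IAM role that should map into the cluster.
        roleArn: "arn:aws:iam::111111111111:role/eks-admins",
        username: "eks-admin",
        groups: ["system:masters"],
    }],
});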

Pulumi shouldn't fetch the latest AMI by default

When using pulumi update with an EKS cluster, pulumi's default behavior is to always fetch the latest version of the EKS AMI if nodeAmiId isn't specified.

While this behavior can be convenient, unexpected upgrades to the EKS AMI can cause unexpected downtime or break the cluster entirely.

For production situations, it's strongly recommended that you explicitly pass the nodeAmiId parameter so that you have full control over when your nodes upgrade.
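
A sketch of the recommended pinning, using the nodeAmiId option named above (the AMI ID is a placeholder for a specific EKS-optimized image):

import * as eks from "@pulumi/eks";

const cluster = new eks.Cluster("cluster", {
    // Pin the worker AMI explicitly so node upgrades only happen when this
    // value is changed deliberately. The ID below is a placeholder.
    nodeAmiId: "ami-0123456789abcdef0",
});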

pulumi-eks's default behavior should enforce best practices. I propose the following:

  1. Make nodeAmiId a required parameter (force users to make a conscious decision about the AMI that they want to use).
  2. Provide the existing fetch behavior only as a convenience function for users that don't need fine control over when their cluster upgrades.
  3. Warn people in the documentation for the above helper function about not using this in production.

See https://pulumi-community.slack.com/archives/C84L4E3N1/p1553791785005000 for more discussion.

Expose method to set userdata for creating EKS worker nodes

My use case requires that the EKS worker nodes install some custom drivers and packages (e.g. nfs-utils, flexvolume drivers, etc.) any time a new host is started and joins the cluster.

Currently this is accomplished by explicitly using CloudFormation templates, but would be simpler via pulumi/eks.

My specific use is in python.
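
For reference, the Node.js SDK already covers this with the nodeUserData cluster option (used in the worker-node startup example earlier on this page); the request is for the equivalent knob in Python. A minimal sketch of the existing TypeScript option:

import * as eks from "@pulumi/eks";

// Shell commands appended to the worker nodes' bootstrap user data.
const userData = `set -o xtrace
yum install -y nfs-utils`;

const cluster = new eks.Cluster("cluster", {
    nodeUserData: userData,
});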

[Feature Request] Add ALB controller

Hi! 👋
GCP has an add-on to enable their load balancer ingress controller directly from the API and console. It allows us to create an ingress using multiple paths and hosts, enable health checking, etc.

By default, the Kubernetes LoadBalancer service in EKS creates a classic load balancer, which lacks a lot of great features, like WebSockets and path mapping. It's also possible to enable Network Load Balancing using a service annotation, but not the Application Load Balancer.

AWS also has its own ingress controller, which cannot be enabled at cluster creation; it requires some additional steps to install. A tutorial can be found here.

Since this package is all about simplifying the experience of using EKS, and creating a load balancer is potentially a very common task, I believe this functionality could be added here, maybe behind a flag like enableAlbController, since it adds new resources to the cluster. 😅

Implementation Details

To enable the ALB ingress controller, it's necessary to:

  1. Add a number of IAM permissions to the worker nodes' IAM role.
  2. Add some RBAC roles and its service account.
  3. Add the ingress controller.
  4. Create an ingress setting the annotation kubernetes.io/ingress.class to alb. More annotations can be found here.

Most of these steps can be done directly on the cluster through this Helm Chart or by adding them to the cluster using @pulumi/kubernetes, but I've struggled to implement the IAM roles. @pulumi/eks already creates its own instance role, so I'm not sure how I can implement a custom instance role without replacing the one created by the package or forking the package.

Alternatives

An alternative is to add a new option that allows the instance role to be customized, something like instanceRole to replace the default instance role created by the package or something like additionalInstanceRolePolicies to add more policies to the default instance role.

Support worker node draining during ASG updates

When the ASG decides to terminate instances, it would ideally drain them first so that pods can cleanly come back up on a new node, without downtime, before the existing pods are removed.

This library would ideally support something built-in to do this, along the lines of https://aws.amazon.com/blogs/compute/how-to-automate-container-instance-draining-in-amazon-ecs/, but interacting with kubectl drain instead.

This would be similar to https://github.com/kubernetes/kops/blob/master/docs/cli/kops_rolling-update_cluster.md.

Improve test coverage

  • Runtime validation that cluster is usable under all configurations
  • Validation of most/all configuration options

Expose ASG name from NodeGroup

We use a CloudFormation stack to manage the AutoScalingGroup that manages the instances for a NodeGroup (including the default NodeGroup that a Cluster creates).

We currently expose the cfnStack itself, but not the name/identity of the AutoScalingGroup. Because we don't have an Output on the cfnStack (unlike the four other places we use this pattern!), users can't extract it either.

In the short term, we should:

  1. Add the Output
  2. Use that to publish an output property on NodeGroup with the ASG name
  3. Provide a way to get this same NodeGroup data for the default NodeGroup on a Cluster.

Longer term, we should look at building this on top of https://github.com/pulumi/pulumi-aws-infra/tree/master/nodejs/aws-infra/autoscaling.

Ability to override the name of the cluster to be just the name

For a given cluster name, e.g. cluster-abc, this module transforms the name to cluster-abc-eksCluster-a123456z.
This is not intuitive, and adding -eksCluster to an EKS cluster is redundant IMO.
We should let users configure the name as required, without adding suffixes or even the alphanumeric suffix added by Pulumi auto-naming.

EKS cluster creation does not handle assumed roles

Hi, trying to create an EKS cluster using a VPC (awsx) from the same stack. Here's the code:

module.exports = function(vpc) {
  const allSubnetIds = pulumi.all([vpc.getSubnetIds('private'), vpc.getSubnetIds('public')])
    .apply(([private, public]) => private.concat(public))
  return new eks.Cluster('eks_cluster', {
    deployDashboard: true,
    desiredCapacity: 1,
    instanceType: aws.ec2.M5InstanceLarge,
    maxSize: 2,
    nodeRootVolumeSize: 100,
    subnetIds: allSubnetIds,
    vpcId: vpc.vpc.id
  })
}

Getting the following errors:

     Type                                  Name                 Status                  Info
     pulumi:pulumi:Stack                   account-dev          **failed**              2 errors; 16 messages
     └─ eks:index:Cluster                  eks_cluster                                  
 +      └─ pulumi-nodejs:dynamic:Resource  eks_cluster-vpc-cni  **creating failed**     1 error
 
Diagnostics:
  pulumi:pulumi:Stack (account-dev):
    Method handler checkConfig for /pulumirpc.ResourceProvider/CheckConfig expected but not provided
    Method handler diffConfig for /pulumirpc.ResourceProvider/DiffConfig expected but not provided
    unable to recognize "/var/folders/28/z072xy4d5970vkd6l3vb_r780000gn/T/tmp-32396RlquTqkLiKdX.tmp": Unauthorized
    unable to recognize "/var/folders/28/z072xy4d5970vkd6l3vb_r780000gn/T/tmp-32396RlquTqkLiKdX.tmp": Unauthorized
    unable to recognize "/var/folders/28/z072xy4d5970vkd6l3vb_r780000gn/T/tmp-32396RlquTqkLiKdX.tmp": Unauthorized
    unable to recognize "/var/folders/28/z072xy4d5970vkd6l3vb_r780000gn/T/tmp-32396RlquTqkLiKdX.tmp": Unauthorized
    unable to recognize "/var/folders/28/z072xy4d5970vkd6l3vb_r780000gn/T/tmp-32396RlquTqkLiKdX.tmp": Unauthorized
 
    (node:32400) ExperimentalWarning: queueMicrotask() is experimental.
    Error: invocation of aws:ec2/getSubnet:getSubnet returned an error: grpc: the client connection is closing
     ...

 
    Method handler checkConfig for /pulumirpc.ResourceProvider/CheckConfig expected but not provided
    Method handler diffConfig for /pulumirpc.ResourceProvider/DiffConfig expected but not provided
 
    error: Running program '/Users/igor/work/bootstrap/infra/pulumi/account' failed with an unhandled exception:
    error: Error: invocation of aws:ec2/getSubnet:getSubnet returned an error: grpc: the client connection is closing
        at monitor.invoke ...
 
  pulumi-nodejs:dynamic:Resource (eks_cluster-vpc-cni):
    error: Plan apply failed: Command failed: kubectl apply -f /var/folders/28/z072xy4d5970vkd6l3vb_r780000gn/T/tmp-32396RlquTqkLiKdX.tmp
    unable to recognize "/var/folders/28/z072xy4d5970vkd6l3vb_r780000gn/T/tmp-32396RlquTqkLiKdX.tmp": Unauthorized
    unable to recognize "/var/folders/28/z072xy4d5970vkd6l3vb_r780000gn/T/tmp-32396RlquTqkLiKdX.tmp": Unauthorized
    unable to recognize "/var/folders/28/z072xy4d5970vkd6l3vb_r780000gn/T/tmp-32396RlquTqkLiKdX.tmp": Unauthorized
    unable to recognize "/var/folders/28/z072xy4d5970vkd6l3vb_r780000gn/T/tmp-32396RlquTqkLiKdX.tmp": Unauthorized
    unable to recognize "/var/folders/28/z072xy4d5970vkd6l3vb_r780000gn/T/tmp-32396RlquTqkLiKdX.tmp": Unauthorized

EKS package version: 0.17.0
AWSX version: 0.17.0

For existing AWS VPC & subnets created outside of pulumi, pulumi needs a means to tag any public or private subnets for Kubernetes use that aren't already used by Workers

To use existing AWS VPC subnets created outside of pulumi with EKS, the user must manually tag the desired subnets on AWS with the required Kubernetes k/v pair.

If they are not manually tagged, then Kubernetes will not be able to discover them when needing to create Public or Private Load Balancers on AWS in those subnets - unless those subnets were already in use by running instances of the Workers.

Specifically, the VPC and subnets that the Workers are running in are automatically tagged for us in AWS by the EKS service with the key: kubernetes.io/cluster/${clusterName}, and the value: shared, where ${clusterName} is the name of the new EKS cluster. The cluster name is not known to the user or Pulumi until after the cluster has been created and auto-named.

However, this tagging is not done for any other public or private subnets that the Workers aren't already running in, as they are 1) not occupied by running Workers and 2) consequently, not automatically tagged by the EKS service.

The manual tagging of these other subnets is a required workaround to enable a couple of use cases, such as when a user wants to:

  1. Create a Public LoadBalancer in public subnets across AZs, when the cluster is configured to have its Workers run in private subnets.
  2. Create a Private LoadBalancer in other private subnets across AZs that the Worker instances are not already running in.

See this gist for a repro of use case #1, where Workers are in private subnets and a Public LoadBalancer Service never comes up if you don't have public subnets appropriately tagged in AWS for Kubernetes discovery. Only once you properly tag a public subnet in the VPC for the repro does the Public LoadBalancer get provisioned for the cluster.

After my attempt above, I tried going down the path of retrieving the existing public subnet object, using a couple of approaches listed below, to modify its .tags props, but this does not seem possible:

  • Tried retrieving an existing subnet object using aws.ec2.Subnet.get(...)
  • Tried retrieving an existing Vpc object using awsx.Network.fromVpc(...)
  • Tried using the awsx.ec2.Subnet(...) constructor, with the Vpc object returned from above as a param.

However, none of my attempts allowed me to modify the .tags prop on the existing subnets as needed.

The Vpc/awsx.Network object returned from awsx.Network.fromVpc(...) captures all private and public subnets already, so defining and leveraging this object IMO feels like it's part of the right approach: retrieve the existing subnet(s) in question and tag them as needed after the cluster has been created and its pulumi auto-generated name is known. I'm certainly open to hearing other alternatives if I'm misunderstanding the use of the packages, or how to best integrate this workaround into the right package(s).
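
One possible shape for this, assuming a version of @pulumi/aws that includes the standalone aws.ec2.Tag resource and that the underlying aws.eks.Cluster is reachable as cluster.eksCluster (as in the getSubnetIdInAZ example earlier on this page; the subnet IDs are placeholders):

import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";
import * as pulumi from "@pulumi/pulumi";

const cluster = new eks.Cluster("cluster");

// Placeholder IDs for subnets the Workers don't run in, but that Kubernetes
// should be able to place load balancers into.
const extraSubnetIds = ["subnet-aaaaaaaa", "subnet-bbbbbbbb"];

// Tag each subnet for Kubernetes discovery once the auto-named cluster name is known.
extraSubnetIds.map((subnetId, i) =>
    new aws.ec2.Tag(`k8s-discovery-tag-${i}`, {
        resourceId: subnetId,
        key: pulumi.interpolate`kubernetes.io/cluster/${cluster.eksCluster.name}`,
        value: "shared",
    }));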

It doesn't seem like this is necessarily an issue with pulumi/pulumi-eks, but more about better understanding how to leverage and/or improve @pulumi/pulumi, @pulumi/pulumi-aws and @pulumi/pulumi-awsx to auto-tag any existing subnets in AWS needed by the user.

AWS Network resources for a cluster are not distinguishable from cluster to cluster

If one stands up many EKS clusters, all of the network infrastructure is identically named, and this makes it hard to pinpoint which resources belong to which clusters.

For example, we prefix all of the Cluster and IAM resources with the name of the cluster, but do not do the same for the Networking resources. This ends up creating many VPCs with the name vpc if multiple clusters are created.

Currently, the user is left to either inspect the networking resources in the EKS dashboard, or inspect the tags of the given resources, to pinpoint which resource belongs to which cluster.

$ pulumi up -y
Previewing update (eks-demo):

     Type                                          Name                                Plan
 +   pulumi:pulumi:Stack                           eks-hello-world-eks-demo            create
 +   ├─ eks:index:Cluster                          helloWorld                          create
 +   │  ├─ eks:index:ServiceRole                   helloWorld-eksRole                  create
 +   │  │  ├─ aws:iam:Role                         helloWorld-eksRole-role             create
 +   │  │  ├─ aws:iam:RolePolicyAttachment         helloWorld-eksRole-4b490823         create
 +   │  │  └─ aws:iam:RolePolicyAttachment         helloWorld-eksRole-90eb1c99         create
 +   │  ├─ eks:index:ServiceRole                   helloWorld-instanceRole             create
 +   │  │  ├─ aws:iam:Role                         helloWorld-instanceRole-role        create
 +   │  │  ├─ aws:iam:RolePolicyAttachment         helloWorld-instanceRole-3eb088f2    create
 +   │  │  ├─ aws:iam:RolePolicyAttachment         helloWorld-instanceRole-03516f97    create
 +   │  │  └─ aws:iam:RolePolicyAttachment         helloWorld-instanceRole-e1b295bd    create
 +   │  ├─ pulumi-nodejs:dynamic:Resource          helloWorld-cfnStackName             create
 +   │  ├─ aws:ec2:SecurityGroup                   helloWorld-eksClusterSecurityGroup  create
 +   │  ├─ aws:iam:InstanceProfile                 helloWorld-instanceProfile          create
 +   │  ├─ aws:eks:Cluster                         helloWorld-eksCluster               create
 +   │  ├─ pulumi-nodejs:dynamic:Resource          helloWorld-vpc-cni                  create
 +   │  ├─ pulumi:providers:kubernetes             helloWorld-eks-k8s                  create
 +   │  ├─ aws:ec2:SecurityGroup                   helloWorld-nodeSecurityGroup        create
 +   │  ├─ kubernetes:core:ConfigMap               helloWorld-nodeAccess               create
 +   │  ├─ kubernetes:storage.k8s.io:StorageClass  helloworld-gp2                      create
 +   │  ├─ aws:ec2:SecurityGroupRule               helloWorld-eksClusterIngressRule    create
 +   │  ├─ aws:ec2:LaunchConfiguration             helloWorld-nodeLaunchConfiguration  create
 +   │  ├─ aws:cloudformation:Stack                helloWorld-nodes                    create
 +   │  └─ pulumi:providers:kubernetes             helloWorld-provider                 create
 +   └─ aws-infra:network:Network                  vpc                                 create
 +      ├─ aws:ec2:Vpc                             vpc                                 create
 +      ├─ aws:ec2:Eip                             vpc-nat-0                           create
 +      ├─ aws:ec2:Eip                             vpc-nat-1                           create
 +      ├─ aws:ec2:InternetGateway                 vpc                                 create
 +      ├─ aws:ec2:RouteTable                      vpc                                 create
 +      ├─ aws:ec2:Subnet                          vpc-0                               create
 +      ├─ aws:ec2:Subnet                          vpc-nat-1                           create
 +      ├─ aws:ec2:Subnet                          vpc-nat-0                           create
 +      ├─ aws:ec2:Subnet                          vpc-1                               create
 +      ├─ aws:ec2:NatGateway                      vpc-nat-1                           create
 +      ├─ aws:ec2:RouteTableAssociation           vpc-nat-1                           create
 +      ├─ aws:ec2:NatGateway                      vpc-nat-0                           create
 +      ├─ aws:ec2:RouteTableAssociation           vpc-nat-0                           create
 +      ├─ aws:ec2:RouteTable                      vpc-nat-1                           create
 +      ├─ aws:ec2:RouteTableAssociation           vpc-1                               create
 +      ├─ aws:ec2:RouteTable                      vpc-nat-0                           create
 +      └─ aws:ec2:RouteTableAssociation           vpc-0                               create
