
EKS Quickstart App Dev

This repo contains an initial set of cluster components to be installed and configured by eksctl through GitOps.

Components

Prerequisites

A running EKS cluster with IAM policies for:

  • ALB ingress
  • auto-scaler
  • CloudWatch

These policies can be added to a nodegroup by including the following iam options in your nodegroup config:

nodeGroups:
  - iam:
      withAddonPolicies:
        albIngress: true
        autoScaler: true
        cloudWatch: true

N.B.: policies are configured at the nodegroup level. Therefore, depending on your use case, you may want to:

  • add these policies to all nodegroups,
  • add node selectors to the ALB ingress, auto-scaler and CloudWatch pods, so that they are deployed on the nodes configured with these policies.
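For the second option, a sketch of what the nodegroup label and matching nodeSelector could look like; the nodegroup name and the label key/value (role: addons) are illustrative assumptions, not part of this quickstart:

```yaml
# eksctl config: label the nodegroup that carries the addon IAM policies
nodeGroups:
  - name: addons          # hypothetical nodegroup name
    labels:
      role: addons        # hypothetical label
    iam:
      withAddonPolicies:
        albIngress: true
        autoScaler: true
        cloudWatch: true

# pod template fragment, e.g. in the CloudWatch agent DaemonSet,
# pinning the pod onto the labelled nodes:
#
# spec:
#   nodeSelector:
#     role: addons
```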

How to access workloads

For security reasons, this quickstart profile does not expose any workload publicly. However, should you want to access one of the workloads, there are several options.

Port-forwarding

You can port-forward into a pod, so that you (and only you) can access it locally.

For example, for demo/podinfo:
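A minimal sketch, assuming podinfo is deployed as a Deployment named podinfo in the demo namespace and serves on port 9898 (podinfo's default):

```shell
# Forward local port 9898 to the podinfo pod in the demo namespace.
# Deployment name and container port are assumptions based on podinfo's defaults.
kubectl port-forward --namespace demo deploy/podinfo 9898:9898

# In another terminal, the app should now answer locally:
curl http://localhost:9898
```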

Ingress

You could expose a service publicly, at your own risk, via ALB ingress.

N.B.: the ALB ingress controller requires services:

  • to be of NodePort type,
  • to have the following annotations:
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/scheme: internet-facing

NodePort services

For any NodePort service:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ${name}
  namespace: ${namespace}
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: ${service-app-selector}
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: ${service-name}
              servicePort: 80

A few minutes after deploying the above Ingress object, you should be able to see the public URL for the service:

$ kubectl get ingress --namespace demo podinfo
NAME      HOSTS   ADDRESS                                                                     PORTS   AGE
podinfo   *       xxxxxxxx-${namespace}-${name}-xxxx-xxxxxxxxxx.${region}.elb.amazonaws.com   80      1s

HelmRelease objects

For HelmRelease objects, you would have to configure spec.values.service and spec.values.ingress, e.g. for demo/podinfo:

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: demo
spec:
  releaseName: podinfo
  chart:
    git: https://github.com/stefanprodan/podinfo
    ref: 3.0.0
    path: charts/podinfo
  values:
    service:
      enabled: true
      type: NodePort
    ingress:
      enabled: true
      annotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/scheme: internet-facing
      path: /*

N.B.: the above HelmRelease

  • changes the type of podinfo's service from its default value, ClusterIP, to NodePort,
  • adds the annotations required for the ALB ingress controller to expose the service, and
  • exposes all of podinfo's URLs, so that all assets can be served over HTTP.

A few minutes after deploying the above HelmRelease object, you should be able to see the following Ingress object, and the public URL for podinfo:

$ kubectl get ingress --namespace demo podinfo
NAME      HOSTS   ADDRESS                                                             PORTS   AGE
podinfo   *       xxxxxxxx-demo-podinfo-xxxx-xxxxxxxxxx.${region}.elb.amazonaws.com   80      1s

Securing your endpoints

For a production-grade deployment, it's recommended to secure your endpoints with SSL. See Ingress annotations for SSL.
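As a sketch, assuming an ACM certificate is already available, HTTPS can be enabled on the ALB with annotations along these lines (the certificate ARN is a placeholder; check the ALB ingress annotations documentation for your controller version):

```yaml
annotations:
  kubernetes.io/ingress.class: alb
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
  alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:${region}:${account-id}:certificate/${certificate-id}
```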

Any sensitive service that needs to be exposed must have some form of authentication. To add authentication to Grafana, for example, see Grafana configuration. To add authentication to other components, please consult their documentation.

Get in touch

Create an issue, or log in to Weave Community Slack (#eksctl) (signup).

Weaveworks follows the CNCF Code of Conduct. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting a Weaveworks project maintainer, or Alexis Richardson ([email protected]).

Contributors

2opremio, callisto13, cpu1, errordeveloper, gazal-k, ingordigia, marccarre, martina-if, matthewhembree, michaelbeaumont, stefanprodan, stevenroussey-privicy


eks-quickstart-app-dev's Issues

Expose demo/podinfo via the ALB Ingress controller

eks-quickstart-app-dev currently creates an ALB Ingress controller, but we do not do anything with it, and it isn't obvious to the end user how to leverage it. We should add an Ingress resource for at least demo/podinfo -- given this is our "demo" app of choice -- in order to (1) prove that ingress works, and (2) show end users how to leverage the ALB in their cluster to expose services.

@marccarre podinfo can be safely exposed outside the cluster to demo how the ingress works. The podinfo chart comes with an ingress template; I would enable it in the HelmRelease file.

Here is an example https://github.com/stefanprodan/gitops-helm-workshop/blob/master/cluster/releases/podinfo.yaml#L23

Originally posted by @stefanprodan

Update README

The README only describes the kubernetes-dashboard component.

[docs] Add authentication to grafana

Add authentication to grafana and expose it via ALB ingress.

This is a simple step, and it needs to be documented that it is for demonstration purposes only; a production setup will require TLS and a properly secured endpoint.
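As a sketch only: with the prometheus-operator chart used by this quickstart, Grafana admin credentials can be set via chart values in the HelmRelease. The exact value keys depend on the chart version, and committing a password to Git is itself only acceptable for a demo:

```yaml
spec:
  values:
    grafana:
      adminUser: admin
      adminPassword: change-me   # demo only; use a Kubernetes Secret for anything real
```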

Prometheus does not start

Logs of helm-operator:

helm-operator ts=2020-05-09T11:32:50.150863936Z caller=release.go:75 component=release release=prometheus-operator targetNamespace=monitoring resource=monitoring:helmrelease/prometheus-operator helmVersion=v3 info="starting sync run"
helm-operator ts=2020-05-09T11:32:50.16400132Z caller=release.go:268 component=release release=prometheus-operator targetNamespace=monitoring resource=monitoring:helmrelease/prometheus-operator helmVersion=v3 info="running installation" phase=install
helm-operator ts=2020-05-09T11:32:50.17638235Z caller=release.go:266 component=release release=podinfo targetNamespace=demo resource=demo:helmrelease/podinfo helmVersion=v3 info="no changes" phase=dry-run-compare
helm-operator ts=2020-05-09T11:32:50.496035538Z caller=logwriter.go:28 info="2020/05/09 11:32:50 info: skipping unknown hook: \"crd-install\""
helm-operator ts=2020-05-09T11:32:50.534884349Z caller=logwriter.go:28 info="2020/05/09 11:32:50 info: skipping unknown hook: \"crd-install\""
helm-operator ts=2020-05-09T11:32:50.540331723Z caller=logwriter.go:28 info="2020/05/09 11:32:50 info: skipping unknown hook: \"crd-install\""
helm-operator ts=2020-05-09T11:32:50.544750917Z caller=logwriter.go:28 info="2020/05/09 11:32:50 info: skipping unknown hook: \"crd-install\""
helm-operator ts=2020-05-09T11:32:50.948416925Z caller=release.go:271 component=release release=prometheus-operator targetNamespace=monitoring resource=monitoring:helmrelease/prometheus-operator helmVersion=v3 error="installation failed: unable to build kubernetes objects from release manifest: [unable to recognize \"\": no matches for kind \"Alertmanager\" in version \"monitoring.coreos.com/v1\", unable to recognize \"\": no matches for kind \"Prometheus\" in version \"monitoring.coreos.com/v1\", unable to recognize \"\": no matches for kind \"PrometheusRule\" in version \"monitoring.coreos.com/v1\", unable to recognize \"\": no matches for kind \"ServiceMonitor\" in version \"monitoring.coreos.com/v1\"]" phase=install
helm-operator ts=2020-05-09T11:32:50.9575576Z caller=release.go:323 component=release release=prometheus-operator targetNamespace=monitoring resource=monitoring:helmrelease/prometheus-operator helmVersion=v3 warning="uninstall failed: uninstall: Release not loaded: prometheus-operator: release: not found" phase=uninstall

CloudWatch agent panics with panic: /rootfs/proc doesn't exists, please use latest yaml templates to launch cloudwatch-agent

Problem

$ kubectl -n amazon-cloudwatch logs --follow cloudwatch-agent-58t6b
2019/08/23 12:37:20 I! I! Detected the instance is EC2
2019/08/23 12:37:20 Reading json config file path: /opt/aws/amazon-cloudwatch-agent/bin/default_linux_config.json ...
/opt/aws/amazon-cloudwatch-agent/bin/default_linux_config.json does not exist or cannot read. Skipping it.
2019/08/23 12:37:20 Reading json config file path: /etc/cwagentconfig/..2019_08_23_12_33_15.083201554/cwagentconfig.json ...
2019/08/23 12:37:20 Find symbolic link /etc/cwagentconfig/..data 
2019/08/23 12:37:20 Find symbolic link /etc/cwagentconfig/cwagentconfig.json 
2019/08/23 12:37:20 Reading json config file path: /etc/cwagentconfig/cwagentconfig.json ...
Valid Json input schema.
No csm configuration found.
No metric configuration found.
Configuration validation first phase succeeded
 
2019/08/23 12:37:20 I! Config has been translated into TOML /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.toml 
2019/08/23 12:37:20 I! AmazonCloudWatchAgent Version 1.226589.0.
2019-08-23T12:37:20Z I! Starting AmazonCloudWatchAgent (version 1.226589.0)
2019-08-23T12:37:20Z I! Loaded outputs: cloudwatchlogs
2019-08-23T12:37:20Z I! Loaded inputs: cadvisor k8sapiserver
2019-08-23T12:37:20Z I! Tags enabled: 
2019-08-23T12:37:20Z I! Agent Config: Interval:1m0s, Quiet:false, Hostname:"ip-192-168-62-114.ap-northeast-1.compute.internal", Flush Interval:1s 
2019-08-23T12:37:20Z I! k8sapiserver OnStartedLeading: ip-192-168-62-114.ap-northeast-1.compute.internal


panic: /rootfs/proc doesn't exists, please use latest yaml templates to launch cloudwatch-agent

goroutine 42 [running]:
github.com/influxdata/telegraf/plugins/processors/k8sdecorator/stores.newNodeInfo(0xc0009c2ba0)
	/local/p4clients/pkgbuild-eAgXi/workspace/src/CWAgent/src/github.com/influxdata/telegraf/plugins/processors/k8sdecorator/stores/nodeinfo.go:105 +0x22e
github.com/influxdata/telegraf/plugins/processors/k8sdecorator/stores.NewPodStore(0xc000342701, 0xe, 0xc00099aa00, 0xffffffffffffffff)
	/local/p4clients/pkgbuild-eAgXi/workspace/src/CWAgent/src/github.com/influxdata/telegraf/plugins/processors/k8sdecorator/stores/podstore.go:66 +0x80
github.com/influxdata/telegraf/plugins/processors/k8sdecorator.(*K8sDecorator).start(0xc00040c000)
	/local/p4clients/pkgbuild-eAgXi/workspace/src/CWAgent/src/github.com/influxdata/telegraf/plugins/processors/k8sdecorator/k8sdecorator.go:68 +0x7b
github.com/influxdata/telegraf/plugins/processors/k8sdecorator.(*K8sDecorator).Apply(0xc00040c000, 0xc00042e250, 0x1, 0x1, 0xc0009c2b40, 0x1, 0x1)
	/local/p4clients/pkgbuild-eAgXi/workspace/src/CWAgent/src/github.com/influxdata/telegraf/plugins/processors/k8sdecorator/k8sdecorator.go:34 +0x325
github.com/influxdata/telegraf/internal/models.(*RunningProcessor).Apply(0xc000156750, 0xc00042e230, 0x1, 0x1, 0xc00042e230, 0x1, 0x1)
	/local/p4clients/pkgbuild-eAgXi/workspace/src/CWAgent/src/github.com/influxdata/telegraf/internal/models/running_processor.go:40 +0x131
github.com/influxdata/telegraf/agent.(*Agent).flusher(0xc00000e070, 0xc0000881e0, 0xc00007ede0, 0x0, 0x0)
	/local/p4clients/pkgbuild-eAgXi/workspace/src/CWAgent/src/github.com/influxdata/telegraf/agent/agent.go:355 +0x146
github.com/influxdata/telegraf/agent.(*Agent).Run.func1(0xc000345730, 0xc00000e070, 0xc0000881e0, 0xc00007ede0)
	/local/p4clients/pkgbuild-eAgXi/workspace/src/CWAgent/src/github.com/influxdata/telegraf/agent/agent.go:403 +0x6d
created by github.com/influxdata/telegraf/agent.(*Agent).Run
	/local/p4clients/pkgbuild-eAgXi/workspace/src/CWAgent/src/github.com/influxdata/telegraf/agent/agent.go:401 +0x5be

Solution

We probably ought to update our manifests.
I'll go over the documentation again and update our manifests if there are changes.

kubernetesui/metrics-scraper:v1.0.5 fails

When updating to kubernetesui/metrics-scraper:v1.0.5 it fails with the following error:

kubernetes-dashboard   dashboard-metrics-scraper-54755bffcc-8fr9s                0/1     CrashLoopBackOff   8          19m
kubernetes-dashboard   dashboard-metrics-scraper-6ccf7f6cc8-z8zlf                1/1     Running            0          44h

$ kubectl logs -n kubernetes-dashboard dashboard-metrics-scraper-54755bffcc-8fr9s
W0812 19:32:16.898081       1 client_config.go:549] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
log: exiting because of error: log: cannot create log: open /tmp/metrics-sidecar.dashboard-metrics-scraper-54755bffcc-8fr9s.unknownuser.log.WARNING.20200812-193216.1: no such file or directory

Upgrading to kubernetesui/metrics-scraper:v1.0.4 works fine.

Add Fluent Bit as a quickstart component

Fluent Bit is a lightweight log processor and forwarder, and can be used as a replacement for Fluentd for these two features. Fluent Bit, however, lacks support for log aggregation and supports fewer input and output plugins.

We can either replace Fluentd with Fluent Bit, or combine the two, where Fluent Bit is deployed as a DaemonSet collecting logs from each node and forwarding them to Fluentd (deployed as a Deployment) for aggregation and routing to output destinations.
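The second option could be sketched as a Fluent Bit tail input plus a forward output sending records to Fluentd; the Fluentd service name and namespace (fluentd.logging.svc) are assumptions:

```
[INPUT]
    Name    tail
    Path    /var/log/containers/*.log
    Tag     kube.*

[OUTPUT]
    Name    forward
    Match   *
    Host    fluentd.logging.svc.cluster.local
    Port    24224
```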

Bump HelmRelease resources to stable API version

All HelmReleases are using the beta API version flux.weave.works/v1beta1 but the Flux Helm Operator version (0.10.1) installed as part of eksctl install flux supports the stable API resource helm.fluxcd.io/v1.
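The bump itself would be a one-line change at the top of each HelmRelease manifest:

```yaml
# before:
#   apiVersion: flux.weave.works/v1beta1
# after:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
```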

Podinfo's helmrelease is broken

I am getting the following error from the Helm Operator when running eksctl gitops apply --quick-profile app-dev (current master version of eksctl):

ts=2019-09-05T06:35:59.539485198Z caller=release.go:217 component=release error="Chart release failed: podinfo: &status.statusError{Code:2, Message:\"YAML parse error on podinfo/templates/ingress.yaml: error converting YAML to JSON: yaml: line 16: did not find expected alphabetic or numeric character\", Details:[]*any.Any(nil), XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}"

This used to work one week ago, so I guess it broke at bb218b7

@marccarre I am going to revert that latest change, since it's blocking me.

Flux was unable to parse the files in the git repo as manifests

Hi,

I tried to put this repo to sync with flux, but it fails with this error:

Flux was unable to parse the files in the git repo as manifests,
giving this error:

    duplicate definition of 'amazon-cloudwatch:daemonset/cloudwatch-agent' (in amazon-cloudwatch/cloudwatch-agent-daemonset.yaml and base/amazon-cloudwatch/cloudwatch-agent-daemonset.yaml)

If I understand correctly, the base directory contains the templatized version of the YAML files, but I can't see why it complains about the source one.

Any hint?

Thanks
