
k8s-boshrelease's Introduction

k8s BOSH Release

This is a BOSH release for spinning up Kubernetes, using BOSH to orchestrate the "physical" nodes that comprise the various roles of a Kubernetes cluster (control and worker).

Rationale

I am aware of other efforts to BOSH-ify Kubernetes, like kubo. This project does not aim to replace those other projects in any way, and if you find joy in using those projects, please continue using them.

Deployments

If you are looking for production-worthy deployment manifests that follow the same pattern as bosh-deployment and cf-deployment, check out k8s-deployment!

This repository comes with some sample manifests to illustrate how one might configure a k8s deployment in the wild.

  • tinynetes - A single-VM instance, all-in-one k8s "cluster", suitable for experimentation or CI/CD.

  • labernetes - A multi-node cluster of combined control+worker nodes, suitable for shared lab exercises.

  • prodernetes - A proper cluster with control and worker nodes on separate VMs, allowing one to scale the workers separately from the control plane. All aspects of the control plane are co-located (etcd, api, scheduler, and cmgr). Suitable for (possibly) some real-world prod use.

  • hugernetes - A REALLY BIG CLUSTER that splits the etcd component out onto its own multi-node cluster, leaving the control plane VMs to run api, scheduler, and the controller manager. Suitable for (possibly) some real-world prod use.

These are found in the manifests/ directory, and can be deployed without further pre-processing.
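
For example, deploying the all-in-one flavor looks something like this (a sketch; the environment alias and exact manifest file name are assumptions - check the manifests/ directory for the real names):

bosh -e my-env -d tinynetes deploy manifests/tinynetes.yml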

Deployment Dependency

In order to perform activities on the pods which require DNS lookups, such as kubectl exec or kubectl logs, BOSH DNS must be deployed. The easiest way of doing this is by adding BOSH DNS to your Runtime Config. An example of a Runtime Config with BOSH DNS can be found at bosh.io.
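
A minimal sketch of wiring that up, assuming a local clone of bosh-deployment (which ships a ready-made BOSH DNS runtime config):

bosh update-runtime-config --name dns bosh-deployment/runtime-configs/dns.yml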

Post Deployment

Once Kubernetes is deployed, you will likely want to connect to it with kubectl from a jumpbox or laptop, but you need a configuration for that. Fortunately, there is a jumpbox script which generates the configuration. From one of the control instances, run the following as root:

. /var/vcap/jobs/jumpbox/envrc

This will generate a long-lived cluster certificate, a user client certificate, and a client key, and make them available in a kubeconfig. You are now authenticated, and kubectl is in your $PATH.

To get the contents of the config while still logged into the BOSH SSH session, run:

cat $KUBECONFIG

On your jumpbox, or anywhere else you need a kubectl configuration file, write the contents out to a file (such as my-bosh-deployed-k8s) and then point KUBECONFIG at it:

export KUBECONFIG=$PWD/my-bosh-deployed-k8s
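
A quick way to verify the config works (assuming kubectl is installed on the jumpbox):

kubectl get nodes
kubectl get pods --all-namespaces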

Contributing

If you find this, and manage to get it to work for you, great! I'd love to hear from you about your successes and struggles.

If you find a bug, or something doesn't work quite right, please open an issue in the GitHub tracker! Remember, we have a code of conduct.

If you are thinking of contributing code, please read the Contribution Guidelines and follow them. In particular, please open GitHub issues and allow your change to be discussed before submitting a pull request. "Drive-by" PRs may be closed without much discussion.

k8s-boshrelease's People

Contributors

anishp55, cweibel, dmolik, drnic, jhunt, obeyler, proplex


k8s-boshrelease's Issues

BOSH Package Blob Access Issues

I want to build a release from source to be able to help

bosh create-release 
Blob download 'cfssl/cfssl_linux-amd64' (10 MB) (id: 26751ef9-2b66-4a3f-633f-9aa9a6dbfe0e sha1: sha256:eb34ab2179e0b67c29fd55f52422a94fe751527b06a403a79325fed7cf0145bd) started

Blob download 'cfssl/cfssl_linux-amd64' (id: 26751ef9-2b66-4a3f-633f-9aa9a6dbfe0e) failed

- Getting blob '26751ef9-2b66-4a3f-633f-9aa9a6dbfe0e' for path 'cfssl/cfssl_linux-amd64':
    Getting blob from inner blobstore:
      Getting blob from inner blobstore:
        AccessDenied: Access Denied
	status code: 403, request id: 693CECDA0586CAA5, host id: JqEjdqH+rZGfPlT6rJV+FeKLdJbs8nGMcjQY3keSgAg9EFmYDdsD+ksFp9Zi8ZUMmMqqp1AkIgw=

Run Kubemark against various configurations

I would like to put some Kubemark tests together for various configurations of k8s. This probably needs to be done in AWS or GCP, since my local vSphere lab is a bit taxed at the moment (and has disk / NFS latency issues regardless).

Bundle cert-manager

It would be nice if we could get cert-manager when the cluster starts up, as part of the manifest.

If that ends up not being desirable, we should at least make a single property to enable a CA that can be used by a cert-manager issuer - the public certificate should be added to the system bundle on all kubelet VMs, and a post-deploy secret should be created for the private key, so that we can wire it up to a cert-manager issuer.

Support Additional CoreDNS (plugin) Configuration

Feature: allow adding plugins to the CoreDNS ConfigMap.
Why: some plugins can be very useful; for example, hosts (https://coredns.io/plugins/hosts/) lets CoreDNS serve a hosts file so that IPs can be resolved from the BOSH topology, and forward (https://coredns.io/plugins/forward/) lets it delegate zones:

     forward internal.paas. /etc/resolv.conf {
         policy sequential
     }
     hosts bosh. /etc/hostsFromNode

We could take inspiration from the Helm chart template (https://github.com/helm/charts/blob/6878cfd32d2240dbd907a77786d28842f6d120a0/stable/coredns/templates/configmap.yaml#L21-L25) to do that.
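
For illustration, here is a sketch of how those plugins might sit in a full Corefile; the surrounding plugins and zone names are assumptions based on a typical CoreDNS ConfigMap, not the release's actual template:

internal.paas.:53 {
    forward . /etc/resolv.conf {
        policy sequential
    }
}

.:53 {
    errors
    health
    hosts /etc/hostsFromNode bosh. {
        fallthrough
    }
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
}

(Note that the hosts plugin takes the hosts file first, then the zones it serves.)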

ipvsadm compile fails, _sometimes_.

Had a report, by way of #25, that the make -j$CPU bits of (at least) the ipvsadm compilation script cause issues on certain stemcells. I think it might be more related to the number of cores on any 315+ Xenial stemcell, but we ought to investigate.

vSphere Volumes Won't Mount

We had such high hopes for 1.14.0-build.4, with respect to supporting vSphere volume provisioning.

While the provisioning seems to work (PVs get created in response to PVCs), not all is hunky-dory in the mounting-and-using side of the house:

  Warning  FailedMount  0s  kubelet, 823a4d68-31e7-42f1-8876-28d45fafd5a8.k8s  MountVolume.SetUp failed for volume "pvc-3c404659-f12f-11e9-90ab-0050568b020d" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/vcap/store/kubelet/pods/3c4361fb-f12f-11e9-90ab-0050568b020d/volumes/kubernetes.io~vsphere-volume/pvc-3c404659-f12f-11e9-90ab-0050568b020d --scope -- mount -o bind /var/vcap/store/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[DATASTORE-1] kubevols/tinynetes-dynamic-pvc-3c404659-f12f-11e9-90ab-0050568b020d.vmdk /var/vcap/store/kubelet/pods/3c4361fb-f12f-11e9-90ab-0050568b020d/volumes/kubernetes.io~vsphere-volume/pvc-3c404659-f12f-11e9-90ab-0050568b020d
Output: Running scope as unit run-r9e06063f8b014350bdc38e00c38f2a3b.scope.
mount: special device /var/vcap/store/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[DATASTORE-1] kubevols/tinynetes-dynamic-pvc-3c404659-f12f-11e9-90ab-0050568b020d.vmdk does not exist

Investigate and fix.

Add the opportunity to use a docker mirror

As we are in an air-gapped environment, we use a private Docker registry, such as JCR from JFrog. This lets us upload Docker images to the JCR registry instead of pulling them from the internet.

To support that, the containerd.toml file needs to be extended with this option:

              [plugins.cri.registry]
                [plugins.cri.registry.mirrors]
                  [plugins.cri.registry.mirrors."docker.io"]
                    endpoint = ["((registry-mirrors))"]

Blobs are private

Blob download 'containerd/containerd-1.3.5-linux-amd64.tar.gz' (id: 742cb1fa-2f84-4665-4d82-884886227d3c) failed

  • Getting blob '742cb1fa-2f84-4665-4d82-884886227d3c' for path 'containerd/containerd-1.3.5-linux-amd64.tar.gz':
    Getting blob from inner blobstore:
    Getting blob from inner blobstore:
    AccessDenied: Access Denied
    status code: 403, request id: 784F91B43B9FA398, host id: EAfthPNGLY4THuXdcRqZMXe0Z7E1d2CBQNyBSAJw08wC2BI+oRqfhpQnOVklB2hyduZ3TSIAMko=

Rework Labels

Our labels (for selectors) are a mess. Fix them.

As an example, the master nodes should probably have labels that identify them as such.

Update Trusted CA Bundle with K8s CA

Some things need the Kubernetes CA to be trusted to function properly (e.g. KubeCF) - add it to the system-wide bundle in a pre-deploy script.

This really ought to handle both Ubuntu and CentOS stemcells equally.
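
A rough sketch of what such a pre-deploy step could do, assuming the CA has been rendered to /var/vcap/jobs/control/tls/ca.pem (the path is an assumption):

# Ubuntu stemcells
if [ -d /usr/local/share/ca-certificates ]; then
  cp /var/vcap/jobs/control/tls/ca.pem /usr/local/share/ca-certificates/k8s-ca.crt
  update-ca-certificates
fi

# CentOS stemcells
if [ -d /etc/pki/ca-trust/source/anchors ]; then
  cp /var/vcap/jobs/control/tls/ca.pem /etc/pki/ca-trust/source/anchors/k8s-ca.pem
  update-ca-trust extract
fi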

Investigate k8s-in-k8s

My good friend and Kubernetes expert @dmolik wants to try running the components of k8s on top of Kubernetes. The basic gist is that we move as much out from under the control of monit as possible, leaving behind just the kubelet process and whatever containerd and the chosen runtime need. The kubelet then leverages containerd and the runtime to run Static Pods for the control plane components (API server / Controller Manager / Scheduler).

I have started the k8s-in-k8s branch for this.
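
For a sense of what the kubelet would run, here is a sketch of a static Pod manifest for the API server; the image tag, flags, and paths are assumptions, not what the branch actually does:

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-apiserver
      image: k8s.gcr.io/kube-apiserver:v1.18.5
      command:
        - kube-apiserver
        - --etcd-servers=https://127.0.0.1:2379
        - --tls-cert-file=/tls/apiserver.pem
        - --tls-private-key-file=/tls/apiserver-key.pem
      volumeMounts:
        - name: tls
          mountPath: /tls
          readOnly: true
  volumes:
    - name: tls
      hostPath:
        path: /var/vcap/jobs/control/tls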

Upsides

  1. Less stuff under the control of monit
  2. More stuff visible from an admin with nothing but kubectl (i.e. no BOSH access necessary to troubleshoot)
  3. We're already relying on the kubelet to properly start, monitor, and tear down scheduled Pods, so why not leverage it for other things?
  4. It might allow for customization of the runtime-y bits of k8s by letting operators swap out the images that run for control plane components?

Downsides

  1. The approach relies HEAVILY on pre-start / post-deploy hooks in BOSH's lifecycle, with minimal usage of the "meat" of a BOSH release.
  2. Operators accustomed to troubleshooting BOSH deployments will be out of their element.
  3. Requires pulling container images from somewhere, which may prove problematic for air-gapped environments.
  4. Unless we can embed / re-inflate these images to mitigate (3) above, we are only shipping half of the software necessary to make the cluster operate, in the release. This is contrary to what BOSH stands for.

MASQUERADing --random-fully in iptables

This is an interesting read:
https://tech.xing.com/a-reason-for-unexplained-connection-timeouts-on-kubernetes-docker-abd041cf7e02

I found this while looking into why my kubelet logs are full of this:

I1111 18:53:57.189370    1925 kubelet_network_linux.go:111] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it
I1111 18:54:57.230925    1925 kubelet_network_linux.go:111] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it
I1111 18:55:57.269746    1925 kubelet_network_linux.go:111] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it
I1111 18:56:57.315703    1925 kubelet_network_linux.go:111] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it

Apparently, --random-fully is patched into iptables, and it isn't in our version. The patch (https://git.netfilter.org/iptables/commit/?id=8b0da2130b8af3890ef20afb2305f11224bb39ec) seems simple enough; it's just passing through stuff to the kernel via flags that are now set in response to the new --random-fully CLI option.

Look into applying this patch to the packaged version of iptables and see if the error goes away (and, incidentally, if performance gets any better on the cluster).
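
A quick sanity check on a kubelet VM, before and after any patching (the package path is an assumption):

/var/vcap/packages/iptables/sbin/iptables --version
/var/vcap/packages/iptables/sbin/iptables -j MASQUERADE --help 2>&1 | grep -i random   # should mention --random-fully once supported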

Offer a StorageClass independent of the cloud provider

New feature request:
Like products such as OpenEBS (openebs.io) or Rook-Ceph, the BOSH release should offer a solution for PVs that is independent of the cloud provider.
I think this could be done by adding a raw disk to the node deployment and using it as a raw device.
If we had that, we would no longer need to configure a cloud provider inside Kubernetes.

Pod status Unknown after recreate

After running bosh with the recreate directive for a single-VM k8s cluster, the coredns pods and any pods deployed outside of the kube-system namespace have a status of "Unknown". Describing the pod shows a Status of "Running" with a Container state of "Terminated" and an Event message of "Pod sandbox changed, it will be killed and re-created.".

Terminating the affected pods results in a recovery of those pods.

Flexernetes Deployment Topology

Somewhere between prodernetes and labernetes, there exists a topology with a minimal (i.e. single) set of etcd / apiserver nodes and a scalable number of worker nodes. I call this "flexernetes". Let's make that happen.
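
Roughly, the instance group layout would look like this (a sketch; job names and instance counts are assumptions):

instance_groups:
  - name: control
    instances: 1       # etcd + apiserver + scheduler + controller manager on one VM
    jobs:
      - name: control
        release: k8s
  - name: worker
    instances: 3       # scale these independently
    jobs:
      - name: kubelet
        release: k8s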

Upgrade to etcd 3.3.14 (minimum) to fix gRPC etcd security issues

v3.3.14 fixed some stuff that affects Kubernetes use cases:

v3.3.14 had to include some features from 3.4, while trying to minimize the difference between client balancer implementation. This release fixes "kube-apiserver 1.13.x refuses to work when first etcd-server is not available" (kubernetes#72102).

(from https://github.com/etcd-io/etcd/releases/tag/v3.3.14)

This is in reference to kubernetes/kubernetes#72102, which I am seeing on one of our larger k8s installations.

My API node logs are filled with these errors:

W1111 19:32:12.443121    3194 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://10.128.4.18:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for 127.0.0.1, 10.128.4.18, not 10.128.4.17". Reconnecting...
W1111 19:32:12.444838    3194 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://10.128.4.19:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for 127.0.0.1, 10.128.4.19, not 10.128.4.17". Reconnecting...
W1111 19:32:12.445557    3194 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://10.128.4.19:2379 0  <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for 127.0.0.1, 10.128.4.19, not 10.128.4.17". Reconnecting...

Upgrade to 3.3.14 and test it out.

Support KubeCF bits on 127.0.0.1:32123

Right now the kubelet cannot pull images back from the KubeCF blobstore, since the service does not listen on loopback.

Adding "127.0.0.1/8" to the array of nodePortAddress in the kube-proxy here fixes this issue.

For example the following manual change results in a usable binding:

nodePortAddresses: ["10.128.57.192/28", "127.0.0.1/8"]

After making the change manually to the configmap and recreating the kube-proxy pods, apps can successfully stage in KubeCF.
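
For reference, the manual workaround boils down to something like this (a sketch; the pod label selector is an assumption):

kubectl -n kube-system edit configmap kube-proxy          # add "127.0.0.1/8" to nodePortAddresses
kubectl -n kube-system delete pod -l k8s-app=kube-proxy   # recreate the kube-proxy pods so they pick up the change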

Bind NodePort on loopback / 127.0.0.1/8

Eirini uses 127.0.0.1:32123 (or a different configurable NodePort) for images that it stages via buildpacks; this is the only endpoint a kubelet can natively access. Unfortunately, with ipvsadm, we are seeing that kube-proxy only binds the node IP for the NodePort traffic forwarding, so kubecf doesn't seem to work with ipvsadm.

Steps to Reproduce

Here's a daemonset that shows off the problem:

---
apiVersion: v1
kind: Namespace
metadata:
  name: gh-issue-29
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: gh-issue-29
  name:      www

data:
  index.html: |
    pong
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: gh-issue-29
  name:      github

spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx

          ports:
            - name: web
              containerPort: 80

          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html

      volumes:
        - name: www
          configMap:
            name: www
---
apiVersion: v1
kind: Service
metadata:
  namespace: gh-issue-29
  name:      github
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - name: web
      port: 80
      targetPort: web

With this loaded, you should be able to curl http://<node-ip>:<node-port> and get back the text pong. If you then SSH onto one of the kubelets, and curl http://127.0.0.1:<node-port>, one of the following will happen:

  1. You get back pong, in which case the NodePort is bound on all interfaces and the bug is fixed.
  2. You get a connection refused, in which case the NodePort is only binding on the node IP, and the bug persists.

Fix the Kubernetes deployment such that we get case 1, not case 2.
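
For convenience, the check on a kubelet VM can be scripted roughly like this:

NODE_PORT=$(kubectl -n gh-issue-29 get svc github -o jsonpath='{.spec.ports[0].nodePort}')
curl -s http://127.0.0.1:${NODE_PORT}    # expect "pong" once the bug is fixed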

Check Validity of provided CA certificate

Just ran into a case where credhub had an old TLS CA certificate for a test deployment I use everywhere, and that CA had expired. This led to some obscure breakage in the API server, which was unable to talk to etcd (because it couldn't / wouldn't validate with an expired CA in the chain).

Unfortunately, this manifested as a low-level gnutls error:

gnutls_handshake() failed: Certificate is bad

While true, this error is pretty ambiguous - is the etcd server cert bad? The API server's client cert? Turns out it was the CA.

I eventually got past the issue by deleting the (generated) CA certificate so that a subsequent bosh deploy would re-create it (and I'd be good for another year!), but I would really like to see some validation in pre-start (or, if possible, in the job rendering templates via a raise) that checks:

  1. The CA certificate must have the CA:TRUE constraint
  2. The NotBefore validity start date is in the past
  3. The NotAfter validity end date is in the future
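
A sketch of those checks using openssl (the CA path is an assumption):

CA=/var/vcap/jobs/control/tls/ca.pem
openssl x509 -in "$CA" -noout -text | grep -q 'CA:TRUE' || echo 'missing CA:TRUE constraint'
openssl x509 -in "$CA" -noout -checkend 0 || echo 'CA certificate has expired (NotAfter is in the past)'
openssl x509 -in "$CA" -noout -startdate -enddate        # eyeball NotBefore / NotAfter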

Missing CNI version inside Flannel conf

As stated in the CNI specification (https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration), cniVersion is a mandatory field.

Example config (note the field "cniVersion": "0.3.1"):

{
    "name": "flannel-network",
    "cniVersion": "0.3.1",
    "plugins": [
        {
            "type": "flannel",
            "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
            }
        },
        {
            "type": "portmap",
            "capabilities": {
                "portMappings": true
            }
        }
    ]
}

You can see the effect of a missing cniVersion in this issue: containerd/containerd#4012 (comment)

Name KUBECONFIG contexts better

When trying to merge a bunch of these KUBECONFIGs together (from the jumpbox co-located job), the default contexts overwrite each other.

Let's name them after the cluster.

Bridge to Netfilter / IPTables

Ran into this issue this morning, with a tinynetes: coredns/coredns#1879 and kubernetes/kubernetes#21613.

We were in fact bypassing netfilter for bridged traffic, causing our DNS responses to come directly from the pod IP, and not get mangled to "originate" from the service IP for DNS.

Fixed it with the following:

# modprobe br_netfilter

Afterwards, the sysctls were set (automagically):

k8s/08ea6ac6-9e85-44d6-81be-0dc94c73ddd7:~# sysctl -a | grep nf-call
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

but we could be forgiven for manually setting those.

Bad kubeproxy configuration

Inside the kube-proxy log, two items seem to cause parsing trouble:
max: 0 and resourceContainer: /kube-proxy

The log shows the following error:

kubectl logs -n kube-system kube-proxy-89zv6
W0702 17:19:47.678898       1 server.go:439] using lenient decoding as strict decoding failed: strict decoder error for ---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: "0.0.0.0"
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubeconfig.yml
  qps: 5
clusterCIDR: "10.244.0.0/16"
configSyncPeriod: 15m0s
conntrack:
  max: 0
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "iptables"
nodePortAddresses: []
oomScoreAdj: -999
portRange: ""
resourceContainer: /kube-proxy
udpIdleTimeout: 250ms: v1alpha1.KubeProxyConfiguration.Conntrack: v1alpha1.KubeProxyConntrackConfiguration.ReadObject: found unknown field: max, error found in #10 byte of ...|ck":{"max":0,"maxPer|..., bigger context ...|/16","configSyncPeriod":"15m0s","conntrack":{"max":0,"maxPerCore":32768,"min":131072,"tcpCloseWaitTi|...
I0702 17:19:47.938182       1 node.go:136] Successfully retrieved node IP: 192.168.244.209
I0702 17:19:47.938219       1 server_others.go:186] Using iptables Proxier.
I0702 17:19:47.938513       1 server.go:583] Version: v1.18.5
I0702 17:19:47.938949       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0702 17:19:47.939074       1 conntrack.go:52] Setting nf_conntrack_max to 131072
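
Under strict decoding, the fix is presumably to stop rendering the offending fields; roughly, the relevant part of the rendered config would become (a sketch):

conntrack:
  # "max" is no longer a recognized field in v1alpha1 KubeProxyConfiguration
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
# "resourceContainer" should be dropped entirely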

Smoke Tests

Write a simple smoke test that allows operators to verify that a cluster is not just spinning, but also functional.

The first version of this smoke test (MVP) should perform the following (a rough sketch follows the list):

  1. Create a namespace (with a dynamic name)
  2. Create a replica set in that namespace, using a configurable image (see Caveats, below)
  3. Attach to the log stream of the pod, validate presence of a beacon message.
  4. Exec a command inside of the pod, and validate the output of said command.
  5. (cleanup) Delete the replica set.
  6. (cleanup) Delete the namespace.
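
Putting the steps above together, a rough sketch in kubectl terms (the beacon string, the /check program, and the image variable are hypothetical, and a Deployment stands in for the replica set):

ns="smoke-$(date +%s)"
kubectl create namespace "$ns"                                            # step 1
kubectl -n "$ns" create deployment smoke --image="${SMOKE_TEST_IMAGE}"    # step 2, configurable image
kubectl -n "$ns" wait --for=condition=available deployment/smoke --timeout=120s
kubectl -n "$ns" logs deployment/smoke | grep -q 'BEACON'                 # step 3
kubectl -n "$ns" exec deployment/smoke -- /check | grep -q 'OK'           # step 4
kubectl delete namespace "$ns"                                            # steps 5 and 6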

Adopt the same images/ directory + Makefile approach as we started in #3's branch.

The Testing Image

The testing image will contain at least two bespoke programs.

The first will loop indefinitely, printing a message to standard output (with a sufficient sleep per interval). This will be used by step (3) of the test.

The second will start up, print a fixed string, and then exit. This will be used by step (4) of the test.

Caveats

For support of air-gapped networks, we need to be able to support pulling the test image from an internal registry. I don't 100% know what that entails right now re: secrets and access. However, the operators will need to ensure that whatever image they pull is the same image as we ship with this release. They can pull / tag / push the image as-is, or they can rebuild it from source themselves, but it has to be the smoke tests image.

Update kube-dns YAMLs

Check the kube-dns.yml files to see if we need to update images, configs, etc.

If possible, automate this via ./utils/update-from-upstream

Support Calico and Flannel out-of-the-box

It would be nice if we could support either calico or flanneld out of the box. Perhaps this is a thing we do in the control VM, via spec:

instance_groups:
  - name: k8s
    release: k8s
    jobs:
      - name: control
        properties:
          networking: calico # or "flannel"

core-dns stays in a not-ready state

Inside my cluster, the coredns pods look like this:

kube-system   coredns-5d56ff6d95-d9l9p                          0/1     Running   0          22s
kube-system   coredns-5d56ff6d95-k7khg                          0/1     Running   0          22s

Log from one of the core-dns pods:

E0701 17:56:59.978720       1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.245.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.245.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0701 17:57:57.774490       1 trace.go:116] Trace[2003272451]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125 (started: 2020-07-01 17:57:27.774001105 +0000 UTC m=+308.579428679) (total time: 30.00046629s):
Trace[2003272451]: [30.00046629s] [30.00046629s] END
E0701 17:57:57.774511       1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.245.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.245.0.1:443: i/o timeout
I0701 17:57:58.209103       1 trace.go:116] Trace[1720252605]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125 (started: 2020-07-01 17:57:28.208673746 +0000 UTC m=+309.014101344) (total time: 30.00039679s):
Trace[1720252605]: [30.00039679s] [30.00039679s] END
E0701 17:57:58.209124       1 reflector.go:178] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.245.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.245.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"

I don't have the same problem with the PR I proposed, #47.

Need CoreDNS bump to 1.6.9

coredns is set by default in the spec to 1.2.2, but the coredns image for Kubernetes 1.18 is 1.6.9. In that case, the line

proxy . /etc/resolv.conf

inside the CoreDNS ConfigMap should be replaced with

forward . /etc/resolv.conf

since the proxy plugin was removed from newer CoreDNS releases.

Add ipvsadm package to all kubelets

Starting in kube-proxy 1.14.0, we're using IPVS for service routing; it would be nice to have ipvsadm for troubleshooting, diagnostics, and verification purposes on all nodes that might run a pod for the proxy daemonset (i.e. all kubelets).
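
Once ipvsadm is present, the IPVS service table can be inspected with, for example:

ipvsadm -Ln           # list virtual services and their real servers (numeric)
ipvsadm -Ln --stats   # include packet and byte counters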
