Comments (61)

ernestoalejo avatar ernestoalejo commented on May 17, 2024 95

Other use case: this would allow reproducing and playing locally with failover scenarios (health-checking configuration, scheduling onto new machines, etc.) for high-availability apps when a node fails or machine resources get exhausted. A VM can be stopped manually to test this, for example.

from minikube.

pbitty avatar pbitty commented on May 17, 2024 77

Demo here:
asciicast

from minikube.

sharifelgamal avatar sharifelgamal commented on May 17, 2024 22

Experimental multi-node support will be available in the upcoming 1.9 release, and in the next 1.9 beta as well.

from minikube.

pbitty avatar pbitty commented on May 17, 2024 21

I made a multi-node prototype in #2539, if anyone is interested in seeing one way it could be implemented, using individual VMs for each node.

from minikube.

sharifelgamal avatar sharifelgamal commented on May 17, 2024 20

@MartinKaburu yes, I'm actively working on this.

from minikube.

afbjorklund avatar afbjorklund commented on May 17, 2024 18

@ghostsquad: I was meaning to resume work on at least resurrecting the old functionality in #2539, but got side-tracked with some other development, such as other runtimes and other architectures.

However, it is still on the to-do list. Running multiple VMs, and running minikube in Docker, is next up.
Hopefully we should have an updated prototype ready in a couple of weeks, as in "November" ('19).

from minikube.

tstromberg avatar tstromberg commented on May 17, 2024 18

This feature is now available experimentally. We even have documentation:

https://minikube.sigs.k8s.io/docs/tutorials/multi_node/
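
As a minimal sketch of the new workflow (assuming minikube v1.9+ with the experimental --nodes flag described in the tutorial above):

# Start a two-node cluster under its own profile
minikube start --nodes 2 -p multinode-demo

# Both nodes should register with the API server
kubectl get nodes -o wide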

The UX is pretty rough and there are many issues to resolve, but multi-node support has now been added. We're now working off a newer, more specific issue to address the usability issues and other bugs:

#7538

We look forward to releasing v1.10 within the next 2 weeks, which will greatly improve the experience.

Thank you for being so patient! This was by far minikube's most popular request for many years.

from minikube.

PerArneng avatar PerArneng commented on May 17, 2024 13

It would be very nice to be able to play with scalability across nodes in an easy way.

from minikube.

natiki avatar natiki commented on May 17, 2024 8

In case anyone is still looking for a solution: https://stackoverflow.com/a/51706547/223742

from minikube.

fabiand avatar fabiand commented on May 17, 2024 7

Using kubeadm would also help to align with other K8s setups, which would ease debugging.

from minikube.

ccampo avatar ccampo commented on May 17, 2024 7

I am using a Mac and I can already bring up a second minikube with "minikube start --profile=second" (using VirtualBox). So all I am missing is a way to connect the two, so that the default minikube can also deploy to the second (virtual) node.

from minikube.

MichielDeMey avatar MichielDeMey commented on May 17, 2024 7

@ccampo I believe that spins up a second cluster, not a second node?

from minikube.

afbjorklund avatar afbjorklund commented on May 17, 2024 6

You can see an early demo of the feature in the KubeCon NA 2019 talk, so work on it has been resumed (although not by me) and it should be out soon.

from minikube.

chinafzy avatar chinafzy commented on May 17, 2024 4

watching this subject.

from minikube.

nukepuppy avatar nukepuppy commented on May 17, 2024 3

Definitely think it should be a target to use minikube for that, like minikube start --nodes=3, etc. I dunno, I haven't looked at the backend, but it would fill a tremendous gap right now for developing from desktop to production in the same fashion, which will pay for itself in adoption faster than other things.

from minikube.

k8s-ci-robot avatar k8s-ci-robot commented on May 17, 2024 3

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

from minikube.

afbjorklund avatar afbjorklund commented on May 17, 2024 3

This feature (multi-node) is going to have two different implementations. One is the straightforward approach of just starting more than one virtual machine, and running the bootstrapper on each one.
The other approach, which is described in #4389, is where we don't start any virtual machine but just run all the pods and containers locally. The containers are separated by labels or similar, for the "nodes".

Both have their use cases. Current users of minikube are quite used to being able to ssh into the node, to use various kernel modules on the node, etc. But when you are using minikube on Linux (either on the laptop, or by starting a virtual machine just for the development desktop or for hosting your container runtime), having additional virtual machines running adds runtime overhead and resource requirements.

Minikube is eventually going to support all four scenarios:

  • VM-less (0 VM, run all pods and containers on the host*)
  • Single-node (1 VM per node, the virtual machine on host) <-- this is what we have today
  • Multi-node (1 VM per node, all virtual machines on host) <-- this is being described here
  • Kata containers (1 VM per pod, or even one per container)

* Some people are trying to do this today, by using the none driver to run the bootstrapper on localhost. Due to the total lack of isolation, this is not going to work for multi-node (and is not recommended for single-node either, unless you give it a dedicated virtual machine to run "locally" on - like in a CI environment or similar). At least on the Kubernetes level, it needs to give the appearance of actually having multiple nodes.
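
(For reference, a rough sketch of that none-driver setup on a Linux host, using the driver flag name from that era; it runs everything directly on the host with no isolation, and only as a single node:)

sudo minikube start --vm-driver=none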


This describes the scenarios of Minikube, which is all about providing a local development experience. πŸ’»
However, if you want to use Kubernetes, you have several other deployment options available as well... As long as everything is using the standard distribution (kubelet) and the standard bootstrapper (kubeadm), it should provide a seamless transition and a similar experience. But that is not supported or described here.

Instead see:

from minikube.

tstromberg avatar tstromberg commented on May 17, 2024 3

@sharifelgamal is busy hacking away on this feature as we speak. Should land next month.

from minikube.

foobarbecue avatar foobarbecue commented on May 17, 2024 3

Does this allow you to run on multiple physical machines?

from minikube.

jellonek avatar jellonek commented on May 17, 2024 2

https://github.com/Mirantis/kubeadm-dind-cluster solves this case. It also solves other cases for multi-node setups needed during the development process, listed in https://github.com/ivan4th/kubeadm/blob/27edb59ba62124b6c2a7de3c75d866068d3ea9ca/docs/proposals/local-cluster.md
It also does not require any VM during the process.

There is also a demo of virtlet based on it, which shows how, in a few simple steps, you can start a multi-node setup, patch one node with an injected image for the CRI runtime daemonset, and then start an example pod on it.
You can read all of this in https://github.com/Mirantis/virtlet/blob/master/deploy/demo.sh

from minikube.

dankohn avatar dankohn commented on May 17, 2024 2

Note that https://github.com/kubernetes-sigs/kind targets many of the uses described here.

from minikube.

andersthorsen avatar andersthorsen commented on May 17, 2024 2

It's also very easy to do it with multipass and k3s https://medium.com/better-programming/local-k3s-cluster-made-easy-with-multipass-108bf6ce577c?source=userActivityShare-86e09b1d4ec0-1575217521
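
A rough sketch of that approach (assuming multipass and the upstream k3s install script; the VM names and sizes here are just illustrative):

# Launch two Ubuntu VMs with multipass
multipass launch --name k3s-master --cpus 2 --mem 2G --disk 10G
multipass launch --name k3s-worker --cpus 1 --mem 1G --disk 10G

# Install k3s on the master, then grab its node token and IP
multipass exec k3s-master -- bash -c "curl -sfL https://get.k3s.io | sh -"
TOKEN=$(multipass exec k3s-master -- sudo cat /var/lib/rancher/k3s/server/node-token)
MASTER_IP=$(multipass info k3s-master | grep IPv4 | awk '{print $2}')

# Join the worker to the cluster
multipass exec k3s-worker -- bash -c "curl -sfL https://get.k3s.io | K3S_URL=https://$MASTER_IP:6443 K3S_TOKEN=$TOKEN sh -"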

A drawback with multipass is that it does not support Windows Server as the host OS...

from minikube.

MartinKaburu avatar MartinKaburu commented on May 17, 2024 2

@sharifelgamal do you need a hand on this?

from minikube.

MartinKaburu avatar MartinKaburu commented on May 17, 2024 1

Is this still being developed? I've been waiting and following for ages

from minikube.

sharifelgamal avatar sharifelgamal commented on May 17, 2024 1

@aasmall good catch, #8018 should fix it.

from minikube.

aasmall avatar aasmall commented on May 17, 2024 1

@MatayoshiMariano - I think you need to actually install a CNI. The demo page has a flannel YAML that works; personally, I went through Calico the hard way...
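
(For reference, applying the flannel manifest mentioned in the tutorial looks roughly like this; the same URL is used later in this thread:)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Nodes should move to Ready once the flannel pods come up
kubectl get nodes -w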

@sharifelgamal - That's awesome! thank you. For now I think I'll have to use a different cluster tech for multi-node development, but I can't wait until minikube is ready.

from minikube.

pgray avatar pgray commented on May 17, 2024

This would be awesome as a feature in minikube, but for anyone looking for something passable in the meantime, this might help.

Is the scope of a feature like this large because minikube supports 5 virtualization types? I see the priority of P3 but I'm not sure if that means it's already being worked on or that there's enough work to do on other stuff that it's not worth trying to do yet.

from minikube.

marun avatar marun commented on May 17, 2024

I don't think it's large. It could be as simple as running nodes as docker-in-docker pods on the master node.

from minikube.

marun avatar marun commented on May 17, 2024

It would be nice if minikube could be updated to use kubeadm to make adding new nodes easier. Any plans for that?

from minikube.

aaron-prindle avatar aaron-prindle commented on May 17, 2024

This might be something to look into regarding this:
https://github.com/marun/nkube

from minikube.

fabiand avatar fabiand commented on May 17, 2024

/me also thought about the kubeadm style of adding additional nodes

from minikube.

MichielDeMey avatar MichielDeMey commented on May 17, 2024

@pgray I've used that setup for a long time but it looks like they won't support K8s 1.6+ 😞
coreos/coreos-kubernetes#881

from minikube.

ccampo avatar ccampo commented on May 17, 2024

So the difference is basically that both minikube instances have their own master (API server etc.). So if the second minikube could use the master of the first minikube, that would get me closer to my goal, right?

from minikube.

MichielDeMey avatar MichielDeMey commented on May 17, 2024

Yes, basically. You can however use kubefed (https://kubernetes.io/docs/concepts/cluster-administration/federation/) to manage multiple clusters since k8s 1.6.

from minikube.

ccampo avatar ccampo commented on May 17, 2024

OK, I will look at federation, thanks. Is there an easy way that you know of to make the second cluster or node use the API of the first cluster?

from minikube.

fabiand avatar fabiand commented on May 17, 2024

Kube fed manages independent clusters, right?
But isn't the goal here to create a single cluster with multiple VMs?

from minikube.

MichielDeMey avatar MichielDeMey commented on May 17, 2024

@fabiand Correct, but it seems I've derailed it a bit, apologies. :)
@ccampo I'm not very familiar with the internals of Kubernetes (or Minikube) but I know for a fact that it's possible to have multiple master nodes in a cluster setup.

You might want to look at https://github.com/kelseyhightower/kubernetes-the-hard-way if you're interested in the internals and want to get something working.

from minikube.

YiannisGkoufas avatar YiannisGkoufas commented on May 17, 2024

Hi there @pbitty , great job!
I built it and started the master, but when adding 1 worker it fails with:

~/go/src/k8s.io/minikube$ out/minikube node start
Starting nodes...
Starting node: node-1
Moving assets into node...
Setting up certs...
Joining node to cluster...
E0510 13:03:34.368403    3605 start.go:63] Error bootstrapping node:  Error joining node to cluster: kubeadm init error running command: sudo /usr/bin/kubeadm join --token 5a0dw7.2af6rci1fuzl5ak5 192.168.99.100:8443: Process exited with status 2

Any idea how I can debug it?
Thanks!

from minikube.

pbitty avatar pbitty commented on May 17, 2024

Hi @YiannisGkoufas, you can ssh into the node with

out/minikube node ssh node-1

and then try to run the same command from the shell:

sudo /usr/bin/kubeadm join --token 5a0dw7.2af6rci1fuzl5ak5 192.168.99.100:8443

(It would be great if the log message contained the command output. I can't remember why it doesn't. I think it would have required some refactoring and the PoC was a bit of a hack with minimal refactoring done.)

from minikube.

YiannisGkoufas avatar YiannisGkoufas commented on May 17, 2024

Thanks! Didn't realize you could ssh into the node that way.
So I tried:

sudo /usr/bin/kubeadm join --token jcgflt.1iqcoi62819z1yw2 192.168.99.100:8443

I got:

[preflight] Running pre-flight checks.
	[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 17.03
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Some fatal errors occurred:
	[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Then added the --ignore-preflight-errors parameter and executed:

sudo /usr/bin/kubeadm join --ignore-preflight-errors=all --token jcgflt.1iqcoi62819z1yw2 192.168.99.100:8443

I got:

[preflight] Running pre-flight checks.
	[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 17.03
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING FileExisting-crictl]: crictl not found in system path
discovery: Invalid value: "": using token-based discovery without DiscoveryTokenCACertHashes can be unsafe. set --discovery-token-unsafe-skip-ca-verification to continue

Then I added the suggested flag and executed:

sudo /usr/bin/kubeadm join --ignore-preflight-errors=all --token jcgflt.1iqcoi62819z1yw2 192.168.99.100:8443 --discovery-token-unsafe-skip-ca-verification

I got:

[preflight] Running pre-flight checks.
	[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 17.03
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Trying to connect to API Server "192.168.99.100:8443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.99.100:8443"
[discovery] Failed to request cluster info, will try again: [Unauthorized]
[discovery] Failed to request cluster info, will try again: [Unauthorized]
...

Can't figure out what to try next.
Thanks again!

from minikube.

gauthamsunjay avatar gauthamsunjay commented on May 17, 2024

@YiannisGkoufas out/minikube start --kubernetes-version v1.8.0 --bootstrapper kubeadm worked for me. I think I was facing the same issue as you, and it looks like by default the bootstrapper used is localkube. Basically, kubeadm init was not happening on the master, hence we were not able to add worker nodes. Hope this helps! Thanks @pbitty

from minikube.

fejta-bot avatar fejta-bot commented on May 17, 2024

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

from minikube.

fejta-bot avatar fejta-bot commented on May 17, 2024

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

from minikube.

fejta-bot avatar fejta-bot commented on May 17, 2024

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

from minikube.

kamilgregorczyk avatar kamilgregorczyk commented on May 17, 2024

Stale? 363 ppl have upvoted that idea...

from minikube.

afbjorklund avatar afbjorklund commented on May 17, 2024

@kamilgregorczyk:
The implementation (#2539) is rather stale at this point, but the idea isn't (completely).

It is planned to return as part of the 2019 roadmap, see: #4 support all k8s features

kubeadm does most of the work (join) for us...

However, it will not make it for the 1.0 release

It will also require some extra resources* to run.

  • the master node currently wants 2 vCPU and 2G RAM, extra nodes at least 1 vCPU / 1G RAM each.
    But it should be (and is already) possible to run a couple of them (like: 4 nodes ?) on a normal laptop.
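
A rough sketch of what such an invocation could look like once multi-node lands (the --nodes flag is assumed here; --cpus and --memory set the size of each node's VM):

# Four nodes at 2 vCPUs / 2 GB each - adjust to what your laptop can spare
minikube start --nodes 4 --cpus 2 --memory 2048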

from minikube.

tuxillo avatar tuxillo commented on May 17, 2024

I wonder if a bounty would help :)

from minikube.

ccampo avatar ccampo commented on May 17, 2024

I think it would be easier to know what the right path (bounty or no bounty) is if we decide what the solution is. While one of my previous comments aimed at having a minikube with multiple nodes, I am no longer sure that this is the optimal solution.
I could also view it as: minikube is a good solution in its current scope, and we are looking for something else, a "multikube" with the objective of running Kubernetes on multiple nodes on non-Linux OS systems. Something that you do on Linux with kubeadm, but for the Mac and Windows platforms. Maybe it's possible to reuse part of minikube for that, maybe not.

from minikube.

tstromberg avatar tstromberg commented on May 17, 2024

FYI - this feature is part of the minikube 2019 roadmap: https://github.com/kubernetes/minikube/blob/master/docs/contributors/roadmap.md

We really want to do this. It's going to be a substantial bit of work to sort out, but if anyone wants to start, I would be very happy to help lead them in the right direction. The prototype in #2539 is definitely worth taking a look at.

Help wanted!

from minikube.

ghostsquad avatar ghostsquad commented on May 17, 2024

2019 is almost over. Any movement on this?

from minikube.

kamilgregorczyk avatar kamilgregorczyk commented on May 17, 2024

It's also very easy to do it with multipass and k3s https://medium.com/better-programming/local-k3s-cluster-made-easy-with-multipass-108bf6ce577c?source=userActivityShare-86e09b1d4ec0-1575217521

from minikube.

ghostsquad avatar ghostsquad commented on May 17, 2024

@andersthorsen Host or Guest OS?

from minikube.

andersthorsen avatar andersthorsen commented on May 17, 2024

@ghostsquad as the host OS. They support Windows 10 as the host OS, though.

from minikube.

yusufharip avatar yusufharip commented on May 17, 2024

Hey @sharifelgamal, I'm running minikube v1.9.0 on macOS Catalina and get this error:

$ minikube node add
🀷 This control plane is not running! (state=Stopped)
❗ This is unusual - you may want to investigate using "minikube logs"
πŸ‘‰ To fix this, run: minikube start

I first installed minikube with this command:
$ minikube start --driver=docker

from minikube.

sharifelgamal avatar sharifelgamal commented on May 17, 2024

@yusufharip can you open up a new issue and give us a little more detail so we can debug better?

The output of minikube start --driver=docker -v=3 --alsologtostderr and minikube logs would be helpful.

from minikube.

petersaints avatar petersaints commented on May 17, 2024

I'm interested in this feature. Will this allow us to simulate Cluster Autoscaler (https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) scenarios locally?

from minikube.

aasmall avatar aasmall commented on May 17, 2024

Non-master nodes do not get an InternalIP address:

$ kubectl get nodes -o wide
NAME           STATUS   ROLES    AGE    VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE               KERNEL-VERSION   CONTAINER-RUNTIME
minikube       Ready    master   173m   v1.18.0   192.168.39.83   <none>        Buildroot 2019.02.10   4.19.107         docker://19.3.8
minikube-m02   Ready    <none>   80m    v1.18.0   <none>          <none>        Buildroot 2019.02.10   4.19.107         docker://19.3.8
minikube-m03   Ready    <none>   80m    v1.18.0   <none>          <none>        Buildroot 2019.02.10   4.19.107         docker://19.3.8
$ kubectl describe nodes | grep InternalIP     
  InternalIP:  192.168.39.83

This appears to be because we are specifying --node-ip as a kubelet argument:

From the minikube master VM:

$ hostname
minikube
$ systemctl cat kubelet.service
# /usr/lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet
Restart=always
StartLimitInterval=0
# Tuned for local dev: faster than upstream default (10s), but slower than systemd default (100ms)
RestartSec=600ms

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=systemd --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.83 --pod-manifest-path=/etc/kubernetes/manifests

[Install]

From minikube-m02:

$ hostname
minikube-m02
$ systemctl cat kubelet.service
# /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet
Restart=always
StartLimitInterval=0
# Tuned for local dev: faster than upstream default (10s), but slower than systemd default (100ms)
RestartSec=600ms

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=systemd --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.83 --pod-manifest-path=/etc/kubernetes/manifests

[Install]

Note that the --node-ip arguments are the same in both cases.
This results in an inability to get logs from or ssh into pods scheduled on non-master nodes:

$ kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
dice-magic-app-86d4bc958-phx6j   2/2     Running   0          76m   10.244.29.196   minikube-m02   <none>           <none>
dice-magic-app-86d4bc958-qfw2t   2/2     Running   0          76m   10.244.23.5     minikube-m03   <none>           <none>
redis-2mvbc                      1/1     Running   0          76m   10.244.23.4     minikube-m03   <none>           <none>
redis-xrh9q                      1/1     Running   0          76m   10.244.29.195   minikube-m02   <none>           <none>
redis-xtgjh                      1/1     Running   0          76m   10.244.39.8     minikube       <none>           <none>
www-c57b7f645-5vwd5              1/1     Running   0          76m   10.244.29.197   minikube-m02   <none>           <none>

Scheduled on the master (minikube):

$ kubectl logs redis-xtgjh
10:C 06 May 2020 08:47:55.461 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
10:C 06 May 2020 08:47:55.461 # Redis version=6.0.1, bits=64, commit=00000000, modified=0, pid=10, just started
10:C 06 May 2020 08:47:55.461 # Configuration loaded
10:M 06 May 2020 08:47:55.462 * No cluster configuration found, I'm 5b67e68d6d6944abce833f7d1a7310fef3cecf85
10:M 06 May 2020 08:47:55.465 * Running mode=cluster, port=6379.
10:M 06 May 2020 08:47:55.465 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
10:M 06 May 2020 08:47:55.465 # Server initialized
10:M 06 May 2020 08:47:55.466 * Ready to accept connections

Scheduled on a non-master node (m02):

$ kubectl logs redis-xrh9q
Error from server: no preferred addresses found; known addresses: []
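
A possible manual workaround while this is being fixed, sketched under the assumption that minikube ssh accepts the -n/--node flag in this version and that editing the kubelet drop-in on the affected node is acceptable (<m02-ip> is a placeholder for that node's actual address; the real fix belongs in minikube itself):

# Open a shell on the worker node
minikube ssh -n minikube-m02

# Then, inside the node: point --node-ip at the node's own address instead of the master's
sudo sed -i 's/--node-ip=192.168.39.83/--node-ip=<m02-ip>/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sudo systemctl daemon-reload && sudo systemctl restart kubelet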

from minikube.

MatayoshiMariano avatar MatayoshiMariano commented on May 17, 2024

After running minikube start --nodes 2 -p multinode-demo --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.244.0.0/16 --disk-size 3GB

When describing the nodes with kubectl describe nodes, I get the following for both nodes:

Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 06 May 2020 10:39:42 -0300   Wed, 06 May 2020 10:29:00 -0300   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 06 May 2020 10:39:42 -0300   Wed, 06 May 2020 10:29:00 -0300   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 06 May 2020 10:39:42 -0300   Wed, 06 May 2020 10:29:00 -0300   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Wed, 06 May 2020 10:39:42 -0300   Wed, 06 May 2020 10:29:00 -0300   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Take a look at runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

So when using node affinity I'm getting 0/2 nodes are available: 2 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

Am I missing something?

from minikube.

sharifelgamal avatar sharifelgamal commented on May 17, 2024

The next minikube release (1.10) will automatically apply a CNI for multi-node clusters, but for the current latest release you do need to apply a CNI manually.

from minikube.

MatayoshiMariano avatar MatayoshiMariano commented on May 17, 2024

@aasmall yeah, that was it! I forgot to install flannel:

kubectl  apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

from minikube.

sudoflex avatar sudoflex commented on May 17, 2024

@sharifelgamal how does minikube dashboard work in a multi-node environment?

Here's an overview of the cluster I'm dealing with:

kubectl get po -A -o wide -w
NAMESPACE              NAME                                        READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
kube-system            coredns-f9fd979d6-jfkm7                     1/1     Running   0          67m   172.18.0.2   t0       <none>           <none>
kube-system            etcd-t0                                     1/1     Running   0          67m   172.17.0.3   t0       <none>           <none>
kube-system            kindnet-j6b6p                               1/1     Running   0          66m   172.17.0.4   t0-m02   <none>           <none>
kube-system            kindnet-rmrzm                               1/1     Running   0          66m   172.17.0.3   t0       <none>           <none>
kube-system            kube-apiserver-t0                           1/1     Running   0          67m   172.17.0.3   t0       <none>           <none>
kube-system            kube-controller-manager-t0                  1/1     Running   0          67m   172.17.0.3   t0       <none>           <none>
kube-system            kube-proxy-8jzh7                            1/1     Running   0          67m   172.17.0.3   t0       <none>           <none>
kube-system            kube-proxy-gbm79                            1/1     Running   0          66m   172.17.0.4   t0-m02   <none>           <none>
kube-system            kube-scheduler-t0                           1/1     Running   0          67m   172.17.0.3   t0       <none>           <none>
kube-system            metrics-server-d9b576748-j97rs              1/1     Running   0          62m   172.18.0.2   t0-m02   <none>           <none>
kube-system            storage-provisioner                         1/1     Running   1          67m   172.17.0.3   t0       <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-c95fcf479-27v7x   1/1     Running   0          61m   172.18.0.4   t0-m02   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-5c448bc4bf-xqkgw       1/1     Running   0          61m   172.18.0.3   t0-m02   <none>           <none>

The following command gets stuck indefinitely:

minikube dashboard --url -p t0
πŸ€”  Verifying dashboard health ...
πŸš€  Launching proxy ...
πŸ€”  Verifying proxy health ...

from minikube.
