Comments (61)
Other use case: this would allow one to reproduce and experiment locally with failover scenarios (health-checking configuration, rescheduling onto new machines, etc.) for high-availability apps when a node fails or machine resources are exhausted. A VM can be stopped manually to test, for example.
from minikube.
Experimental multi-node support will ship in the upcoming 1.9 release, and will also be available in the next 1.9 beta.
I made a multi-node prototype in #2539, if anyone is interested in seeing one way it could be implemented, using individual VMs for each node.
@MartinKaburu yes, I'm actively working on this.
@ghostsquad : I was meaning to resume work on at least resurrecting the old functionality in #2539.
But got side-tracked with some other development, such as other runtimes and other architectures.
However, it is still on the to-do list. Running multiple VMs, and running minikube in Docker, are next up.
Hopefully we'll have an updated prototype ready in a couple of weeks, as in "November" ('19).
This feature is now available experimentally. We even have documentation:
https://minikube.sigs.k8s.io/docs/tutorials/multi_node/
The UX is pretty rough, and there are many issues to resolve, but multi-node support has now been added. We're now working off a newer, more specific issue to address the usability problems and other bugs:
We look forward to releasing v1.10 within the next 2 weeks, which will greatly improve the experience.
Thank you for being so patient! This was by far minikube's most popular request for many years.
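For anyone trying it out, the tutorial linked above boils down to something like the following sketch (the profile name is just an example; keep in mind the feature is still experimental at this point):

```shell
# Start a two-node cluster under its own profile (name is arbitrary)
minikube start --nodes 2 -p multinode-demo

# Both nodes should register with the API server
kubectl get nodes
```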
This would be very nice to be able to play with scalability across nodes in an easy way.
In case anyone is still looking for a solution: https://stackoverflow.com/a/51706547/223742
Using kubeadm would also help to align with other K8s setups, which eases debugging.
I am using Mac and I can already bring up a second Minikube with "minikube start --profile=second" (using VirtualBox). So all I am missing is a way to connect the two, so that the default minikube can also deploy to the second (virtual) node.
@ccampo I believe that spins up a second cluster, not a second node?
You can see an early demo of the feature in the KubeCon NA 2019 talk, so work on it has been resumed (although not by me), and it should be out soon.
watching this subject.
Definitely think it should be a target to use minikube for that, like minikube start --nodes=3, etc. I haven't looked at the backend, but it would fill a tremendous gap right now for developing from desktop to production in the same fashion, which will pay for itself in adoption faster than other things.
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This feature (multi-node) is going to have two different implementations. One is the straightforward approach of just starting more than one virtual machine and running the bootstrapper on each one.
The other approach, which is described in #4389, is where we don't start any virtual machine but just run all the pods and containers locally. The containers are separated by labels or similar, for the "nodes".
Both have their use cases. Current users of minikube are quite used to being able to ssh into the node, and to use various kernel modules on the node, etc. But when you are using minikube on Linux (either on the laptop, or in a virtual machine started just for development or for hosting your container runtime), having additional virtual machines running adds runtime overhead and resource requirements.
Minikube is eventually going to support all four scenarios:
- VM-less (0 VM, run all pods and containers on the host*)
- Single-node (1 VM per node, the virtual machine on host) <-- this is what we have today
- Multi-node (1 VM per node, all virtual machines on host) <-- this is being described here
- Kata containers (1 VM per pod, or even one per container)
* Some people are trying to do this today, by using the none driver to run the bootstrapper on localhost. Due to the total lack of isolation, this is not going to work for multi-node (and is not recommended for single-node either, unless you give it a dedicated virtual machine to run "locally" on, like in a CI environment or similar). At least on the Kubernetes level, it needs to give the appearance of actually having multiple nodes.
This describes the scenarios for Minikube, which is all about providing a local development experience.
However, if you want to use Kubernetes, you have several other deployment options available as well... As long as everything is using the standard distribution (kubelet) and the standard bootstrapper (kubeadm), it should provide a seamless transition and a similar experience. But that is not supported or described here.
Instead see:
@sharifelgamal is busy hacking away on this feature as we speak. Should land next month.
Does this allow you to run on multiple physical machines?
https://github.com/Mirantis/kubeadm-dind-cluster solves this case. It also solves other cases for multi node setup needed during development process, listed in https://github.com/ivan4th/kubeadm/blob/27edb59ba62124b6c2a7de3c75d866068d3ea9ca/docs/proposals/local-cluster.md
Also, it does not require any VM during the process.
There is also a demo of virtlet based on it, which shows how, in a few simple steps, you can start a multi-node setup, patch one node with an injected image for a CRI runtime daemonset, and then start an example pod on it.
All of this you can read in https://github.com/Mirantis/virtlet/blob/master/deploy/demo.sh
Note that https://github.com/kubernetes-sigs/kind targets many of the uses described here.
It's also very easy to do it with multipass and k3s https://medium.com/better-programming/local-k3s-cluster-made-easy-with-multipass-108bf6ce577c?source=userActivityShare-86e09b1d4ec0-1575217521
A drawback with multipass is that they do not support Windows Server as the OS...
@sharifelgamal do you need a hand on this?
Is this still being developed? I've been waiting and following for ages
@aasmall good catch, #8018 should fix it.
@MatayoshiMariano - I think you need to actually install a CNI. The demo page has a flannel YAML that works; personally, I went through Calico the hard way...
@sharifelgamal - That's awesome! thank you. For now I think I'll have to use a different cluster tech for multi-node development, but I can't wait until minikube is ready.
This would be awesome as a feature in minikube, but for anyone looking for something passable in the meantime, this might help.
Is the scope of a feature like this large because minikube supports 5 virtualization types? I see the priority of P3, but I'm not sure if that means it's already being worked on, or that there's enough other work to do that it's not worth trying yet.
I don't think it's large. It could be as simple as running docker-in-docker nodes as pods on the master node.
It would be nice if minikube could be updated to use kubeadm to make adding new nodes easier. Any plans for that?
This might be something to look into regarding this:
https://github.com/marun/nkube
/me also thought about the kubeadm style of adding additional nodes
@pgray I've used that setup for a long time, but it looks like they won't support K8s 1.6+ (see coreos/coreos-kubernetes#881).
So the difference is basically that both Minikube instances have their own master (API server etc). So if the second minikube could use the master of the first minikube, that would get me closer to my goal right ?
Yes, basically. You can however use kubefed (https://kubernetes.io/docs/concepts/cluster-administration/federation/) to manage multiple clusters since k8s 1.6.
OK, I will look at federation, thanks. Is there an easy way that you know of to make the second cluster or node use the API of the first cluster?
Kube fed manages independent clusters, right?
But isn't the goal here to create a single cluster with multiple VMs?
@fabiand Correct, but it seems I've derailed it a bit, apologies. :)
@ccampo I'm not very familiar with the internals of Kubernetes (or Minikube) but I know for a fact that it's possible to have multiple master nodes in a cluster setup.
You might want to look at https://github.com/kelseyhightower/kubernetes-the-hard-way if you're interested in the internals and want to get something working.
Hi there @pbitty , great job!
I built it and started the master, but when adding 1 worker it fails with:
~/go/src/k8s.io/minikube$ out/minikube node start
Starting nodes...
Starting node: node-1
Moving assets into node...
Setting up certs...
Joining node to cluster...
E0510 13:03:34.368403 3605 start.go:63] Error bootstrapping node: Error joining node to cluster: kubeadm init error running command: sudo /usr/bin/kubeadm join --token 5a0dw7.2af6rci1fuzl5ak5 192.168.99.100:8443: Process exited with status 2
Any idea how I can debug it?
Thanks!
Hi @YiannisGkoufas, you can ssh into the node with
out/minikube node ssh node-1
and then try to run the same command from the shell:
sudo /usr/bin/kubeadm join --token 5a0dw7.2af6rci1fuzl5ak5 192.168.99.100:8443
(It would be great if the log message contained the command output. I can't remember why it doesn't. I think it would have required some refactoring and the PoC was a bit of a hack with minimal refactoring done.)
Thanks! Didn't realize you could ssh into the node that way.
So I tried:
sudo /usr/bin/kubeadm join --token jcgflt.1iqcoi62819z1yw2 192.168.99.100:8443
I got:
[preflight] Running pre-flight checks.
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 17.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Some fatal errors occurred:
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
Then added the --ignore-preflight-errors parameter and executed:
sudo /usr/bin/kubeadm join --ignore-preflight-errors=all --token jcgflt.1iqcoi62819z1yw2 192.168.99.100:8443
I got:
[preflight] Running pre-flight checks.
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 17.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING FileExisting-crictl]: crictl not found in system path
discovery: Invalid value: "": using token-based discovery without DiscoveryTokenCACertHashes can be unsafe. set --discovery-token-unsafe-skip-ca-verification to continue
Then I added the suggested flag and executed:
sudo /usr/bin/kubeadm join --ignore-preflight-errors=all --token jcgflt.1iqcoi62819z1yw2 192.168.99.100:8443 --discovery-token-unsafe-skip-ca-verification
I got:
[preflight] Running pre-flight checks.
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 17.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Trying to connect to API Server "192.168.99.100:8443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.99.100:8443"
[discovery] Failed to request cluster info, will try again: [Unauthorized]
[discovery] Failed to request cluster info, will try again: [Unauthorized]
...
Can't figure out what to try next.
Thanks again!
@YiannisGkoufas out/minikube start --kubernetes-version v1.8.0 --bootstrapper kubeadm worked for me. I think I was facing the same issue as you, and it looks like by default the bootstrapper used is localkube. Basically kubeadm init was not happening on the master, hence we were not able to add worker nodes. Hope this helps! Thanks @pbitty
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Stale? 363 people have upvoted this idea...
from minikube.
@kamilgregorczyk:
The implementation (#2539) is rather stale at this point, but the idea isn't (completely).
It is being planned to return for the roadmap of 2019, see: #4 support all k8s features
kubeadm does most of the work (join) for us:
- https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/ (master)
- https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/ (minion)
However, it will not make it for the 1.0 release
It will also require some extra resources* to run.
- the master node currently wants 2 vCPU and 2G RAM, extra nodes at least 1 vCPU / 1G RAM each.
But it should be (and is already) possible to run a couple of them (like: 4 nodes ?) on a normal laptop.
I wonder if a bounty would help :)
I think it would be easier to know what the right path (bounty or not bounty) is if we decide on what the solution is. While one of my previous comments aimed at having a minikube with multiple nodes I am no longer sure that this is the optimal solution.
I could also view it this way: minikube is a good solution in its current scope, and we are looking for something else, a "multikube", with the objective of running Kubernetes on multiple nodes on non-Linux OS systems. Something that you do on Linux with kubeadm, but for the Mac and Windows platforms. Maybe it's possible to reuse part of Minikube for that, or maybe not.
FYI - this feature is part of the minikube 2019 roadmap: https://github.com/kubernetes/minikube/blob/master/docs/contributors/roadmap.md
We really want to do this. It's going to be a substantial bit of work to sort out, but if anyone wants to start, I would be very happy to help lead them in the right direction. The prototype in #2539 is definitely worth taking a look at.
Help wanted!
2019 is almost over. Any movement on this?
It's also very easy to do it with multipass and k3s https://medium.com/better-programming/local-k3s-cluster-made-easy-with-multipass-108bf6ce577c?source=userActivityShare-86e09b1d4ec0-1575217521
@andersthorsen Host or Guest OS?
@ghostsquad as the host OS. They do support Windows 10 as a host OS, though.
Hey @sharifelgamal, I'm running minikube v1.9.0 on macOS Catalina and get this error:
$ minikube node add
This control plane is not running! (state=Stopped)
This is unusual - you may want to investigate using "minikube logs"
To fix this, run: minikube start
I first installed minikube with this command:
$ minikube start --driver=docker
@yusufharip can you open up a new issue and give us a little more detail so we can debug better?
minikube start --driver=docker -v=3 --alsologtostderr and minikube logs would be helpful.
I'm interested in this feature. Will this allow us to simulate Cluster Autoscaler (https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) scenarios locally?
Non-master nodes do not get an InternalAddress:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready master 173m v1.18.0 192.168.39.83 <none> Buildroot 2019.02.10 4.19.107 docker://19.3.8
minikube-m02 Ready <none> 80m v1.18.0 <none> <none> Buildroot 2019.02.10 4.19.107 docker://19.3.8
minikube-m03 Ready <none> 80m v1.18.0 <none> <none> Buildroot 2019.02.10 4.19.107 docker://19.3.8
$ kubectl describe nodes | grep InternalIP
InternalIP: 192.168.39.83
This appears to be because we are specifying --node-ip as a kubelet argument. From the minikube master VM:
$ hostname
minikube
$ systemctl cat kubelet.service
# /usr/lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/
[Service]
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet
Restart=always
StartLimitInterval=0
# Tuned for local dev: faster than upstream default (10s), but slower than systemd default (100ms)
RestartSec=600ms
[Install]
WantedBy=multi-user.target
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=systemd --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.83 --pod-manifest-path=/etc/kubernetes/manifests
[Install]
From minikube-m02:
$ hostname
minikube-m02
$ systemctl cat kubelet.service
# /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/
[Service]
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet
Restart=always
StartLimitInterval=0
# Tuned for local dev: faster than upstream default (10s), but slower than systemd default (100ms)
RestartSec=600ms
[Install]
WantedBy=multi-user.target
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=systemd --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.83 --pod-manifest-path=/etc/kubernetes/manifests
[Install]
Note that the --node-ip arguments are the same in both cases.
This results in an inability to get logs or ssh into pods scheduled on non-master nodes:
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dice-magic-app-86d4bc958-phx6j 2/2 Running 0 76m 10.244.29.196 minikube-m02 <none> <none>
dice-magic-app-86d4bc958-qfw2t 2/2 Running 0 76m 10.244.23.5 minikube-m03 <none> <none>
redis-2mvbc 1/1 Running 0 76m 10.244.23.4 minikube-m03 <none> <none>
redis-xrh9q 1/1 Running 0 76m 10.244.29.195 minikube-m02 <none> <none>
redis-xtgjh 1/1 Running 0 76m 10.244.39.8 minikube <none> <none>
www-c57b7f645-5vwd5 1/1 Running 0 76m 10.244.29.197 minikube-m02 <none> <none>
Scheduled on master (minikube):
$ kubectl logs redis-xtgjh
10:C 06 May 2020 08:47:55.461 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
10:C 06 May 2020 08:47:55.461 # Redis version=6.0.1, bits=64, commit=00000000, modified=0, pid=10, just started
10:C 06 May 2020 08:47:55.461 # Configuration loaded
10:M 06 May 2020 08:47:55.462 * No cluster configuration found, I'm 5b67e68d6d6944abce833f7d1a7310fef3cecf85
10:M 06 May 2020 08:47:55.465 * Running mode=cluster, port=6379.
10:M 06 May 2020 08:47:55.465 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
10:M 06 May 2020 08:47:55.465 # Server initialized
10:M 06 May 2020 08:47:55.466 * Ready to accept connections
Scheduled on non-master (m02):
$ kubectl logs redis-xrh9q
Error from server: no preferred addresses found; known addresses: []
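A quick way to see the symptom is to list each node's InternalIP directly (this jsonpath is just one way to express it; once the bug is fixed, each node should report a distinct address):

```shell
# Print "<node-name><TAB><internal-ip>" for every node in the cluster
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'
```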
After running minikube start --nodes 2 -p multinode-demo --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.244.0.0/16 --disk-size 3GB, describing the nodes (kubectl describe nodes) shows the following for both nodes:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 06 May 2020 10:39:42 -0300 Wed, 06 May 2020 10:29:00 -0300 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 06 May 2020 10:39:42 -0300 Wed, 06 May 2020 10:29:00 -0300 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 06 May 2020 10:39:42 -0300 Wed, 06 May 2020 10:29:00 -0300 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Wed, 06 May 2020 10:39:42 -0300 Wed, 06 May 2020 10:29:00 -0300 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Take a look at: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
So when using node affinity, I'm getting 0/2 nodes are available: 2 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
Am I missing something?
The next minikube release (1.10) will automatically apply a CNI for multinode clusters, but for the current latest, you do need to manually apply CNI.
@aasmall yeah, that was it! Forgot to install flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
@sharifelgamal how does minikube dashboard work in a multi-node environment?
Here's an overview of the cluster I'm dealing with:
kubectl get po -A -o wide -w
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-f9fd979d6-jfkm7 1/1 Running 0 67m 172.18.0.2 t0 <none> <none>
kube-system etcd-t0 1/1 Running 0 67m 172.17.0.3 t0 <none> <none>
kube-system kindnet-j6b6p 1/1 Running 0 66m 172.17.0.4 t0-m02 <none> <none>
kube-system kindnet-rmrzm 1/1 Running 0 66m 172.17.0.3 t0 <none> <none>
kube-system kube-apiserver-t0 1/1 Running 0 67m 172.17.0.3 t0 <none> <none>
kube-system kube-controller-manager-t0 1/1 Running 0 67m 172.17.0.3 t0 <none> <none>
kube-system kube-proxy-8jzh7 1/1 Running 0 67m 172.17.0.3 t0 <none> <none>
kube-system kube-proxy-gbm79 1/1 Running 0 66m 172.17.0.4 t0-m02 <none> <none>
kube-system kube-scheduler-t0 1/1 Running 0 67m 172.17.0.3 t0 <none> <none>
kube-system metrics-server-d9b576748-j97rs 1/1 Running 0 62m 172.18.0.2 t0-m02 <none> <none>
kube-system storage-provisioner 1/1 Running 1 67m 172.17.0.3 t0 <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-c95fcf479-27v7x 1/1 Running 0 61m 172.18.0.4 t0-m02 <none> <none>
kubernetes-dashboard kubernetes-dashboard-5c448bc4bf-xqkgw 1/1 Running 0 61m 172.18.0.3 t0-m02 <none> <none>
The following command gets stuck indefinitely:
minikube dashboard --url -p t0
Verifying dashboard health ...
Launching proxy ...
Verifying proxy health ...
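As a possible workaround while that hangs, the dashboard service can usually be reached through kubectl proxy directly. The URL below is the standard dashboard service-proxy path; adjust the namespace and service name if yours differ:

```shell
# Serve the API server (and its service proxies) on localhost:8001
kubectl proxy &

# Then open the dashboard through the API server's service proxy:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
```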