
Comments (14)

gbrayut commented on August 16, 2024

I was curious myself how that works, too. It looks like it creates a docker0 interface:

root@k8s:~# ip -o -4 a s
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: docker0    inet 172.17.0.1/16 scope global docker0\       valid_lft forever preferred_lft forever
21: eth0    inet 192.168.4.81/24 brd 192.168.4.255 scope global eth0\       valid_lft forever preferred_lft forever

And while checking the status of the various services using systemctl status snap\* I see snap.microk8s.daemon-apiserver.service has a --service-cluster-ip-range=10.152.183.0/24 parameter.
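That flag explains where the mystery subnet comes from: every service ClusterIP is allocated out of the configured range, but the addresses are never assigned to any interface. A minimal Python sketch (illustrative only, not microk8s code) checking the service IPs seen in this thread against that range:

```python
import ipaddress

# The apiserver flag --service-cluster-ip-range=10.152.183.0/24 means every
# service ClusterIP is allocated from that block. These are virtual IPs that
# never appear on an interface, which is why `ip -o -4 a s` shows no trace of them.
service_range = ipaddress.ip_network("10.152.183.0/24")

for svc_ip in ["10.152.183.1", "10.152.183.10", "10.152.183.203"]:
    assert ipaddress.ip_address(svc_ip) in service_range
print("all ClusterIPs fall inside", service_range)
```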

I'm attempting to run this in a privileged LXD container, but it looks like there are still some errors in the logs:

journalctl -u snap.microk8s.daemon-kubelet
journalctl -u snap.microk8s.daemon-apiserver
journalctl -u snap.microk8s.daemon-docker
journalctl -u snap.microk8s.daemon-proxy 

I also saw a reference to a cbr0 bridge, but I don't see that. So maybe I'm having some issues due to it being in LXD.

More details from kubectl get all --all-namespaces definitely show my system isn't working as expected. I also used the docker client to see if it was a docker network, but I don't see that subnet there either:

docker -H unix:///var/snap/microk8s/104/docker.sock network 
docker -H unix:///var/snap/microk8s/104/docker.sock network inspect bridge host none

from microk8s.

ktsakalozos commented on August 16, 2024

Hi @dustinkirkland ,

K8s does some fancy networking configuration that I honestly do not fully understand either! The 10.152.183.X addresses are virtual IPs used in DNAT rules. You can start unraveling the routing paths with sudo iptables -t nat -L -n -v. Here is what I have over here:

sudo iptables -t nat -L -n -v
[sudo] password for jackal: 
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
 1794  545K KUBE-PORTALS-CONTAINER  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* handle ClusterIPs; NOTE: this must be before the NodePort rules */
    0     0 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL
    0     0 KUBE-NODEPORT-CONTAINER  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL /* handle service NodePorts; NOTE: this must be the last rule in the chain */

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
19013 1258K KUBE-PORTALS-HOST  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* handle ClusterIPs; NOTE: this must be before the NodePort rules */
    0     0 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL
17089 1040K KUBE-NODEPORT-HOST  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL /* handle service NodePorts; NOTE: this must be the last rule in the chain */

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   16  1573 RETURN     all  --  *      *       10.17.17.0/24        224.0.0.0/24        
    0     0 RETURN     all  --  *      *       10.17.17.0/24        255.255.255.255     
    0     0 MASQUERADE  tcp  --  *      *       10.17.17.0/24       !10.17.17.0/24        masq ports: 1024-65535
    0     0 MASQUERADE  udp  --  *      *       10.17.17.0/24       !10.17.17.0/24        masq ports: 1024-65535
    0     0 MASQUERADE  all  --  *      *       10.17.17.0/24       !10.17.17.0/24       
   16  1627 RETURN     all  --  *      *       10.18.18.0/24        224.0.0.0/24        
    0     0 RETURN     all  --  *      *       10.18.18.0/24        255.255.255.255     
    0     0 MASQUERADE  tcp  --  *      *       10.18.18.0/24       !10.18.18.0/24        masq ports: 1024-65535
    0     0 MASQUERADE  udp  --  *      *       10.18.18.0/24       !10.18.18.0/24        masq ports: 1024-65535
    0     0 MASQUERADE  all  --  *      *       10.18.18.0/24       !10.18.18.0/24       
   94 17149 MASQUERADE  all  --  *      *       10.26.36.0/24       !10.26.36.0/24        /* generated for LXD network lxdbr0 */
   16  1573 RETURN     all  --  *      *       192.168.122.0/24     224.0.0.0/24        
    0     0 RETURN     all  --  *      *       192.168.122.0/24     255.255.255.255     
    0     0 MASQUERADE  tcp  --  *      *       192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
    0     0 MASQUERADE  udp  --  *      *       192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
    0     0 MASQUERADE  all  --  *      *       192.168.122.0/24    !192.168.122.0/24    
    0     0 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0           
   15  1533 MASQUERADE  all  --  *      *       10.0.3.0/24         !10.0.3.0/24         

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0           

Chain KUBE-NODEPORT-CONTAINER (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain KUBE-NODEPORT-HOST (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain KUBE-PORTALS-CONTAINER (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            10.152.183.1         /* default/kubernetes:https */ tcp dpt:443 redir ports 41421
    0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            10.152.183.87        /* kube-system/heapster: */ tcp dpt:80 redir ports 42711
    0     0 REDIRECT   udp  --  *      *       0.0.0.0/0            10.152.183.10        /* kube-system/kube-dns:dns */ udp dpt:53 redir ports 40349
    0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            10.152.183.10        /* kube-system/kube-dns:dns-tcp */ tcp dpt:53 redir ports 44303
    0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            10.152.183.8         /* kube-system/kubernetes-dashboard: */ tcp dpt:443 redir ports 39637
    0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            10.152.183.81        /* kube-system/monitoring-grafana: */ tcp dpt:80 redir ports 41243
    0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            10.152.183.64        /* kube-system/monitoring-influxdb:http */ tcp dpt:8083 redir ports 35037
    0     0 REDIRECT   tcp  --  *      *       0.0.0.0/0            10.152.183.64        /* kube-system/monitoring-influxdb:api */ tcp dpt:8086 redir ports 38495

Chain KUBE-PORTALS-HOST (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            10.152.183.1         /* default/kubernetes:https */ tcp dpt:443 to:192.168.1.23:41421
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            10.152.183.87        /* kube-system/heapster: */ tcp dpt:80 to:192.168.1.23:42711
    0     0 DNAT       udp  --  *      *       0.0.0.0/0            10.152.183.10        /* kube-system/kube-dns:dns */ udp dpt:53 to:192.168.1.23:40349
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            10.152.183.10        /* kube-system/kube-dns:dns-tcp */ tcp dpt:53 to:192.168.1.23:44303
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            10.152.183.8         /* kube-system/kubernetes-dashboard: */ tcp dpt:443 to:192.168.1.23:39637
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            10.152.183.81        /* kube-system/monitoring-grafana: */ tcp dpt:80 to:192.168.1.23:41243
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            10.152.183.64        /* kube-system/monitoring-influxdb:http */ tcp dpt:8083 to:192.168.1.23:35037
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            10.152.183.64        /* kube-system/monitoring-influxdb:api */ tcp dpt:8086 to:192.168.1.23:38495

You can see here the docker0 interface rule @gbrayut mentioned.

Also note that kube-proxy is configured in userspace mode (https://stackoverflow.com/questions/36088224/what-does-userspace-mode-means-in-kube-proxys-proxy-mode).
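The userspace-mode flow in the KUBE-PORTALS-HOST rules above can be sketched as a lookup table: each (ClusterIP, port) pair is DNATed to a per-service port that kube-proxy itself listens on, using the port numbers from the dump. An illustrative model, not kube-proxy source:

```python
# Illustrative model of the KUBE-PORTALS-HOST DNAT rules above: in userspace
# mode, traffic to ClusterIP:port is rewritten to a local port that kube-proxy
# listens on, and the proxy then forwards it to a real pod.
portals = {
    ("10.152.183.1", 443): ("192.168.1.23", 41421),  # default/kubernetes:https
    ("10.152.183.10", 53): ("192.168.1.23", 40349),  # kube-system/kube-dns:dns
    ("10.152.183.8", 443): ("192.168.1.23", 39637),  # kubernetes-dashboard
}

def dnat(dst_ip, dst_port):
    # Like the DNAT target: rewrite the destination if a rule matches,
    # otherwise leave the packet untouched.
    return portals.get((dst_ip, dst_port), (dst_ip, dst_port))

print(dnat("10.152.183.1", 443))  # rewritten to the kube-proxy host port
print(dnat("8.8.8.8", 53))        # no matching rule; destination unchanged
```

This is also why the ClusterIPs work without any route: the rewrite happens in the nat table before normal routing ever sees the 10.152.183.x destination.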

I do not yet know anything about the airplane-mode issue. We will look into this.


dustinkirkland commented on August 16, 2024

Thanks, that's helpful.

So I'm struggling with microk8s on my laptop a bit. Whenever I go in and out of suspend, or I move across different wireless networks, the k8s networking stack is screwed up. I often have to reboot.

It would be nice if there were a microk8s.reset-network command that would "do the right thing" without me having to reboot (or, in some cases, purge the snap and start over).


ktsakalozos commented on August 16, 2024

@dustinkirkland could you try:
sudo snap disable microk8s followed by sudo snap enable microk8s?


safderali5 commented on August 16, 2024

When I run microk8s.enable dns, the installed kube-dns pod crash-loops. I tried adding a rule to ufw, but ufw is not installed on the system. I don't know how to go forward from here.

k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Get https://10.152.183.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.152.183.1:443: getsockopt: no route to host

10.152.183.1 is the IP address of service/kubernetes. If I run curl -v --insecure https://10.152.183.1:443 from my host machine, it responds, but there is no route defined for this subnet in the output of ip route show. It is really confusing.


ktsakalozos commented on August 16, 2024

Hi @safderali5,

Do you see anything interesting in the output of microk8s.kubectl logs pod/kube-dns-<something> -n kube-system or in the microk8s.kubectl describe of the pod? Are you on --beta or --edge? Did you try sudo snap disable microk8s and then sudo snap enable microk8s?

There is not much in your error report I can work with. Is this reproducible (at least on your machine)? How?

Thank you


safderali5 commented on August 16, 2024

@ktsakalozos Here are the logs for kube-dns. It looks like, for some reason, the kube-dns pod's liveness and readiness probes are failing.
I have already tried your suggestions; they did not work for me.

$ microk8s.kubectl logs -n kube-system pod/kube-dns-864b8bdc77-4q6dx kubedns

I0823 07:16:41.942559 1 dns.go:48] version: 1.14.6-3-gc36cb11
I0823 07:16:41.943457 1 server.go:69] Using configuration read from directory: /kube-dns-config with period 10s
I0823 07:16:41.943498 1 server.go:112] FLAG: --alsologtostderr="false"
I0823 07:16:41.943506 1 server.go:112] FLAG: --config-dir="/kube-dns-config"
I0823 07:16:41.943512 1 server.go:112] FLAG: --config-map=""
I0823 07:16:41.943517 1 server.go:112] FLAG: --config-map-namespace="kube-system"
I0823 07:16:41.943520 1 server.go:112] FLAG: --config-period="10s"
I0823 07:16:41.943526 1 server.go:112] FLAG: --dns-bind-address="0.0.0.0"
I0823 07:16:41.943529 1 server.go:112] FLAG: --dns-port="10053"
I0823 07:16:41.943536 1 server.go:112] FLAG: --domain="cluster.local."
I0823 07:16:41.943544 1 server.go:112] FLAG: --federations=""
I0823 07:16:41.943551 1 server.go:112] FLAG: --healthz-port="8081"
I0823 07:16:41.943555 1 server.go:112] FLAG: --initial-sync-timeout="1m0s"
I0823 07:16:41.943558 1 server.go:112] FLAG: --kube-master-url=""
I0823 07:16:41.943571 1 server.go:112] FLAG: --kubecfg-file=""
I0823 07:16:41.943574 1 server.go:112] FLAG: --log-backtrace-at=":0"
I0823 07:16:41.943581 1 server.go:112] FLAG: --log-dir=""
I0823 07:16:41.943585 1 server.go:112] FLAG: --log-flush-frequency="5s"
I0823 07:16:41.943588 1 server.go:112] FLAG: --logtostderr="true"
I0823 07:16:41.943591 1 server.go:112] FLAG: --nameservers=""
I0823 07:16:41.943596 1 server.go:112] FLAG: --stderrthreshold="2"
I0823 07:16:41.943600 1 server.go:112] FLAG: --v="2"
I0823 07:16:41.943603 1 server.go:112] FLAG: --version="false"
I0823 07:16:41.943609 1 server.go:112] FLAG: --vmodule=""
I0823 07:16:41.943656 1 server.go:194] Starting SkyDNS server (0.0.0.0:10053)
I0823 07:16:41.943917 1 server.go:213] Skydns metrics enabled (/metrics:10055)
I0823 07:16:41.943929 1 dns.go:146] Starting endpointsController
I0823 07:16:41.943934 1 dns.go:149] Starting serviceController
I0823 07:16:41.944043 1 sync.go:177] Updated upstreamNameservers to [8.8.8.8 8.8.4.4]
I0823 07:16:41.944127 1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0823 07:16:41.944150 1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
E0823 07:16:41.944737 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Get https://10.152.183.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.152.183.1:443: connect: network is unreachable
E0823 07:16:41.945143 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Get https://10.152.183.1:443/api/v1/services?resourceVersion=0: dial tcp 10.152.183.1:443: connect: network is unreachable
I0823 07:16:42.444395 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0823 07:16:42.944394 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
E0823 07:16:42.945768 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Get https://10.152.183.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.152.183.1:443: connect: network is unreachable
E0823 07:16:42.947065 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Get https://10.152.183.1:443/api/v1/services?resourceVersion=0: dial tcp 10.152.183.1:443: connect: network is unreachable
I0823 07:16:43.444385 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0823 07:16:43.944703 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...

$ microk8s.kubectl describe -n kube-system pod/kube-dns-864b8bdc77-4q6dx
Events:
  Type     Reason     Age              From      Message
  ----     ------     ----             ----      -------
  Warning  Unhealthy  1m               kubelet,  Liveness probe failed: Get http://10.1.1.56:10054/healthcheck/dnsmasq: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  1m               kubelet,  Liveness probe failed: Get http://10.1.1.56:10054/metrics: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  1m (x8 over 2m)  kubelet,  Readiness probe failed: Get http://10.1.1.56:8081/readiness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)


ktsakalozos commented on August 16, 2024

Could you also share the output of sudo iptables -t nat -L -n -v, sudo journalctl -u snap.microk8s.daemon-kubelet.service, sudo journalctl -u snap.microk8s.daemon-apiserver.service, and sudo journalctl -u snap.microk8s.daemon-proxy.service? Did you also enable traffic forwarding with sudo iptables -P FORWARD ACCEPT?

I see that the dns pod cannot reach the API server to query for pods and services. Something in your system's network configuration must be blocking the traffic from the pods to the services. Pods and services are on separate subnets. I hope to find a hint in the logs of the kubelet, apiserver, and kube-proxy.
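A way to see why the two separate ranges matter (the 10.1.0.0/16 pod CIDR below is an assumption inferred from the pod IP 10.1.1.56 in the probe logs above):

```python
import ipaddress

# Assumed ranges: the pod IP 10.1.1.56 in the probe failures suggests a pod
# CIDR of 10.1.0.0/16, while the services live in 10.152.183.0/24.
pod_net = ipaddress.ip_network("10.1.0.0/16")
svc_net = ipaddress.ip_network("10.152.183.0/24")

# The two ranges are disjoint, so pod-to-service traffic always has to be
# DNATed by kube-proxy's rules; if those rules are missing, or the FORWARD
# chain drops the packets, services become unreachable even when the pods
# themselves still answer directly.
print("subnets overlap:", pod_net.overlaps(svc_net))
print(ipaddress.ip_address("10.1.1.56") in pod_net,
      ipaddress.ip_address("10.152.183.1") in svc_net)
```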


safderali5 commented on August 16, 2024

@ktsakalozos Thanks for the explanation; now I understand what the real problem is. The iptables rules are missing: somehow they were not created upon installation, e.g. KUBE-SERVICES, KUBE-FORWARD, and other rules. I cannot mess with the iptables rules on this system, so I have decided to move back to minikube. Thanks a lot for the help.
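For anyone hitting the same symptom, one quick check is to compare the chain names in the iptables dump against the set kube-proxy normally installs. A small sketch; the dump text below is a hypothetical stand-in for the real output of sudo iptables -t nat -L -n -v:

```python
# Sketch: detect which kube-proxy chains are missing from an iptables dump.
# `dump` is a stand-in for the output of `sudo iptables -t nat -L -n -v`.
expected = {"KUBE-SERVICES", "KUBE-POSTROUTING", "KUBE-MARK-MASQ", "KUBE-NODEPORTS"}

dump = """\
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
Chain DOCKER (2 references)
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
"""

# Every chain header starts with "Chain <name>"; collect the names.
present = {line.split()[1] for line in dump.splitlines() if line.startswith("Chain ")}
print("missing chains:", sorted(expected - present))
```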


tennox commented on August 16, 2024

I think I am experiencing the same problem. I cannot access any of the Cluster-IPs.
Using microk8s ('beta' snap 149) on Ubuntu 16.04.5 LTS.

sudo iptables -t nat -L -n -v: http://termbin.com/wj99
sudo journalctl -u snap.microk8s.daemon-kubelet.service: http://termbin.com/qj8y
sudo journalctl -u snap.microk8s.daemon-apiserver.service: http://termbin.com/an8v
sudo journalctl -u snap.microk8s.daemon-proxy.service: http://termbin.com/duh8
sudo iptables -P FORWARD ACCEPT - no output

But wait...

I just read that there is a channel called --edge, and I tried installing that instead - and this seems to work now.

I will still post this comment as it might still help in some way.


akaihola commented on August 16, 2024

In addition to sudo iptables -P FORWARD ACCEPT, I needed to do this on Fedora 29:

sudo iptables -D FORWARD 20

since the 20th rule in the FORWARD chain was:

16 1264 REJECT all -- any any anywhere anywhere reject-with icmp-host-prohibited

See also #316.
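Rather than hard-coding rule 20, the rule number can be read from a listing with --line-numbers. A sketch over sample text (hypothetical output standing in for sudo iptables -L FORWARD -n --line-numbers, which needs root):

```python
# Sketch: find the REJECT rule numbers instead of hard-coding one. `sample`
# stands in for `sudo iptables -L FORWARD -n --line-numbers` output.
sample = """\
1    ACCEPT     all  --  0.0.0.0/0   0.0.0.0/0
2    DOCKER     all  --  0.0.0.0/0   0.0.0.0/0
3    REJECT     all  --  0.0.0.0/0   0.0.0.0/0   reject-with icmp-host-prohibited
"""

reject_rules = [line.split()[0] for line in sample.splitlines()
                if line.split()[1] == "REJECT"]
print("REJECT rule numbers to delete:", reject_rules)
```

On Fedora that final icmp-host-prohibited REJECT rule is what drops the pod-to-service traffic, which is why deleting it helped here.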


stale commented on August 16, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.


urbandroid commented on August 16, 2024

I have a similar problem; if anyone finds the root cause, it would be great if they shared it with me.

My problem is that whenever I disconnect wifi, a service that is reachable while connected becomes unreachable.

Both of them:

default service/kubernetes ClusterIP 10.152.183.1 443/TCP 7d5h
default service/nginx-deployment ClusterIP 10.152.183.203 80/TCP 6d20h

But kubectl get all --all-namespaces works at startup if wifi is connected; unless wifi has connected at least once per boot, kubectl can't reach the API server either.

$ ifconfig
cni0      Link encap:Ethernet  <omitted>  
          inet addr:10.1.52.1  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: <omitted> Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:63 errors:0 dropped:0 overruns:0 frame:0
          TX packets:209 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:5584 (5.5 KB)  TX bytes:26445 (26.4 KB)

docker0   Link encap:Ethernet <omitted>  
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

enp1s0f0  Link encap:Ethernet  <omitted>  
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
          Interrupt:18 

flannel.1 Link encap:Ethernet  <omitted>  
          inet addr:10.1.52.0  Bcast:0.0.0.0  Mask:255.255.255.255
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:142 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:172611 errors:0 dropped:0 overruns:0 frame:0
          TX packets:172611 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:36190659 (36.1 MB)  TX bytes:36190659 (36.1 MB)

veth0a8ee294 Link encap:Ethernet <omitted> 
          inet6 addr: <omitted> Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:28 errors:0 dropped:0 overruns:0 frame:0
          TX packets:225 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:3464 (3.4 KB)  TX bytes:25737 (25.7 KB)

veth640d969f Link encap:Ethernet  HWaddr <omitted>  
          inet6 addr: <omitted> Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:35 errors:0 dropped:0 overruns:0 frame:0
          TX packets:238 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:3002 (3.0 KB)  TX bytes:27960 (27.9 KB)

wlp2s0    Link encap:Ethernet  HWaddr <omitted>  
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:28155 errors:0 dropped:0 overruns:0 frame:0
          TX packets:20974 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:35503049 (35.5 MB)  TX bytes:3251973 (3.2 MB)


$ sudo iptables -t nat -L -n -v
Chain PREROUTING (policy ACCEPT 160 packets, 20392 bytes)
 pkts bytes target     prot opt in     out     source               destination         
  166 21824 KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
   40 12880 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 9 packets, 673 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 902 packets, 101K bytes)
 pkts bytes target     prot opt in     out     source               destination         
 1110  129K KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
    2   120 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 902 packets, 101K bytes)
 pkts bytes target     prot opt in     out     source               destination         
 2041  208K KUBE-POSTROUTING  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes postrouting rules */
    0     0 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0           
   54  8216 CNI-dafc8e2786623a2794dd6ad0  all  --  *      *       10.1.52.0/24         0.0.0.0/0            /* name: "microk8s-flannel-network" id: "b5d9543c2cf4ac175c0016732db718cb5f1243d936d2fa70e146b17a526e57ab" */
   54  8216 CNI-3af87687ae1907dc46cc67a4  all  --  *      *       10.1.52.0/24         0.0.0.0/0            /* name: "microk8s-flannel-network" id: "5e243a33254e74674a2f897e3203648bc50cb674c692f697d771930d26e6f957" */

Chain CNI-3af87687ae1907dc46cc67a4 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            10.1.52.0/24         /* name: "microk8s-flannel-network" id: "5e243a33254e74674a2f897e3203648bc50cb674c692f697d771930d26e6f957" */
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0           !224.0.0.0/4          /* name: "microk8s-flannel-network" id: "5e243a33254e74674a2f897e3203648bc50cb674c692f697d771930d26e6f957" */

Chain CNI-dafc8e2786623a2794dd6ad0 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            10.1.52.0/24         /* name: "microk8s-flannel-network" id: "b5d9543c2cf4ac175c0016732db718cb5f1243d936d2fa70e146b17a526e57ab" */
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0           !224.0.0.0/4          /* name: "microk8s-flannel-network" id: "b5d9543c2cf4ac175c0016732db718cb5f1243d936d2fa70e146b17a526e57ab" */

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0           

Chain KUBE-KUBELET-CANARY (0 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain KUBE-MARK-DROP (0 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            MARK or 0x8000

Chain KUBE-MARK-MASQ (5 references)
 pkts bytes target     prot opt in     out     source               destination         
   13   780 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            MARK or 0x4000

Chain KUBE-NODEPORTS (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain KUBE-POSTROUTING (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    8   480 MASQUERADE  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000

Chain KUBE-PROXY-CANARY (0 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain KUBE-SEP-LPTNYREHBLYCILON (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.1.52.19           0.0.0.0/0            /* default/nginx-deployment: */
    1    60 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/nginx-deployment: */ tcp to:10.1.52.19:80

Chain KUBE-SEP-UONMTBFS5QPKYLXU (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    5   300 KUBE-MARK-MASQ  all  --  *      *       192.168.1.8          0.0.0.0/0            /* default/kubernetes:https */
    5   300 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ tcp to:192.168.1.8:16443

Chain KUBE-SEP-XP3RMNYLPIAUZ3ZQ (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.1.52.18           0.0.0.0/0            /* default/nginx-deployment: */
    2   120 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/nginx-deployment: */ tcp to:10.1.52.18:80

Chain KUBE-SERVICES (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    5   300 KUBE-MARK-MASQ  tcp  --  *      *      !10.152.183.0/24      10.152.183.1         /* default/kubernetes:https cluster IP */ tcp dpt:443
    5   300 KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  *      *       0.0.0.0/0            10.152.183.1         /* default/kubernetes:https cluster IP */ tcp dpt:443
    3   180 KUBE-MARK-MASQ  tcp  --  *      *      !10.152.183.0/24      10.152.183.203       /* default/nginx-deployment: cluster IP */ tcp dpt:80
    3   180 KUBE-SVC-ECF5TUORC5E2ZCRD  tcp  --  *      *       0.0.0.0/0            10.152.183.203       /* default/nginx-deployment: cluster IP */ tcp dpt:80
  573 44210 KUBE-NODEPORTS  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL

Chain KUBE-SVC-ECF5TUORC5E2ZCRD (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    2   120 KUBE-SEP-XP3RMNYLPIAUZ3ZQ  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/nginx-deployment: */ statistic mode random probability 0.50000000000
    1    60 KUBE-SEP-LPTNYREHBLYCILON  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/nginx-deployment: */

Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    5   300 KUBE-SEP-UONMTBFS5QPKYLXU  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */

Even when wifi is disconnected, the pods of the nginx deployment (http://10.1.52.19/ and http://10.1.52.18/) are reachable, but the service at http://10.152.183.203/ is not.
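For reference, the KUBE-SVC-ECF5TUORC5E2ZCRD chain above load-balances across the two nginx pods with a statistic-mode random match: the first rule fires with probability 0.5 and the second catches the rest. A Python sketch of that selection logic (illustrative, not kube-proxy code):

```python
import random

# Illustrative model of the statistic-mode random match in the KUBE-SVC chain:
# with n endpoints, rule i fires with probability 1/(n-i), which spreads
# connections uniformly across all endpoints.
endpoints = ["10.1.52.18:80", "10.1.52.19:80"]  # the two nginx pods above

def pick_endpoint():
    n = len(endpoints)
    for i, ep in enumerate(endpoints):
        if random.random() < 1.0 / (n - i):
            return ep
    return endpoints[-1]

counts = {ep: 0 for ep in endpoints}
for _ in range(10_000):
    counts[pick_endpoint()] += 1
print(counts)  # roughly a 50/50 split between the two pods
```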


stale commented on August 16, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

