Comments (4)
Can you show the output of kubectl get service -n adguard adguard-remote-udp -o yaml and kubectl get node -o yaml? The IPv6 sysctl will be added automatically if the backing service has IPv6 enabled; however, it looks like your service only has IPv4 enabled, so it should not be necessary:
k3s/pkg/cloudprovider/servicelb.go, lines 451 to 452 in 06b6444
It looks like you've set externalTrafficPolicy: Local on this service:
k3s/pkg/cloudprovider/servicelb.go, lines 565 to 572 in 06b6444
I suspect this might cause the klipper-lb script to try to send traffic to the node's IPv6 address, even though the service itself only supports IPv4. It shouldn't do that.
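As a rough illustration of that suspicion, here is a minimal Go sketch (not the actual servicelb.go code; sysctlsForService and destIPsForLocalPolicy are invented names) of the two behaviors: the forwarding sysctls are keyed off the Service's ipFamilies, while the Local traffic policy builds its destination list from every node InternalIP, IPv6 included:

// sketch.go: hypothetical illustration only, not the actual k3s source.
package main

import (
	"fmt"

	core "k8s.io/api/core/v1"
)

// sysctlsForService mirrors the family check referenced above (lines 451
// to 452): net.ipv6.conf.all.forwarding is only requested when the
// Service itself has an IPv6 family.
func sysctlsForService(svc *core.Service) []core.Sysctl {
	var sysctls []core.Sysctl
	for _, family := range svc.Spec.IPFamilies {
		switch family {
		case core.IPv4Protocol:
			sysctls = append(sysctls, core.Sysctl{Name: "net.ipv4.ip_forward", Value: "1"})
		case core.IPv6Protocol:
			sysctls = append(sysctls, core.Sysctl{Name: "net.ipv6.conf.all.forwarding", Value: "1"})
		}
	}
	return sysctls
}

// destIPsForLocalPolicy models the suspected problem with
// externalTrafficPolicy: Local (lines 565 to 572): the destination list
// is built from the node's InternalIP addresses without filtering on the
// Service's IP families, so a dual-stack node hands an IPv6 destination
// to a pod whose sysctls were only set up for IPv4.
func destIPsForLocalPolicy(node *core.Node) []string {
	var ips []string
	for _, addr := range node.Status.Addresses {
		if addr.Type == core.NodeInternalIP {
			ips = append(ips, addr.Address)
		}
	}
	return ips
}

func main() {
	svc := &core.Service{Spec: core.ServiceSpec{IPFamilies: []core.IPFamily{core.IPv4Protocol}}}
	node := &core.Node{Status: core.NodeStatus{Addresses: []core.NodeAddress{
		{Type: core.NodeInternalIP, Address: "10.0.0.5"},
		{Type: core.NodeInternalIP, Address: "fd00::5"},
	}}}
	fmt.Println(sysctlsForService(svc))      // only net.ipv4.ip_forward
	fmt.Println(destIPsForLocalPolicy(node)) // [10.0.0.5 fd00::5] - the IPv6 address leaks in
}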
from k3s.
Here is the output for kubectl get service -n adguard adguard-remote-udp -o yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"adguard-remote-udp","namespace":"adguard"},"spec":{"externalTrafficPolicy":"Local","ports":[{"name":"dns-udp","port":53,"protocol":"UDP","targetPort":53},{"name":"dns-crypt-udp","port":5443,"protocol":"UDP","targetPort":5443}],"selector":{"app.kubernetes.io/instance":"adguard-home"},"type":"LoadBalancer"}}
  creationTimestamp: "2024-04-13T14:55:00Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  name: adguard-remote-udp
  namespace: adguard
  resourceVersion: "24320629"
  uid: 0c5cf893-71db-4679-a5b3-1b8329dad8fc
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.43.161.206
  clusterIPs:
  - 10.43.161.206
  externalTrafficPolicy: Local
  healthCheckNodePort: 31464
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: dns-udp
    nodePort: 31693
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-crypt-udp
    nodePort: 30283
    port: 5443
    protocol: UDP
    targetPort: 5443
  selector:
    app.kubernetes.io/instance: adguard-home
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: <my external ip>
The output for kubectl get node -o yaml:
apiVersion: v1
items:
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      alpha.kubernetes.io/provided-node-ip: <my ipv4>,<my ipv6>
      flannel.alpha.coreos.com/backend-data: '{"VNI":1,"VtepMAC":"<>"}'
      flannel.alpha.coreos.com/backend-type: vxlan
      flannel.alpha.coreos.com/kube-subnet-manager: "true"
      flannel.alpha.coreos.com/public-ip: <ipv4>
      k3s.io/hostname: astra
      k3s.io/internal-ip: <my ipv4>,<my ipv6>
      k3s.io/node-args: '["server","--disable","traefik"]'
      k3s.io/node-config-hash: QZWAV47A5VAKFX3MULWXBH3UEML5TZNT3W63QB5RUOQTNEUM22XA====
      k3s.io/node-env: '{"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/3fcd4fcf3ae2ba4d577d4ee08ad7092538cd7a7f0da701efa2a8807d44a25f66"}'
      node.alpha.kubernetes.io/ttl: "0"
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: "2023-07-06T11:24:57Z"
    finalizers:
    - wrangler.cattle.io/node
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/instance-type: k3s
      beta.kubernetes.io/os: linux
      kubernetes.io/arch: amd64
      kubernetes.io/hostname: astra
      kubernetes.io/os: linux
      node-role.kubernetes.io/control-plane: "true"
      node-role.kubernetes.io/master: "true"
      node.kubernetes.io/instance-type: k3s
      plan.upgrade.cattle.io/server-plan: fbb298c23beabb26f6297e41bcd3117ffbafac84d9ef95df775e3b14
    name: astra
    resourceVersion: "24502646"
    uid: 17c1b88f-742e-4316-941b-9e87430b90d6
  spec:
    podCIDR: 10.42.0.0/24
    podCIDRs:
    - 10.42.0.0/24
    providerID: k3s://astra
  status:
    addresses:
    - address: <my ipv4>
      type: InternalIP
    - address: <my ipv6>
      type: InternalIP
    - address: astra
      type: Hostname
    allocatable:
      cpu: "8"
      ephemeral-storage: "1959802746163"
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      memory: 65733840Ki
      pods: "110"
    capacity:
      cpu: "8"
      ephemeral-storage: 2014599864Ki
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      memory: 65733840Ki
      pods: "110"
    conditions:
    - lastHeartbeatTime: "2024-04-16T14:39:24Z"
      lastTransitionTime: "2023-07-06T11:24:57Z"
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: "2024-04-16T14:39:24Z"
      lastTransitionTime: "2023-07-06T11:24:57Z"
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status: "False"
      type: DiskPressure
    - lastHeartbeatTime: "2024-04-16T14:39:24Z"
      lastTransitionTime: "2023-07-06T11:24:57Z"
      message: kubelet has sufficient PID available
      reason: KubeletHasSufficientPID
      status: "False"
      type: PIDPressure
    - lastHeartbeatTime: "2024-04-16T14:39:24Z"
      lastTransitionTime: "2024-04-13T15:31:30Z"
      message: kubelet is posting ready status. AppArmor enabled
      reason: KubeletReady
      status: "True"
      type: Ready
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
    images: <deployed images>
    nodeInfo:
      architecture: amd64
      bootID: f1afbea7-14c8-4bd1-ac84-dc62ecbb3d82
      containerRuntimeVersion: containerd://1.7.11-k3s2
      kernelVersion: 6.1.0-20-amd64
      kubeProxyVersion: v1.29.3+k3s1
      kubeletVersion: v1.29.3+k3s1
      machineID: b3d19325a7ee411791c9288a15e00c0c
      operatingSystem: linux
      osImage: Debian GNU/Linux 12 (bookworm)
      systemUUID: 00000000-0000-0000-0000-ac1f6b0065dc
kind: List
metadata:
  resourceVersion: ""
from k3s.
OK, I can replicate this:
- Start k3s with --node-ip=<ipv4>,<ipv6>
- Patch the traefik service to change the external traffic policy: kubectl patch service -n kube-system traefik -p '{"spec": {"externalTrafficPolicy": "Local"}}'
- Note that the servicelb pods start crashing:
brandond@dev01:~$ kubectl get pod -n kube-system -l svccontroller.k3s.cattle.io/svcname=traefik
NAME READY STATUS RESTARTS AGE
svclb-traefik-d855c4d4-b9v52 0/2 CrashLoopBackOff 8 (48s ago) 2m12s
brandond@dev01:~$ kubectl get pod -n kube-system -l svccontroller.k3s.cattle.io/svcname=traefik -o yaml | grep -A3 podIP:
podIP: 10.42.0.9
podIPs:
- ip: 10.42.0.9
qosClass: BestEffort
brandond@dev01:~$ kubectl logs -n kube-system -l svccontroller.k3s.cattle.io/svcname=traefik -c lb-tcp-80
+ cat /proc/sys/net/ipv4/ip_forward
+ '[' 1 '==' 1 ]
+ iptables -t filter -A FORWARD -d 172.17.0.8/32 -p TCP --dport 31503 -j DROP
+ iptables -t nat -I PREROUTING -p TCP --dport 80 -j DNAT --to 172.17.0.8:31503
+ iptables -t nat -I POSTROUTING -d 172.17.0.8/32 -p TCP -j MASQUERADE
+ echo fd7c:53a5:aef5::242:ac11:8
+ grep -Eq :
+ cat /proc/sys/net/ipv6/conf/all/forwarding
+ '[' 0 '==' 1 ]
+ exit 1
The pod and service are both IPv4-only, but the node has an IPv6 address, so the script incorrectly tries to set up IPv6 forwarding.
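To make the failing check concrete, here is a small hypothetical Go model of the logic visible in the trace above (the real check is the shell script itself; requireIPv6Forwarding is an invented name): any destination containing ':' is treated as IPv6, which then demands net.ipv6.conf.all.forwarding to be 1 and exits otherwise:

// model.go: hypothetical Go rendering of the shell check traced above,
// for illustration only.
package main

import (
	"fmt"
	"os"
	"strings"
)

// requireIPv6Forwarding corresponds to the script's
// 'cat /proc/sys/net/ipv6/conf/all/forwarding' followed by "'[' 0 '==' 1 ]".
func requireIPv6Forwarding() error {
	raw, err := os.ReadFile("/proc/sys/net/ipv6/conf/all/forwarding")
	if err != nil {
		return err
	}
	if strings.TrimSpace(string(raw)) != "1" {
		return fmt.Errorf("net.ipv6.conf.all.forwarding is not 1")
	}
	return nil
}

func main() {
	// Destination addresses as seen in the trace: an IPv4 destination plus
	// an IPv6 destination, even though the Service is IPv4-only.
	destIPs := []string{"172.17.0.8", "fd7c:53a5:aef5::242:ac11:8"}
	for _, ip := range destIPs {
		if strings.Contains(ip, ":") { // the 'grep -Eq :' IPv6 test
			if err := requireIPv6Forwarding(); err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1) // the 'exit 1' that produces the CrashLoopBackOff
			}
		}
	}
	fmt.Println("all destinations accepted")
}

Filtering the destination list by the Service's IP families before this check would presumably avoid the exit for IPv4-only services.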
from k3s.
Validated on Version:
$ k3s version v1.30.1+k3s-df5db28a (df5db28a)
Environment Details
Infrastructure: Cloud EC2 instance
Node(s) CPU architecture, OS, and Version: Ubuntu, AMD64
Cluster Configuration: 1 node
Steps to validate the fix
- Start k3s with --node-ip=<ipv4>,<ipv6>
- Patch the traefik service with -p '{"spec": {"externalTrafficPolicy": "Local"}}'
- Validate that the nodes, pods, and serviceLB pods are healthy
Reproduction of the issue:
k3s version v1.30.1+k3s-5cf4d757 (5cf4d757)
kubectl patch service traefik -n kube-system -p '{"spec": {"externalTrafficPolicy": "Local"}}'
service/traefik patched
kubectl get pod -n kube-system -l svccontroller.k3s.cattle.io/svcname=traefik
NAME READY STATUS RESTARTS AGE
svclb-traefik-a436cc37-mwskr 0/2 CrashLoopBackOff 2 (13s ago) 15s
~$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system local-path-provisioner-75bb9ff978-bf7fw 1/1 Running 0 4m59s
kube-system coredns-576bfc4dc7-h8nlj 1/1 Running 0 4m59s
kube-system helm-install-traefik-crd-x84bt 0/1 Completed 0 4m59s
kube-system helm-install-traefik-4bbh4 0/1 Completed 1 4m59s
kube-system metrics-server-557ff575fb-sfwzq 1/1 Running 0 4m59s
kube-system traefik-5fb479b77-8nmm8 1/1 Running 0 4m32s
kube-system svclb-traefik-a436cc37-mwskr 0/2 CrashLoopBackOff 10 (12s ago) 2m56s
Validation Results:
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-576bfc4dc7-pzfbg 1/1 Running 0 35s
kube-system helm-install-traefik-bnpvl 0/1 Completed 1 35s
kube-system helm-install-traefik-crd-z76g6 0/1 Completed 0 35s
kube-system local-path-provisioner-86f46b7bf7-crlk2 1/1 Running 0 35s
kube-system metrics-server-557ff575fb-9b2mn 1/1 Running 0 35s
kube-system svclb-traefik-8ea8106e-9txhr 2/2 Running 0 9s
kube-system traefik-5fb479b77-cwwvk 1/1 Running 0 9s
ubuntu@:~$ kubectl patch service traefik -n kube-system -p '{"spec": {"externalTrafficPolicy": "Local"}}'
service/traefik patched
ubuntu@:~$ kubectl get pod -n kube-system -l svccontroller.k3s.cattle.io/svcname=traefik
NAME READY STATUS RESTARTS AGE
svclb-traefik-8ea8106e-xthwt 2/2 Running 0 11s
ubuntu@:~$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-576bfc4dc7-pzfbg 1/1 Running 0 64s
kube-system helm-install-traefik-bnpvl 0/1 Completed 1 64s
kube-system helm-install-traefik-crd-z76g6 0/1 Completed 0 64s
kube-system local-path-provisioner-86f46b7bf7-crlk2 1/1 Running 0 64s
kube-system metrics-server-557ff575fb-9b2mn 1/1 Running 0 64s
kube-system svclb-traefik-8ea8106e-xthwt 2/2 Running 0 21s
kube-system traefik-5fb479b77-cwwvk 1/1 Running 0 38s
from k3s.