
multus-service (archived)

Current Status of Repository

In the NPWG call on April 18, we decided to archive this repository. But this does NOT mean that Kubernetes services for secondary interfaces are impossible. Here is why we decided to close it.

  • Insufficient use-cases and feedback

Secondary networks are used in various scenarios, such as connecting to isolated (closed) networks or to high-speed networks. Hence multus-service also needs to cover such varied use-cases, and we need to gather use-cases and requirements not only from users but also from vendors. We received insufficient use-cases and feedback in the NPWG meetings, so we could not discuss how users want to use services on secondary networks. Kubernetes service functionality is not a single feature but a composition of various functions (e.g. NodePort, headless services, load balancers and so on), so use-cases and feedback were very important.

  • The Kubernetes community wants to use the Gateway API instead of the endpoint/endpointslice APIs

Tim Hockin's lightning talk at KubeCon NA 2023 mentioned that the Kubernetes Service API could be replaced by the Gateway API. This repository uses the Service API, so based on that presentation it is not aligned with the current direction of the Kubernetes architecture. If this feature needs to be implemented, the Gateway API should be used to align with Kubernetes, and it should be implemented from scratch, not from this repository.

  • Several communities, including the Kubernetes community itself, have started discussing services for secondary networks

Kubernetes sig-network has launched a Multi-network WG that discusses multiple network interfaces in Kubernetes Pods, including Services and other Kubernetes functionality (e.g. NetworkPolicy). I expect that this working group, not this repository, will provide the design and API for Kubernetes services on secondary network interfaces. If you really want this feature, I strongly recommend joining the Multi-network WG and providing your use-case scenarios.

Those are the reasons. Archiving this repository is not the end of development; it just restarts the design discussion, with use-cases, in another working group. Please join the community and discuss your use-cases there if you want this feature.


Description

This repository contains various components which provide a service abstraction, similar to Kubernetes Services, for Multus network attachment definitions.
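
Based on the examples in the issues below, a Multus service is a regular Kubernetes Service that is handed to multus-proxy (instead of kube-proxy) via the service.kubernetes.io/service-proxy-name label, and that names the target network attachment via the k8s.v1.cni.cncf.io/service-network annotation. A minimal sketch (the app label, NAD name and port are illustrative, not taken from this repository):

apiVersion: v1
kind: Service
metadata:
  name: my-multus-service
  labels:
    # hand this Service to multus-proxy instead of the default kube-proxy
    service.kubernetes.io/service-proxy-name: multus-proxy
  annotations:
    # NetworkAttachmentDefinition whose interface the service should target
    k8s.v1.cni.cncf.io/service-network: macvlan1
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80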

Supported Features

Currently these components support the following functionality:

  • Cluster IP
  • Load-balancing Cluster IP among the service Pods with iptables

Other service-related features are to be discussed in the Kubernetes Network Plumbing WG. Some of them might be supported in the future, but there is no guarantee that all Kubernetes Service features will be supported.

Limitation

As noted in the section above, nothing beyond the features listed there is supported yet. We need to discuss how to implement and support additional features. Please keep in mind that these features still need to be discussed, and the conclusion may be NOT to support some of them (from the Multus network design perspective or for other reasons).

Examples (not supported yet):

  • Load balancer
  • Exposing a Multus service outside the cluster
  • kubectl port-forward svc command
  • Headless service (see #22 for the detail)

Requirements

TBD (verified with a kubeadm cluster and CRI-O as the container runtime for now). Note: Docker as the container runtime is not supported, in line with recent Kubernetes releases.

Container Images

Available in Packages. Currently amd64 only.

How to Deploy

A sample deployment is as follows:
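
For example, the issues below apply the manifest published in this repository:

kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-service/main/deploy.yml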

TODO

TBD


Issues

What if the request on 2nd interface detours into 1st interface on the worker nodes?

I have deployed a k8s cluster on bare metal with multus-service + SR-IOV (1 controller node + 2 worker nodes). I have checked that the container has two interfaces (the default CNI network and an additional SR-IOV network) and that multus-service is up and running on each worker node.

When requesting the service via the 2nd interface on the controller, the controller receives the request and detours it onto the 1st interface.

From the controller (2nd interface IP: 10.10.10.123; 1st interface IP: 192.168.1.123):

kubectl --namespace minio-tenant port-forward svc/minio-multus-service-2 9002:9000 --address=10.10.10.123

From a client outside the k8s cluster, a request to 10.10.10.123:9002 detours onto the 1st interface when I check each worker node. What am I missing, given that the request doesn't go through the 2nd interface?

containerd support

Multus service can currently be configured with docker and crio; is it possible to support the containerd runtime?
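
For reference, the deploy.yml snippet quoted in a later issue selects the runtime through the --container-runtime and --container-runtime-endpoint arguments of multus-proxy. A hypothetical containerd configuration would presumably use the CRI mode pointed at the containerd socket; this is an untested sketch, and the socket path is an assumption about the host, not something confirmed by this repository:

args:
- "--host-prefix=/host"
# CRI mode instead of docker, as in the commented defaults of deploy.yml
- "--container-runtime=cri"
# assumed containerd socket path on the host (not verified with multus-proxy)
- "--container-runtime-endpoint=/run/containerd/containerd.sock"
- "--logtostderr"
- "-v=1"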

thanks for Multus-service

I have not tried multus-service yet, but my requirements are as follows.

I have a dedicated server setup with dual NICs (1 Gbit and 10 Gbit).

I want to use the 10 Gbit NIC for all my pods.

Why do I need 10 Gbit?

a) I want Rook-Ceph to have a faster connection
b) I want MariaDB Galera replication to communicate faster

I found that it is not possible to use the 10 Gbit NIC as the primary interface in pods, only as a secondary interface, unless I know a lot about CNI.

Rook-Ceph provides an option to use a Multus network, and it makes use of it, so requirement (a) is solved.

Now I want to solve (b). Unless I find a way to use the secondary network as the primary, this cannot be solved, so I was looking for something that would let me use my secondary NIC for services, and I found multus-service.

Now my requirements are met. Please believe that this will help many users solve complex issues, so please keep contributing to it. If possible, add a donation link so we can show support.

DPDK Support over multus-service

I was trying to add a NAD which makes use of a user-space network (driver: vfio-pci); however, it doesn't seem to work / be supported. Is this expected to be supported in the near future?

IP table rules are not added when creating service using NAD

KubeAdm cluster
Kubernetes v1.23.5
Containerd as the CRI.
Host OS image ubuntu 18.04.4 LTS.

Multus CNI is already installed. I created the Multus service using the instructions indicated in this link.

Pods are in a running state; however, I cannot find the iptables rules after creating the service for the NAD,
so when I curl the IP of the service it returns nothing.

(screenshot)

Logs of the pods:
(screenshot)

No iptables rules are created:
(screenshot)
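
Assuming multus-proxy programs its iptables rules inside each pod's network namespace (which the --pod-iptables option in deploy.yml suggests), a generic way to look for them would be something like the sketch below; the container-ID placeholder and the PID lookup depend on your runtime and are not multus-service commands:

# find the main PID of the target container (containerd/crictl example), then
# list the NAT rules inside that pod's network namespace
PID=$(crictl inspect <container-id> | grep -m1 '"pid"' | grep -o '[0-9]\+')
nsenter --target "$PID" --net iptables -t nat -L -n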

Support for headless service

Hello,
We are currently working on deploying RDMA-based applications in a Kubernetes cluster. RDMA runs over secondary network interfaces, which makes multus-service very useful for us. The problem is that we must use a headless-type service for RDMA communication (because RDMA doesn't work well with NAT). I see that multus-service doesn't yet support headless services. Do you plan to support them? If yes, do you have an estimate of when this will be supported?
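
For context, a headless variant of the Multus service examples elsewhere on this page would simply set clusterIP to None, so that DNS returns the endpoint IPs directly instead of a NAT'd virtual IP. This is only an illustration of what is being requested (names and port are made up); per this issue and #22, it is not supported by multus-service:

apiVersion: v1
kind: Service
metadata:
  name: my-rdma-service                 # illustrative name
  labels:
    service.kubernetes.io/service-proxy-name: multus-proxy
  annotations:
    k8s.v1.cni.cncf.io/service-network: my-rdma-network   # illustrative NAD
spec:
  clusterIP: None                       # headless: no virtual IP, no NAT
  selector:
    app: rdma-app                       # illustrative pod label
  ports:
  - protocol: TCP
    port: 4791                          # illustrative port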

Leverage the kubernetes built-in kube-proxy

Hello,

I am following up on the discussion k8snetworkplumbingwg/multus-cni#466.

If I understand correctly, you intend to develop a tool similar to kube-proxy to put the network rules on the different Pods. I would like to know how you intend to solve the problem of CNI plugins that do not allow the host to communicate with the interface created on the Pod (e.g. MACVLAN).

As for me, I see another method, which consists of implementing only a controller that watches a new type of object (a CR). Depending on the content of the CR, the controller creates a K8s Service without a label selector plus an Endpoints resource. For each CR, the controller keeps watching the Pods that have the labels defined in that CR and updates the Endpoints resource with their additional interfaces. Load balancing and the other features would then be handled by kube-proxy (a rough sketch follows after this issue). The problem with this method is that it doesn't work (I think), for example, with MACVLAN interfaces, where the host can't use the master interface to reach the interface on the Pod.

Best regards,
Abderaouf
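
The selector-less Service plus manually managed endpoints approach described in this issue might look roughly like the following. This is a hypothetical sketch: the names and IPs are made up, and the controller that would populate the Endpoints with the Pods' secondary-interface addresses is not part of multus-service:

# Service with no selector, so Kubernetes does not manage its endpoints
apiVersion: v1
kind: Service
metadata:
  name: secondary-net-svc
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
---
# Endpoints object maintained by the hypothetical controller, filled with the
# Pods' secondary-interface (e.g. net1) addresses; kube-proxy then load-balances
apiVersion: v1
kind: Endpoints
metadata:
  name: secondary-net-svc     # must match the Service name
subsets:
- addresses:
  - ip: 192.168.1.10
  - ip: 192.168.1.11
  ports:
  - port: 8080
    protocol: TCP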

Support for IPv6 and dual-stack

There is a previous question asked about this here.

I am wondering if there has been any update on it. I tried it with an IPv6 single-stack cluster and it didn't work.

multus-service not working - service endpoint always gets the default network IP address

Running this demo https://github.com/redhat-nfvpe/multus-service-demo/blob/main/multus-service-demo1.yaml

Multus-Service:
kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-service/main/deploy.yml

I can see all the pods; nginx and fedora came up with two network interfaces and managed to ping each other.

However, the service does not respond to ping from the fedora pod:
[root@fedora-net1 /]# ping multus-nginx-macvlan
PING multus-nginx-macvlan.default.svc.cluster.local (10.233.5.253) 56(84) bytes of data.

The multus-nginx-macvlan service's endpoints are the eth0 IPs instead of the net1 IPs:

=================================================

root@focal01:~# kubectl describe svc multus-nginx-macvlan
Name: multus-nginx-macvlan
Namespace: default
Labels: service.kubernetes.io/service-proxy-name=multus-proxy
Annotations: k8s.v1.cni.cncf.io/service-network: macvlan1
Selector: app=multus-nginx-macvlan
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.233.5.253
IPs: 10.233.5.253
Port: 80/TCP
TargetPort: 80/TCP
Endpoints: 10.233.65.165:80,10.233.66.235:80 <-this is eth0 IP
Session Affinity: None
Events:

===================================================

Please help.

Headless service is not supported (Original: Seeing it load balance between client interfaces)

I have a setup with two pods, a src and a dst pod. Each pod has two interfaces: the default k8s interface and a secondary interface created using Multus. The dst pod is listening on all interfaces on a specific port, i.e. :8080.

The src pod repeatedly creates a gRPC connection to the dst pod, makes an RPC call, and then closes the connection, at 1-second intervals.

The dst pod uses the gRPC peer package to get the peer IP from which the request comes. What I am seeing is that the peer sometimes reports the default address and sometimes the secondary address, when I would have expected it to report only the secondary address. This leads me to believe that the request is not always proxied via the multus-proxy and sometimes goes through the default proxy.

thoughts?

On a side note, what if the multus-service-controller created a "second" service without a selector and created endpoints that matched that service? This would allow (I think) the default proxies to work. To make this really work, we would either have to have a MultusService type (which isn't great) or use additional annotations to specify the selector.

multus-proxy pod fails with CreateContainerError

I have an existing cluster (1 controller + 2 worker nodes) with the docker runtime + sriov-network-operator + Multus CNI, in order to use an SR-IOV network connection as the 2nd network interface in a container (I haven't attached the 2nd interface yet, though). I have deployed multus-service as a DaemonSet based on deploy.yml:

  1. changed to 'docker' and removed nodeSelector
    spec:
      hostNetwork: true
      # nodeSelector:
        # kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: multus-service
      containers:
      - name: multus-proxy
        # crio support requires multus:latest for now. support 3.3 or later.
        image: ghcr.io/k8snetworkplumbingwg/multus-service:latest
        imagePullPolicy: Always
        command: ["/usr/bin/multus-proxy"]
        args:
        - "--host-prefix=/host"
        # uncomment this if runtime is docker
        - "--container-runtime=docker"
        # - "--container-runtime=cri"
        # change this if runtime is different that crio default
        # - "--container-runtime-endpoint=/run/crio/crio.sock"
        # uncomment this if you want to store iptables rules
        # - "--pod-iptables=/var/lib/multus-proxy/iptables"
       - "--logtostderr"
       - "-v=1"

multus-service-controller is running OK,

# kubectl logs  multus-service-controller-69d49dc856-b4m9s  -n kube-system
I0527 00:55:57.727991       1 server.go:99] Neither kubeconfig file nor master URL was specified. Falling back to in-cluster config.
I0527 00:55:57.730353       1 options.go:88] hostname: multus-service-controller-69d49dc856-b4m9s
I0527 00:55:57.730410       1 leaderelection.go:243] attempting to acquire leader lease kube-system/multus-service-controller...
I0527 00:56:14.386868       1 leaderelection.go:253] successfully acquired lease kube-system/multus-service-controller
I0527 00:56:14.387204       1 endpointslice_controller.go:259] Starting endpoint slice controller
I0527 00:56:14.387234       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
I0527 00:56:14.387253       1 shared_informer.go:247] Caches are synced for endpoint_slice

but multus-proxy seems to have an issue during creation.

# kubectl logs  multus-proxy-ds-amd64-mf4gv -n kube-system
Error from server (BadRequest): container "multus-proxy" in pod "multus-proxy-ds-amd64-mf4gv" is waiting to start: CreateContainerError

I was going to check the logs, so I enabled them, but I don't see any detail. What would be the best way to troubleshoot this, and what could be problematic in my cluster setup?

Thanks,

does it work across different macvlans?

I have two nodes with different NICs, where one node is using macvlan1 and the other is using macvlan2.

Building the svc with the same macvlan works, but if I create a client pod with macvlan1 and another pod/svc with macvlan2, then the client pod can't connect using the cluster IP or svc name (connecting directly to the pod macvlan IPs works, though).

Can anyone tell me whether this is supported yet, or point me to some direction I can look into?
Thanks

Not able to access Multus Service

Hi,
I have the following multus service setup for an iperf UDP server:

======================================================

# network attachment
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: yockgen-network
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "yockgen-network",
    "type": "macvlan",
    "master": "ens8",
    "mode": "bridge",
    "isDefaultGateway": false,
    "forceAddress": false,
    "ipMasq": false,
    "hairpinMode": false,
    "ipam": {
      "type": "whereabouts",
      "range": "10.0.0.225/28"
    }
  }'

# service
kind: Service
apiVersion: v1
metadata:
  name: yockgen-test-multus
  labels:
    service.kubernetes.io/service-proxy-name: multus-proxy
  annotations:
    k8s.v1.cni.cncf.io/service-network: yockgen-network
spec:
  selector:
    app: test01pod
  ports:
  - protocol: UDP
    port: 5001

//endpointslices
root@focal01:~# kubectl get endpointslices.discovery.k8s.io
NAME ADDRESSTYPE PORTS ENDPOINTS AGE
kubernetes IPv4 6443 192.168.222.40 116m
yockgen-test-multus-d5zpw IPv4 5001 10.233.65.5,10.233.66.38,10.233.65.60 12m
yockgen-test-multus-multus-xblzr IPv4 5001 10.0.0.225,10.0.0.226,10.0.0.227 18m

==========================================================

I tried multiple attempts as below:
iperf -c yockgen-test-multus.default.svc.cluster.local -l 1024 -u -t 300s <- NO, not able to connect
iperf -c 10-0-0-227.yockgen-test-multus.default.svc.cluster.local -l 1024 -u -t 300s <-YES, able to connect via added IP address as prefix

Why do I need to add the target IP address as a prefix (10-0-0-227.yockgen-test-multus.default.svc.cluster.local) to make the service work? Why doesn't it work like a regular service, via just the service name or cluster IP?

Please advise whether this is a bug or expected behavior.

Thanks a lot!

Multus proxy keeps crashing

The multus-proxy service keeps restarting every minute, and the logs say:

E0423 20:21:26.983615       1 pod.go:373] failed to get cri client: failed to connect: failed to connect to unix:///host/run/crio/crio.sock, make sure you are running as root and the runtime has been started: context deadline exceeded
F0423 20:21:26.984069       1 main.go:65] cannot create pod change tracker

multus-service-controller restarted

I will take care of this myself, but I'm opening this issue to track it.

I0126 05:53:57.786517       1 server.go:97] Neither kubeconfig file nor master URL was specified. Falling back to in-cluster config.
I0126 05:53:57.971249       1 options.go:88] hostname: multus-service-controller-5685cb6f65-6kfjg
I0126 05:53:57.971428       1 leaderelection.go:243] attempting to acquire leader lease kube-system/multus-service-controller...
I0126 05:54:16.086651       1 leaderelection.go:253] successfully acquired lease kube-system/multus-service-controller
I0126 05:54:16.086914       1 endpointslice_controller.go:259] Starting endpoint slice controller
I0126 05:54:16.086932       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
I0126 05:54:16.086942       1 shared_informer.go:247] Caches are synced for endpoint_slice



E0126 06:19:25.325444       1 leaderelection.go:325] error retrieving resource lock kube-system/multus-service-controller: Get "https://172.30.0.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/multus-service-controller": context deadline exceeded
I0126 06:19:25.325501       1 leaderelection.go:278] failed to renew lease kube-system/multus-service-controller: timed out waiting for the condition
E0126 06:19:25.325560       1 leaderelection.go:301] Failed to release lock: resource name may not be empty
F0126 06:19:25.325573       1 options.go:113] leaderelection lost

Will this work for OVS CNI?

How exactly does this work? For example, isn't it dependent on the secondary CNI and on kube-proxy / the host having reachability to the Multus IP and VLAN?

We use OVS CNI. I'm trying to establish connectivity between host and OVS but not able to.

Also, what's the status of LoadBalancer support? We would like to expose a Multus endpoint/IP with a public IP via MetalLB in BGP mode.
