
danm's Introduction

DANM


Join our community!

Want to hang out with us? Join our Slack at https://danmws.slack.com/!

Consider yourself officially invited by clicking on this link!

Want to get more bang for the buck? Check out DANM Utils too!

DANM Utils is the home of independent Operators built on top of the DANM network management platform, providing value-added services to your cluster! Interested in adding outage resiliency to your IPAM, or universal network policy support? Look no further and hop over to https://github.com/nokia/danm-utils today!


Introduction

DANM is Nokia's solution for bringing TelCo-grade network management into a Kubernetes cluster! DANM has more than 4 years of history inside the company, is currently deployed in production, and is finally available for everyone, here on GitHub.

The name stands for "Damn, Another Network Manager!", because yes, we know: the last thing the K8s world needed is another TelCo company "revolutionizing" networking in Kubernetes. But still we hope that potential users checking out our project will involuntarily proclaim "DANM, that's some good networking stuff!" :)

Please consider for a moment that there is a whole other world out there, with special requirements, and DANM is the result of those needs! We are certainly not saying DANM is THE network solution, but we think it is a damn good one! Want to learn more about this brave new world? Don't hesitate to contact us, we are always quite happy to share the special requirements we need to satisfy each and every day.

In any case, DANM is more than just a plugin, it is an End-To-End solution to a whole problem domain. It is:

  • a CNI plugin capable of provisioning IPVLAN interfaces with advanced features
  • an in-built IPAM module capable of managing multiple, cluster-wide, discontinuous L3 networks, handling up to 8M allocations per network, and providing dynamic, static, or no IP allocation schemes on demand for both IPv4 and IPv6
  • a CNI metaplugin capable of attaching multiple network interfaces to a container, either through its own CNI, or by delegating the job to any of the popular CNI solutions (e.g. SR-IOV, Calico, Flannel) in parallel
  • a Kubernetes controller capable of centrally managing both VxLAN and VLAN interfaces of all Kubernetes hosts
  • another Kubernetes controller extending Kubernetes' Service-based service discovery concept to work over all network interfaces of a Pod
  • a standard Kubernetes Validating and Mutating Webhook responsible for making you adhere to the schemas, and also automating network resource management for tenant users in a production-grade environment
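
For a quick taste, multiple interfaces are requested through a Pod annotation. The snippet below is a minimal sketch based on the annotation format used throughout this document (the network names and the static address are illustrative):

  annotations:
    danm.k8s.io/interfaces: |
      [
        {"network":"flannel", "ip":"dynamic"},
        {"network":"external", "ip":"10.100.20.55/24"}
      ]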

Install an Akraino REC and get DANM for free!

Just kidding, DANM is always free! But if you want to install a production-grade, open-source, Kubernetes-based bare metal CaaS infrastructure equipped with DANM by default, and with a single click of a button no less, just head over to the Linux Foundation Akraino Radio Edge Cloud (REC) wiki for the Akraino REC Architecture and the Akraino REC Installation Guide. Not just for TelCo!

The above functionalities are implemented by the following components:

  • danm is the CNI plugin which can be directly integrated with kubelet. Internally it consists of the CNI metaplugin, the CNI plugin responsible for managing IPVLAN interfaces, and the in-built IPAM plugin. The danm binary is integrated with kubelet like any other CNI plugin.

  • fakeipam is a little program used for natively integrating 3rd party CNI plugins into the DANM ecosystem. It is basically used to echo the result of DANM's in-built IPAM to the CNIs DANM delegates operations to (see the sketch after this list). The fakeipam binary should be placed into kubelet's configured CNI plugin directory, next to danm. Fakeipam is a temporary solution; the long-term aim is to separate DANM's IPAM component into a full-fledged, standalone IPAM solution.

  • netwatcher is a Kubernetes Controller watching the Kubernetes API for changes in the DANM related CRD network management APIs. This component is responsible for validating the semantics of network objects, and also for maintaining VxLAN and VLAN host interfaces of all Kubernetes nodes. Netwatcher binary is deployed in Kubernetes as a DaemonSet, running on all nodes.

  • svcwatcher is another Kubernetes Controller monitoring Pod, Service, Endpoint, and DanmEp API paths. This Controller is responsible for extending Kubernetes native Service Discovery to work even for the non-primary networks of the Pod. Svcwatcher binary is deployed in Kubernetes as a DaemonSet, running only on the Kubernetes master nodes in a clustered setup.

  • webhook is a standard Kubernetes Validating and Mutating Webhook. It has multiple, crucial responsibilities:

  • it validates all DANM-introduced CRD APIs both syntactically and semantically, during both creation and modification

  • it automatically mutates parameters relevant only to the internal implementation of DANM into the API objects

  • it automatically assigns physical network resources to the logical networks of tenant users in a production-grade infrastructure
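
As a hedged illustration of the fakeipam mechanism mentioned above (not DANM's exact output): a CNI IPAM plugin is expected to print a standard CNI result on stdout, roughly of the shape below, and fakeipam simply echoes the address DANM's in-built IPAM has already reserved:

  {
    "cniVersion": "0.3.1",
    "ips": [
      { "version": "4", "address": "10.100.20.55/24" }
    ]
  }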

Our philosophy and motivation behind DANM

It is undeniable that TelCo products, even in containerized form, must own physically separated network interfaces, but we have always felt other projects put too much emphasis on this lone fact, and entirely ignored, or were afraid to tackle, the larger issue with Kubernetes. That is: the capability to provision multiple network interfaces to Pods is a very limited enhancement if the cloud native features of Kubernetes cannot be used with those extra interfaces.

This is the big misconception our solution aims to rectify: we strongly believe that all network interfaces shall be natively supported by K8s, and there is no such thing as a "primary" or "secondary" network interface. Why couldn't NetworkPolicies, Services, LoadBalancers, all of these existing and proven Kubernetes constructs, work with all network interfaces? Why couldn't network administrators freely decide which physical networks are reachable by a Pod? In our opinion the answer is quite simple: because networks are not first-class citizens in Kubernetes.

This is the historical reason why DANM's CRD-based, abstract network management APIs were born, and why the whole ecosystem is built around the concept of promoting networks to first-class Kubernetes API objects.

This approach opens up a plethora of possibilities, even with today's Kubernetes core code!

The following chapters will guide you through the description of these features, and will show you how you can leverage them in your Kubernetes cluster.

Scope of the project

You will see at the end of this README that we really went above and beyond what "networks" are in vanilla Kubernetes.

But the DANM core project never did, and never will, break one core concept: DANM is first and foremost a run-time agnostic, standard CNI system for Kubernetes, 100% adhering to the Kubernetes life-cycle management principles.

It is important to state this, because the features DANM provides open up a couple of very enticing, but also very dangerous avenues:

  • what if we monitored the run-time and provided added high-availability features based on events happening at that level?
  • what if we could change the networks of existing Pods?

We strongly feel that all such scenarios incompatible with the life-cycle of a standard CNI plugin firmly fall outside the responsibility of the core DANM project. That being said, tell us about your Kubernetes-breaking ideas! We are open to accepting such plugins into the wider umbrella of the existing eco-system: outside of the core project, but still loosely linked to the suite as optional, external components. Just because something doesn't fit into core DANM, it does not mean it can't fit into your cloud! Please visit the DANM utils repository for more info.

Deployment

See Deployment Guide.

User guide

See User Guide.

Contributing

Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.

Authors

  • Robert Springer (@rospring) - Initial work (V1 Python), IPAM, Netwatcher, Svcwatcher (Nokia)
  • Levente Kale (@Levovar) - Initial work (V2 Golang), Documentation, Integration, SCM, UTs, Metaplugin, V4 work (Nokia)

Special thanks to the original author who started the whole project in 2015 by putting a proprietary network management plugin between Kubelet and Docker; and also for coining the DANM acronym: Peter Braun (@peter-braun)

License

This project is licensed under the 3-Clause BSD License - see the LICENSE file.


danm's Issues

Failed to list *v1.DanmNet

I have compiled the latest danm plugin from GitHub and installed it on Kubernetes 1.11 using the instructions provided on GitHub.

The netwatcher started, but logs this error:

reflector.go:205] github.com/nokia/danm/pkg/netwatcher/netwatcher.go:22: Failed to list *v1.DanmNet: Get https://10.254.0.1:443/apis/danm.k8s.io/v1/danmnets?limit=500&resourceVersion=0: Gateway Timeout

I see that the CustomResourceDefinition returns danmnets.danm.k8s.io but netwatcher tries to use danm.k8s.io/v1/danmnets. Can you confirm whether the CustomResourceDefinition should be changed?

kubectl get CustomResourceDefinition

NAME                        CREATED AT
brpolices.cbur.bcmt.local   2019-02-12T14:47:04Z
danmeps.danm.k8s.io         2019-02-20T13:44:08Z
danmnets.danm.k8s.io        2019-02-20T13:44:08Z
networks.kubernetes.com     2019-02-12T14:35:27Z

I have created 2 DanmNets in namespace sbc

kubectl get DanmNet -n sbc

NAME                AGE
trusted-sig-net     1h
untrusted-sig-net   2h

kubectl version

Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

cat /etc/os-release

NAME="Red Hat Enterprise Linux Server"
VERSION="7.5 (Maipo)"

uname -a

Linux sbc-loko-483-12-all-in-one-01 4.18.16-1.el7.elrepo.x86_64 #1 SMP Sat Oct 20 12:52:50 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux

Proposal: default network

The use-case I would like to raise here is to have a chance to successfully deploy applications without danm related annotations. This pretty much covers all the common reusable components out there (i.e. community maintained helm charts, kubernetes addons like coredns or dashboard, etc.) and is usually solved with other (simpler) CNI plugins.
If we could enhance danm with some configuration option for "default" network connectivity in certain namespaces, then it could be used for Pods in the same namespace which do not require any specific network access. Default danmnets could have some restrictions as needed.
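
A purely illustrative sketch of what such a default network could look like, reusing the DanmNet schema shown elsewhere in this document (the "default" semantics are the feature being proposed here, not an existing capability):

apiVersion: danm.k8s.io/v1
kind: DanmNet
metadata:
  name: default
  namespace: kube-system
spec:
  NetworkID: default
  NetworkType: flannel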

Interface missing from annotations error

Hi,

When the interface definition is missing from a pod's annotations, I think danm CNI plugin should return with an error indicating such issue.

spec:
  template:
    metadata:
      annotations:
        danm.k8s.io/interfaces: '[{"network": "flannel", "ip": "dynamic"}]'

Currently only the following error written by danmep's deleteDockerIface can be seen in kubelet logs, which is quite misleading.

failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod \"my-pod-bj75q_default\": Unexpected command output ip: can't find device 'eth0'"

Br,
Matyas

Failed to create interface for pod when delegating weave network plugin

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

bug

feature

What happened:
Failed to create interface for pod when delegating weave network plugin. The warning is as below,
Warning FailedCreatePodSandBox 11m kubelet, clive-mdev-control-01 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "05aa3b1f32e941400f97fb4f94e6484c48d9146ff92b4b38133fc01ce7656a7e" network for pod "tiller-deploy-5995dd8dff-h4s2p": NetworkPlugin cni failed to set up pod "tiller-deploy-5995dd8dff-h4s2p_kube-system" network: CNI network could not be set up: CNI operation for network:default failed with:CNI delegation failed due to error:Error delegating ADD to CNI plugin:weave-net because:OS exec call failed:initializing veth: error setting up interface: exec: "iptables": executable file not found in $PATH

What you expected to happen:
Weave delegated by DANM should work properly.

How to reproduce it:
Use weave as the delegated CNI plugin of DANM

Anything else we need to know?:

Environment:

  • danm version
  • Kubernetes version (use kubectl version):
  • danm configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Others:

Static IP allocation_pool range check does not work as expected

Is this a BUG REPORT or FEATURE REQUEST?:
Bug

What happened:
From the following DanmNet:
apiVersion: danm.k8s.io/v1
kind: DanmNet
metadata:
  name: macv
  namespace: default
spec:
  NetworkID: macv
  NetworkType: macvlan
  Options:
    host_device: ens4
    container_prefix: eth0
    rt_tables: 300
    cidr: 10.100.20.0/24
    allocation_pool:
      start: 10.100.20.50
      end: 10.100.20.60
    vxlan: 300

Requested static IP:

danm.k8s.io/interfaces: |
  [
    {"network":"macv", "ip":"10.100.20.62/24"}
  ]

is assigned without issue:
[cloudadmin@controller-1 ~]$ kubectl exec mactest-6d44c66f89-rx8jg ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0@if46: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 7e:30:6b:14:b5:3b brd ff:ff:ff:ff:ff:ff
inet 10.100.20.62/24 scope global eth0
valid_lft forever preferred_lft forever

What you expected to happen:
The static IP should be validated to ensure it is not outside the allocation pool. As far as I can tell, currently it is only validated against the CIDR.

Github integration

You can create issue templates and pull request templates; those help contributors.

Refactor "container_prefix" DanmNet attribute

It should not be mandatory.

Instead, we should revert to the original solution, where an invocation ID parameter was passed to the network setup functions.
If container_prefix is not provided, interfaces are set up as "eth"+INVOCATION_ID.

Calico does not work properly after the invocation changed from DelegateAdd to ExecPlugin

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

bug

feature

What happened:
When using calico as the delegated plugin of DANM, calico created workloadendpoints with empty names, and the pods using the calico network could not reach each other.

What you expected to happen:
Calico delegated by DANM should work properly.

How to reproduce it:
Use the DANM binary compiled from recent code.

Anything else we need to know?:

Environment:

  • danm version
  • Kubernetes version (use kubectl version):
  • danm configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Others:

None type IP allocation fails for allowed delegates

This basically only concerns SR-IOV: it works with IPVLAN, it already wasn't allowed for MACVLAN because it would not work at all, and all the static delegates handle their IPAM on their own; they are not integrated into DANM IPAM.
(Hmm, now that I think about it, they actually could be. How awesome that would be :) Anyway, other thread)

The problem is that when we want no IPs on the interface, we create an "ipam" section with only the "type: fakeipam" attribute set.
However, that prompts the CNI to call the CNI IPAM package, at which point it will expect some IPs to be allocated.
Based on upstream:
https://github.com/intel/sriov-cni/blob/master/cmd/sriov/main.go#L77
the correct behaviour is to forego adding the whole IPAM section
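
In other words (an illustrative sketch of the delegate config fragment, using only the names mentioned in this issue): the generated SR-IOV config currently carries an IPAM stub even for "none" type allocations,

"ipam": { "type": "fakeipam" }

while the correct behaviour per the upstream code is to omit the "ipam" key from the delegate config altogether when no IP is requested.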

Integrate IPAM of all CNI standard delegate to DANM IPAM

Investigation of #107 sparked the idea: why not overwrite the IPAM section of static CNI delegates too with "fakeipam"?
This would basically enable the integration of all CNI standard plugins to the IP management attributes of all DANM network management APIs: a privilege which was exclusively reserved to dynamic delegates until now!

The only caveat is that non-CNI standard plugins, like Flannel, would continue to ignore it, but that's something we need to live with anyway.
The functionality should also be configurable: we should only do an overwrite if CIDR/Net6 parameters are configured for the network, and an IP was requested from DANM.
Otherwise the IPAM section coming from the static file would continue to be used.

DanmNets delegated to other CNI plugins are always named "eth0"

I have 2 DanmNets delegating to other CNI plugins:

apiVersion: danm.k8s.io/v1
kind: DanmNet
metadata:
  name: default
spec:
  NetworkID: default
  NetworkType: bridge
---
apiVersion: danm.k8s.io/v1
kind: DanmNet
metadata:
  name: ipvlan-orig
spec:
  NetworkID: ipvlan-orig
  NetworkType: ipvlan-orig
  container_prefix: ipv

Where "bridge" is the main k8s network that should be "eth0" and the "ipvlan-orig" is a renamed CNI "standard" plugin (since I can't use the built-in with cri-o #24 ).

I start an alpine pod with:

      annotations:
        danm.k8s.io/interfaces: |
          [
            {
              "network":"default",
              "network":"ipvlan-orig"
            }
          ]

and enter the alpine pod with:

kubectl exec -it alpine-deployment-d4c99889-6znws sh

I expect to see the eth0 interface connected to the bridge network and an extra interface named ipv0 (or something similar) connected to the ipvlan, but instead there is only an eth0 interface, and that is the ipvlan network. So it seems all delegated CNI plugins compete for the eth0 name and the latest one wins.
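
Note that the annotation above is a single JSON object with a duplicate "network" key, so most parsers keep only the last value; requesting two attachments would normally be written as two objects, e.g. (a sketch using the network names from this report):

      annotations:
        danm.k8s.io/interfaces: |
          [
            {"network":"default"},
            {"network":"ipvlan-orig"}
          ]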

Danm installer and popularization

Is this a BUG REPORT or FEATURE REQUEST?:
feature

What happened:
Installation is currently a manual procedure, which might include also the compilation of the components.

What you expected to happen:
Since the installation steps do not depend on too many decisions or mandatory admin interactions, it could and should be automated similarly to other CNI plugins.

Draft specs:
Install and start up all danm components with a single (or maybe a few) kubectl apply command. Controller/watcher manifests and RBAC are already available, but the CNI plugin itself has to be distributed also with a DaemonSet + host mounts. I would assume the kubelet is either running natively or, if in a container, then it has the relevant CNI directories mounted from the host.
The installation optionally should cover a default network provisioning for the kube-system namespace in order to allow CoreDNS and other basic addons to work.
The feature depends on #18. The closing task should be to contribute the installation steps to the K8S Kubeadm documentation here along with the integration to KubeSpray.

Network Management API separation umbrella ticket

It just hit me that the "umbrella" kind of issues common in Kubernetes are the perfect way to track the implementation of major features, requiring multiple PRs.
Better late than never (?)

So, what is happening with the recent DANM 4.0 titled tickets, you could ask? Well, we are only completely re-working the network management APIs of DANM (in a 100% backward compatible way, no worries) so the project becomes the perfect fit for production grade, bare metal, multi-tenant data center solutions!
We think the end result will be quite unique in the whole Kubernetes ecosystem - something we always aspire to with each and every feature we do.
Somebody needs to push the boundaries, eh? :)

Background
The problem DANM, and literally (off: yes, I can literally use literally in place of figuratively and still speak perfect English :) ) every network management project has, is finding the perfect balance between roles and responsibilities, and operability.
Namely: who is the right guy, or gal to administer the network management API in a Kubernetes cluster, and/or in a Kubernetes tenant?

Is it the application deployment engineer, constricted to a tenant?
But how would an app developer even know what physical interfaces the data center's machines have, or what VLANs are configured in the switches for flat L3 networks (something we specialize in)? I hope you are not allowing direct SSH access to your host machines!
And we haven't even touched on the names of CNI config files, or the knowledge of which physical interfaces are even allowed to be touched by a tenant, and which are dedicated to some other purpose, e.g. infrastructure, storage, law enforcement etc.

Is it the cluster's network administrator, having complete control over the networks of all tenants?
Sounds like a better fit right? On paper at least.
But don't be surprised to see the resignation letter of your netadmin after she arrives at the office only to find 654 "please give me an internal network for tenant XYZ" requests waiting in her mailbox.

So, what's the solution here? Fortunately the folks at OpenStack already figured out an almost perfect solution: different APIs for different purposes.
But then we might as well make them entirely dynamic, and easy to use, right?

DANM 4.0 APIs
So, going forward we will introduce 3 new CRD-based APIs to DANM, in addition to DanmNet: TenantConfigs, TenantNetworks, and ClusterNetworks.
ClusterNetworks are like DanmNets: you, as the cluster's network administrator can configure any of their attributes - but they are namespaceless, cluster-wide resources.
Want to provide external network connectivity to multiple tenant users, without them having the ability to create one for themselves? This is your API!

TenantNetworks are also like DanmNets in a sense, because they too are namespaced objects. You can create them for your own needs inside your tenant, no need to pester your most probably overwhelmed netadmin.
But, you don't have control over attributes which are related to the physical properties of your logical network, like physical devices, and VNIs.

Wait, but if the TenantNetworks still need to be manually modified by the cluster's netadmins, what did we gain?
Here is where TenantConfigs enter the picture! Cluster netadmins only need to configure the physical resources usable by the TenantNetworks once, also in the Kubernetes API.
DANM will take care of automatically assigning all the physical details for your user's TenantNetworks, while you can enjoy your margaritas with their little umbrellas :)
It doesn't matter if your users want to use dynamic or static backends, SR-IOV or IPVLAN, DANM has got you covered!
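
As a purely hypothetical illustration of the direction described above (the ClusterNetwork kind is announced here, but every spec field below is an assumption borrowed from the DanmNet examples in this document, not the final 4.0 schema):

apiVersion: danm.k8s.io/v1
kind: ClusterNetwork
metadata:
  name: external
spec:
  NetworkID: external
  NetworkType: ipvlan
  Options:
    host_device: ens4
    vlan: 300

Note the missing namespace: ClusterNetworks are namespaceless, cluster-wide resources.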

Implementation
In order to have 100% feature parity with both the "simplified", and the "production grade" network manager APIs, we have a lot to do.
1: we need a Mutating Webhook capable of validating the existing API
#82

2: we need to introduce the new APIs
#89

3: we need to validate the new APIs
#91

4: we need to introduce mutating logic for the TenantNetworks, based on the TenantConfig API
#94

5: we need to adapt netwatcher so it recognizes the new network management APIs too
#97

6: we need to adapt the CNI code so it can work with the new APIs with 100% feature parity
#99

7: we need to adapt svcwatcher component so multi-network Service Discovery works with the new APIs too
#101

How do we clean up the static IP in case of blade H/W failure of a worker node?

Once docker & kubelet are stopped on a host, the Pods which have a static IP (from danm) associated with them do not move until docker and kubelet are running again on the same host on which the Pod was initially brought up.

Is static IP assignment not fully supported? Could there be a separate Kubernetes controller (operator) to clean up the danm endpoints and release the static IPs associated with those endpoints?

Danm kubeconfig RBAC template

Hi,

Can someone provide a sample kubeconfig file for the danm user with the necessary RBAC parameters (this will be referenced in the 00-danm.conf, as stated in the doc)?

This was not clearly stated in the docs (unless I am missing something).
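
A minimal sketch of such a kubeconfig, assuming a ServiceAccount token is used for the danm user (the server address, CA data, and token are placeholders to be filled in from your own cluster):

apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://<API_SERVER>:6443
    certificate-authority-data: <BASE64_CA_CERT>
users:
- name: danm
  user:
    token: <DANM_SERVICE_ACCOUNT_TOKEN>
contexts:
- name: danm
  context:
    cluster: kubernetes
    user: danm
current-context: danm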

example directory missing

A demo about multi-domain service discovery is mentioned in the README, but the example directory does not exist in the current project. Could someone please upload the related files to the project? Thank you.

"IPv6 only" support for external CNIs and fakeipam

Is this a BUG REPORT or FEATURE REQUEST?:
bug

What happened:
If only an IPv6 address is requested for an external CNI managed interface (e.g. sriov & macvlan), the CNI returns with an invalid CIDR address: <nil> error message.

What you expected to happen:
"IPv6 only" request should work.

How to reproduce it:
Only IPv6 address is requested for external CNI managed interface in DANM interface annotation:

  annotations:
    danm.k8s.io/interfaces: |
      [
        {"network":"flannel", "ip":"dynamic"},
        {"network":"sriov", "ip6":"dynamic"},
      ]

Explain the need for an eth0 interface in the docs and more...

If no network attached to a Pod carries an interface named eth0, kubelet keeps killing the Pod with the following log pattern:

Nov 14 21:38:18 master kubelet[1522]: W1114 21:38:18.528487    1522 docker_sandbox.go:372] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "nginx-deployment-79788fb896-d5fxs_example-vnf": Unexpected command output Device "eth0" does not exist.
Nov 14 21:38:18 master kubelet[1522]: I1114 21:38:18.528679    1522 kuberuntime_manager.go:757] checking backoff for container "nginx" in pod "nginx-deployment-79788fb896-d5fxs_example-vnf(775dc9a2-e84a-11e8-bf08-00505603091a)"
Nov 14 21:38:18 master kubelet[1522]: I1114 21:38:18.528745    1522 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=nginx pod=nginx-deployment-79788fb896-d5fxs_example-vnf(775dc9a2-e84a-11e8-bf08-00505603091a)
Nov 14 21:38:18 master kubelet[1522]: E1114 21:38:18.528767    1522 pod_workers.go:186] Error syncing pod 775dc9a2-e84a-11e8-bf08-00505603091a ("nginx-deployment-79788fb896-d5fxs_example-vnf(775dc9a2-e84a-

Otherwise the Pod setup would be fine.

I know that danm is not guilty here. But still, maybe the docs could be enhanced to reveal the trick that one of the danmnets must have "container_prefix: eth0" to work around this issue. If this is told in the video, then sorry 😄 I did not watch it so far.
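
For reference, a minimal sketch of a DanmNet carrying that workaround, following the schema used in other examples in this document (the network name and host device are illustrative):

apiVersion: danm.k8s.io/v1
kind: DanmNet
metadata:
  name: default
spec:
  NetworkID: default
  NetworkType: ipvlan
  Options:
    host_device: ens4
    container_prefix: eth0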

new fresh deployment

feature, support and help

I am facing difficulty deploying the DANM CNI. I tried once and failed, so I re-deployed a new cluster, and I am now at step 2 of the deployment section, specifically the kubeconfig.yaml and RBAC files. Would you please provide some explanation for creating those files?

Also, can I have some example of creating a pod with a cluster-wide ipvlan?

Multi-Host Network Across Worker Nodes

Hello,

I am trying to create a set of network subnets that will be available to all the worker nodes. An example:

net1 with IP 10.10.10.1/24 in POD1 (running in worker 1) should be able to ping net1 with IP 10.10.10.2/24 in POD1 (running in worker 2).

net2 with IP 10.10.20.1/24 in POD1 (running in worker 1) should be able to ping net2 with IP 10.10.20.2/24 in POD1 (running in worker 2).

N.B -- I will not be using SR-IOV.

A sample architecture diagram (from the knitter project) was attached here.
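
One DANM construct relevant to this question (a sketch based on the DanmNet examples elsewhere in this document; names and values are illustrative) is an ipvlan network with a vxlan option, which netwatcher provisions on every node so that Pods on different workers share an L2 segment:

apiVersion: danm.k8s.io/v1
kind: DanmNet
metadata:
  name: net1
spec:
  NetworkID: net1
  NetworkType: ipvlan
  Options:
    host_device: eth0
    vxlan: 600
    cidr: 10.10.10.0/24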

DANM IP is not released from the pool (danmeps) when a pod restarts

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

bug
feature

What happened:
Configure 2 IPs in the allocation pool, with 2 Pods up and running.
When one Pod restarts, the new Pod gets stuck in 'ContainerCreating' state while the old Pod has terminated successfully. But the DANM IP is not released.
What you expected to happen:

How to reproduce it:

Anything else we need to know?:

Environment:

  • danm version 3.2.0
  • Kubernetes version (use kubectl version): v1.13.4
  • danm configuration:
  • OS (e.g. from /etc/os-release): CentOS Linux 7
  • Kernel (e.g. uname -a):5.0.2-1.el7.elrepo.x86_64
  • Others:

Need help updating webhook.yaml

What happened:

Need instructions / pointers on how to fill these entries in integration/manifests/webhook/webhook.yaml: caBundle, certs

k8s is 1.15.2, built using kubeadm

  # Configure your pre-generated certificate matching the details of your environment
  caBundle: <CA_BUNDLE>

spec:
  serviceAccountName: danm-webhook
  containers:
    - name: danm-webhook
      image: 10.10.56.59:5000/my-danm:webhook-4-08052019
      command: [ "/usr/local/bin/webhook", "-tls-cert-bundle=/etc/webhook/certs/danm_webhook.crt", "-tls-private-key-file=/etc/webhook/certs/danm_webhook.key", "bind-port=8443" ]
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: webhook-certs
          mountPath: /etc/webhook/certs
          readOnly: true
 # Configure the directory holding the Webhook's server certificates
  volumes:
    - name: webhook-certs
      hostPath:
        path: /etc/kubernetes/ssl/

$ kubectl apply -f integration/manifests/webhook/
serviceaccount/danm-webhook created
clusterrole.rbac.authorization.k8s.io/caas:danm-webhook created
clusterrolebinding.rbac.authorization.k8s.io/caas:danm-webhook created
service/danm-webhook-svc created
deployment.apps/danm-webhook-deployment created
Error from server (BadRequest): error when creating "integration/manifests/webhook/webhook.yaml": MutatingWebhookConfiguration in version "v1beta1" cannot be handled as a MutatingWebhookConfiguration: v1beta1.MutatingWebhookConfiguration.Webhooks: []v1beta1.MutatingWebhook: v1beta1.MutatingWebhook.ClientConfig: v1beta1.WebhookClientConfig.Service: CABundle: decode base64: illegal base64 data at input byte 0, error found in #10 byte of ...|DLE\u003e","service"|..., bigger context ...|"clientConfig":{"caBundle":"\u003cCA_BUNDLE\u003e","service":{"name":"danm-webhook-svc","namespace":|...
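
The error above shows the <CA_BUNDLE> placeholder was left unreplaced. One common way to obtain the base64-encoded cluster CA for caBundle (a general Kubernetes technique, not DANM-specific documentation; verify it matches the CA that signed your webhook certificate) is:

kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'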

Environment:

  • danm version

master branch as of 08/05/2019

  • Kubernetes version (use kubectl version):

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:23:26Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:15:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

  • danm configuration:

the only change made to integration/manifests/webhook/webhook.yaml is
"network":"default"

  • OS (e.g. from /etc/os-release):

$ cat /etc/os-release
NAME="Red Hat Enterprise Linux Server"
VERSION="7.6 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.6"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.6 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.6:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.6
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.6"

  • Kernel (e.g. uname -a):

$ uname -a
Linux mtx-huawei2-bld08 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 15 17:36:42 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

  • Others:

Thanks. -Jessica

VLAN tagging is not cleared on VFs after Pod termination

Is this a BUG REPORT or FEATURE REQUEST?:
bug

What happened:
vlan, cidr and allocation_pool attributes are set in a NetworkType: sriov DanmNet definition.
If a Pod requests such network(s), the VLAN tagging is set properly on the VFs selected by the SR-IOV Device Plugin. The problem is that the VLAN tagging remains enabled on these VFs after Pod termination.
Subsequent Pods cannot run DPDK applications on these VFs (no runtime error, but packets are not leaving the NIC). We need to remove the VLAN tagging manually.

What you expected to happen:
VLAN tagging is cleared on VFs after Pod termination.

How to reproduce it:

$ ip link show ens1f0
8: ens1f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether ec:0d:9a:6b:a9:14 brd ff:ff:ff:ff:ff:ff
    vf 0 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
    vf 1 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
    vf 2 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
    vf 3 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
    vf 4 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
    vf 5 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
    vf 6 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
    vf 7 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
$ kubectl create -f testpod-sriov-6-double.yaml
pod/testpod-sriov-6-double created
$ ip link show ens1f0
8: ens1f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether ec:0d:9a:6b:a9:14 brd ff:ff:ff:ff:ff:ff
    vf 0 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
    vf 1 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
    vf 2 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
    vf 3 MAC 00:00:00:00:00:00, vlan 100, spoof checking off, link-state auto, trust off, query_rss off
    vf 4 MAC 00:00:00:00:00:00, vlan 100, spoof checking off, link-state auto, trust off, query_rss off
    vf 5 MAC 00:00:00:00:00:00, vlan 100, spoof checking off, link-state auto, trust off, query_rss off
    vf 6 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
    vf 7 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
$ kubectl delete -f testpod-sriov-6-double.yaml
pod "testpod-sriov-6-double" deleted
$ ip link show ens1f0
8: ens1f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether ec:0d:9a:6b:a9:14 brd ff:ff:ff:ff:ff:ff
    vf 0 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
    vf 1 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
    vf 2 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
    vf 3 MAC 00:00:00:00:00:00, vlan 100, spoof checking off, link-state auto, trust off, query_rss off
    vf 4 MAC 00:00:00:00:00:00, vlan 100, spoof checking off, link-state auto, trust off, query_rss off
    vf 5 MAC 00:00:00:00:00:00, vlan 100, spoof checking off, link-state auto, trust off, query_rss off
    vf 6 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
    vf 7 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off

Anything else we need to know?:
I believe that this behavior was not introduced by #50, so it does not seem to be a new bug.
Currently it is not clear whether it is a Danm or a netlink responsibility. What do you think?

Buildah/podman support in danm build procedure

Is this a BUG REPORT or FEATURE REQUEST?:

feature

What happened:
The current build procedure of danm supports only docker.

What you expected to happen:
The purpose of this issue is to add support for buildah/podman as an alternative.

Istio does not work with DANM

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

bug
feature

What happened:
I tested istio in my cluster with DANM as the network plugin, and the istio functions could not work properly.
I gave the pods two interfaces: eth0 from calico (invoked by DANM) to communicate with k8s, and eth1 from an ipvlan backed danmnet, and created the service in the DANM schema (indicating the service works on eth1).
But it seems istio cannot work with this kind of "headless and selectorless" Service; its traffic management functions do not take effect in this scenario. It only works when there is a selector in the spec of the service, but when there is a selector, traffic goes to eth0, not eth1 as I want.
So I want to know: has anybody ever tried istio with DANM? Does there need to be some special configuration? Or are they not compatible at all?

What you expected to happen:

How to reproduce it:

Anything else we need to know?:

Environment:

  • danm version
  • Kubernetes version (use kubectl version):
  • danm configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Others:

Generated REST client code for the DANM CRDs should be stored in the repository

Is this a BUG REPORT or FEATURE REQUEST?:
Bug

What happened:
I tried to build a standalone application that imports the k8s client library generated from github.com/nokia/danm/pkg/crd/apis/danm/v1, that is:

import (
  ...
  danmtypes "github.com/nokia/danm/pkg/crd/apis/danm/v1"
  danmclientset "github.com/nokia/danm/pkg/crd/client/clientset/versioned"
)

golang's dep init / ensure fails with:

init failed: unable to solve the dependency graph: Solving failure: No versions of github.com/nokia/danm met constraints:
	v3.1.0: Could not introduce github.com/nokia/danm@v3.1.0, as its subpackage github.com/nokia/danm/pkg/crd/client/clientset/versioned is missing. (Package is required by (root).)
	v3.0.0: Could not introduce github.com/nokia/danm@v3.0.0, as its subpackage github.com/nokia/danm/pkg/crd/client/clientset/versioned is missing. (Package is required by (root).)
	master: Could not introduce github.com/nokia/danm@master, as its subpackage github.com/nokia/danm/pkg/crd/client/clientset/versioned is missing. (Package is required by (root).)

Unfortunately I found no way to generate the k8s client on my own and then successfully use dep init, even when playing with Gopkg.toml's ignored parameter (first generating dependencies while ignoring the generated code, then removing the ignore for the generated code and updating dependencies).

I believe the generated informers (github.com/nokia/danm/pkg/crd/client/informers/externalversions) and listers (github.com/nokia/danm/pkg/crd/client/listers/danm/v1) are also missing from the repo.

What you expected to happen:
golang dep init / ensure should be able to gather all dependencies. According to this ticket (golang/dep#1077), go dep currently assumes all generated code is committed to the source repository.

How to reproduce it:
Write any code that tries to import github.com/nokia/danm/pkg/crd/client/clientset/versioned then run "dep init" or "dep ensure".

Environment:

  • danm version: 3.0.0
  • Kubernetes version (use kubectl version): 1.12.4
  • danm configuration: not relevant
  • OS (e.g. from /etc/os-release): CentOS Linux 7 (Core)
  • Kernel (e.g. uname -a): Linux antal 3.10.0-862.2.3.el7.x86_64 #1 SMP Wed May 9 18:05:47 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

"build_danm.sh" does not work out of box

Is this a BUG REPORT or FEATURE REQUEST?: BUG REPORT

Uncomment only one, leave it on its own line:

bug

feature

What happened: build_danm.sh reports the error: can't cd to /go/src/github.com/nokia/danm

Executing busybox-1.30.1-r2.trigger
OK: 418 MiB in 34 packages
Removing intermediate container d64597039510
 ---> 55165d816276
Step 7/7 : ENTRYPOINT /build.sh
 ---> Running in ad30f065b0c6
Removing intermediate container ad30f065b0c6
 ---> 7af856ae36ca
Successfully built 7af856ae36ca
Successfully tagged danm_builder:1.0
Running DANM build
+ export 'GOOS=linux'
+ cd /go/src/github.com/nokia/danm
/build.sh: cd: line 3: can't cd to /go/src/github.com/nokia/danm: No such file or directory
build_danm.sh error on line : 12 command was: docker
Terminating with error!

What you expected to happen: 6 danm binaries generated under $GOPATH/bin.

How to reproduce it:

go get -d github.com/nokia/danm
cd $GOPATH/src/github.com/nokia/danm
./build_danm.sh

Anything else we need to know?:

Environment:

  • danm version
    master, v4.0.0
  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:26:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.8", GitCommit:"0c6d31a99f81476dfc9871ba3cf3f597bec29b58", GitTreeState:"clean", BuildDate:"2019-07-08T08:38:54Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
  • danm configuration:
  • OS (e.g. from /etc/os-release):
NAME=Fedora
VERSION="29 (Twenty Nine)"
ID=fedora
VERSION_ID=29
VERSION_CODENAME=""
PLATFORM_ID="platform:f29"
PRETTY_NAME="Fedora 29 (Twenty Nine)"
ANSI_COLOR="0;34"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:29"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f29/system-administrators-guide/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=29
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=29
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
  • Kernel (e.g. uname -a):
Linux cdcinfra1-clouddev.dyn.nesc.nokia.net 4.14.114-1.wf30.x86_64 #1 SMP Mon May 6 15:14:16 EEST 2019 x86_64 x86_64 x86_64 GNU/Linux
  • Others:

IP HA support using keepAlived on DANM

Hi,
I understand that I have a bit of a weird requirement.
Background:
In IMS VoLTE, the GM interface from the UE does not work behind NAT (legacy implementation), so we are using the DANM CNI plugin for direct external connectivity. I use multus to achieve the multiple interfaces together with either calico/flannel etc.

Since we need a highly available IP address for UE connectivity, I use keepAlived as the standard software to achieve this. I achieved it by keeping a keepAlived container inside the pod, which takes care of plumbing the HA IP address inside the pod on the danm network interface (NET_ADMIN capability required for the keepalived container). With the current DANM behavior there are 2 drawbacks:
* 2 extra IP addresses are wasted on the actual interface inside the pod
* the HA IP is not known to DANM's IPAM (bitarray)

Since DANM is going to be the standard solution for some of the VoLTE products, my proposal is to introduce one more entry in the annotation block to convey the HA IP, so that we can address those 2 drawbacks.

Any suggestion would also be helpful to achieve common solution.
Br,
Anand
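
A purely hypothetical sketch of what such an annotation entry might look like (the "ha_ip" key does not exist in DANM; it is invented here solely to illustrate the proposal):

danm.k8s.io/interfaces: |
  [
    {"network":"external", "ip":"dynamic", "ha_ip":"10.0.0.100/24"}
  ]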

go get failing for danm package

go get -v github.com/nokia/danm/pkg/ipam
github.com/nokia/danm (download)

cd /home/anannaya/data/STUDIES/PROGRAMMING/Go_programs/go/src/github.com/nokia/danm; git pull --ff-only

fatal: unable to access 'https://github.com/nokia/danm/': Failed to connect to 10.158.100.6 port 8080: Connection timed out
package github.com/nokia/danm/pkg/crd/client/clientset/versioned: exit status 1
package github.com/nokia/danm/pkg/crd/client/informers/externalversions: cannot find package "github.com/nokia/danm/pkg/crd/client/informers/externalversions" in any of:
/usr/lib/golang/src/github.com/nokia/danm/pkg/crd/client/informers/externalversions (from $GOROOT)
/home/anannaya/data/STUDIES/PROGRAMMING/Go_programs/go/src/github.com/nokia/danm/pkg/crd/client/informers/externalversions (from $GOPATH)

The container build instructions build untagged containers

The instructions say:

docker build integration/docker/netwatcher

But this build sets the repository and tag to <none>. I built both containers and could not see which was which.

I extracted the tag with:

grep image: integration/manifests/netwatcher/netwatcher_ds.yaml

Then built with:

docker build -t library/netwatcher:3.0.0 integration/docker/netwatcher

and pushed to a private registry.

Does danm work with calico?

Hello,

First attempt to try out DANM. Followed the readme to build, started netwatcher. Modified / simplified an example from the project, but the test pod failed to start.

apiVersion: danm.k8s.io/v1
kind: DanmNet
metadata:
  name: cali-mgmt
  namespace: example-sriov
spec:
  NetworkID: calico-mgmt
  NetworkType: calico

$ kubectl get dn -n example-sriov
NAME AGE
cali-mgmt 29m

[root@mtx-bld08 net.d]# cat calico-mgmt.conf
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.0",
  "plugins": [
    {
      "type": "calico",
      "log_level": "info",
      "datastore_type": "kubernetes",
      "nodename": "mtx-huawei2-bld08",
      "mtu": 1440,
      "ipam": {
        "type": "host-local",
        "subnet": "usePodCidr"
      },
      "policy": {
        "type": "k8s"
      },
      "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
      }
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": {"portMappings": true}
    }
  ]
}

apiVersion: v1
kind: Pod
metadata:
  name: sriov-pod
  namespace: example-sriov
  labels:
    env: test
  annotations:
    danm.k8s.io/interfaces: |
      [
        {"network":"calico-mgmt", "ip":"dynamic"}
      ]
spec:
  containers:
  - name: sriov-pod
    image: busybox:latest
    args:
    - sleep
    - "1000"

Events:
Type     Reason                  Age  From                        Message
----     ------                  ---  ----                        -------
Normal   Scheduled               3s   default-scheduler           Successfully assigned example-sriov/sriov-pod to mtx-huawei2-bld04
Warning  FailedCreatePodSandBox  2s   kubelet, mtx-huawei2-bld04  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "6831107ccc88762a8d717bbcaf0ee5cd62c83576076adcfd62cb156d4ac31732" network for pod "sriov-pod": NetworkPlugin cni failed to set up pod "sriov-pod_example-sriov" network: CNI network could not be set up: CNI operation for network: failed with:failed to get network object for Pod:sriov-pod's connection no.:0 due to:requested network:calico-mgmt of type:DanmNet in namespace:example-sriov does not exist

Environment:

  • danm version

Where to find this?

  • Kubernetes version (use kubectl version):

[mtx@mtx-bld08 danm]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}

  • danm configuration:

[root@mtx-bld08 net.d]# cat 00-danm.conf
{
  "name": "meta_cni",
  "name_comment": "Mandatory parameter, but can be anything",
  "type": "danm",
  "type_comment": "Mandatory parameter according to CNI spec, MUST be set to danm",
  "kubeconfig": "/etc/cni/net.d/danm-kubeconfig",
  "kubeconfig_comment": "Mandatory parameter, must point to a valid kubeconfig file containing the necessary RBAC setting for DANM's user",
  "cniDir": "/etc/cni/net.d",
  "cniDir_comment": "Optional parameter, if defined CNI config files for static delegates are searched here. Default value is /etc/cni/net.d",
  "namingScheme": "awesome",
  "namingScheme_comment": "Optional parameter, if it is set to legacy container network interface names are set exactly to DanmNet.Spec.Options.container_prefix, otherwise prefix simply behaves as a prefix and is suffixed with a sequence ID. Default value is empty (e.g. not legacy)"
}


kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: caas:danm
rules:
- apiGroups:
  - danm.k8s.io
  resources:
  - danmnets
  - danmeps
  - tenantnetworks
  - clusternetworks
  verbs: [ "*" ]
- apiGroups: [ "" ]
  resources: [ "pods" ]
  verbs: [ "get","watch","list" ]

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: caas:danm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: caas:danm
subjects:
- kind: ServiceAccount
  namespace: kube-system
  name: danm

  • OS (e.g. from /etc/os-release):

NAME="Red Hat Enterprise Linux Server"
VERSION="7.6 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.6"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.6 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.6:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.6
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.6"

  • Kernel (e.g. uname -a):

Linux mtx-huawei2-bld08 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 15 17:36:42 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Thanks. -Jessica

Unable to use DANM with calico

Is this a BUG REPORT or FEATURE REQUEST?:

bug

What happened:
Created a DanmNet object for calico

Name:         calico-danm
Namespace:    default
Labels:
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"danm.k8s.io/v1","kind":"DanmNet","metadata":{"annotations":{},"name":"calico-danm","namespace":"default"},"spec":{"NetworkID":"calico-da...
API Version:  danm.k8s.io/v1
Kind:         DanmNet
Metadata:
  Creation Timestamp:  2018-12-06T12:15:48Z
  Generation:          1
  Resource Version:    1656462
  Self Link:           /apis/danm.k8s.io/v1/namespaces/default/danmnets/calico-danm
  UID:                 aa30e343-f950-11e8-9437-fa163ebccd59
Spec:
  Network ID:    calico-danm
  Network Type:  calico
  Options:
    Allocation _ Pool:
      End:    192.168.1.255
      Start:  192.168.1.2
    Cidr:                192.168.1.0/24
    Container _ Prefix:  eth0
    Host _ Device:       eth0
    Rt _ Tables:         202
  Validation:  True
Events:

Also, I created the following calico.conf file on all the nodes in the /etc/cni/net.d path

{
  "name": "calico-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "mtu": 1500,
      "policy": {
        "type": "k8s"
      },
      "ipam": {
        "type": "calico-ipam",
        "assign_ipv6": "true",
        "assign_ipv4": "true"
      },
      "etcd_endpoints": "https://172.16.1.3:2379",
      "etcd_key_file": "/etc/etcd/ssl/etcd-client-key.pem",
      "etcd_cert_file": "/etc/etcd/ssl/etcd-client.pem",
      "etcd_ca_cert_file": "/etc/etcd/ssl/ca.pem",
      "kubernetes": {
        "kubeconfig": "/etc/kubernetes/cluster-admin.kubeconfig"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true},
      "snat": true
    }
  ]
}

Now, when I use the calico-danm danmnet object in my deployment.yaml as

annotations:
  danm.k8s.io/interfaces: |
    [
      {
        "network": "calico-danm",
        "ip": "192.168.1.3/24"
      }
    ]

I am getting the error that
Error delegating ADD to CNI plugin:calico because:no etcd endpoints specified

But as you can see in the calico.conf the etcd endpoints are mentioned. Not sure what I am missing.

Environment:

  • danm version
  • Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
  • danm configuration:
  • OS (e.g. from /etc/os-release): RHEL 7.5
  • Kernel (e.g. uname -a): 4.18
  • Others:

IPv6 does not work because IPv6 is disabled in Pod's network namespace

Is this a BUG REPORT or FEATURE REQUEST?:

feature request in the sense that IPv6 sysctl support is missing,
but it can also be considered a bug, because IPv6 support is broken

What happened:

CNIs fail to set IPv6 address(es) on interfaces, because IPv6 is disabled (sysctl: disable_ipv6=1) in the Pod's network namespace.

What you expected to happen:

IPv6 should work as desired.

How to reproduce it:

Simple DanmNet+Pod definition which requests IPv6 on at least one interface.

Anything else we need to know?:

DANM should take care of IPv6 related sysctls before CNIs are invoked.
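
For illustration, the sysctl in question can be cleared manually inside a Pod's network namespace roughly like this (a sketch; the namespace name is an assumption for demonstration):

# assuming <netns> is the Pod's network namespace under /var/run/netns
ip netns exec <netns> sysctl -w net.ipv6.conf.all.disable_ipv6=0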

"kubectl get all" should list danmnet resources

Is this a BUG REPORT or FEATURE REQUEST?:
FEATURE REQUEST

Uncomment only one, leave it on its own line:

feature

What happened:
The "kubectl get all" command cannot list danmnet resources.

What you expected to happen:
"kubectl get all" should list danmnet resources.

How to reproduce it:
Always reproducible.

Anything else we need to know?:
Adding "categories: - all" to the danmnet CRD could fix this issue.
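
A sketch of that fix on the CRD manifest (the categories field follows the standard Kubernetes CRD schema; the surrounding values are illustrative):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: danmnets.danm.k8s.io
spec:
  group: danm.k8s.io
  version: v1
  scope: Namespaced
  names:
    kind: DanmNet
    plural: danmnets
    categories:
    - all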

Environment: kubernetes

  • danm version: v3.3.0
  • Kubernetes version (use kubectl version): v1.15.0
  • danm configuration: not needed
  • OS (e.g. from /etc/os-release): CentOS Linux 7
  • Kernel (e.g. uname -a):
  • Others:

Factory docker images?

Maybe it would be worth pushing the released netwatcher and svcwatcher docker images to docker hub, so that the sample manifests could work ootb for the lucky people who have internet access in their environment 😄.

Pods get IP from flannel instead of Danm

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

bug

feature

What happened:
I tried to set up the svcwatcher demo, with small modifications to the danmnet yaml files to fit my environment. The danmnets succeeded in setting up the interfaces as shown in the demo video:

67: external.300@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 08:00:27:c5:bc:64 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a00:27ff:fec5:bc64/64 scope link 
       valid_lft forever preferred_lft forever
68: vx_internal: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether 06:33:f3:e7:b3:92 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::433:f3ff:fee7:b392/64 scope link 
       valid_lft forever preferred_lft forever

However, when creating the deployments, the pods got their IP addresses from my container networking provider (which is flannel at the moment, but I've tried calico and weavenet as well and ran into the same issue), instead of from the cidr specified in the danmnets' yaml files:

NAMESPACE         NAME                                  READY   STATUS    RESTARTS   AGE    IP              NODE         NOMINATED NODE   READINESS GATES
example-vnf       internal-processor-5b7854d89f-958sg   1/1     Running   7          124m   10.244.1.40     node-1       <none>           <none>
example-vnf       internal-processor-5b7854d89f-fh5dn   1/1     Running   7          124m   10.244.1.43     node-1       <none>           <none>
example-vnf       internal-processor-5b7854d89f-n4zbk   1/1     Running   7          124m   10.244.2.42     node-2       <none>           <none>
example-vnf       internal-processor-5b7854d89f-nqd5j   1/1     Running   7          124m   10.244.1.42     node-1       <none>           <none>
example-vnf       internal-processor-5b7854d89f-w5n5f   1/1     Running   7          124m   10.244.2.40     node-2       <none>           <none>
example-vnf       internal-processor-5b7854d89f-wm4ps   1/1     Running   7          124m   10.244.2.41     node-2       <none>           <none>
example-vnf       loadbalancer-5c4fcf5cd8-d8v2l         1/1     Running   7          124m   10.244.1.41     node-1       <none>           <none>
example-vnf       loadbalancer-5c4fcf5cd8-rbbnw         1/1     Running   7          124m   10.244.2.39     node-2       <none>           <none>
external-client   external-client-db5c8847f-gm4fl       1/1     Running   7          124m   10.244.2.38     node-2       <none>           <none>

I assume this also causes the following problem: when I step into one of the load-balancer Pods, for example, the interfaces are not connected correctly, as opposed to how they are shown in the example video:

/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if135: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue 
    link/ether fa:91:12:98:e7:7d brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.41/24 scope global eth0
       valid_lft forever preferred_lft forever

Nor do the services have their correct endpoints:

Name:              vnf-internal-processor
Namespace:         example-vnf
Labels:            <none>
Annotations:       danm.k8s.io/network: internal
                   danm.k8s.io/selector: {"app":"internal-processor"}
Selector:          <none>
Type:              ClusterIP
IP:                None
Port:              zeromq  5555/TCP
TargetPort:        5555/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>

What you expected to happen:
I expected the same outcome as shown in the example video, because I believe I followed all steps correctly, but it seems I probably did not.

How to reproduce it:
I have set up a Kubernetes system in Vagrant with three nodes: one is the master and the other two are workers:

NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k8s-master   Ready    master   22h   v1.15.1   192.168.50.10   <none>        Ubuntu 18.04.2 LTS   4.15.0-51-generic   docker://19.3.0
node-1       Ready    <none>   22h   v1.15.1   192.168.50.11   <none>        Ubuntu 18.04.2 LTS   4.15.0-51-generic   docker://19.3.0
node-2       Ready    <none>   22h   v1.15.1   192.168.50.12   <none>        Ubuntu 18.04.2 LTS   4.15.0-51-generic   docker://19.3.0

I also made the following modifications to the danmnets' yaml files found in the demo:
external_net.yaml

apiVersion: danm.k8s.io/v1
kind: DanmNet
metadata:
  name: external
  namespace: external-client
spec:
  NetworkID: external
  NetworkType: ipvlan
  Options:
    host_device: eth0
    container_prefix: eth0
    rt_tables: 150
    vlan: 300
    cidr: 10.100.20.0/24
    allocation_pool:
      start: 10.100.20.50
      end: 10.100.20.60

vnf_external_net.yaml:

apiVersion: danm.k8s.io/v1
kind: DanmNet
metadata:
  name: external
  namespace: example-vnf
spec:
  NetworkID: external
  NetworkType: ipvlan
  Options:
    host_device: eth0
    container_prefix: ext
    rt_tables: 250
    vlan: 300
    cidr: 10.100.20.0/24
    allocation_pool:
      start: 10.100.20.10
      end: 10.100.20.30

vnf_internal_net.yaml:

apiVersion: danm.k8s.io/v1
kind: DanmNet
metadata:
  name: internal
  namespace: example-vnf
spec:
  NetworkID: internal
  NetworkType: ipvlan
  Options:
    host_device: eth1
    container_prefix: int
    rt_tables: 200
    vxlan: 600
    cidr: 10.100.1.0/24
    allocation_pool:
      start: 10.100.1.100
      end: 10.100.1.200

Environment:

  • danm version:
    v3.3.0
I use this version because in v4.0.0 I ran into errors with the webhook as well, but since for now I don't need the extra features offered by v4.0.0, I decided to avoid that problem altogether by using a previous version, without the webhook.
  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:18:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
  • danm configuration:
    00-danm.conf:
{
  "name": "meta_cni",
  "name_comment": "Mandatory parameter, but can be anything",
  "type": "danm",
  "type_comment": "danm",
  "kubeconfig": "/etc/cni/net.d/danmc.yml",
  "kubeconfig_comment": "/etc/cni/net.d/danmc.yml",
  "cniDir": "/etc/cni/net.d",
  "cniDir_comment": "/etc/cni/net.d",
  "namingScheme": "awesome",
  "namingScheme_comment": "\_/ <-- cry into this"
}

danmc.yml:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <Copied from /etc/kubernetes/kubelet.conf>
    server: https://10.96.0.1:443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: danm
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: danm
  user:
    client-certificate-data: <Copied from /etc/kubernetes/kubelet.conf>
    client-key-data: <Copied from /etc/kubernetes/kubelet.conf>
  • OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
  • Kernel (e.g. uname -a):
Linux k8s-master 4.15.0-51-generic #55-Ubuntu SMP Wed May 15 14:27:21 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Fail to build webhook image in v4.0.0 release

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

bug

What happened: Failed to generate the webhook Docker image.

Step 9/10 : RUN apk add --no-cache --virtual .tools ca-certificates gcc musl-dev go glide git tar curl && mkdir -p $GOPATH/src/github.com/nokia/danm && git clone -b 'webhook' --depth 1 https://github.com/nokia/danm.git $GOPATH/src/github.com/nokia/danm && cd $GOPATH/src/github.com/nokia/danm && glide install --strip-vendor && go get -d github.com/vishvananda/netlink && go get github.com/containernetworking/plugins/pkg/ns && go get github.com/golang/groupcache/lru && rm -rf $GOPATH/src/k8s.io/code-generator && git clone -b 'kubernetes-1.13.4' --depth 1 https://github.com/kubernetes/code-generator.git $GOPATH/src/k8s.io/code-generator && go install k8s.io/code-generator/cmd/deepcopy-gen && go install k8s.io/code-generator/cmd/client-gen && go install k8s.io/code-generator/cmd/lister-gen && go install k8s.io/code-generator/cmd/informer-gen && deepcopy-gen --alsologtostderr --input-dirs github.com/nokia/danm/crd/apis/danm/v1 -O zz_generated.deepcopy --bounding-dirs github.com/nokia/danm/crd/apis && client-gen --alsologtostderr --clientset-name versioned --input-base "" --input github.com/nokia/danm/crd/apis/danm/v1 --clientset-path github.com/nokia/danm/crd/client/clientset && lister-gen --alsologtostderr --input-dirs github.com/nokia/danm/crd/apis/danm/v1 --output-package github.com/nokia/danm/crd/client/listers && informer-gen --alsologtostderr --input-dirs github.com/nokia/danm/crd/apis/danm/v1 --versioned-clientset-package github.com/nokia/danm/crd/client/clientset/versioned --listers-package github.com/nokia/danm/crd/client/listers --output-package github.com/nokia/danm/crd/client/informers && go install -a -ldflags '-extldflags "-static"' github.com/nokia/danm/cmd/webhook && cp $GOPATH/bin/webhook /usr/local/bin/webhook && rm -rf $GOPATH/src && rm -rf $GOPATH/bin && apk del .tools && rm -rf /var/cache/apk/* && rm -rf /var/lib/apt/lists/* && rm -rf /tmp/* && rm -rf ~/.glide
 ---> Running in 8be13f8d2b8b
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
(1/25) Upgrading musl (1.1.20-r4 -> 1.1.20-r5)
(2/25) Installing ca-certificates (20190108-r0)
(3/25) Installing binutils (2.31.1-r2)
(4/25) Installing gmp (6.1.2-r1)
(5/25) Installing isl (0.18-r0)
(6/25) Installing libgomp (8.3.0-r0)
(7/25) Installing libatomic (8.3.0-r0)
(8/25) Installing libgcc (8.3.0-r0)
(9/25) Installing mpfr3 (3.1.5-r1)
(10/25) Installing mpc1 (1.0.3-r1)
(11/25) Installing libstdc++ (8.3.0-r0)
(12/25) Installing gcc (8.3.0-r0)
(13/25) Installing musl-dev (1.1.20-r5)
(14/25) Installing go (1.11.5-r0)
(15/25) Installing glide (0.13.2-r0)
(16/25) Installing nghttp2-libs (1.35.1-r0)
(17/25) Installing libssh2 (1.8.2-r0)
(18/25) Installing libcurl (7.64.0-r2)
(19/25) Installing expat (2.2.7-r0)
(20/25) Installing pcre2 (10.32-r1)
(21/25) Installing git (2.20.1-r0)
(22/25) Installing tar (1.32-r0)
(23/25) Installing curl (7.64.0-r2)
(24/25) Installing .tools (0)
(25/25) Upgrading musl-utils (1.1.20-r4 -> 1.1.20-r5)
Executing busybox-1.29.3-r10.trigger
Executing ca-certificates-20190108-r0.trigger
OK: 383 MiB in 39 packages
Cloning into '/go/src/github.com/nokia/danm'...
warning: Could not find remote branch webhook to clone.
fatal: Remote branch webhook not found in upstream origin

What you expected to happen: the webhook Docker image is generated successfully using the webhook Dockerfile.

How to reproduce it:
docker build -t webhook:latest integration/docker/webhook

Anything else we need to know?:

The error happens in the command git clone -b 'webhook' --depth 1 https://github.com/nokia/danm.git $GOPATH/src/github.com/nokia/danm: it tries to clone the webhook branch, but that branch does not exist.
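
A possible workaround, assuming the v4.0.0 release is tagged as v4.0.0 in the repository (any branch or tag that actually exists would do), is to point the Dockerfile's clone at that ref instead:

# assumption: a v4.0.0 tag exists; substitute any existing branch or tag
git clone -b 'v4.0.0' --depth 1 https://github.com/nokia/danm.git $GOPATH/src/github.com/nokia/danm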

Environment:

  • danm version
  • Kubernetes version (use kubectl version):
  • danm configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Others:

Document the current status of IPv6 support

Is this a BUG REPORT or FEATURE REQUEST?:

bug

What happened:
Unable to configure a danmnet object with an IPv6 CIDR. If I try with the below yaml

apiVersion: danm.k8s.io/v1
kind: DanmNet
metadata:
  name: ipvlan-danm6
  namespace: default
spec:
  NetworkID: ipvlan-danm6
  NetworkType: ipvlan
  Options:
    host_device: eth0
    cidr: 2a00:8a00:a000:1193:2::e80/122
    allocation_pool:
      start: 2a00:8a00:a000:1193:2::e84
      end: 2a00:8a00:a000:1193:2::e87
    container_prefix: eth0
    rt_tables: 201
    routes:
      ::0/0: 2a00:8a00:a000:1193:2::e81

Then I get the following error on parsing

The DanmNet "ipvlan-danm6" is invalid: []: Invalid value: map[string]interface {}{"apiVersion":"danm.k8s.io/v1", "kind":"DanmNet", "metadata":map[string]interface {}{"annotations":map[string]interface {}{"kubectl.kubernetes.io/last-applied-configuration":"{"apiVersion":"danm.k8s.io/v1","kind":"DanmNet","metadata":{"annotations":{},"name":"ipvlan-danm6","namespace":"default"},"spec":{"NetworkID":"ipvlan-danm6","NetworkType":"ipvlan","Options":{"allocation_pool":{"end":"2a00:8a00:a000:1193:2::e87","start":"2a00:8a00:a000:1193:2::e84"},"cidr":"2a00:8a00:a000:1193:2::e80/122","container_prefix":"eth0","host_device":"eth0","routes":{"::0/0":"2a00:8a00:a000:1193:2::e81"},"rt_tables":201}}}\n"}, "generation":1, "uid":"783b5c37-f90d-11e8-9437-fa163ebccd59", "selfLink":"", "clusterName":"", "name":"ipvlan-danm6", "namespace":"default", "creationTimestamp":"2018-12-06T04:14:48Z"}, "spec":map[string]interface {}{"NetworkID":"ipvlan-danm6", "NetworkType":"ipvlan", "Options":map[string]interface {}{"allocation_pool":map[string]interface {}{"end":"2a00:8a00:a000:1193:2::e87", "start":"2a00:8a00:a000:1193:2::e84"}, "cidr":"2a00:8a00:a000:1193:2::e80/122", "container_prefix":"eth0", "host_device":"eth0", "routes":map[string]interface {}{"::0/0":"2a00:8a00:a000:1193:2::e81"}, "rt_tables":201}}}: validation failure list:
spec.Options.cidr in body should match '^([0-9]{1,3}.){3}[0-9]{1,3}(/([0-9]|[1-2][0-9]|3[0-2]))$'

The last line of the error shows that the cidr validation pattern supports IPv4 only. This should be corrected.

Actually, I want to use DANM with IPv6, so I'm not sure whether this is the only error; we may need fixes in more places to support IPv6.
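
As a hedged sketch (not DANM's actual code), CIDR validation could be done programmatically instead of with the IPv4-only regex, since Go's net.ParseCIDR accepts both address families:

package main

import (
	"fmt"
	"net"
)

// validateCidr rejects malformed CIDRs and reports the IP family of valid ones.
func validateCidr(cidr string) error {
	ip, _, err := net.ParseCIDR(cidr)
	if err != nil {
		return fmt.Errorf("invalid cidr %q: %v", cidr, err)
	}
	if ip.To4() != nil {
		fmt.Println(cidr, "is a valid IPv4 CIDR")
	} else {
		fmt.Println(cidr, "is a valid IPv6 CIDR")
	}
	return nil
}

func main() {
	_ = validateCidr("10.100.1.0/24")                  // IPv4
	_ = validateCidr("2a00:8a00:a000:1193:2::e80/122") // IPv6
}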

Environment:

  • danm version 0.3
  • Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
  • danm configuration:
  • OS (e.g. from /etc/os-release): RHEL 7.5 (Maipo)
  • Kernel (e.g. uname -a): 4.18.16-1.el7.elrepo.x86
  • Others:

Support bonding on the DANM API level

Why?
Link redundancy, in a limited format.
LAGs cannot really be used in a containerized, non-overlay environment considering multiple Pods would use the same physical function.
Active-standby bonds can be used over a PF, but not all kernel networking features involve connecting a child interface to a master interface in the hostns.

How?
Plain and simple active-standby bonds can be provisioned into the Pod's netns over two network interfaces. We won't validate whether the container interfaces really come from / are connected to different physical interfaces, or whether the different physical interfaces are really connected to different switches.
That's up to you :)

Through which API?
Pod-level, not DanmNet.
The existing "danm.k8s.io/interfaces" records will be expanded with one more parameter, called "id". This uniquely identifies an interface ("network" is not a unique key, because you can ask for multiple network interfaces to be provisioned based on the same DanmNet).
Bond connections will need to be defined as a separate interface, explicitly identified by an additional, optional parameter called "slaves". In the slaves section, two existing connection IDs shall be defined, as illustrated below.
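
A hedged illustration of what the expanded annotation could look like (field names follow the proposal above; none of this is implemented yet):

danm.k8s.io/interfaces: |
  [
    {"id": "left",  "network": "internal"},
    {"id": "right", "network": "internal"},
    {"id": "bond",  "network": "internal", "slaves": ["left", "right"]}
  ]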

Restrictions?
Bonding will only be supported for dynamic backends, i.e. IPVLAN, MACVLAN, and SR-IOV.
We need to figure out inter-working with VLANs; we would probably support bonding only on untagged interfaces at the beginning.

Comments are welcome!

Add ValidationAdmissionHooks to the project

Using ValidationAdmissionHooks would enable us to treat DANM-related API objects as "real", API-server-managed core objects all over the project from the user's perspective.
This would be very much in line with what we are trying to achieve, and would be very beneficial for users :)

Hooks could be injected to three places:
1: DanmNet: all DanmNet validation rules could be extracted from netwatcher and put into a validation webhook. This would fail DanmNet creation at creation time, rather than at run-time
2: Pod: Pod admission could be rejected if the network connection annotation field is not proper (badly formatted JSON, non-existing networks).
3: Service: DANM related annotations could be validated here too, and Service creation rejected if the referenced network does not even exist in the user's namespace
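
For illustration, a minimal sketch of what the DanmNet hook registration could look like (the service name, namespace, and path are assumptions, not DANM's actual manifest):

apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: danm-webhook
webhooks:
  - name: danmnet.danm.k8s.io
    rules:
      - apiGroups: ["danm.k8s.io"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["danmnets"]
    clientConfig:
      service:
        name: danm-webhook
        namespace: kube-system
        path: "/validate"
      caBundle: <CA bundle>
    failurePolicy: Fail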

gARP is not needed for IPv6

Is this a BUG REPORT or FEATURE REQUEST?:
bug

What happened:
Got warning message in plugin.log:

2019/04/15 15:45:43 WARNING: sending gARP Reply failed with error:gARP update for IP address: fc00:caa5:1:a:f816:3eff:fe63:a6c0 was unsuccessful:exit status 2  , but we will ignore that for now!

What you expected to happen:
Avoid unnecessary warning messages.

How to reproduce it:
Request IPv6 address on ipvlan interface.
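
A minimal sketch of the suggested behaviour (not DANM's actual code): only attempt a gratuitous ARP for IPv4 addresses; for IPv6, the analogous mechanism would be an unsolicited Neighbor Advertisement instead.

package main

import (
	"fmt"
	"net"
)

// needsGarp reports whether a gratuitous ARP update makes sense for ip:
// only for IPv4 addresses.
func needsGarp(ip net.IP) bool {
	return ip.To4() != nil
}

func main() {
	fmt.Println(needsGarp(net.ParseIP("10.100.1.100")))                      // true
	fmt.Println(needsGarp(net.ParseIP("fc00:caa5:1:a:f816:3eff:fe63:a6c0"))) // false
}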

Why should the danm CNI k8s namespace and the Pod's namespace be the same?

Hi,
I have 2 questions here:

  1. How is IPAM management done if we create the same network (subnet) in 2 different namespaces? Can this be handled using an admission controller?

  2. Since CNI (DanmNet) creation is the administrator's responsibility, the networks may not have been deployed in a different namespace. Can we provide an option in the annotation block to specify a k8s namespace as well, along with the other details?

Please share your opinion on these 2 questions/issues.
Br,
Anand

svcwatcher is not working correctly

BUG REPORT:

What happened:
svcwatcher is stuck in "ContainerCreating" status.
k describe pods svcwatcher-fczh9

(combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e1d7c633b8b203b3264d839a608a9d028668331621c072a2e7f603e34693f761" network for pod "svcwatcher-fczh9": NetworkPlugin cni failed to set up pod "svcwatcher-fczh9_kube-system" network: CNI network could not be set up: failed to get DanmNet due to:NID:flannel in namespace:kube-system cannot be GET from K8s API server, because of error:danmnets.danm.k8s.io "flannel" not found

I'm using flannel and my pods are ready:

This is my svcwatcher_ds.yml file :

apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  name: svcwatcher
  namespace: kube-system
spec:
  selector:
    matchLabels:
      danm.k8s.io: svcwatcher
  template:
    metadata:
      annotations:
        danm.k8s.io/interfaces: |
          [
            {
              "network":"flannel"
            }
          ]
      labels:
        danm.k8s.io: svcwatcher
    spec:
      serviceAccount: svcwatcher
      dnsPolicy: ClusterFirst
      nodeSelector:
        "node-role.kubernetes.io/master": ""
      containers:
        - name: svcwatcher
          image: svcwatcher:latest
          args:
            - "--logtostderr"
      tolerations:
       - effect: NoSchedule
         operator: Exists
       - effect: NoExecute
         operator: Exists
      terminationGracePeriodSeconds: 0

What you expected to happen:

I expect svcwatcher to work correctly.

How to reproduce it:

Install a standalone Kubernetes master node on the latest release of Debian 9.8 with flannel.
I used the post-install steps of DANM from the README.md.

Netwatcher seems to work correctly.

Anything else we need to know?:
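
The error message suggests that no DanmNet named "flannel" exists in the kube-system namespace. A hedged sketch of such an object (the NetworkType value assumes DANM's flannel delegation) would be:

apiVersion: danm.k8s.io/v1
kind: DanmNet
metadata:
  name: flannel
  namespace: kube-system
spec:
  NetworkID: flannel
  NetworkType: flannel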

Environment:

  • danm version
    3.0.0
  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
  • danm configuration:
    just postinstall
  • OS (e.g. from /etc/os-release):
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
  • Kernel (e.g. uname -a):
Linux deb-k8s-02 4.9.0-8-amd64 #1 SMP Debian 4.9.144-3.1 (2019-02-19) x86_64 GNU/Linux
  • Others:

Just to know: is it possible to make VLANs work with DANM on CentOS 7, which uses an old kernel?

Linux 3.10.0-957.10.1.el7.x86_64 #1 SMP Mon Mar 18 15:06:45 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Thanks for the help!! :D
