plugins's Issues

Unable to add custom DNS to container using flannel plugin

My /etc/cni/net.d/10-flannelnet.conf file looks like this:
{
  "name": "flannelnet",
  "type": "flannel",
  "subnetFile": "/var/run/flannel/subnet.env",
  "delegate": { "isDefaultGateway": true },
  "dns": {
    "nameservers": [ "10.112.10.2" ]
  }
}

I tried to create a network endpoint using the scripts available at: https://github.com/containernetworking/cni/tree/master/scripts

The endpoint inside the container gets created, but there is no DNS server entry in /etc/resolv.conf inside the container.

portmap: delete UDP conntrack entries on teardown

As observed in kubernetes/kubernetes#59033, a quick teardown + spinup of portmappings can cause UDP "flows" to be lost, thanks to stale conntrack entries.

From the original issue:

  1. A server pod exposes a UDP host port.
  2. A client sends packets to the server pod through the host port. This creates a conntrack entry.
  3. The server pod's IP changes for whatever reason, such as the pod being recreated.
  4. Due to the nature of UDP and conntrack, new requests from the same client to the host port keep hitting the stale conntrack entry.
  5. The client observes a traffic black hole.
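
A minimal sketch of the cleanup this implies, assuming the teardown path knows the UDP host port and simply shells out to the conntrack CLI from conntrack-tools (which would have to be present on the host). The flags are the standard conntrack ones; this is not the actual portmap implementation, and a real fix would likely talk to the kernel via netlink instead.

package sketch

import (
	"fmt"
	"os/exec"
	"strconv"
)

// deleteUDPConntrack removes conntrack entries for a UDP host port so that
// the next packet from an existing "flow" is re-evaluated against the current
// DNAT rules instead of the stale entry.
// Note: conntrack may exit non-zero when no entries matched; a real
// implementation would probably tolerate that case.
func deleteUDPConntrack(hostPort int) error {
	out, err := exec.Command("conntrack", "-D", "-p", "udp",
		"--dport", strconv.Itoa(hostPort)).CombinedOutput()
	if err != nil {
		return fmt.Errorf("conntrack -D failed: %v: %s", err, out)
	}
	return nil
}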

Is using `net.ipv6.conf.all.forwarding` correct?

Enabling ipv6 forwarding has some interesting side-effects. According to the kernel docs:

If local forwarding is enabled, Router behaviour is assumed.
This means exactly the reverse from the above:

  1. IsRouter flag is set in Neighbour Advertisements.
  2. Router Solicitations are not sent unless accept_ra is 2.
  3. Router Advertisements are ignored unless accept_ra is 2.
  4. Redirects are ignored.

We almost certainly do not want to do this globally. As the kernel docs say,

It is recommended to have the same setting on all interfaces; mixed router/host scenarios are rather uncommon.

However, that's exactly the scenario we might find ourselves in.

So, if we want to support a host with a SLAAC uplink and, say, host-local masqueraded link, I think we'll be in trouble. I need to do some experimenting and test this out.
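
If per-interface control turns out to be sufficient, the alternative would look roughly like the sketch below: write the forwarding sysctl only for the interfaces the plugin creates rather than flipping the global knob. Whether this interacts correctly with accept_ra on a SLAAC uplink is exactly what needs testing; the proc path is the only assumption here.

package sketch

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
)

// enableIPv6ForwardingOn enables forwarding for a single interface instead of
// setting net.ipv6.conf.all.forwarding, which would switch every interface on
// the host into router behaviour.
func enableIPv6ForwardingOn(ifName string) error {
	path := filepath.Join("/proc/sys/net/ipv6/conf", ifName, "forwarding")
	if err := ioutil.WriteFile(path, []byte("1"), 0644); err != nil {
		return fmt.Errorf("failed to enable IPv6 forwarding on %s: %v", ifName, err)
	}
	return nil
}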

bridge: mac address unstable for v6-only bridges

We set the bridge's MAC address (a.k.a. bridge ID) based on a deterministic permutation of the bridge's v4 address. When the bridge doesn't have a v4 address, no explicit MAC is set, so the kernel picks the lowest MAC among the bridge's ports. This changes as containers come and go.

We should probably just set the MAC explicitly in all cases and stop it changing.
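
A sketch of what "set the MAC explicitly" could look like using the vishvananda/netlink library the plugins already vendor; how the address should be derived for a v6-only bridge is deliberately left open here.

package sketch

import (
	"fmt"
	"net"

	"github.com/vishvananda/netlink"
)

// pinBridgeMAC sets the bridge's MAC address explicitly so the bridge ID no
// longer tracks the lowest MAC among its ports as veths are added and removed.
func pinBridgeMAC(brName string, mac net.HardwareAddr) error {
	br, err := netlink.LinkByName(brName)
	if err != nil {
		return fmt.Errorf("failed to look up bridge %q: %v", brName, err)
	}
	if err := netlink.LinkSetHardwareAddr(br, mac); err != nil {
		return fmt.Errorf("failed to set MAC on %q: %v", brName, err)
	}
	return nil
}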

Plugins should include their build version

Not knowing the exact plugin version makes debugging tricky, especially when it's not clear where the CNI binaries come from.

The plugins should have some kind of --version argument or equivalent.
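
One conventional way to do this in Go is a package-level version string overridden at build time with -ldflags. This is only a sketch of the approach, not what the plugins necessarily ship; since CNI plugins normally take their commands from environment variables, the argument check would have to run before the usual skel dispatch.

package main

import (
	"fmt"
	"os"
)

// version is meant to be overridden at build time, e.g.:
//   go build -ldflags "-X main.version=v0.6.0"
var version = "unknown"

func main() {
	if len(os.Args) > 1 && os.Args[1] == "--version" {
		fmt.Printf("CNI plugin version %s\n", version)
		os.Exit(0)
	}
	// ... the normal skel.PluginMain(...) entry point would follow here
}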

ptp plugin + IPVS hairpin issue

My test:

  • The CNI config file looks like:
{
        "name": "mynet",
        "type": "ptp",
        "ipMasq": true,
        "ipam": {
                "type": "host-local",
                "subnet": "10.1.1.0/24",
                "routes": [
                        { "dst": "0.0.0.0/0" }
                ]
        }
}
  • Run a container (nginx) with the ptp plugin:
./docker-run.sh -it --rm docker.io/richarvey/nginx-php-fpm
  • Enter the running container and curl its IP + port; it's reachable.

  • Enter the running container and curl a VIP (whose real backend is the container itself) + port; it's unreachable.
    P.S. I created an IPVS virtual server:

[root@SHA1000130405 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  1.2.3.4:80 rr
  -> 10.1.1.16:80                 Route   1      0          0

and bound the VIP (1.2.3.4) to a local dummy device.

551: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN
    link/ether 5a:e7:c7:95:e0:82 brd ff:ff:ff:ff:ff:ff
    inet 1.2.3.4/32 scope global kube-ipvs0

@squeed

DHCP: correctly handle route fields

From the RFC:

If the DHCP server returns both a Classless Static Routes option and
a Router option, the DHCP client MUST ignore the Router option.

Similarly, if the DHCP server returns both a Classless Static Routes
option and a Static Routes option, the DHCP client MUST ignore the
Static Routes option.

We do neither: we ignore the Router option and merge the Classless and Static sections.
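
A sketch of the precedence the RFC asks for, using hypothetical names for the parsed option sets rather than the dhcp plugin's real types:

package sketch

// parsedOptions is a hypothetical view of the DHCP options relevant to routing.
type parsedOptions struct {
	classlessRoutes []string // option 121 (Classless Static Routes)
	staticRoutes    []string // option 33  (Static Routes)
	routers         []string // option 3   (Router); each yields a default route
}

// effectiveRoutes applies the RFC precedence: when Classless Static Routes are
// present, both the Router option and the Static Routes option are ignored.
func effectiveRoutes(o parsedOptions) []string {
	if len(o.classlessRoutes) > 0 {
		return o.classlessRoutes
	}
	routes := append([]string{}, o.staticRoutes...)
	return append(routes, o.routers...)
}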

A home for netlink within CNI?

I spoke with @vishvananda and he's interested in finding a home for, and growing the contributor base of, his netlink library (https://github.com/vishvananda/netlink), which CNI uses along with other networking-related projects.

Is there any interest in CNI taking on maintainership?

At the moment I don't think CNI has a clear process for accepting new projects (unless I'm mistaken), so this could be an exercise in defining one.

Add a CNI for Windows

Creating a new issue to get feedback on whether it makes sense to add a Windows CNI plugin in this repository versus a separate repo. The contribution guidelines ask for third-party plugins to live in their own repos; however, one could argue that a Windows plugin makes sense to round out CNI's OS support. Looking for opinions.

WinCNI as it is implemented today is a single EXE that implements the CNI interface and uses the hcsshim Go wrapper to configure Windows HNS networks and endpoints (https://github.com/Microsoft/hcsshim).

Host-local: add a capability arg for desired address range

The host-local (and maybe the DHCP) plugin should support range configuration via capability args.

Questions:

  1. Do we support the full pool data structure, or just a CIDR?
  2. What do we do if there is both a range specified in configuration and a capability arg?

Re-using IP addresses of terminated pods

If a pod is terminated, its IP address doesn't get re-allocated to a new pod deployed with a different Docker image than the terminated pod. If the new pod is deployed with the same Docker image, the terminated pod's IP address is re-allocated.

Workaround: once the pod is terminated, delete the /var/lib/cni/networks/.

This issue is redirected from multus forum - https://github.com/Intel-Corp/multus-cni/issues/54.

Bridge plugin assumes system has IPv6 enabled

We run our infrastructure with IPv6 fully disabled via the kernel command line with ipv6.disable=1. When using a recent version of the CNI plugins, this causes the following failure for kubenet:

E1220 23:52:31.981780    7122 remote_runtime.go:91] RunPodSandbox from runtime service failed: rpc error: code = 2 desc = NetworkPlugin kubenet failed to set up pod "node-problem-detector-812sn_kube-system" netw
ork: Error adding container to network: open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory

Kubernetes is passing a body like:

{
  "cniVersion": "0.1.0",
  "name": "kubenet",
  "type": "bridge",
  "bridge": "cbr0",
  "mtu": 1460,
  "addIf": "eth0",
  "isGateway": true,
  "ipMasq": false,
  "hairpinMode": false,
  "ipam": {
    "type": "host-local",
    "subnet": "172.22.65.128/25",
    "gateway": "172.22.65.129",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}

I see in the code that the accept_dad setting is only touched if the IPAM code returns an IPv6 address here:
https://github.com/containernetworking/plugins/blob/master/plugins/main/bridge/bridge.go#L389-L396

I'm trying to figure out why the IPAM code is returning an IPv6 address at all when kubelet isn't asking for one. Is there a way to disable this?
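
A guard along these lines (a sketch, not the actual bridge code) would let the plugin skip the accept_dad sysctl entirely on hosts booted with ipv6.disable=1, where the whole /proc/sys/net/ipv6 tree is absent:

package sketch

import "os"

// ipv6Available reports whether the kernel has IPv6 support enabled; when it
// is disabled via ipv6.disable=1, /proc/sys/net/ipv6 does not exist and any
// accept_dad / accept_ra writes should be skipped.
func ipv6Available() bool {
	_, err := os.Stat("/proc/sys/net/ipv6")
	return err == nil
}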

Add support for cloud SDN solutions

Many SDN solutions support attaching multiple network interfaces to an instance running in the cloud (e.g. AWS, Google Cloud, and so on), and I think it is a good idea to use this capability to allocate NICs for Docker containers.

In order to implement this feature, the following new plugin is needed:

a plugin which attaches a new network interface to the instance and assigns a static IP stack to that interface.

I would like to implement this plugin. Any ideas are appreciated.

Creating VLAN tagging for the physical interface in a pod

I'd like to get suggestions from the CNI maintainers on the idea of modifying the vlan plugin to add VLAN tagging when the master interface is in the container network namespace.

eth0     Link encap:Ethernet  HWaddr 0e:6b:9a:06:a5:a0
          inet addr:192.168.116.3  Bcast:0.0.0.0  Mask:255.255.252.0
          inet6 addr: fe80::c6b:9aff:fe06:a5a0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:96 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:10232 (10.2 KB)  TX bytes:690 (690.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

net0     Link encap:Ethernet  HWaddr d2:70:b0:fd:ce:5a
          inet addr:10.56.217.131  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::d070:b0ff:fefd:ce5a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:11484 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3927528 (3.9 MB)  TX bytes:288 (288.0 B)

net0.33  Link encap:Ethernet  HWaddr d2:70:b0:fd:ce:5a
          inet addr:10.56.217.132  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::d070:b0ff:fefd:ce5a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)

Consider net0 to be the physical (master) interface, which is in the container network namespace; net0.33 is its VLAN-tagged sub-interface in the same container network namespace.

Why must GetCurrentNS() give us the host's namespace in ns.Do()?

plugins/pkg/ns/ns.go

Lines 127 to 167 in 556e509

func (ns *netNS) Do(toRun func(NetNS) error) error {
	if err := ns.errorIfClosed(); err != nil {
		return err
	}

	containedCall := func(hostNS NetNS) error {
		threadNS, err := GetCurrentNS()
		if err != nil {
			return fmt.Errorf("failed to open current netns: %v", err)
		}
		defer threadNS.Close()

		// switch to target namespace
		if err = ns.Set(); err != nil {
			return fmt.Errorf("error switching to ns %v: %v", ns.file.Name(), err)
		}
		defer threadNS.Set() // switch back

		return toRun(hostNS)
	}

	// save a handle to current network namespace
	hostNS, err := GetCurrentNS()
	if err != nil {
		return fmt.Errorf("Failed to open current namespace: %v", err)
	}
	defer hostNS.Close()

	var wg sync.WaitGroup
	wg.Add(1)

	var innerError error
	go func() {
		defer wg.Done()
		runtime.LockOSThread()
		innerError = containedCall(hostNS)
	}()
	wg.Wait()

	return innerError
}

Hi team, I'm learning the NetNS code and have a small question here.

L149 hostNS, err := GetCurrentNS()

I think GetCurrentNS() just gives us the current thread's namespace; how can we make sure it is the host's namespace (considering that ns.Do may be invoked from within a container namespace)?

Golang 1.10 - network namespaces from within a long-lived, multithreaded Go process

Context

Golang 1.10 changed how m-threads are spawned. They are spawned from a template thread and not cloned from the parent process.

Before this change, there was no safe way to change the namespace within a long lived multi-thread process.

Issue

We wrote a test for this to make sure the commit fixed the issue. As expected, it eventually fails when run with golang 1.9, and it passes with golang 1.10.

However, when we run the test with the untilItFails flag on 1.10, it eventually errors: the test fails to get the network namespace because the process task directory is missing.

    Expected error:
        <*os.PathError | 0xc42319b590>: {
            Op: "open",
            Path: "/proc/29002/task/29015/ns/net",
            Err: 0x2,
        }
        open /proc/29002/task/29015/ns/net: no such file or directory
    not to have occurred

    /media/sf_dev_go/src/github.com/containernetworking/plugins/pkg/ns/ns_linux_test.go:112

Steps to Reproduce

  1. Run the test with untilItFails with golang 1.10. It consistently took about 300 tries for us.
  2. See it fail.

Expected result

We expected it to succeed indefinitely.

Current result

It eventually fails after running the test too many times.

Possible Fix

We found that adding a runtime.UnlockOSThread() to the Do function prevents the test from failing (even when running with untilItFails).

We believe unlocking the os thread should be safe in 1.10 due to this fix.
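
In terms of the Do function quoted in the previous issue, the change we tested looks roughly like this (a sketch of our local patch, not a merged fix):

	go func() {
		defer wg.Done()
		runtime.LockOSThread()
		// In Go 1.10+, unlocking after containedCall's deferred threadNS.Set()
		// has restored the namespace should be safe; before 1.10 the unlocked
		// thread could be reused by other goroutines while still in the wrong
		// namespace.
		defer runtime.UnlockOSThread()
		innerError = containedCall(hostNS)
	}()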

However, this fix causes the test to slow down significantly over time, for reasons that we don't understand.

nit typo in info

Some additional CNI network plugins, matinained by the containernetworking team

maintained?

DHCP daemon crash on Release()

Running CNI IPAM plugin (dhcp) on a network managed by libvirt (NATed)...

Everything looks good except for when I delete my services/pods, I see the following in the DHCP daemon output and it crashes.

Jan 04 22:19:17 sdp dhcp[14769]: panic: close of closed channel
Jan 04 22:19:17 sdp dhcp[14769]: goroutine 38 [running]:
Jan 04 22:19:17 sdp dhcp[14769]: main.(*DHCPLease).Stop(0xc4200bc7e0)
Jan 04 22:19:17 sdp dhcp[14769]: /opt/src/gopath/src/github.com/containernetworking/plugin
Jan 04 22:19:17 sdp dhcp[14769]: main.(*DHCP).Release(0xc42000f400, 0xc420194690, 0xa5c238, 0x0, 0
Jan 04 22:19:17 sdp dhcp[14769]: /opt/src/gopath/src/github.com/containernetworking/plugin
Jan 04 22:19:17 sdp dhcp[14769]: reflect.Value.call(0xc420046360, 0xc42000c140, 0x13, 0x8444a6, 0x
Jan 04 22:19:17 sdp dhcp[14769]: /usr/local/go/src/reflect/value.go:434 +0x91f
Jan 04 22:19:17 sdp dhcp[14769]: reflect.Value.Call(0xc420046360, 0xc42000c140, 0x13, 0xc42001e720
Jan 04 22:19:17 sdp dhcp[14769]: /usr/local/go/src/reflect/value.go:302 +0xa4
Jan 04 22:19:17 sdp dhcp[14769]: net/rpc.(*service).call(0xc42003b980, 0xc42003b8c0, 0xc4201825e0,
Jan 04 22:19:17 sdp dhcp[14769]: /usr/local/go/src/net/rpc/server.go:387 +0x144
Jan 04 22:19:17 sdp dhcp[14769]: created by net/rpc.(*Server).ServeCodec
Jan 04 22:19:17 sdp dhcp[14769]: /usr/local/go/src/net/rpc/server.go:481 +0x404
J

type of `IPConfig.Interface` is changed in `containernetworking/cni`

This PR containernetworking/cni#477 has changed the type of IPConfig.Interface from int to *int.
If containernetworking/plugins updates the vendored pkg, it may cause https://github.com/containernetworking/plugins/blob/master/pkg/ipam/ipam.go#L56 to stop working.

The PR containernetworking/cni#477 is rational, but containernetworking/plugins has not caught up with containernetworking/cni. If someone develops an ipam plugin and uses vendoring in a project that vendors both containernetworking/cni and containernetworking/plugins, they may find that https://github.com/containernetworking/plugins/blob/master/pkg/ipam/ipam.go#L56 stops the ipam plugin from compiling.
So is it necessary to vendor containernetworking/cni in containernetworking/plugins, or is there a way to catch up with containernetworking/cni?
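
For reference, the kind of nil-guard consumers need once the field is a pointer looks like this; a sketch against the types/current package, with a made-up helper name:

package sketch

import "github.com/containernetworking/cni/pkg/types/current"

// ipBelongsTo reports whether an IPConfig refers to the sandbox interface at
// index idx, tolerating results where no interface index was set at all.
func ipBelongsTo(ipc *current.IPConfig, idx int) bool {
	return ipc.Interface != nil && *ipc.Interface == idx
}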

Sending DHCP requests via an alternative link

I realised that the DHCP ipam plugin can only use the created device to contact the DHCP server. I am currently in a scenario where the device itself can't reach the DHCP server; there is routing and traffic mirroring happening in between.

I think that adding the option to talk to the DHCP server via a different device would be a nice addition, and it would fit the spec's separation between ipam and main plugins pretty well.

What do you guys think?

include version number somewhere in the source

I'm looking to do automated rpm builds of the master branch of this repo for the containernetworking-cni rpm on Fedora rawhide. Part of that involves grabbing the latest version number. But, afaict, the version number is recorded only in git tags, which won't quite work for me since I'll be using master. So I'm hoping you can have the version string mentioned somewhere in the source.

/cc @fkluknav

Traffic shaping plugin

A motivating example is the kubenet deprecation, but that certainly isn't the only use. It would be nice to have a chained plugin that allowed for some basic bandwidth limiting / traffic shaping.

This would be a new capability arg.
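
One plausible building block, sketched here with the vendored vishvananda/netlink library, is a token-bucket filter qdisc on the host-side veth to cap container egress; the handle, buffer, and limit values below are placeholders, not tuned recommendations, and this is not a committed design.

package sketch

import (
	"fmt"

	"github.com/vishvananda/netlink"
)

// addEgressLimit attaches a TBF qdisc to the named host-side veth so traffic
// leaving the container is shaped to roughly rateBps bytes per second.
func addEgressLimit(hostVeth string, rateBps uint64) error {
	link, err := netlink.LinkByName(hostVeth)
	if err != nil {
		return err
	}
	tbf := &netlink.Tbf{
		QdiscAttrs: netlink.QdiscAttrs{
			LinkIndex: link.Attrs().Index,
			Handle:    netlink.MakeHandle(1, 0),
			Parent:    netlink.HANDLE_ROOT,
		},
		Rate:   rateBps,
		Limit:  1 << 20, // placeholder queue limit
		Buffer: 1 << 16, // placeholder burst buffer
	}
	if err := netlink.QdiscAdd(tbf); err != nil {
		return fmt.Errorf("failed to add tbf qdisc on %s: %v", hostVeth, err)
	}
	return nil
}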

Build test fails. How can I resolve this?

Hi.

I've been trying to install the plugins and the binaries seem to be created just fine, but when I run test.sh I see a lot of error messages. I cannot figure out the reason for, or find any clue about, those errors.

OS : Gentoo Linux
uname -a : Linux ns2 4.9.34-gentoo #7 SMP Wed Jul 19 13:04:15 KST 2017 x86_64 Intel(R) Xeon(R) CPU X5680 @ 3.33GHz GenuineIntel GNU/Linux

Below is the entire log output from test.sh.

ns2 ~/plugins # ./test.sh 
Building plugins
  flannel
  portmap
  tuning
  bridge
  ipvlan
  loopback
  macvlan
  ptp
  vlan
  dhcp
  host-local
  sample
Running tests
ok  	github.com/containernetworking/plugins/plugins/ipam/dhcp	0.009s
ok  	github.com/containernetworking/plugins/plugins/ipam/host-local	0.018s
ok  	github.com/containernetworking/plugins/plugins/ipam/host-local/backend/allocator	0.016s
ok  	github.com/containernetworking/plugins/plugins/main/loopback	3.378s
Running Suite: ipvlan Suite
===========================
Random Seed: 1502093322
Will run 3 of 3 specs

• Failure [0.406 seconds]
ipvlan Operations
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/ipvlan/ipvlan_test.go:227
  creates an ipvlan link in a non-default namespace [It]
  /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/ipvlan/ipvlan_test.go:103

  Expected error:
      <*errors.errorString | 0xc420156830>: {
          s: "failed to create ipvlan: operation not supported",
      }
      failed to create ipvlan: operation not supported
  not to have occurred

  /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/ipvlan/ipvlan_test.go:87
------------------------------
• Failure [0.483 seconds]
ipvlan Operations
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/ipvlan/ipvlan_test.go:227
  configures and deconfigures an iplvan link with ADD/DEL [It]
  /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/ipvlan/ipvlan_test.go:189

  Expected error:
      <*errors.errorString | 0xc420157a50>: {
          s: "failed to create ipvlan: operation not supported",
      }
      failed to create ipvlan: operation not supported
  not to have occurred

  /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/ipvlan/ipvlan_test.go:137
------------------------------
•

Summarizing 2 Failures:

[Fail] ipvlan Operations [It] creates an ipvlan link in a non-default namespace 
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/ipvlan/ipvlan_test.go:87

[Fail] ipvlan Operations [It] configures and deconfigures an iplvan link with ADD/DEL 
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/ipvlan/ipvlan_test.go:137

Ran 3 of 3 Specs in 2.059 seconds
FAIL! -- 1 Passed | 2 Failed | 0 Pending | 0 Skipped --- FAIL: TestIpvlan (2.06s)
FAIL
FAIL	github.com/containernetworking/plugins/plugins/main/ipvlan	2.069s
ok  	github.com/containernetworking/plugins/plugins/main/macvlan	2.036s
Running Suite: bridge Suite
===========================
Random Seed: 1502093322
Will run 8 of 8 specs

••
------------------------------
• Failure [3.583 seconds]
bridge Operations
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/bridge/bridge_test.go:895
  configures and deconfigures a bridge and veth with default route with ADD/DEL for 0.3.0 config [It]
  /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/bridge/bridge_test.go:666

  Expected error:
      <*os.PathError | 0xc4201e3920>: {
          Op: "open",
          Path: "/proc/sys/net/ipv6/conf/eth0/accept_dad",
          Err: 0x2,
      }
      open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory
  not to have occurred

  /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/bridge/bridge_test.go:253
------------------------------
• Failure [1.553 seconds]
bridge Operations
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/bridge/bridge_test.go:895
  configures and deconfigures a bridge and veth with default route with ADD/DEL for 0.3.1 config [It]
  /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/bridge/bridge_test.go:696

  Expected error:
      <*os.PathError | 0xc4204cb1a0>: {
          Op: "open",
          Path: "/proc/sys/net/ipv6/conf/eth0/accept_dad",
          Err: 0x2,
      }
      open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory
  not to have occurred

  /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/bridge/bridge_test.go:253
------------------------------
•
------------------------------
• Failure [1.183 seconds]
bridge Operations
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/bridge/bridge_test.go:895
  configures and deconfigures a bridge and veth with default route with ADD/DEL for 0.1.0 config [It]
  /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/bridge/bridge_test.go:744

  Expected error:
      <*os.PathError | 0xc420366360>: {
          Op: "open",
          Path: "/proc/sys/net/ipv6/conf/eth0/accept_dad",
          Err: 0x2,
      }
      open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory
  not to have occurred

  /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/bridge/bridge_test.go:425
------------------------------
• Failure [0.110 seconds]
bridge Operations
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/bridge/bridge_test.go:895
  ensure bridge address [It]
  /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/bridge/bridge_test.go:857

  Expected error:
      <*errors.errorString | 0xc4202e1930>: {
          s: "could not add IP address to \"bridge0\": operation not supported",
      }
      could not add IP address to "bridge0": operation not supported
  not to have occurred

  /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/bridge/bridge_test.go:817
------------------------------
•

Summarizing 4 Failures:

[Fail] bridge Operations [It] configures and deconfigures a bridge and veth with default route with ADD/DEL for 0.3.0 config 
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/bridge/bridge_test.go:253

[Fail] bridge Operations [It] configures and deconfigures a bridge and veth with default route with ADD/DEL for 0.3.1 config 
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/bridge/bridge_test.go:253

[Fail] bridge Operations [It] configures and deconfigures a bridge and veth with default route with ADD/DEL for 0.1.0 config 
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/bridge/bridge_test.go:425

[Fail] bridge Operations [It] ensure bridge address 
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/bridge/bridge_test.go:817

Ran 8 of 8 Specs in 7.285 seconds
FAIL! -- 4 Passed | 4 Failed | 0 Pending | 0 Skipped --- FAIL: TestBridge (7.29s)
FAIL
FAIL	github.com/containernetworking/plugins/plugins/main/bridge	7.292s
Running Suite: ptp Suite
========================
Random Seed: 1502093322
Will run 3 of 3 specs

•
------------------------------
• Failure [0.217 seconds]
ptp Operations
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/ptp/ptp_test.go:205
  configures and deconfigures a dual-stack ptp link with ADD/DEL [It]
  /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/ptp/ptp_test.go:165

  Expected error:
      <*errors.errorString | 0xc42025e1a0>: {
          s: "Could not enable IP forwarding: open /proc/sys/net/ipv6/conf/all/forwarding: no such file or directory",
      }
      Could not enable IP forwarding: open /proc/sys/net/ipv6/conf/all/forwarding: no such file or directory
  not to have occurred

  /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/ptp/ptp_test.go:70
------------------------------
•

Summarizing 1 Failure:

[Fail] ptp Operations [It] configures and deconfigures a dual-stack ptp link with ADD/DEL 
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/ptp/ptp_test.go:70

Ran 3 of 3 Specs in 2.038 seconds
FAIL! -- 2 Passed | 1 Failed | 0 Pending | 0 Skipped --- FAIL: TestPtp (2.04s)
FAIL
FAIL	github.com/containernetworking/plugins/plugins/main/ptp	2.043s
Running Suite: Flannel Suite
============================
Random Seed: 1502093321
Will run 5 of 5 specs

STEP: calling ADD
• Failure [0.107 seconds]
Flannel
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/meta/flannel/flannel_test.go:216
  CNI lifecycle
  /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/meta/flannel/flannel_test.go:160
    uses dataDir for storing network configuration [It]
    /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/meta/flannel/flannel_test.go:159

    Expected error:
        <*types.Error | 0xc42022ea50>: {
            Code: 100,
            Msg: "open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory",
            Details: "",
        }
        open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory
    not to have occurred

    /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/meta/flannel/flannel_test.go:109
------------------------------
••••

Summarizing 1 Failure:

[Fail] Flannel CNI lifecycle [It] uses dataDir for storing network configuration 
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/meta/flannel/flannel_test.go:109

Ran 5 of 5 Specs in 1.730 seconds
FAIL! -- 4 Passed | 1 Failed | 0 Pending | 0 Skipped --- FAIL: TestFlannel (1.73s)
FAIL
FAIL	github.com/containernetworking/plugins/plugins/meta/flannel	1.738s
Running Suite: vlan Suite
=========================
Random Seed: 1502093322
Will run 3 of 3 specs

• Failure [0.404 seconds]
vlan Operations
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/vlan/vlan_test.go:237
  creates an vlan link in a non-default namespace with given MTU [It]
  /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/vlan/vlan_test.go:106

  Expected error:
      <*errors.errorString | 0xc420158850>: {
          s: "failed to create vlan: operation not supported",
      }
      failed to create vlan: operation not supported
  not to have occurred

  /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/vlan/vlan_test.go:89
------------------------------
• Failure [0.470 seconds]
vlan Operations
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/vlan/vlan_test.go:237
  creates an vlan link in a non-default namespace with master's MTU [It]
  /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/vlan/vlan_test.go:150

  Expected error:
      <*errors.errorString | 0xc4201598f0>: {
          s: "failed to create vlan: operation not supported",
      }
      failed to create vlan: operation not supported
  not to have occurred

  /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/vlan/vlan_test.go:133
------------------------------
• Failure [1.183 seconds]
vlan Operations
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/vlan/vlan_test.go:237
  configures and deconfigures an vlan link with ADD/DEL [It]
  /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/vlan/vlan_test.go:236

  Expected error:
      <*errors.errorString | 0xc42021eb30>: {
          s: "failed to create vlan: operation not supported",
      }
      failed to create vlan: operation not supported
  not to have occurred

  /root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/vlan/vlan_test.go:184
------------------------------


Summarizing 3 Failures:

[Fail] vlan Operations [It] creates an vlan link in a non-default namespace with given MTU 
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/vlan/vlan_test.go:89

[Fail] vlan Operations [It] creates an vlan link in a non-default namespace with master's MTU 
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/vlan/vlan_test.go:133

[Fail] vlan Operations [It] configures and deconfigures an vlan link with ADD/DEL 
/root/plugins/gopath/src/github.com/containernetworking/plugins/plugins/main/vlan/vlan_test.go:184

Ran 3 of 3 Specs in 2.058 seconds
FAIL! -- 0 Passed | 3 Failed | 0 Pending | 0 Skipped --- FAIL: TestVlan (2.06s)
FAIL
FAIL	github.com/containernetworking/plugins/plugins/main/vlan	2.065s
ok  	github.com/containernetworking/plugins/plugins/sample	1.213s
ok  	github.com/containernetworking/plugins/pkg/ip	5.855s
Running Suite: Ipam Suite
=========================
Random Seed: 1502093322
Will run 8 of 8 specs

• Failure [0.359 seconds]
IPAM Operations
/root/plugins/gopath/src/github.com/containernetworking/plugins/pkg/ipam/ipam_test.go:299
  configures a link with addresses and routes [It]
  /root/plugins/gopath/src/github.com/containernetworking/plugins/pkg/ipam/ipam_test.go:187

  Expected error:
      <*errors.errorString | 0xc420158ce0>: {
          s: "failed to add IP addr {Version:6 Interface:0xc420158b30 Address:{IP:abcd:1234:ffff::cdde Mask:ffffffffffffffff0000000000000000} Gateway:abcd:1234:ffff::1} to \"eth0\": operation not supported",
      }
      failed to add IP addr {Version:6 Interface:0xc420158b30 Address:{IP:abcd:1234:ffff::cdde Mask:ffffffffffffffff0000000000000000} Gateway:abcd:1234:ffff::1} to "eth0": operation not supported
  not to have occurred

  /root/plugins/gopath/src/github.com/containernetworking/plugins/pkg/ipam/ipam_test.go:139
------------------------------
• Failure [0.337 seconds]
IPAM Operations
/root/plugins/gopath/src/github.com/containernetworking/plugins/pkg/ipam/ipam_test.go:299
  configures a link with routes using address gateways [It]
  /root/plugins/gopath/src/github.com/containernetworking/plugins/pkg/ipam/ipam_test.go:226

  Expected error:
      <*errors.errorString | 0xc420159910>: {
          s: "failed to add IP addr {Version:6 Interface:0xc420159790 Address:{IP:abcd:1234:ffff::cdde Mask:ffffffffffffffff0000000000000000} Gateway:abcd:1234:ffff::1} to \"eth0\": operation not supported",
      }
      failed to add IP addr {Version:6 Interface:0xc420159790 Address:{IP:abcd:1234:ffff::cdde Mask:ffffffffffffffff0000000000000000} Gateway:abcd:1234:ffff::1} to "eth0": operation not supported
  not to have occurred

  /root/plugins/gopath/src/github.com/containernetworking/plugins/pkg/ipam/ipam_test.go:196
------------------------------
••••••

Summarizing 2 Failures:

[Fail] IPAM Operations [It] configures a link with addresses and routes 
/root/plugins/gopath/src/github.com/containernetworking/plugins/pkg/ipam/ipam_test.go:139

[Fail] IPAM Operations [It] configures a link with routes using address gateways 
/root/plugins/gopath/src/github.com/containernetworking/plugins/pkg/ipam/ipam_test.go:196

Ran 8 of 8 Specs in 3.742 seconds
FAIL! -- 6 Passed | 2 Failed | 0 Pending | 0 Skipped --- FAIL: TestIpam (3.74s)
FAIL
FAIL	github.com/containernetworking/plugins/pkg/ipam	3.748s
ok  	github.com/containernetworking/plugins/pkg/ns	6.037s
ok  	github.com/containernetworking/plugins/pkg/utils	0.006s
ok  	github.com/containernetworking/plugins/pkg/utils/hwaddr	0.005s
?   	github.com/containernetworking/plugins/pkg/utils/sysctl	[no test files]
ok  	github.com/containernetworking/plugins/plugins/meta/portmap	2.952s

PTP plugin has an issue resolving the gateway MAC address in a K8s cluster

The PTP plugin is built on top of a veth pair. Specifically, in the container namespace the IPAM module assigns an IP address (e.g., 10.64.1.2) to the veth device, while the other end in the host namespace is always assigned the gateway IP (e.g., 10.64.1.1).

In the container namespace, the routes are configured so that the gateway is the only connected device. The implication is that when crafting outgoing packets, we only need the gateway's MAC address to serve as the destination MAC.

To take a specific example: a container owns the 10.64.1.2/24 IP address, and its routes would look like:

default via 10.64.1.1 dev eth0
10.64.1.0/24 via 10.64.1.1 dev eth0 src 10.64.1.4
10.64.1.1 dev eth0 src 10.64.2.4

If the container namespace has the ARP entry for 10.64.1.1, it has no problem communicating with any other K8s pods and nodes.

However, when I use the PTP plugin to bring up the K8s cluster with the following CNI spec template:

 {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "ptp",
          "mtu": 1460,
          "ipam": {
              "type": "host-local",
              "subnet": "10.64.1.0/24",
              "routes": [
                {"dst": "0.0.0.0/0"}
              ]
          }
        }
      ]
   }

I observed:

  1. ARP entries are incomplete in the container namespace, as no device responds to the ARP request (observed with tcpdump);
  2. martian source errors appear in dmesg.

I can see a couple of solutions:

  1. Relax the RPF check on the veth devices in the host namespace, since all veth devices carrying the gateway IP confuses the kernel.
  2. Remove the gateway IP from the veth devices in the host namespace and enable proxy ARP (v4) / NDP proxy (v6).

In my testing, either of the above approaches resolves ARP resolution of the gateway IP in all container namespaces.

P.S. I tried both approaches and I think 2 is cleaner; however, it somehow could not pass the ptp test suite. I will send a PR for 1 shortly.
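
For approach 2, the host-side change is essentially a per-interface sysctl; a sketch of that piece only (not the ptp plugin's code), assuming the gateway IP has already been removed from the host-side veth:

package sketch

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
)

// enableProxyARP makes the host answer ARP requests for the gateway IP on the
// container's behalf, so the container can resolve 10.64.1.1 even though no
// device on the link actually owns that address.
func enableProxyARP(hostVeth string) error {
	path := filepath.Join("/proc/sys/net/ipv4/conf", hostVeth, "proxy_arp")
	if err := ioutil.WriteFile(path, []byte("1"), 0644); err != nil {
		return fmt.Errorf("failed to enable proxy_arp on %s: %v", hostVeth, err)
	}
	return nil
}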

Support use of .1 address when no gateway is specified

We do not specify a gateway when calling ipam. We also want to use the .1 IP when the gateway is not specified. Currently, if we set rangeStart to .1, it still does not assign the .1 IP because it assumes the gateway is .1.

It seems that in this code, a nil Gateway is defaulted to .1, and then .1 will never be assigned, even if rangeStart is .1.

cc / @adowns01

Enable generating random ports with host port mapping

This is a problem that I found when testing https://github.com/projectcalico/k8s-policy/issues/109. When I use the hostport mapping feature in Calico, I always need to specify the pod's hostPort in the ports section, as follows.

root@k8s001:~/calico2.4# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-host
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: nginx-host
    ports:
    - containerPort: 80
      hostPort: 80
  restartPolicy: Always
root@k8s001:~/calico2.4# kubectl create -f ./pod.yaml
pod "nginx-host" created
root@k8s001:~/calico2.4# kubectl get pods -owide
NAME                      READY     STATUS    RESTARTS   AGE       IP                NODE
nginx-host                1/1       Running   0          9s        192.168.124.143   k8s004

The problem is that if one node starts two such pods, only one pod can start; the other stays pending forever because it cannot get the same port again.

Host port mapping should be able to generate the host port randomly so that end users do not need to specify the host port.
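
One common way to pick a free host port in Go is to bind port 0 and read back what the kernel chose. This is only an illustration of the idea (and is inherently racy, since the port is released again before any DNAT rule that uses it is installed):

package main

import (
	"fmt"
	"net"
)

// pickFreePort asks the kernel for an unused TCP port by binding port 0.
func pickFreePort() (int, error) {
	l, err := net.Listen("tcp", ":0")
	if err != nil {
		return 0, err
	}
	defer l.Close()
	return l.Addr().(*net.TCPAddr).Port, nil
}

func main() {
	port, err := pickFreePort()
	if err != nil {
		panic(err)
	}
	fmt.Println("allocated host port:", port)
}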

Why does calico-ipam allocate IPs using the `network IP`?

For example:
I have a three-node k8s cluster using Calico networking, and everything is fine:

$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE       IP             NODE
kube-system   calico-kube-controllers-578d98f678-rlwgj   1/1       Running   0          15h       192.168.1.43   192.168.1.43
kube-system   calico-node-828ls                          2/2       Running   0          15h       192.168.1.41   192.168.1.41
kube-system   calico-node-kk4jq                          2/2       Running   0          15h       192.168.1.42   192.168.1.42
kube-system   calico-node-m5j5z                          2/2       Running   0          15h       192.168.1.43   192.168.1.43

Then I installed kube-dns:

$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE       IP             NODE
kube-system   calico-kube-controllers-578d98f678-rlwgj   1/1       Running   0          15h       192.168.1.43   192.168.1.43
kube-system   calico-node-828ls                          2/2       Running   0          15h       192.168.1.41   192.168.1.41
kube-system   calico-node-kk4jq                          2/2       Running   0          15h       192.168.1.42   192.168.1.42
kube-system   calico-node-m5j5z                          2/2       Running   0          15h       192.168.1.43   192.168.1.43
kube-system   kube-dns-566c7c77d8-lshlt                  3/3       Running   0          15h       172.20.120.0   192.168.1.42

Notice that pod kube-dns-566c7c77d8-lshlt's IP is 172.20.120.0, which is a network IP; in the networking industry we usually treat this kind of IP as unusable, and it has actually caused some problems. Can we change this behavior?

ns.GetNS sometimes reports the error 'unknown FS magic'

My program uses the ns pkg to set routes in other containers; it runs in a privileged container with the host /var/run dir mounted.

But after my program has been running for a period of time, it reports this error:

unknown FS magic on "/var/run/docker/netns/3b51597626b6": 1021994

So, how do I fix this?

Other details:

go version

go version go1.8.3 linux/amd64

The code reporting the error:

func IsNSorErr(nspath string) error {
	stat := syscall.Statfs_t{}
	if err := syscall.Statfs(nspath, &stat); err != nil {
		if os.IsNotExist(err) {
			err = NSPathNotExistErr{msg: fmt.Sprintf("failed to Statfs %q: %v", nspath, err)}
		} else {
			err = fmt.Errorf("failed to Statfs %q: %v", nspath, err)
		}
		return err
	}

	switch stat.Type {
	case PROCFS_MAGIC, NSFS_MAGIC:
		return nil
	default:
		return NSPathNotNSErr{msg: fmt.Sprintf("unknown FS magic on %q: %x", nspath, stat.Type)}
	}
}

The stat type from the error:

#define TMPFS_MAGIC		0x01021994

[panic] panic happened when using portmap

panic: interface conversion: error is *os.PathError, not *exec.ExitError

goroutine 1 [running]:
panic(0x544a40, 0xc4200cd080)
	/home/travis/.gimme/versions/go1.7.5.linux.amd64/src/runtime/panic.go:500 +0x1a1
github.com/containernetworking/plugins/vendor/github.com/coreos/go-iptables/iptables.(*IPTables).runWithOutput(0xc420015dc0, 0xc4200ee1a0, 0xe, 0x1a, 0x0, 0x0, 0x0, 0x0)
	/home/travis/gopath/src/github.com/containernetworking/plugins/gopath/src/github.com/containernetworking/plugins/vendor/github.com/coreos/go-iptables/iptables/iptables.go:257 +0x4ad
github.com/containernetworking/plugins/vendor/github.com/coreos/go-iptables/iptables.(*IPTables).run(0xc420015dc0, 0xc4200f6180, 0xc, 0xc, 0xc4200ce500, 0x8)
	/home/travis/gopath/src/github.com/containernetworking/plugins/gopath/src/github.com/containernetworking/plugins/vendor/github.com/coreos/go-iptables/iptables/iptables.go:227 +0x5b
github.com/containernetworking/plugins/vendor/github.com/coreos/go-iptables/iptables.(*IPTables).Exists(0xc420015dc0, 0x56d519, 0x3, 0xc420010990, 0x1c, 0xc4200ce500, 0x8, 0x8, 0x0, 0x0, ...)
	/home/travis/gopath/src/github.com/containernetworking/plugins/gopath/src/github.com/containernetworking/plugins/vendor/github.com/coreos/go-iptables/iptables/iptables.go:95 +0x1a6
main.prependUnique(0xc420015dc0, 0x56d519, 0x3, 0xc420010990, 0x1c, 0xc4200ce500, 0x8, 0x8, 0xc42000ce08, 0x1)
	/home/travis/gopath/src/github.com/containernetworking/plugins/gopath/src/github.com/containernetworking/plugins/plugins/meta/portmap/chain.go:104 +0x84
main.(*chain).setup(0xc42005bbd0, 0xc420015dc0, 0xc4200c1f60, 0x1, 0x1, 0x10, 0xc4200c1f60)
	/home/travis/gopath/src/github.com/containernetworking/plugins/gopath/src/github.com/containernetworking/plugins/plugins/meta/portmap/chain.go:47 +0x113
main.forwardPorts(0xc4200181e0, 0xc42000d220, 0x10, 0x10, 0x1, 0x0)
	/home/travis/gopath/src/github.com/containernetworking/plugins/gopath/src/github.com/containernetworking/plugins/plugins/meta/portmap/portmap.go:84 +0x428
main.cmdAdd(0xc420026690, 0xc42000c930, 0x5)
	/home/travis/gopath/src/github.com/containernetworking/plugins/gopath/src/github.com/containernetworking/plugins/plugins/meta/portmap/main.go:93 +0x1d8
github.com/containernetworking/plugins/vendor/github.com/containernetworking/cni/pkg/skel.(*dispatcher).checkVersionAndCall(0xc420016440, 0xc420026690, 0x815820, 0xc420012c30, 0x57f748, 0x0, 0x30)
	/home/travis/gopath/src/github.com/containernetworking/plugins/gopath/src/github.com/containernetworking/plugins/vendor/github.com/containernetworking/cni/pkg/skel/skel.go:162 +0x175
github.com/containernetworking/plugins/vendor/github.com/containernetworking/cni/pkg/skel.(*dispatcher).pluginMain(0xc420016440, 0x57f748, 0x57f750, 0x815820, 0xc420012c30, 0x50)
	/home/travis/gopath/src/github.com/containernetworking/plugins/gopath/src/github.com/containernetworking/plugins/vendor/github.com/containernetworking/cni/pkg/skel/skel.go:173 +0x372
github.com/containernetworking/plugins/vendor/github.com/containernetworking/cni/pkg/skel.PluginMainWithError(0x57f748, 0x57f750, 0x815820, 0xc420012c30, 0x489040)
	/home/travis/gopath/src/github.com/containernetworking/plugins/gopath/src/github.com/containernetworking/plugins/vendor/github.com/containernetworking/cni/pkg/skel/skel.go:210 +0xf0
github.com/containernetworking/plugins/vendor/github.com/containernetworking/cni/pkg/skel.PluginMain(0x57f748, 0x57f750, 0x815820, 0xc420012c30)
	/home/travis/gopath/src/github.com/containernetworking/plugins/gopath/src/github.com/containernetworking/plugins/vendor/github.com/containernetworking/cni/pkg/skel/skel.go:222 +0x63
main.main()
	/home/travis/gopath/src/github.com/containernetworking/plugins/gopath/src/github.com/containernetworking/plugins/plugins/meta/portmap/main.go:125 +0xc8

@squeed I think we should fix it.
ref: coreos/go-iptables#35, coreos/go-iptables#37

Add firewalld chained plugin

When firewalld is used on a host that is using CNI, the new IP addresses created for containers are not registered with firewalld, which means that network traffic to and from the containers is blocked.

There is an attempt to integrate firewalld and CNI, but it needs to be re-implemented as a chained plugin: containernetworking/cni#138

Related issue: rkt/rkt#2206

bridge plugin veth not cleaned up upon ipam address failure

When the bridge plugin is told that no more addresses are available, the veth it created isn't cleaned up.

This can be reproduced with a simple test using rangeStart/rangeEnd to simulate e.g. IPAM address exhaustion:

$ sudo ip netns add test0
$ sudo ip netns add test1  
$ cat test.conf           
{
  "name": "test",
  "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "mynet",
      "ipMasq": true,
      "isGateway": true,
      "ipam": {
      "type": "host-local",
      "subnet": "10.244.10.0/24",
      "rangeStart": "10.244.10.2",
      "rangeEnd": "10.244.10.2",
      "routes": [
          { "dst": "0.0.0.0/0"  }
      ]
      }
}

$ ip addr show | grep veth
$ sudo cnitool add test /var/run/netns/test0
{
    "interfaces": [
        {
            "name": "mynet",
            "mac": "0a:58:0a:f4:0a:01"
        },
        {
            "name": "vethcf0a0b90",
            "mac": "42:80:89:85:27:d6"
        },
        {
            "name": "eth0",
            "mac": "0a:58:0a:f4:0a:02",
            "sandbox": "/var/run/netns/test0"
        }
    ],
    "ips": [
        {
            "version": "4",
            "interface": 2,
            "address": "10.244.10.2/24",
            "gateway": "10.244.10.1"
        }
    ],
    "routes": [
        {
            "dst": "0.0.0.0/0"
        }
    ],
    "dns": {}
}
$ ip addr show | grep veth
23: vethcf0a0b90@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master mynet state UP group default 
$ sudo cnitool add test /var/run/netns/test1
no IP addresses available in network: test
$ ip addr show | grep veth
23: vethcf0a0b90@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master mynet state UP group default 
24: veth2d72b1c3@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master mynet state UP group default 
$ sudo cnitool del test /var/run/netns/test0
$ ip addr show | grep veth
24: veth2d72b1c3@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master mynet state UP group default 
$ 
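
A sketch of the cleanup the bridge plugin could do when IPAM fails after the veth pair already exists, assuming the pkg/ipam, pkg/ip and pkg/ns helpers the plugins already carry; the wrapper name is made up and error handling around the delete itself is omitted:

package sketch

import (
	"github.com/containernetworking/cni/pkg/skel"
	"github.com/containernetworking/cni/pkg/types"
	"github.com/containernetworking/plugins/pkg/ip"
	"github.com/containernetworking/plugins/pkg/ipam"
	"github.com/containernetworking/plugins/pkg/ns"
)

// execAddWithCleanup runs IPAM and, if it fails (e.g. "no IP addresses
// available in network"), deletes the container-side veth that was already
// created so the host-side peer does not leak onto the bridge.
func execAddWithCleanup(ipamType string, args *skel.CmdArgs, netns ns.NetNS) (types.Result, error) {
	result, err := ipam.ExecAdd(ipamType, args.StdinData)
	if err != nil {
		_ = netns.Do(func(_ ns.NetNS) error {
			return ip.DelLinkByName(args.IfName)
		})
		return nil, err
	}
	return result, nil
}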

Kubernetes pod's hostIP does not function

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

When specifying hostIP: $IP for a pod, the resulting iptables rule does not contain -d $IP as expected.

What you expected to happen:

That the generated iptables rule has -d $IP alongside --dport $PORT

How to reproduce it (as minimally and precisely as possible):

Create a deployment: kubectl run my-nginx --image=nginx --port=80

Use kubectl edit deployment my-nginx to set

    ports:
    - containerPort: 80
      hostIP: 192.168.1.101
      hostPort: 18080
      protocol: TCP

Check with iptables -t nat -S and see:

-A CNI-DN-80ae20ae0de6904d5f4b4 -p tcp -m tcp --dport 18080 -j DNAT --to-destination 10.233.64.15:80

There is no -d 192.168.1.101 as I would expect.

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): 1.9.5
  • Cloud provider or hardware configuration: Bare metal
  • OS (e.g. from /etc/os-release): Ubuntu 16.04.4 LTS (Xenial Xerus)
  • Kernel (e.g. uname -a): 4.4.0-116-generic
  • Install tools: Kubespray
  • Others: Tested with both Flannel and Calico networking

Taken from kubernetes/kubernetes#62112 (comment)

Host-local should support disjoint sets of ranges.

Hi,
I was testing the multiple-ranges support and it looks like host-local tries to allocate an IP from every range at once, rather than going range by range.

Is this the expected behaviour?

echo '{
    "cniVersion": "0.3.1",
    "name": "examplenet",
    "ipam": {
        "type": "host-local",
        "ranges": [
            {
                "subnet": "10.10.10.0/24",
                "rangeStart": "10.10.10.20",
                "rangeEnd": "10.10.10.22"
            },
            {
                "subnet": "20.20.20.0/24",
                "rangeStart": "20.20.20.20",
                "rangeEnd": "20.20.20.21"
            }
        ],
        "dataDir": "/tmp/cni-example"
    }
}' | CNI_COMMAND=ADD CNI_CONTAINERID=example CNI_NETNS=/dev/null CNI_IFNAME=dummy0 CNI_PATH=. ./host-local

Build Script Fails on Darwin

I am following the directions from containernetworking/cni to build the plugins, but I am hitting this build error:

gopath/src/github.com/containernetworking/plugins/plugins/meta/portmap/portmap.go:22:2: no buildable Go source files in /Users/daneyonhansen/code/go/src/github.com/containernetworking/plugins/gopath/src/github.com/containernetworking/plugins/pkg/utils/sysctl

@squeed I see the sysctl pkg was added as part of this PR. Do you have any troubleshooting suggestions?

The steps I followed are:

  1. Clone the repo:
$ git clone https://github.com/containernetworking/plugins.git
Cloning into 'plugins'...
remote: Counting objects: 4381, done.
remote: Compressing objects: 100% (78/78), done.
remote: Total 4381 (delta 21), reused 56 (delta 9), pack-reused 4292
Receiving objects: 100% (4381/4381), 1.76 MiB | 1.33 MiB/s, done.
Resolving deltas: 100% (2073/2073), done.
Checking out files: 100% (501/501), done.
  2. Create my cni conf files:
$ cat /etc/cni/net.d/10-mynet.conf 
{
	"cniVersion": "0.2.0",
	"name": "mynet",
	"type": "bridge",
	"bridge": "cni0",
	"isGateway": true,
	"ipMasq": true,
	"ipam": {
		"type": "host-local",
		"subnet": "10.22.0.0/16",
		"routes": [
			{ "dst": "0.0.0.0/0" }
		]
	}
}

$ cat /etc/cni/net.d/99-loopback.conf 
{
	"cniVersion": "0.2.0",
	"type": "loopback"
}
  3. Run build script:
$ cd plugins/
DANEHANS-M-C1KP:plugins daneyonhansen$ ./build.sh 
Building plugins
  flannel
  portmap
gopath/src/github.com/containernetworking/plugins/plugins/meta/portmap/portmap.go:22:2: no buildable Go source files in /Users/daneyonhansen/code/go/src/github.com/containernetworking/plugins/gopath/src/github.com/containernetworking/plugins/pkg/utils/sysctl

My dev env details:

$ go version
go version go1.8.3 darwin/amd64

$ go env
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/daneyonhansen/code/go"
GORACE=""
GOROOT="/usr/local/go"
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/cy/116yp7d9389fg2y_z8mkt8nh0000gn/T/go-build588320520=/tmp/go-build -gno-record-gcc-switches -fno-common"
CXX="clang++"
CGO_ENABLED="1"
PKG_CONFIG="pkg-config"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"

/cc @leblancd
