
aci-containers's People

Contributors

abhijitherekar, abhis2112, akhilamohanan, amittbose, anmol372, bhavanaashok33, ceridwen, gaurav-dalvi, gautvenk, jayaramsatya, jeffinkottaram, jojimt, kahou82, kishorejonnala, mandeepdhami, mchalla, mpaidipa-aci, niteshcha, oscarlofwenhamn, pariyaashok, pkharat, readams, rupeshsahuoc, shastrinator, smshareef, snaiksat, tanyatukade, tomflynn, vikash082, vlella


aci-containers's Issues

Failing network policy to enable DNS traffic in OpenShift

We are running OpenShift 4.10 on bare-metal servers with the Cisco ACI CNI plugin version 5.2.3.4 and ACI 5.2(5c). When we apply a NetworkPolicy.networking.k8s.io/v1 of type egress, we cannot get the DNS service working as expected.

HOW TO REPRODUCE
Deploy the YAMLs attached to this issue. This will deploy:

  • an nginx instance in namespace na-nettest-nginx, exposed via a service on port 8080
  • a pod with curl in namespace na-nettest-curlclient
  • egress policies in namespace na-nettest-curlclient that
      • default-deny all egress traffic
      • allow egress traffic to namespace na-nettest-nginx for the nginx pod on port 8080
      • allow egress traffic to namespace openshift-dns on ports 5353 and 53 (the DNS pods listen on 5353, which we think should be sufficient)
> oc describe netpol allow-curler
Name: allow-curler
Namespace: na-nettest-curlclient
Created on: 2023-01-25 16:29:42 +0100 CET
Labels: <none>
Annotations: <none>
Spec:
PodSelector: app=curl-green
Not affecting ingress traffic
Allowing egress traffic:
To Port: 8080/TCP
To:
NamespaceSelector: kubernetes.io/metadata.name=na-nettest-nginx
PodSelector: app=nginx-green
----------
To Port: 5353/TCP
To Port: 5353/UDP
To Port: 53/TCP
To Port: 53/UDP
To:
NamespaceSelector: kubernetes.io/metadata.name=openshift-dns
Policy Types: Egress

The curl pod will continuously curl nginx-green.na-nettest-nginx.svc.cluster.local:8080.

ACTUAL RESULT
"curl: (28) Resolving timed out after 5000 milliseconds" Dns traffic is not working.

Drop logs on curl pod reveal:
Warning Acc-SEC_GROUP_OUT_TABLE MISS(Security Drop) 17s (x2 over 2m22s) aci-containers-host IPv4 packet from na-nettest-curlclient/curl-green-6b9686457f-vzmlk to 10.122.0.10 was dropped

This is the IP address of the Cluster DNS service.

> oc get svc -n openshift-dns
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dns-default ClusterIP 10.122.0.10 <none> 53/UDP,53/TCP,9154/TCP 89d

In the APIC, in the HostProtection profile, we see that no rules are created that target the DNS service.

EXPECTED RESULT
curl is expected to resolve the address and return the nginx response, HTTP code 200.

It is expected that when we target the namespace of the DNS pods and the ports those pods listen on, communication to the DNS service is permitted.

IS THERE A WORKAROUND?
We found that if we explicitly add the IP address of the DNS service as an ipBlock to the egress rule, DNS traffic works. However, this is not necessary with other network plugins. As stated earlier, targeting the pods should cover the corresponding service as well.

---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-curler
  namespace: na-nettest-curlclient
spec:
  policyTypes:
  - Egress
  podSelector: 
    matchLabels:
      app: curl-green
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: na-nettest-nginx
      podSelector:
        matchLabels:
          app: nginx-green
    ports: 
    - protocol: TCP
      port: 8080
  - to:
    - ipBlock:
        cidr: 10.122.0.10/32
    ports: 
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53

What is strange: when we test with the workaround, the egress policy targeting the nginx pods works flawlessly for the curl client, even when the service and pod ports differ. We see in the APIC that a rule is added for the IP address of the nginx service. We wonder why the same principle does not work for the DNS service.

KUBERNETES RESOURCES

---
apiVersion: v1
kind: Namespace
metadata:
  name: na-nettest-nginx
---
apiVersion: v1
kind: Namespace
metadata:
  name: na-nettest-curlclient
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-green
  name: nginx-green
  namespace: na-nettest-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-green
  strategy: {}
  template:
    metadata:
      labels:
        app: nginx-green
    spec:
      containers:
      - image: nginxinc/nginx-unprivileged:1.23-alpine-slim
        name: nginx-unprivileged
        resources: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-green
  name: nginx-green
  namespace: na-nettest-nginx
spec:
  ports:
  - name: "8080"
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: nginx-green
  type: ClusterIP
status:
  loadBalancer: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: curl-green
    test: nginx
  name: curl-green
  namespace: na-nettest-curlclient
spec:
  replicas: 1
  selector:
    matchLabels:
      app: curl-green
  strategy: {}
  template:
    metadata:
      labels:
        app: curl-green
    spec:
      containers:
      - image: registry.access.redhat.com/ubi8@sha256:323bb3abab06523d595d133fe82c40c13aea0242445c30c9842d34c81c221dea
        name: curl-green
        command: ["/bin/bash"]
        args:
        - -c 
        - 'while true; do date && printf "\n" && curl -I --connect-timeout 5 http://nginx-green.na-nettest-nginx.svc.cluster.local:8080; sleep 5 && printf "\n\n\n"; done'
        resources: {}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: na-nettest-curlclient
spec:
  podSelector: {}
  policyTypes:
  - Egress
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-curler
  namespace: na-nettest-curlclient
spec:
  policyTypes:
  - Egress
  podSelector: 
    matchLabels:
      app: curl-green
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: na-nettest-nginx
      podSelector:
        matchLabels:
          app: nginx-green
    ports: 
    - protocol: TCP
      port: 8080
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: openshift-dns
    ports: 
    - protocol: TCP
      port: 5353
    - protocol: UDP
      port: 5353
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53

Update image references according to 4.0(2)

Please update the image references according to the Cisco Application Policy Infrastructure Controller Container Plugins release 4.0(2).

From what I can see, that includes:

aci-containers-controller:1.9r43
aci-containers-host:1.9r43
cnideploy:1.9r43
opflex:1.9r80
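
A minimal sketch of applying those tags, assuming the standard noiro/ registry prefix and a cluster that was provisioned from a generated aci-containers.yaml (the file name and sed patterns here are illustrative, not an official upgrade procedure):

# Patch the image tags in the generated manifest and re-apply it;
# re-running acc-provision from the 4.0(2) package would achieve the same.
sed -i \
  -e 's|noiro/aci-containers-controller:.*|noiro/aci-containers-controller:1.9r43|' \
  -e 's|noiro/aci-containers-host:.*|noiro/aci-containers-host:1.9r43|' \
  -e 's|noiro/cnideploy:.*|noiro/cnideploy:1.9r43|' \
  -e 's|noiro/opflex:.*|noiro/opflex:1.9r80|' \
  aci-containers.yaml
kubectl apply -f aci-containers.yaml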

ACI CNI plugin

We have deployed an OpenShift cluster with the Cisco ACI CNI plugin. Our cluster has approximately 600 pods and 20 nodes. The plugin works fine, but when we make changes in the network configuration - change an SNAT policy or increase a subnet size - the leaf starts blocking connections from pods (10.205.0.1/16) to nodes (10.56.196.1/24). Restarting the aci-containers-host-**** pod on the node fixes the problem.
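
For reference, a sketch of that restart workaround (the namespace is assumed here; adjust it to wherever the aci-containers pods run in your deployment - this only restores connectivity, it is not a fix):

# Find the aci-containers-host pod running on the affected node ...
kubectl -n kube-system get pods -o wide | grep aci-containers-host
# ... and delete it; the DaemonSet controller recreates it, which restores
# pod-to-node connectivity until the next network configuration change.
kubectl -n kube-system delete pod aci-containers-host-xxxxx
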
Logs from leaf and plugin from one node
aci-containers-controller-snat-operator.zip

aci-containers-controller.log
aci-containers-host.log
aci-containers-openvswitch.log
mcast-daemon.log
opflex-agent.log
10.56.196.28.txt

1.11 support through pip install acc-provision

Hi,

Version 1.9.5 of acc-provision is the latest on pip, and only supports k8s 1.10. Is there any timeline or guidance on when a new release supporting 1.11 (and possibly experimental 1.12 etc) will be made available?

aci-containers-host-k1jxx in CrashLoopBackOff status

After completing the steps in the Cisco ACI and Kubernetes Integration document, certain containers are not running, as shown below:

[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                         READY     STATUS              RESTARTS   AGE
kube-system   aci-containers-controller-2834261735-7719d   0/1       Pending             0          40m
kube-system   aci-containers-host-k1jxx                    2/3       CrashLoopBackOff    12         40m
kube-system   aci-containers-openvswitch-dmwbf             1/1       Running             0          40m
kube-system   etcd-k8s-master                              1/1       Running             0          4h
kube-system   kube-apiserver-k8s-master                    1/1       Running             0          4h
kube-system   kube-controller-manager-k8s-master           1/1       Running             0          4h
kube-system   kube-dns-2617979913-54372                    0/3       ContainerCreating   0          4h
kube-system   kube-proxy-5fhm4                             1/1       Running             0          4h
kube-system   kube-scheduler-k8s-master                    1/1       Running             0          4h
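
To narrow down which container inside aci-containers-host-k1jxx is failing, something like the following would help (container names are assumed from the standard aci-containers-host DaemonSet; adjust as needed):

kubectl -n kube-system describe pod aci-containers-host-k1jxx
# Previous logs of the likely culprits inside the host pod:
kubectl -n kube-system logs aci-containers-host-k1jxx -c aci-containers-host --previous
kubectl -n kube-system logs aci-containers-host-k1jxx -c opflex-agent --previous
# The controller pod is Pending, so its scheduling events matter too:
kubectl -n kube-system describe pod aci-containers-controller-2834261735-7719d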

My environment is as follows:
Cisco ACI version - 3.1(1i)

[root@k8s-master ~]# cat /proc/version
Linux version 3.10.0-693.21.1.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) ) #1 SMP Wed Mar 7 19:03:37 UTC 2018

[root@k8s-master ~]# docker version
Client:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   78d1802
 Built:        Tue Jan 10 20:20:01 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   78d1802
 Built:        Tue Jan 10 20:20:01 2017
 OS/Arch:      linux/amd64
[root@k8s-master ~]# acc-provision -v
1.7.1

[root@k8s-master ~]# kubelet --version
Kubernetes v1.7.15

[root@k8s-master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.15", GitCommit:"8c7c1c8b0ce0866d5ded2ce4fa402a716ce6bb6c", GitTreeState:"clean", BuildDate:"2018-03-19T14:15:39Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

[root@k8s-master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.15", GitCommit:"8c7c1c8b0ce0866d5ded2ce4fa402a716ce6bb6c", GitTreeState:"clean", BuildDate:"2018-03-19T14:23:07Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.15", GitCommit:"8c7c1c8b0ce0866d5ded2ce4fa402a716ce6bb6c", GitTreeState:"clean", BuildDate:"2018-03-19T14:15:39Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Appreciate any advice to move forward.

[Kubernetes] If 'aci_config:vrf:tenant' is different from 'aci_config:system_id', faults appear in the created BDs

Hi,

Regarding the Kubernetes integration:

If, following the guidelines in the Cisco doc (https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/kb/b_Kubernetes_Integration_with_ACI.pdf), I want to have the L3out and VRF in a tenant 'KubernetesInfra-Tn', the acc-provision tool will still create another tenant named '{{aci_config:system_id}}' and put the EPGs there, but it does not properly link the BDs to the VRF in tenant {{aci_config:vrf:tenant}}. This creates a fault on the BDs.

I just forked the repo; in a few days I can propose the changes, along with some improved comments in the doc, to indicate that a tenant is created per aci_config:system_id (per Kubernetes cluster), while all of them respect the binding to the VRF indicated in aci_config:vrf:tenant.
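
As a pointer for anyone hitting this, the two settings in question live next to each other in the acc-provision input file; a quick way to see them (the --sample flag and the grep pattern are illustrative and depend on the acc-provision version):

# Dump a sample input file and locate the tenant-related knobs:
# system_id drives the name of the tenant that acc-provision creates,
# while vrf: tenant: points at the (possibly different) tenant holding the VRF/L3out.
acc-provision --sample > acc_provision_input.yaml
grep -n -E 'system_id|vrf:|tenant:|l3out:' acc_provision_input.yaml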

Documentation update acc-provision

Hi,

It seems that there are quite a few configuration options that have not yet been documented (for instance kube_config). Would it be possible for those to be included in provision/acc_provision/templates/provision-config.yaml (or somewhere else)?

[K8S-ACC] [Injected model] Stale acc replica set entries on APIC UI.

Two issues.

  1. Stale acc entries observed in APIC UI (APIC UI => VirtualNetworking => Container Domains => Kubernetes => => ReplicaSets => Multiple ACC entries)

Suspect this happens with acc-restart

[screenshot of the duplicate ReplicaSet entries in the APIC UI]

  2. Namespace delete doesn’t reflect in the injected model.
    Create a namespace and deploy a service (mem-example) and observe that it is reflected in the
    APIC UI. Delete the namespace - it doesn’t get removed from the injected model.

Ensure acc-provision -d will delete APIC objects irrespective of the input file

When I encountered #54 today, I realized there's a potential problem with its workflow when it comes to switching encap modes. To provision the fabric, sauto creates a noiro/acc_provision_input.yaml on the master node and then runs acc-provision against it. Reinstalling the OS wipes this file, so sauto currently uses an input file set in the target encap mode when it needs to run acc-provision -d to delete the old objects before creating the new ones. In other words, sauto doesn't remember the state a fabric is in. A similar variant that I haven't been able to test yet might occur when switching from Kubernetes to OpenShift or vice versa: the APIC resources will correspond to the old flavor, but sauto will call acc-provision -d with the new flavor. That we need to support multiple clusters on one fabric for nested mode adds complexity here, but fundamentally we need acc-provision to delete the right objects irrespective of the encap mode, the cluster flavor, or any other persistent state on the fabric. It may already do this, in which case this is a non-issue and not the source of #54.
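
For concreteness, a sketch of the delete call in question (flags as commonly used with acc-provision; the input file, flavor, and credentials are placeholders): today the set of APIC objects removed is derived from whatever config is passed with -c, which is exactly the dependency this issue asks to remove.

# Delete the APIC objects for a cluster; ideally this should work regardless of
# which encap mode or flavor the input file happens to describe.
acc-provision -c acc_provision_input.yaml -f <flavor> -d -u <apic-user> -p <apic-password>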

v1.6.0 deployment not complete: acikubectl command and expected volume mounts not there

I am currently doing an ACI Kubernetes integration POC for a customer
and followed the documented installation procedures:
https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/kb/b_Kubernetes_Integration_with_ACI.html

Doing so, somehow the "acikubectl" command is not deployed, and the volumes that are defined in the YAML file do not seem to be mounted on my system.

Attached you can find my aci-containers.yaml too...

ts@k8s03:~$ acikubectl
acikubectl: command not found

nts@k8s03:~$ acikubectl --help
acikubectl: command not found

nts@k8s03:~$ which acikubectl
nts@k8s03:~$

nts@k8s03:~$ locate acikubectl
nts@k8s03:~$

nts@k8s03:~$ ac
acc-provision   accessdb        acpi_available  acpi_listen     acpid   
$ ls -la /usr/local/etc/
total 0
drwxr-xr-x  2 root root   6 Aug  1 13:16 .
drwxr-xr-x 10 root root 114 Nov 27 14:27 ..
$ ls -la /usr/local/    
total 0
drwxr-xr-x 10 root root 114 Nov 27 14:27 .
drwxr-xr-x 10 root root 105 Nov 27 14:27 ..
drwxr-xr-x  2 root root  82 Nov 27 21:44 bin
drwxr-xr-x  2 root root   6 Aug  1 13:16 etc
drwxr-xr-x  2 root root   6 Aug  1 13:16 games
drwxr-xr-x  2 root root   6 Aug  1 13:16 include
drwxr-xr-x  4 root root  40 Nov 27 19:36 lib
lrwxrwxrwx  1 root root   9 Nov 27 14:27 man -> share/man
drwxr-xr-x  2 root root   6 Aug  1 13:16 sbin
drwxr-xr-x  6 root root  63 Nov 27 14:33 share
drwxr-xr-x  2 root root   6 Aug  1 13:16 src
$ lsb_release -a
No LSB modules are available.
Distributor ID:    Ubuntu
Description:    Ubuntu 16.04.3 LTS
Release:    16.04
Codename:    xenial
$ acc-provision -v    
1.6.0
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.13", GitCommit:"14ea65f53cdae4a5657cf38cfc8d7349b75b5512", GitTreeState:"clean", BuildDate:"2017-11-22T20:29:21Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.13", GitCommit:"14ea65f53cdae4a5657cf38cfc8d7349b75b5512", GitTreeState:"clean", BuildDate:"2017-11-22T20:19:06Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl get nodes
NAME      STATUS    AGE       VERSION
k8s02     Ready     43m       v1.6.13
k8s03     Ready     46m       v1.6.13
$ sudo docker ps
[sudo] password for nts: 
CONTAINER ID        IMAGE                                                                                                                            COMMAND                  CREATED             STATUS              PORTS               NAMES
9f039566d192        noiro/opflex@sha256:33508e0a146fed9ccbd4fcd6fe31938ddaef3caeaa232bc20ade5494c668a596                                             "/usr/local/bin/la..."   13 hours ago        Up 13 hours                             k8s_opflex-agent_aci-containers-host-63zkt_kube-system_266b492d-d3b7-11e7-a153-70f3951d7240_1
582b91ca81e1        noiro/opflex@sha256:33508e0a146fed9ccbd4fcd6fe31938ddaef3caeaa232bc20ade5494c668a596                                             "/bin/sh /usr/loca..."   13 hours ago        Up 13 hours                             k8s_mcast-daemon_aci-containers-host-63zkt_kube-system_266b492d-d3b7-11e7-a153-70f3951d7240_0
82e911cab419        noiro/openvswitch@sha256:8fac5a2468bcb5bd639478a20d367a99b4ebfd469cb454228228c2a7fc1d0d03                                        "/usr/local/bin/la..."   13 hours ago        Up 13 hours                             k8s_aci-containers-openvswitch_aci-containers-openvswitch-x9551_kube-system_26769337-d3b7-11e7-a153-70f3951d7240_0
1ebe68eb1a0d        noiro/aci-containers-host@sha256:1d1b64233d93226fbd91455a75fc11cbfc50398ab7be42df3b33cc277332f3e1                                "/usr/local/bin/la..."   13 hours ago        Up 13 hours                             k8s_aci-containers-host_aci-containers-host-63zkt_kube-system_266b492d-d3b7-11e7-a153-70f3951d7240_0
8a4ea816782a        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 13 hours ago        Up 13 hours                             k8s_POD_aci-containers-openvswitch-x9551_kube-system_26769337-d3b7-11e7-a153-70f3951d7240_0
a4673b737b82        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 13 hours ago        Up 13 hours                             k8s_POD_aci-containers-host-63zkt_kube-system_266b492d-d3b7-11e7-a153-70f3951d7240_0
65ce6ae75872        gcr.io/google_containers/kube-proxy-amd64@sha256:5fbe0e61d8e4330ddc79c33bea71ae6d35e8f51c802845cc7be10d5ba5780d53                "/usr/local/bin/ku..."   14 hours ago        Up 14 hours                             k8s_kube-proxy_kube-proxy-kh0v6_kube-system_c0d134eb-d3b0-11e7-a153-70f3951d7240_0
bc5769b84862        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 14 hours ago        Up 14 hours                             k8s_POD_kube-proxy-kh0v6_kube-system_c0d134eb-d3b0-11e7-a153-70f3951d7240_0
9c9efb9acba1        gcr.io/google_containers/kube-controller-manager-amd64@sha256:6b759f9952e22380134bc4137a6abd96cf03a9bb5c1a1146ea1f3e5ea8cd190c   "kube-controller-m..."   14 hours ago        Up 14 hours                             k8s_kube-controller-manager_kube-controller-manager-k8s03_kube-system_636d901355de70cd9f1ba6d8957e01e3_0
b5541a96ab07        gcr.io/google_containers/etcd-amd64@sha256:d83d3545e06fb035db8512e33bd44afb55dea007a3abd7b17742d3ac6d235940                      "etcd --listen-cli..."   14 hours ago        Up 14 hours                             k8s_etcd_etcd-k8s03_kube-system_7075157cfd4524dbe0951e00a8e3129e_0
3708de62057e        gcr.io/google_containers/kube-scheduler-amd64@sha256:b1cdab0de39ed1bd5193735112e07e1472c4da2ec09c9c70aea5f7c0483a2438            "kube-scheduler --..."   14 hours ago        Up 14 hours                             k8s_kube-scheduler_kube-scheduler-k8s03_kube-system_7a2ab6d56c08cde46f07fdeabceefcb4_0
38a9f669999f        gcr.io/google_containers/kube-apiserver-amd64@sha256:b29e1f6301ca7f05458d20ef7e31190d534344c3c44a87c57154b180751e24ad            "kube-apiserver --..."   14 hours ago        Up 14 hours                             k8s_kube-apiserver_kube-apiserver-k8s03_kube-system_435405d160a3ad776c00d6c111885c23_0
58fe24d70274        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 14 hours ago        Up 14 hours                             k8s_POD_kube-controller-manager-k8s03_kube-system_636d901355de70cd9f1ba6d8957e01e3_0
58490d738625        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 14 hours ago        Up 14 hours                             k8s_POD_etcd-k8s03_kube-system_7075157cfd4524dbe0951e00a8e3129e_0
4322af08b75b        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 14 hours ago        Up 14 hours                             k8s_POD_kube-scheduler-k8s03_kube-system_7a2ab6d56c08cde46f07fdeabceefcb4_0
3efb575072d6        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 14 hours ago        Up 14 hours                             k8s_POD_kube-apiserver-k8s03_kube-system_435405d160a3ad776c00d6c111885c23_0

aci-containers.yaml.txt

VMM left in VXLAN mode despite sauto calling acc-provision to set the fabric in VLAN mode

Earlier today I started automated tests for the VLAN and VXLAN modes. sauto failed during the reimage/OS installation step due to #17, so the test automation invoked it again to try to install in VLAN mode. This install succeeded, but the external routing tests were failing, and when I went to look at the fabric the kube VMM domain was still set in VXLAN mode, even though sauto runs acc-provision as part of the install and the acc_provision_input.yaml that sauto passed to acc-provision used VLAN mode. I tried manually reprovisioning, deleting the resources and then recreating them using the VLAN input file, but while the service tests could now reach some pods, they couldn't reach all the pods. I don't know how exactly this can happen, but we need to avoid the possibility of a fabric ending up in this broken state.

Supported docker-versions

Hi,

We encountered a strange issue in that Kubernetes pods deployed through DaemonSets would not have visible IPs in APIC and wouldn't be reachable in the cluster (either externally or via ping/telnet/curl from another pod). The exact same pods deployed through a Deployment (even at a scale of 2x the number of nodes) would all work perfectly fine.

Additionally, LoadBalancer IPs would be delegated to services, but those IPs would not be reachable either (the clusterIP would work, as would the NodePort created by the LoadBalancer - just the ACI-provided IP would not work).

This is with acc-provision 1.9.9 (also tested 1.9.7) against APIC 4.0(2c).

This issue seems to occur with Kubernetes 1.12, which now supports newer versions of Docker (e.g. 18.06, rather than the 17.03 that had been the maximum until then).

The question is then: have you tested this setup with DaemonSets, and with Kubernetes 1.12 and newer versions of Docker? Are there any limitations that we should be aware of?

unable to assign LoadBalancerIP to service of type LoadBalancer

We are running OpenShift 4.10 on bare-metal servers with the Cisco ACI CNI plugin version 5.2.3.4 and ACI 5.2(5c). We want to create a service of type "LoadBalancer" and assign a specific IP address to it. However, the specification is ignored and a random IP address is assigned.

HOW TO REPRODUCE
Create a deployment with nginx. Expose it through a service of type LoadBalancer. Explicitly set loadBalancerIP.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-green
  name: nginx-green
  namespace: na-nettest-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-green
  strategy: {}
  template:
    metadata:
      labels:
        app: nginx-green
    spec:
      containers:
      - image: nginxinc/nginx-unprivileged:1.23-alpine-slim
        name: nginx-unprivileged
        resources: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-green
  name: nginx-green
  namespace: na-nettest-nginx
spec:
  loadBalancerIP: 10.168.122.55
  type: LoadBalancer
  ports:
  - name: "8080"
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: nginx-green

EXPECTED RESULT
A service is created that exposes the nginx pod on IP 10.168.122.55.

ACTUAL RESULT
A random IP is chosen by ACI, in this case 10.168.122.25 (see status.loadBalancer.ingress):

> oc get svc nginx-green -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx-green"},"name":"nginx-green","namespace":"na-nettest-nginx"},"spec":{"loadBalancerIP":"10.168.122.55","ports":[{"name":"8080","port":8080,"protocol":"TCP","targetPort":8080}],"selector":{"app":"nginx-green"},"type":"LoadBalancer"}}
  creationTimestamp: "2023-02-03T11:18:43Z"
  labels:
    app: nginx-green
  name: nginx-green
  namespace: na-nettest-nginx
  resourceVersion: "228488801"
  uid: 659b6662-72c6-4745-a869-eb8a97d9620a
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.122.215.232
  clusterIPs:
  - 10.122.215.232
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerIP: 10.168.122.55
  ports:
  - name: "8080"
    nodePort: 30768
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: nginx-green
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 10.168.122.25
